hbase git commit: HBASE-17807 correct the value of zookeeper.session.timeout in hbase doc
Repository: hbase
Updated Branches: refs/heads/master 11dc5bf67 -> 941070939

HBASE-17807 correct the value of zookeeper.session.timeout in hbase doc

Signed-off-by: tedyu

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/94107093
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/94107093
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/94107093

Branch: refs/heads/master
Commit: 941070939f4bf65536ee74b9f62d3b5114da826b
Parents: 11dc5bf
Author: chenyechao
Authored: Mon Mar 20 14:28:42 2017 +0800
Committer: tedyu
Committed: Tue Mar 21 18:53:09 2017 -0700
--
 src/main/asciidoc/_chapters/troubleshooting.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/94107093/src/main/asciidoc/_chapters/troubleshooting.adoc
--
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index e1d1717..1cf93d6 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -1050,7 +1050,7 @@ If you wish to increase the session timeout, add the following to your _hbase-si
 zookeeper.session.timeout
- 120
+ 12
 hbase.zookeeper.property.tickTime
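For context, the property being corrected lives in HBase's _hbase-site.xml_. The archived diff above drops trailing digits from the values, so the millisecond numbers below are illustrative assumptions rather than the committed ones; they are chosen to be mutually consistent, since ZooKeeper clamps a requested session timeout to between 2x and 20x its tickTime:

```xml
<!-- hbase-site.xml sketch; values are assumptions, since the digits in the
     archived diff are truncated. Both values are in milliseconds.
     ZooKeeper bounds the negotiated session timeout to
     [2 * tickTime, 20 * tickTime], so a 120000 ms timeout requires a
     tickTime of at least 6000 ms. -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>120000</value>
</property>
<property>
  <name>hbase.zookeeper.property.tickTime</name>
  <value>6000</value>
</property>
```

The `hbase.zookeeper.property.*` prefix only applies when HBase manages its own ZooKeeper quorum; against an external quorum, tickTime must be raised in zoo.cfg instead.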
hbase git commit: Update home page to say hbasecon2017 is on google campus in MTV, not SF
Repository: hbase
Updated Branches: refs/heads/master 1cfd22bf4 -> 11dc5bf67

Update home page to say hbasecon2017 is on google campus in MTV, not SF

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/11dc5bf6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/11dc5bf6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/11dc5bf6

Branch: refs/heads/master
Commit: 11dc5bf6715a1cd8fe191cfcb299688af24865f8
Parents: 1cfd22b
Author: Michael Stack
Authored: Tue Mar 21 14:01:01 2017 -0700
Committer: Michael Stack
Committed: Tue Mar 21 14:01:06 2017 -0700
--
 src/main/site/xdoc/index.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--
http://git-wip-us.apache.org/repos/asf/hbase/blob/11dc5bf6/src/main/site/xdoc/index.xml
--
diff --git a/src/main/site/xdoc/index.xml b/src/main/site/xdoc/index.xml
index 49b9c0d..83c9f01 100644
--- a/src/main/site/xdoc/index.xml
+++ b/src/main/site/xdoc/index.xml
@@ -83,7 +83,7 @@ Apache HBase is an open-source, distributed, versioned, non-relational database
- June 12th, 2017 HBaseCon2017 in San Francisco (https://easychair.org/cfp/hbasecon2017)
+ June 12th, 2017 HBaseCon2017 at the Crittenden Buildings on the Google Mountain View Campus (https://easychair.org/cfp/hbasecon2017)
 December 8th, 2016 Meetup@Splice in San Francisco (https://www.meetup.com/hbaseusergroup/events/235542241/)
 September 26th, 2016 HBaseConEast2016 at Google in Chelsea, NYC (http://www.meetup.com/HBase-NYC/events/233024937/)
 May 24th, 2016 HBaseCon2016 at The Village, 969 Market, San Francisco (http://www.hbasecon.com/)
hbase-site git commit: Add missing hbasecon2017 img
Repository: hbase-site
Updated Branches: refs/heads/asf-site a90b1b57b -> 28f8231f6

Add missing hbasecon2017 img

Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/28f8231f
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/28f8231f
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/28f8231f

Branch: refs/heads/asf-site
Commit: 28f8231f6d15322d1a0b38685519c8e5ef84e49e
Parents: a90b1b5
Author: Michael Stack
Authored: Tue Mar 21 13:53:31 2017 -0700
Committer: Michael Stack
Committed: Tue Mar 21 13:53:31 2017 -0700
--
 images/hbasecon2017.png | Bin 0 -> 3982 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
--
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/28f8231f/images/hbasecon2017.png
--
diff --git a/images/hbasecon2017.png b/images/hbasecon2017.png
new file mode 100644
index 000..4b25f89
Binary files /dev/null and b/images/hbasecon2017.png differ
[02/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf new file mode 100644 index 000..26dea79 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff new file mode 100644 index 000..dc35ce3 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2 -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2 b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2 new file mode 100644 index 000..500e517 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2 differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot 
b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot new file mode 100644 index 000..9b6afae Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf new file mode 100644 index 000..bbc0ec2 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot new file mode 100644 index 000..4aefb80 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff new file mode 100644 index 000..9666811 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff differ 
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot new file mode 100644 index 000..4aefb80 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot differ
[08/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js b/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js new file mode 100644 index 000..6933872 --- /dev/null +++ b/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js @@ -0,0 +1,92 @@ +(function(c,b,d){var a; +b.Granite=b.Granite||{}; +b.Granite.$=b.Granite.$||c; +b._g=b._g||{}; +b._g.$=b._g.$||c; +a=Granite.HTTP; +c.ajaxSetup({externalize:true,encodePath:true,hook:true,beforeSend:function(f,e){if(typeof G_IS_HOOKED=="undefined"||!G_IS_HOOKED(e.url)){if(e.externalize){e.url=a.externalize(e.url) +}if(e.encodePath){e.url=a.encodePathOfURI(e.url) +}}if(e.hook){var g=a.getXhrHook(e.url,e.type,e.data); +if(g){e.url=g.url; +if(g.params){if(e.type.toUpperCase()=="GET"){e.url+="?"+c.param(g.params) +}else{e.data=c.param(g.params) +},statusCode:{403:function(e){if(e.getResponseHeader("X-Reason")==="Authentication Failed"){a.handleLoginRedirect() +); +c.ajaxSettings.traditional=true +}(jQuery,this)); +(function(e,b){e.Granite=e.Granite||{}; +if(e.Granite.csrf){return +}e.Granite.csrf={initialised:false,refreshToken:l}; +function h(){this._handler=[] +}h.prototype={then:function(r,q){this._handler.push({resolve:r,reject:q}) +},resolve:function(){this._execute("resolve",arguments) +},reject:function(){this._execute("reject",arguments) +},_execute:function(q,r){if(this._handler===null){throw new Error("Promise already completed.") +}for(var s=0,t=this._handler.length; +s
[13/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
Added hbasecon website at www.hbasecon.com

Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/a90b1b57
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/a90b1b57
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/a90b1b57

Branch: refs/heads/asf-site
Commit: a90b1b57b0bf3ef0ede5aa2ec2fd3df2963386c3
Parents: fadf6d5
Author: Michael Stack
Authored: Tue Mar 21 09:57:39 2017 -0700
Committer: Michael Stack
Committed: Tue Mar 21 09:57:39 2017 -0700
--
 .../content/dam/events/hbase/HBASE-Logo2.png | Bin 0 -> 15375 bytes
 .../content/dam/events/hbase/hbase-logo.png  | Bin 0 -> 2825 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 162045 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 128433 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 161121 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 190008 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 314462 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 194292 bytes
 .../_jcr_content/renditions/original.jpg     | Bin 0 -> 97268 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 169858 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 264076 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 260608 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 6155 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 6155 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 217927 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 182476 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 195797 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 220189 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 156549 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 181751 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 187323 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 168370 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 217023 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 149077 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 157860 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 98381 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 252256 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 268539 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 126054 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 192894 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 256721 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 247871 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 231762 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 299589 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 252018 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 264524 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 6155 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 197174 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 265809 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 248983 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 148835 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 264361 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 182234 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 274769 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 6155 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 6155 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 175988 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 238880 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 273858 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 202368 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 114702 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 268968 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 6155 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 159371 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 254954 bytes
 .../_jcr_content/renditions/original.png     | Bin 0 -> 219581 bytes
 .../_jcr_content/renditions/original.jpg     | Bin 0 -> 33279 bytes
 .../_jcr_content/renditions/original.jpg     | Bin 0 -> 24986 bytes
 .../_jcr_content/renditions/original.jpg     | Bin 0 -> 27472
[05/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg
new file mode 100644
index 000..94fb549
--- /dev/null
+++ b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg
@@ -0,0 +1,288 @@
[288 lines of SVG glyph markup, stripped by the mail archive]
[06/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png new file mode 100644 index 000..d5acba5 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png new file mode 100644 index 000..9d2fc28 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png new file mode 100644 index 000..77a8b31 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png -- diff --git 
a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png new file mode 100644 index 000..8ae34f8 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png new file mode 100644 index 000..bcd100a Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png new file mode 100644 index 000..8ce0acb Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png 
b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png new file mode 100644 index 000..cf1420d Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstarp/glyphicons-halflings-regulard41d.html -- diff --git
[10/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png new file mode 100644 index 000..ad3d634 Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png new file mode 100644 index 000..5a46b47 Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png new file mode 100644 index 000..86c2baa Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png differ 
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png new file mode 100644 index 000..4443fdc Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png new file mode 100644 index 000..c274437 Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png new file mode 100644 index 000..0ac04b8 Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png differ 
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_glass_65_ff_1x400.png.pagespeed.ic.26lRrG9HKV.png -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_glass_65_ff_1x400.png.pagespeed.ic.26lRrG9HKV.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_glass_65_ff_1x400.png.pagespeed.ic.26lRrG9HKV.png new file mode 100644 index 000..6a436ad Binary files /dev/null and
[11/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js new file mode 100644 index 000..ff6bd09 --- /dev/null +++ b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js @@ -0,0 +1,4952 @@ +/*! jQuery UI - v1.9.2 - 2012-11-23 + * http://jqueryui.com + * Includes: jquery.ui.core.js, jquery.ui.widget.js, jquery.ui.mouse.js, jquery.ui.position.js, jquery.ui.accordion.js, jquery.ui.autocomplete.js, jquery.ui.button.js, jquery.ui.datepicker.js, jquery.ui.dialog.js, jquery.ui.draggable.js, jquery.ui.droppable.js, jquery.ui.effect.js, jquery.ui.effect-blind.js, jquery.ui.effect-bounce.js, jquery.ui.effect-clip.js, jquery.ui.effect-drop.js, jquery.ui.effect-explode.js, jquery.ui.effect-fade.js, jquery.ui.effect-fold.js, jquery.ui.effect-highlight.js, jquery.ui.effect-pulsate.js, jquery.ui.effect-scale.js, jquery.ui.effect-shake.js, jquery.ui.effect-slide.js, jquery.ui.effect-transfer.js, jquery.ui.menu.js, jquery.ui.progressbar.js, jquery.ui.resizable.js, jquery.ui.selectable.js, jquery.ui.slider.js, jquery.ui.sortable.js, jquery.ui.spinner.js, jquery.ui.tabs.js, jquery.ui.tooltip.js + * Copyright (c) 2012 jQuery Foundation and other contributors Licensed MIT */ +(function(b,f){var a=0,e=/^ui-id-\d+$/; +b.ui=b.ui||{}; +if(b.ui.version){return +}b.extend(b.ui,{version:"1.9.2",keyCode:{BACKSPACE:8,COMMA:188,DELETE:46,DOWN:40,END:35,ENTER:13,ESCAPE:27,HOME:36,LEFT:37,NUMPAD_ADD:107,NUMPAD_DECIMAL:110,NUMPAD_DIVIDE:111,NUMPAD_ENTER:108,NUMPAD_MULTIPLY:106,NUMPAD_SUBTRACT:109,PAGE_DOWN:34,PAGE_UP:33,PERIOD:190,RIGHT:39,SPACE:32,TAB:9,UP:38}}); +b.fn.extend({_focus:b.fn.focus,focus:function(g,h){return typeof g==="number"?this.each(function(){var i=this; +setTimeout(function(){b(i).focus(); +if(h){h.call(i) +}},g) +}):this._focus.apply(this,arguments) +},scrollParent:function(){var 
g;
+if((b.ui.ie&&(/(static|relative)/).test(this.css("position")))||(/absolute/).test(this.css("position"))){g=this.parents().filter(function(){return(/(relative|absolute|fixed)/).test(b.css(this,"position"))&&(/(auto|scroll)/).test(b.css(this,"overflow")+b.css(this,"overflow-y")+b.css(this,"overflow-x"))
+}).eq(0)
+}else{g=this.parents().filter(function(){return(/(auto|scroll)/).test(b.css(this,"overflow")+b.css(this,"overflow-y")+b.css(this,"overflow-x"))
+}).eq(0)
+}return(/fixed/).test(this.css("position"))||!g.length?b(document):g
+},zIndex:function(j){if(j!==f){return this.css("zIndex",j)
+}if(this.length){var h=b(this[0]),g,i;
+while(h.length&&h[0]!==document){g=h.css("position");
+if(g==="absolute"||g==="relative"||g==="fixed"){i=parseInt(h.css("zIndex"),10);
+if(!isNaN(i)&&i!==0){return i
+}}h=h.parent()
+}}return 0
+},uniqueId:function(){return this.each(function(){if(!this.id){this.id="ui-id-"+(++a)
+}})
+},removeUniqueId:function(){return this.each(function(){if(e.test(this.id)){b(this).removeAttr("id")
+}})
+}});
+function d(i,g){var k,j,h,l=i.nodeName.toLowerCase();
+if("area"===l){k=i.parentNode;
+j=k.name;
+if(!i.href||!j||k.nodeName.toLowerCase()!=="map"){return false
+}h=b("img[usemap=#"+j+"]")[0];
+return !!h&&c(h)
+}return(/input|select|textarea|button|object/.test(l)?!i.disabled:"a"===l?i.href||g:g)&&c(i)
+}function c(g){return b.expr.filters.visible(g)&&!b(g).parents().andSelf().filter(function(){return b.css(this,"visibility")==="hidden"
+}).length
+}b.extend(b.expr[":"],{data:b.expr.createPseudo?b.expr.createPseudo(function(g){return function(h){return !!b.data(h,g)
+}
+}):function(j,h,g){return !!b.data(j,g[3])
+},focusable:function(g){return d(g,!isNaN(b.attr(g,"tabindex")))
+},tabbable:function(i){var g=b.attr(i,"tabindex"),h=isNaN(g);
+return(h||g>=0)&&d(i,!h)
+}});
+b(function(){var g=document.body,h=g.appendChild(h=document.createElement("div"));
+h.offsetHeight;
+b.extend(h.style,{minHeight:"100px",height:"auto",padding:0,borderWidth:0});
+b.support.minHeight=h.offsetHeight===100; +b.support.selectstart="onselectstart" in h; +g.removeChild(h).style.display="none" +}); +if(!b("").outerWidth(1).jquery){b.each(["Width","Height"],function(j,g){var h=g==="Width"?["Left","Right"]:["Top","Bottom"],k=g.toLowerCase(),m={innerWidth:b.fn.innerWidth,innerHeight:b.fn.innerHeight,outerWidth:b.fn.outerWidth,outerHeight:b.fn.outerHeight}; +function l(o,n,i,p){b.each(h,function(){n-=parseFloat(b.css(o,"padding"+this))||0; +if(i){n-=parseFloat(b.css(o,"border"+this+"Width"))||0 +}if(p){n-=parseFloat(b.css(o,"margin"+this))||0 +}}); +return n +}b.fn["inner"+g]=function(i){if(i===f){return m["inner"+g].call(this) +}return this.each(function(){b(this).css(k,l(this,i)+"px") +}) +}; +b.fn["outer"+g]=function(i,n){if(typeof i!=="number"){return m["outer"+g].call(this,i) +}return this.each(function(){b(this).css(k,l(this,i,true,n)+"px") +}) +} +})
[07/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js b/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js
new file mode 100644
index 000..fb9c7fe
--- /dev/null
+++ b/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js
@@ -0,0 +1,3776 @@
+jQuery(function(a){a(".mobile-btn").unbind("click").on("click",function(f){f.preventDefault();
+var b=a(this).find("span");
+var d=a("#events-nav");
+var c=d.hasClass("hide-mobile");
+if(c){d.removeClass("hide-mobile");
+b.removeClass("glyphicon-menu-hamburger").addClass("glyphicon-remove")
+}else{d.addClass("hide-mobile");
+b.removeClass("glyphicon-remove").addClass("glyphicon-menu-hamburger")
+}})
+});
+jQuery(function(a){a(window).load(function(){var c=a("#home").height();
+function d(){if(a(window).width()<=960){a("#home").css("height","auto")
+}else{a("#home").css("height",c+"px")
+}}d();
+var b=a(".sticky-nav").position().top;
+var e=function(){var f=a(window).scrollTop();
+if(f>b){a(".sticky-nav").addClass("sticky")
+}else{a(".sticky-nav").removeClass("sticky")
+}};
+if(a(".scroll_link")[0]){a(".scroll_link").find("a").addClass("sticky_link")
+}a(".sticky_link").unbind("click").click(function(){var g=a(".mobile-btn").find("span");
+var f=a("#events-nav");
+f.addClass("hide-mobile");
+g.removeClass("glyphicon-remove").addClass("glyphicon-menu-hamburger");
+var i=a(window).scrollTop();
+if(location.pathname.replace(/^\//,"")==this.pathname.replace(/^\//,"")&&location.hostname==this.hostname){var h=a(this.hash);
+h=h.length?h:a("[name="+this.hash.slice(1)+"]");
+if(h.length){if(i>b){a("html,body").animate({scrollTop:h.offset().top-90},1000)
+}else{a("html,body").animate({scrollTop:h.offset().top-a(".sticky-nav").height()},1000)
+}return false
+}}});
+a(window).scroll(function(){e()
+});
+a(window).resize(function(){d()
+})
+})
+});
+$(".moreClick").unbind("click").click(function(b){b.preventDefault();
+var a=$(this).find("span").hasClass("chevron");
+var c=$(this).hasClass(".chevron-down");
+if(a){$(this).find("span").addClass("chevron-down").html("Less");
+$(this).find("span").removeClass("chevron");
+$(this).next().slideToggle("fast")
+}else{$(this).find("span").addClass("chevron").html("More");
+$(this).find("span").removeClass("chevron-down");
+$(this).next().slideToggle("fast")
+}});
+jQuery(function(a){a(".table .tableAccordion, .table span").click(function(b){a(this).parents("td").find("p").slideToggle(400)
+});
+a(".table .tableAccordion").after('')
+});
+jQuery(function(d){function c(){d(".spitems .grid-item").click(function(e){d(this).find(".description").slideToggle(400)
+})
+}function b(){var e=0;
+d(".spitems li").each(function(){if(d(this).prev().length>0){if(d(this).position().top!=d(this).prev().position().top){return false
+}e++
+}else{e++
+}});
+d(".spitems").each(function(){var g=d(this).find("li");
[remainder of clientlibs-events.min.js, including a packed script payload, truncated by the mail archive]
[03/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg new file mode 100644 index 000..d05688e --- /dev/null +++ b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg @@ -0,0 +1,655 @@ [655 added lines of Font Awesome SVG glyph data; the markup was stripped by the archive renderer]
[01/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
Repository: hbase-site Updated Branches: refs/heads/asf-site fadf6d5a0 -> a90b1b57b http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/index.html -- diff --git a/www.hbasecon.com/index.html b/www.hbasecon.com/index.html new file mode 100644 index 000..c2bd569 --- /dev/null +++ b/www.hbasecon.com/index.html @@ -0,0 +1,2534 @@ + + + + + + + + + + + + + + + + + + + + + + + +http://assets.adobedtm.com/98a3a1c24ee5de4297f8ae77cf444e0c86ff2f04/satelliteLib-2b934fee5c4cb90dad47c223f80ea9c99e9761b2.js"> + + + + + + + + + + + + + + +HBaseCon + + + + + + +http://www.cloudera.com/content/dam/events/hbase/habase-card.jpg;> + + + + +http://www.hbasecon.com/"/> + +http://www.cloudera.com/content/dam/events/hbase/habase-card.jpg"/> + + + + + + + + + + + + + http://www.googletagmanager.com/ns.html?id=GTM-96SB; height="0" width="0" style="display:none;visibility:hidden"> + (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src='http://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);})(window,document,'script','dataLayer','GTM-96SB'); + + + + + + +#hbasecon + + + + + + + + + + About + + + Agenda + + + Speakers + + + Sponsors + + + Archives + + + + + + + + + + + + + + + + + +Thanks for attending! + + + + + + + + + + + + + + + + About + --- + + + + + + + + + + + HBaseCon (founded in 2012) is the premier conference for the http://hbase.apache.org/; target="_blank">Apache HBase community, including committers/contributors, developers, operators, learners, and users (including some of those managing the largest deployments in the world). If you run Apache HBase in production or aspire to do so, HBaseCon has no substitute! + +http://hbase.apache.org/;>Apache HBase is a native distributed data store for the Apache Hadoop ecosystem.
Its community works independently within the ASF to provide HBase software under the permissive Apache license. + + + + + + + + + + + + + + + + + + + + + + +HBASECON 2016 +San Francisco | May 24, 2016 + +The Village +969 Market St. +San Francisco, CA 94103 +http://www.969market.com/; target="_blank">http://www.969market.com + +Attendees have exclusive access to a 15% discount on 3-day developer training for HBase, May 25-27 in San Francisco! + +Note also: Attendees are invited to attend an http://www.meetup.com/hbaseusergroup/events/230547750/;>HBase meetup on HBaseCon eve (May 23) hosted by Splice Machine, and http://www.meetup.com/SF-Bay-Area-Apache-Phoenix-Meetup/events/230545182/;>PhoenixCon on May 25, hosted by Salesforce. + + + + + + + + + + + +https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d3153.320056750626!2d-122.41148028468223!3d37.78253847975811!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x80858085add7d365%3A0xfdd1b9af22d69c49!2sThe+Village!5e0!3m2!1sen!2sus!4v1452546730647; height="250" frameborder="0" style="border:0" allowfullscreen> + + + + + + + + + + + + + Program Committee + +All paper proposals are evaluated and selected by a diverse cross-section of the HBase community (thanks, PC!): + +Sean Busbey, Software Engineer, Cloudera / Apache HBase PMC +Elliott Clark, Engineer, Facebook / Apache HBase PMC +Lars Hofhansl, Architect, Salesforce.com / Apache HBase PMC +Matthew Hunt, Head of Open Source R&D, Bloomberg LP / Apache HBase Contributor +Francis Liu, Software Engineer, Yahoo!
/ Apache HBase Contributor +Carter Page, Senior Engineering Manager, Google Bigtable Team +Andrew Purtell, Architect, Salesforce.com / Apache HBase PMC Chair +Enis Söztutar, Member of Technical Staff, Hortonworks / Apache HBase PMC +Michael Stack, Software Engineer, Cloudera / Apache HBase PMC + + + + + + + + + + + + + + + + + + + + + + + + + + + + + AGENDA + --- + + + + + + +Time +Development Internals +Operations +Applications +7:30am-6:30pm +Registration Open +7:30am-8:30pm +Exhibits Open +7:30am-8:50am +Breakfast +8:00am-8:50am +Pre-conference Session: Apache HBase - Just the Basics + Jesse Anderson (Smoking Hand) +This early-morning session offers an overview of what HBase is, how it works, its API, and considerations for using HBase as part of a Big Data solution. It will be helpful for people who are new to HBase, and also serve as a refresher for those who may need one.
[04/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf new file mode 100644 index 000..1413fc6 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff new file mode 100644 index 000..9e61285 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2 -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2 b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2 new file mode 100644 index 000..64539b5 Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2 differ http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot 
b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot new file mode 100644 index 000..9b6afae Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot differ
[12/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/foundation/shared.min.js -- diff --git a/www.hbasecon.com/etc/clientlibs/foundation/shared.min.js b/www.hbasecon.com/etc/clientlibs/foundation/shared.min.js new file mode 100644 index 000..8847ce1 --- /dev/null +++ b/www.hbasecon.com/etc/clientlibs/foundation/shared.min.js @@ -0,0 +1,520 @@ +window._g=window._g||{}; +_g.shared={}; +if(window.console===undefined){window.console={log:function(a){}} +}_g.shared.HTTP=new function(){var createResponse=function(){var response=new Object(); +response.headers=new Object(); +response.body=new Object(); +return response +}; +var getResponseFromXhr=function(request){if(!request){return null +}var response=createResponse(); +response.body=request.responseText; +response.headers[_g.HTTP.HEADER_STATUS]=request.status; +response.responseText=request.responseText; +response.status=request.status; +return response +}; +return{EXTENSION_HTML:".html",EXTENSION_JSON:".json",EXTENSION_RES:".res",HEADER_STATUS:"Status",HEADER_MESSAGE:"Message",HEADER_LOCATION:"Location",HEADER_PATH:"Path",PARAM_NO_CACHE:"cq_ck",get:function(url,callback,scope,suppressForbiddenCheck){url=_g.HTTP.getXhrHookedURL(_g.HTTP.externalize(url,true)); +if(callback!=undefined){return _g.$.ajax({type:"GET",url:url,externalize:false,encodePath:false,hook:false,complete:function(request,textStatus){var response=getResponseFromXhr(request); +if(!suppressForbiddenCheck){_g.HTTP.handleForbidden(response) +}callback.call(scope||this,this,textStatus=="success",response) +}}) +}else{try{var request=_g.$.ajax({type:"GET",url:url,async:false,externalize:false,encodePath:false,hook:false}); +var response=getResponseFromXhr(request); +if(!suppressForbiddenCheck){_g.HTTP.handleForbidden(response) +}return response +}catch(e){return null +}}},post:function(url,callback,params,scope,suppressErrorMsg,suppressForbiddenCheck){url=_g.HTTP.externalize(url,true); +var 
hook=_g.HTTP.getXhrHook(url,"POST",params); +if(hook){url=hook.url; +params=hook.params +}if(callback!=undefined){return _g.$.ajax({type:"POST",url:url,data:params,externalize:false,encodePath:false,hook:false,complete:function(request,textStatus){var response=_g.HTTP.buildPostResponseFromHTML(request.responseText); +if(!suppressForbiddenCheck){_g.HTTP.handleForbidden(request) +}callback.call(scope||this,this,textStatus=="success",response) +}}) +}else{try{var request=_g.$.ajax({type:"POST",url:url,data:params,async:false,externalize:false,encodePath:false,hook:false}); +var response=_g.HTTP.buildPostResponseFromHTML(request.responseText); +if(!suppressForbiddenCheck){_g.HTTP.handleForbidden(request) +}return response +}catch(e){return null +}}},getParameter:function(url,name){var params=_g.HTTP.getParameters(url,name); +return params!=null?params[0]:null +},getParameters:function(url,name){var values=[]; +if(!name){return null +}name=encodeURIComponent(name); +if(url.indexOf("?")==-1){return null +}if(url.indexOf("#")!=-1){url=url.substring(0,url.indexOf("#")) +}var query=url.substring(url.indexOf("?")+1); +if(query.indexOf(name)==-1){return null +}var queryPts=query.split("&"); +for(var i=0; +i1?decodeURIComponent(paramPts[1]):"") +}}return values.length>0?values:null +},addParameter:function(url,name,value){if(value& instanceof Array){for(var i=0; +i
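The shared.min.js excerpt above ends mid-function: the loop body of `_g.HTTP.getParameters` was truncated by the archive renderer. A standalone sketch of what such a query-string lookup does, reconstructed from the visible shape of the helper rather than the library source:

```javascript
// Return every value of `name` in a URL's query string, or null if the
// parameter is absent -- mirroring the shape of _g.HTTP.getParameters
// in the minified helper above (whose loop body is truncated in this
// archive rendering).
function getParameters(url, name) {
  if (!name || url.indexOf("?") === -1) return null;
  // Ignore any fragment before parsing the query.
  var hash = url.indexOf("#");
  if (hash !== -1) url = url.substring(0, hash);
  var query = url.substring(url.indexOf("?") + 1);
  var values = [];
  var parts = query.split("&");
  for (var i = 0; i < parts.length; i++) {
    var kv = parts[i].split("=");
    if (kv[0] === encodeURIComponent(name)) {
      // A bare "?a" (no "=") yields the empty string, as in the helper.
      values.push(kv.length > 1 ? decodeURIComponent(kv[1]) : "");
    }
  }
  return values.length > 0 ? values : null;
}
```

The companion `getParameter` in the excerpt simply returns the first element of this list, or null.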
[09/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js b/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js new file mode 100644 index 000..6a27d62 --- /dev/null +++ b/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js @@ -0,0 +1,2692 @@ +/*! + * jQuery JavaScript Library v1.11.2 + * http://jquery.com/ + * + * Includes Sizzle.js + * http://sizzlejs.com/ + * + * Copyright 2005, 2014 jQuery Foundation, Inc. and other contributors + * Released under the MIT license + * http://jquery.org/license + * + * Date: 2014-12-17T15:27Z + */ +(function(b,a){if(typeof module==="object"& module.exports==="object"){module.exports=b.document?a(b,true):function(c){if(!c.document){throw new Error("jQuery requires a window with a document") +}return a(c) +} +}else{a(b) +}}(typeof window!=="undefined"?window:this,function(a4,au){var aO=[]; +var O=aO.slice; +var ay=aO.concat; +var w=aO.push; +var bT=aO.indexOf; +var ab={}; +var x=ab.toString; +var J=ab.hasOwnProperty; +var C={}; +var ah="1.11.2",bH=function(e,i){return new bH.fn.init(e,i) +},D=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,bR=/^-ms-/,aV=/-([\da-z])/gi,N=function(e,i){return i.toUpperCase() +}; +bH.fn=bH.prototype={jquery:ah,constructor:bH,selector:"",length:0,toArray:function(){return O.call(this) +},get:function(e){return e!=null?(e<0?this[e+this.length]:this[e]):O.call(this) +},pushStack:function(e){var i=bH.merge(this.constructor(),e); +i.prevObject=this; +i.context=this.context; +return i +},each:function(i,e){return bH.each(this,i,e) +},map:function(e){return this.pushStack(bH.map(this,function(b6,b5){return e.call(b6,b5,b6) +})) +},slice:function(){return this.pushStack(O.apply(this,arguments)) +},first:function(){return this.eq(0) +},last:function(){return this.eq(-1) +},eq:function(b6){var e=this.length,b5=+b6+(b6<0?e:0); +return this.pushStack(b5>=0&=0 +},isEmptyObject:function(i){var 
e; +for(e in i){return false +}return true +},isPlainObject:function(b6){var i; +if(!b6||bH.type(b6)!=="object"||b6.nodeType||bH.isWindow(b6)){return false +}try{if(b6.constructor&&!J.call(b6,"constructor")&&!J.call(b6.constructor.prototype,"isPrototypeOf")){return false +}}catch(b5){return false +}if(C.ownLast){for(i in b6){return J.call(b6,i) +}}for(i in b6){}return i===undefined||J.call(b6,i) +},type:function(e){if(e==null){return e+"" +}return typeof e==="object"||typeof e==="function"?ab[x.call(e)]||"object":typeof e +},globalEval:function(e){if(e&(e)){(a4.execScript||function(i){a4["eval"].call(a4,i) +})(e) +}},camelCase:function(e){return e.replace(bR,"ms-").replace(aV,N) +},nodeName:function(i,e){return i.nodeName&()===e.toLowerCase() +},each:function(b9,ca,b5){var b8,b6=0,b7=b9.length,e=ac(b9); +if(b5){if(e){for(; +b6
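The jQuery 1.11.2 excerpt above includes the internal `camelCase` helper, built from the two regular expressions `bR=/^-ms-/` and `aV=/-([\da-z])/gi`. Spelled out with descriptive names, the same conversion looks like this (a sketch of the documented behavior, not the library code itself):

```javascript
// Convert a CSS property name to its camelCase DOM equivalent, the way
// jQuery's internal camelCase helper does. The vendor prefix "-ms-" is
// first normalized to "ms-" so it camelCases to "msFoo" (lowercase m),
// matching the style property name Microsoft browsers actually expose.
var rmsPrefix = /^-ms-/;
var rdashAlpha = /-([\da-z])/gi;

function camelCase(prop) {
  return prop
    .replace(rmsPrefix, "ms-")
    .replace(rdashAlpha, function (all, letter) {
      return letter.toUpperCase();
    });
}
```

Other vendor prefixes are left alone, so `-webkit-` properties come out with a leading capital (`WebkitBoxShadow`).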
[13/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
Added hbasecon website at www.hbasecon.com Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/a90b1b57 Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/a90b1b57 Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/a90b1b57 Branch: refs/heads/master Commit: a90b1b57b0bf3ef0ede5aa2ec2fd3df2963386c3 Parents: fadf6d5 Author: Michael StackAuthored: Tue Mar 21 09:57:39 2017 -0700 Committer: Michael Stack Committed: Tue Mar 21 09:57:39 2017 -0700 -- .../content/dam/events/hbase/HBASE-Logo2.png| Bin 0 -> 15375 bytes .../content/dam/events/hbase/hbase-logo.png | Bin 0 -> 2825 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 162045 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 128433 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 161121 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 190008 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 314462 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 194292 bytes .../_jcr_content/renditions/original.jpg| Bin 0 -> 97268 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 169858 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 264076 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 260608 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 6155 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 6155 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 217927 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 182476 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 195797 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 220189 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 156549 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 181751 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 187323 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 168370 bytes .../_jcr_content/renditions/original.png| Bin 
0 -> 217023 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 149077 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 157860 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 98381 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 252256 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 268539 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 126054 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 192894 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 256721 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 247871 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 231762 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 299589 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 252018 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 264524 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 6155 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 197174 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 265809 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 248983 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 148835 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 264361 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 182234 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 274769 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 6155 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 6155 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 175988 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 238880 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 273858 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 202368 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 114702 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 268968 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 6155 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 159371 bytes 
.../_jcr_content/renditions/original.png| Bin 0 -> 254954 bytes .../_jcr_content/renditions/original.png| Bin 0 -> 219581 bytes .../_jcr_content/renditions/original.jpg| Bin 0 -> 33279 bytes .../_jcr_content/renditions/original.jpg| Bin 0 -> 24986 bytes .../_jcr_content/renditions/original.jpg| Bin 0 -> 27472
[05/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg -- diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg new file mode 100644 index 000..94fb549 --- /dev/null +++ b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.svg @@ -0,0 +1,288 @@ [288 added lines of Glyphicons Halflings SVG glyph data; the markup was stripped by the archive renderer]
[01/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
Repository: hbase-site Updated Branches: refs/heads/master [created] a90b1b57b (same www.hbasecon.com/index.html diff as in the asf-site posting above)
[11/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js -- diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js new file mode 100644 index 000..ff6bd09 --- /dev/null +++ b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui.min.js @@ -0,0 +1,4952 @@ +/*! jQuery UI - v1.9.2 - 2012-11-23 + * http://jqueryui.com + * Includes: jquery.ui.core.js, jquery.ui.widget.js, jquery.ui.mouse.js, jquery.ui.position.js, jquery.ui.accordion.js, jquery.ui.autocomplete.js, jquery.ui.button.js, jquery.ui.datepicker.js, jquery.ui.dialog.js, jquery.ui.draggable.js, jquery.ui.droppable.js, jquery.ui.effect.js, jquery.ui.effect-blind.js, jquery.ui.effect-bounce.js, jquery.ui.effect-clip.js, jquery.ui.effect-drop.js, jquery.ui.effect-explode.js, jquery.ui.effect-fade.js, jquery.ui.effect-fold.js, jquery.ui.effect-highlight.js, jquery.ui.effect-pulsate.js, jquery.ui.effect-scale.js, jquery.ui.effect-shake.js, jquery.ui.effect-slide.js, jquery.ui.effect-transfer.js, jquery.ui.menu.js, jquery.ui.progressbar.js, jquery.ui.resizable.js, jquery.ui.selectable.js, jquery.ui.slider.js, jquery.ui.sortable.js, jquery.ui.spinner.js, jquery.ui.tabs.js, jquery.ui.tooltip.js + * Copyright (c) 2012 jQuery Foundation and other contributors Licensed MIT */ +(function(b,f){var a=0,e=/^ui-id-\d+$/; +b.ui=b.ui||{}; +if(b.ui.version){return +}b.extend(b.ui,{version:"1.9.2",keyCode:{BACKSPACE:8,COMMA:188,DELETE:46,DOWN:40,END:35,ENTER:13,ESCAPE:27,HOME:36,LEFT:37,NUMPAD_ADD:107,NUMPAD_DECIMAL:110,NUMPAD_DIVIDE:111,NUMPAD_ENTER:108,NUMPAD_MULTIPLY:106,NUMPAD_SUBTRACT:109,PAGE_DOWN:34,PAGE_UP:33,PERIOD:190,RIGHT:39,SPACE:32,TAB:9,UP:38}}); +b.fn.extend({_focus:b.fn.focus,focus:function(g,h){return typeof g==="number"?this.each(function(){var i=this; +setTimeout(function(){b(i).focus(); +if(h){h.call(i) +}},g) +}):this._focus.apply(this,arguments) +},scrollParent:function(){var 
g; +if((b.ui.ie&&(/(static|relative)/).test(this.css("position")))||(/absolute/).test(this.css("position"))){g=this.parents().filter(function(){return(/(relative|absolute|fixed)/).test(b.css(this,"position"))&&(/(auto|scroll)/).test(b.css(this,"overflow")+b.css(this,"overflow-y")+b.css(this,"overflow-x")) +}).eq(0) +}else{g=this.parents().filter(function(){return(/(auto|scroll)/).test(b.css(this,"overflow")+b.css(this,"overflow-y")+b.css(this,"overflow-x")) +}).eq(0) +}return(/fixed/).test(this.css("position"))||!g.length?b(document):g +},zIndex:function(j){if(j!==f){return this.css("zIndex",j) +}if(this.length){var h=b(this[0]),g,i; +while(h.length&[0]!==document){g=h.css("position"); +if(g==="absolute"||g==="relative"||g==="fixed"){i=parseInt(h.css("zIndex"),10); +if(!isNaN(i)&!==0){return i +}}h=h.parent() +}}return 0 +},uniqueId:function(){return this.each(function(){if(!this.id){this.id="ui-id-"+(++a) +}}) +},removeUniqueId:function(){return this.each(function(){if(e.test(this.id)){b(this).removeAttr("id") +}}) +}}); +function d(i,g){var k,j,h,l=i.nodeName.toLowerCase(); +if("area"===l){k=i.parentNode; +j=k.name; +if(!i.href||!j||k.nodeName.toLowerCase()!=="map"){return false +}h=b("img[usemap=#"+j+"]")[0]; +return !!h&(h) +}return(/input|select|textarea|button|object/.test(l)?!i.disabled:"a"===l?i.href||g:g)&(i) +}function c(g){return b.expr.filters.visible(g)&&!b(g).parents().andSelf().filter(function(){return b.css(this,"visibility")==="hidden" +}).length +}b.extend(b.expr[":"],{data:b.expr.createPseudo?b.expr.createPseudo(function(g){return function(h){return !!b.data(h,g) +} +}):function(j,h,g){return !!b.data(j,g[3]) +},focusable:function(g){return d(g,!isNaN(b.attr(g,"tabindex"))) +},tabbable:function(i){var g=b.attr(i,"tabindex"),h=isNaN(g); +return(h||g>=0)&(i,!h) +}}); +b(function(){var g=document.body,h=g.appendChild(h=document.createElement("div")); +h.offsetHeight; +b.extend(h.style,{minHeight:"100px",height:"auto",padding:0,borderWidth:0}); 
+b.support.minHeight=h.offsetHeight===100; +b.support.selectstart="onselectstart" in h; +g.removeChild(h).style.display="none" +}); +if(!b("").outerWidth(1).jquery){b.each(["Width","Height"],function(j,g){var h=g==="Width"?["Left","Right"]:["Top","Bottom"],k=g.toLowerCase(),m={innerWidth:b.fn.innerWidth,innerHeight:b.fn.innerHeight,outerWidth:b.fn.outerWidth,outerHeight:b.fn.outerHeight}; +function l(o,n,i,p){b.each(h,function(){n-=parseFloat(b.css(o,"padding"+this))||0; +if(i){n-=parseFloat(b.css(o,"border"+this+"Width"))||0 +}if(p){n-=parseFloat(b.css(o,"margin"+this))||0 +}}); +return n +}b.fn["inner"+g]=function(i){if(i===f){return m["inner"+g].call(this) +}return this.each(function(){b(this).css(k,l(this,i)+"px") +}) +}; +b.fn["outer"+g]=function(i,n){if(typeof i!=="number"){return m["outer"+g].call(this,i) +}return this.each(function(){b(this).css(k,l(this,i,true,n)+"px") +}) +} +})
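The tail of the jQuery UI excerpt patches `innerWidth`/`outerWidth` (and the Height variants) so that a numeric argument sets the content size, using a helper that subtracts padding, and optionally border and margin, from both sides of a measured box. A minimal pure-function sketch of that subtraction; the function name and input shape are illustrative:

```javascript
// Recover a content-box size from a measured size by subtracting the
// per-side box extras, mirroring the subtraction jQuery UI performs in
// its patched inner/outer Width/Height setters. Inputs are plain pixel
// numbers; in the library they come from computed styles.
// `sides` is a two-element array (left/right or top/bottom), each entry
// an object with optional padding, border, and margin fields.
function contentSize(measured, sides, includeBorder, includeMargin) {
  return sides.reduce(function (size, side) {
    size -= side.padding || 0;          // padding is always subtracted
    if (includeBorder) size -= side.border || 0;
    if (includeMargin) size -= side.margin || 0;
    return size;
  }, measured);
}
```

For example, an element measured at 100px with 10px padding and 2px border on each side has a 76px content box.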
[08/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js b/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js
new file mode 100644
index 000..6933872
--- /dev/null
+++ b/www.hbasecon.com/etc/clientlibs/granite/jquery/granite.min.js
@@ -0,0 +1,92 @@
[minified Granite jQuery plugin code (ajax externalize/encodePath/hook setup, login-redirect handling on 403 "Authentication Failed", CSRF token refresh with a small promise helper) — content garbled in archiving, omitted]
[03/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg
new file mode 100644
index 000..d05688e
--- /dev/null
+++ b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.svg
@@ -0,0 +1,655 @@
[Font Awesome SVG webfont glyph definitions — markup stripped in archiving, omitted]
[10/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png
new file mode 100644
index 000..ad3d634
Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_55_fbf9ee_1x400.png.pagespeed.ce.-PRVjguS_y.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png
new file mode 100644
index 000..5a46b47
Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_dadada_1x400.png.pagespeed.ce.wSxlENrT6_.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png
new file mode 100644
index 000..86c2baa
Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_75_e6e6e6_1x400.png.pagespeed.ce.9CVDVsKoya.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png
new file mode 100644
index 000..4443fdc
Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/ui-bg_glass_95_fef1ec_1x400.png.pagespeed.ce.Wjvi2P_4Mk.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png
new file mode 100644
index 000..c274437
Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_0_aa_40x100.png.pagespeed.ic.OJEVLzghNv.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png
new file mode 100644
index 000..0ac04b8
Binary files /dev/null and b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_flat_75_ff_40x100.png.pagespeed.ic.-frxtVxQm5.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_glass_65_ff_1x400.png.pagespeed.ic.26lRrG9HKV.png
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_glass_65_ff_1x400.png.pagespeed.ic.26lRrG9HKV.png b/www.hbasecon.com/etc/clientlibs/granite/jquery-ui/css/images/xui-bg_glass_65_ff_1x400.png.pagespeed.ic.26lRrG9HKV.png
new file mode 100644
index 000..6a436ad
Binary files /dev/null and
[06/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png
new file mode 100644
index 000..d5acba5
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/scrollup.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png
new file mode 100644
index 000..9d2fc28
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xminus_sm.png.pagespeed.ic.dQt-uO8m7u.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png
new file mode 100644
index 000..77a8b31
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xplus_sm.png.pagespeed.ic.omhsxIbQPH.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png
new file mode 100644
index 000..8ae34f8
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_minus_sm.png.pagespeed.ic.MWIDXMJpsO.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png
new file mode 100644
index 000..bcd100a
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xred_plus_sm.png.pagespeed.ic.wFWUDoADCr.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png
new file mode 100644
index 000..8ce0acb
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_minus_sm.png.pagespeed.ic.xzzjHf20pJ.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png
new file mode 100644
index 000..cf1420d
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/css/assets/icons/xwhite_plus_sm.png.pagespeed.ic.jqf_V18Myb.png differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstarp/glyphicons-halflings-regulard41d.html
--
diff --git
[09/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js
--
diff --git a/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js b/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js
new file mode 100644
index 000..6a27d62
--- /dev/null
+++ b/www.hbasecon.com/etc/clientlibs/granite/jquery.min.js
@@ -0,0 +1,2692 @@
+/*!
+ * jQuery JavaScript Library v1.11.2
+ * http://jquery.com/
+ *
+ * Includes Sizzle.js
+ * http://sizzlejs.com/
+ *
+ * Copyright 2005, 2014 jQuery Foundation, Inc. and other contributors
+ * Released under the MIT license
+ * http://jquery.org/license
+ *
+ * Date: 2014-12-17T15:27Z
+ */
[minified jQuery 1.11.2 library body — content garbled in archiving, omitted]
[04/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf
new file mode 100644
index 000..1413fc6
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.ttf differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff
new file mode 100644
index 000..9e61285
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2 b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2
new file mode 100644
index 000..64539b5
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/bootstrap/glyphicons-halflings-regular.woff2 differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot
new file mode 100644
index 000..9b6afae
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.eot differ
[07/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js b/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js
new file mode 100644
index 000..fb9c7fe
--- /dev/null
+++ b/www.hbasecon.com/etc/designs/sites/clientlibs-events.min.js
@@ -0,0 +1,3776 @@
[minified site JavaScript (mobile menu toggle, sticky navigation, smooth-scroll anchor links, "More"/"Less" accordions, table row expanders, speaker-grid click handlers, packed third-party widget code) — content garbled in archiving, omitted]
[02/13] hbase-site git commit: Added hbasecon website at www.hbasecon.com
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf
new file mode 100644
index 000..26dea79
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.ttf differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff
new file mode 100644
index 000..dc35ce3
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2 b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2
new file mode 100644
index 000..500e517
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfont3295.woff2 differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot
new file mode 100644
index 000..9b6afae
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/fa-fonts/fontawesome-webfontd41d.eot differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf
new file mode 100644
index 000..bbc0ec2
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/wrangle-fonts/Carnevalee-Freakshow.ttf differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot
new file mode 100644
index 000..4aefb80
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.eot differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff
new file mode 100644
index 000..9666811
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Light.woff differ

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/a90b1b57/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot
--
diff --git a/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot
new file mode 100644
index 000..4aefb80
Binary files /dev/null and b/www.hbasecon.com/etc/designs/sites/clientlibs-events/fonts/www_fonts/CalibreWeb-Lightd41d.eot differ
[41/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
index 5868fbe..34f4e86 100644
[generated Javadoc diff — the "use of TableName" summary table reorders the getName() rows for Table, AsyncTableBase, AsyncTableRegionLocator, BufferedMutator, and RegionLocator, and updates the RSGroupInfo.getTables() description from "Set of tables that are members of this group" to "Get set of tables that are members of the group."]

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html b/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html
index 9178688..b106144 100644
[generated Javadoc diff — the ConnectionFactory class description is reflowed, and the return type of the three createAsyncConnection(...) overloads changes from AsyncConnection to CompletableFuture<AsyncConnection>; the class Javadoc example is unchanged:

  Connection connection = ConnectionFactory.createConnection(config);
  Table table = connection.getTable(TableName.valueOf("table1"));
]
[46/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/CellUtil.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/CellUtil.html b/apidocs/org/apache/hadoop/hbase/CellUtil.html
index 22c98a3..1c43554 100644
--- a/apidocs/org/apache/hadoop/hbase/CellUtil.html
+++ b/apidocs/org/apache/hadoop/hbase/CellUtil.html

Re-rendered Javadoc for the CellUtil.createCellScanner overloads, with generics restored:

- public static CellScanner createCellScanner(List<? extends CellScannable> cellScannerables)
- public static CellScanner createCellScanner(Iterable<Cell> cellIterable)
- public static CellScanner createCellScanner(Iterator<Cell> cells)
- public static CellScanner createCellScanner(Cell[] cellArray)
- public static CellScanner createCellScanner(NavigableMap<byte[], List<Cell>> map)
  Flatten the map of cells out under the CellScanner.

and for the row/family comparison helpers:

- @Deprecated public static boolean matchingRow(Cell left, Cell right)
  Deprecated: as of release 2.0.0, this will be removed in HBase 3.0.0. Instead use matchingRows(Cell, Cell).
- public static boolean matchingRow(Cell left, byte[] buf)
- public static boolean matchingRow(Cell left, byte[] buf, int offset, int length)
- public static boolean matchingFamily(Cell left, Cell right)
- public static boolean matchingFamily(Cell left, byte[] buf)
- public static boolean matchingFamily(Cell left, byte[] buf,
[39/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/Result.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Result.html b/apidocs/org/apache/hadoop/hbase/client/Result.html
index f3966fc..86a7536 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Result.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Result.html

@InterfaceAudience.Public
@InterfaceStability.Stable
public class Result extends Object implements CellScannable, CellScanner
Single row result of a Get or Scan query.

Changes in this regeneration, with generics restored:

- create(List<Cell> cells, Boolean exists, boolean stale, boolean partial): the last parameter is renamed to mayHaveMoreCellsInRow.
- static Result createCompleteResult(List<Result> partialResults) now takes Iterable<Result> partialResults.
  Forms a single result from the partial results in the partialResults list.
- boolean mayHaveMoreCellsInRow(): the description now reads "For scanning large rows, the RS may choose to return the cells chunk by chunk to prevent OOM or timeout."

Members re-rendered without signature changes:

- public static final Result EMPTY_RESULT
- public Result()
  Creates an empty Result w/ no KeyValue payload; returns null if you call rawCells(). Use this to represent no results if null won't do, or in the old 'mapred' (as opposed to 'mapreduce') package MapReduce where you need to overwrite a Result instance.
- public static Result create(List<Cell> cells)
  Instantiate a Result with the specified List of KeyValues. Note: you must ensure that the keyvalues are already sorted.
- public static Result create(List<Cell> cells, Boolean exists)
- public static Result create(List<Cell> cells, Boolean exists, boolean stale)
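createCompleteResult pairs with mayHaveMoreCellsInRow() above: when the region server returns one large row chunk by chunk, the client stitches the partial Results back into a complete one. A minimal self-contained sketch of that stitching, representing each cell as a plain String — PartialResultAssembler is hypothetical, not HBase API:

```java
import java.util.ArrayList;
import java.util.List;

// Stitches per-row chunks back into one complete row, in the spirit of
// Result.createCompleteResult(Iterable<Result>). Each inner list is one
// chunk of cells; chunks arrive in order and cells are sorted per chunk.
public class PartialResultAssembler {
    public static List<String> assemble(Iterable<List<String>> chunks) {
        List<String> complete = new ArrayList<>();
        for (List<String> chunk : chunks) {
            complete.addAll(chunk); // append each chunk's cells in arrival order
        }
        return complete;
    }
}
```

Taking Iterable rather than List (as the signature change above does) lets callers stream chunks in without materializing them all first.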
[30/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/overview-tree.html
--
diff --git a/apidocs/overview-tree.html b/apidocs/overview-tree.html
index 9e87724..332eb81 100644
--- a/apidocs/overview-tree.html
+++ b/apidocs/overview-tree.html

New entries in the class hierarchy:

+ org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+ org.apache.hadoop.hbase.client.RawScanResultConsumer.ScanController
+ org.apache.hadoop.hbase.client.RawScanResultConsumer.ScanResumer

The remaining hunks only reorder existing Enum subclasses within the tree: KeepDeletedCells, MemoryCompactionPolicy, ProcedureState, the filter enums (RegexStringComparator.EngineType, Filter.ReturnCode, BitComparator.BitwiseOp, FilterList.Operator, CompareFilter.CompareOp), Order, DataBlockEncoding, the client enums (SnapshotType, Scan.ReadType, MasterSwitchType, CompactType, CompactionState, IsolationLevel, RequestController.ReturnCode, Durability, Consistency, MobCompactPartitionPolicy, SecurityCapability), BloomType, and the quotas enums (ThrottlingException.Type, ThrottleType, QuotaType, QuotaScope).
[18/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
index c248583..cf497d3 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html

The re-rendered source (lines 143 onward), with generics restored:

    private long maxResultSize = -1;
    private boolean cacheBlocks = true;
    private boolean reversed = false;
    private Map<byte[], NavigableSet<byte[]>> familyMap =
        new TreeMap<byte[], NavigableSet<byte[]>>(Bytes.BYTES_COMPARATOR);
    private Boolean asyncPrefetch = null;

    /**
     * Parameter name for the client scanner sync/async prefetch toggle. When using the async
     * scanner, prefetching data from the server is done in the background. The parameter
     * currently has no effect if the user has set Scan#setSmall or Scan#setReversed.
     */
    public static final String HBASE_CLIENT_SCANNER_ASYNC_PREFETCH =
        "hbase.client.scanner.async.prefetch";

    /** Default value of {@link #HBASE_CLIENT_SCANNER_ASYNC_PREFETCH}. */
    public static final boolean DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH = false;

    /**
     * Set it true for a small scan to get better performance. A small scan should use pread;
     * a big scan can use seek + read. Seek + read is fast but can cause two problems:
     * (1) resource contention and (2) too much network io. [89-fb] uses pread for
     * non-compaction read requests (https://issues.apache.org/jira/browse/HBASE-7266).
     * On the other hand, setting it true does openScanner, next, closeScanner in one RPC
     * call, which means better performance for a small scan [HBASE-9488]. Generally, if the
     * scan range is within one data block (64KB), it can be considered a small scan.
     */
    private boolean small = false;

    /**
     * The mvcc read point to use when opening a scanner. Remember to clear it after switching
     * regions, as the mvcc is only valid within region scope.
     */
    private long mvccReadPoint = -1L;

    /**
     * The number of rows we want for this scan. We will terminate the scan if the number of
     * returned rows reaches this value.
     */
    private int limit = -1;

    /** Control whether to use pread at server side. */
    private ReadType readType = ReadType.DEFAULT;

    /** Create a Scan operation across all rows. */
    public Scan() {}

    /** @deprecated use {@code new Scan().withStartRow(startRow).setFilter(filter)} instead. */
    @Deprecated
    public Scan(byte[] startRow, Filter filter) {
      this(startRow);
      this.filter = filter;
    }

    /**
     * Create a Scan operation starting at the specified row. If the specified row does not
     * exist, the Scanner will start from the next closest row after the specified row.
     * @param startRow row to start scanner at or after
     * @deprecated use {@code new Scan().withStartRow(startRow)} instead.
     */
    @Deprecated
    public Scan(byte[] startRow) {
      setStartRow(startRow);
    }

    /**
     * Create a Scan operation for the range of rows specified.
     * @param startRow row to start scanner at or after (inclusive)
     * @param stopRow row to stop scanner before (exclusive)
     * @deprecated use {@code new Scan().withStartRow(startRow).withStopRow(stopRow)} instead.
     */
    @Deprecated
    public Scan(byte[] startRow, byte[] stopRow) {
      setStartRow(startRow);
      setStopRow(stopRow);
    }

    /**
     * Creates a new instance of this class while copying all values.
     * @param scan The scan instance to copy from.
     * @throws IOException When copying the values fails.
     */
    public Scan(Scan scan) throws IOException {
      startRow = scan.getStartRow();
      includeStartRow = scan.includeStartRow();
      stopRow = scan.getStopRow();
      includeStopRow = scan.includeStopRow();
      maxVersions = scan.getMaxVersions();
      batch = scan.getBatch();
      storeLimit = scan.getMaxResultsPerColumnFamily();
      storeOffset = scan.getRowOffsetPerColumnFamily();
      caching = scan.getCaching();
      maxResultSize = scan.getMaxResultSize();
      cacheBlocks = scan.getCacheBlocks();
      filter = scan.getFilter(); // clone?
      loadColumnFamiliesOnDemand = scan.getLoadColumnFamiliesOnDemandValue();
      consistency = scan.getConsistency();
      this.setIsolationLevel(scan.getIsolationLevel());
      reversed
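The deprecated Scan(byte[], byte[]) constructors above give way to the fluent withStartRow/withStopRow style. A self-contained sketch of that fluent-builder shape — MiniScan is a hypothetical stand-in over String row keys, not the real Scan class:

```java
// Fluent range specification in the style that replaces the deprecated
// Scan(byte[] startRow, byte[] stopRow) constructors: each with* setter
// returns this, so calls chain.
public class MiniScan {
    private String startRow = ""; // inclusive lower bound
    private String stopRow = "";  // exclusive upper bound

    public MiniScan withStartRow(String startRow) {
        this.startRow = startRow;
        return this;
    }

    public MiniScan withStopRow(String stopRow) {
        this.stopRow = stopRow;
        return this;
    }

    /** Render the half-open scan range [startRow, stopRow). */
    public String range() {
        return "[" + startRow + ", " + stopRow + ")";
    }
}
```

Returning `this` from each setter is what makes `new Scan().withStartRow(a).withStopRow(b)` read as a single expression where the old constructors took positional arguments.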
[26/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html b/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
index 528a7e6..8553b81 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/LocalHBaseCluster.html

The re-rendered source, with generics restored:

    @InterfaceStability.Evolving
    public class LocalHBaseCluster {
      private static final Log LOG = LogFactory.getLog(LocalHBaseCluster.class);
      private final List<JVMClusterUtil.MasterThread> masterThreads =
          new CopyOnWriteArrayList<JVMClusterUtil.MasterThread>();
      private final List<JVMClusterUtil.RegionServerThread> regionThreads =
          new CopyOnWriteArrayList<JVMClusterUtil.RegionServerThread>();
      private final static int DEFAULT_NO = 1;
      /** local mode */
      public static final String LOCAL = "local";
      /** 'local:' */
      public static final String LOCAL_COLON = LOCAL + ":";
      private final Configuration conf;
      private final Class<? extends HMaster> masterClass;
      private final Class<? extends HRegionServer> regionServerClass;

      public LocalHBaseCluster(final Configuration conf) throws IOException {
        this(conf, DEFAULT_NO);
      }

      /**
       * @param conf Configuration to use. Post construction has the master's address.
       * @param noRegionServers Count of regionservers to start.
       */
      public LocalHBaseCluster(final Configuration conf, final int noRegionServers)
          throws IOException {
        this(conf, 1, noRegionServers, getMasterImplementation(conf),
            getRegionServerImplementation(conf));
      }

      /**
       * @param conf Configuration to use. Post construction has the active master address.
       * @param noMasters Count of masters to start.
       * @param noRegionServers Count of regionservers to start.
       */
      public LocalHBaseCluster(final Configuration conf, final int noMasters,
          final int noRegionServers) throws IOException {
        this(conf, noMasters, noRegionServers, getMasterImplementation(conf),
            getRegionServerImplementation(conf));
      }

      @SuppressWarnings("unchecked")
      private static Class<? extends HRegionServer> getRegionServerImplementation(final Configuration conf) {
        return (Class<? extends HRegionServer>) conf.getClass(HConstants.REGION_SERVER_IMPL,
            HRegionServer.class);
      }

      @SuppressWarnings("unchecked")
      private static Class<? extends HMaster> getMasterImplementation(final Configuration conf) {
        return (Class<? extends HMaster>) conf.getClass(HConstants.MASTER_IMPL,
            HMaster.class);
      }

      @SuppressWarnings("unchecked")
      public LocalHBaseCluster(final Configuration conf, final int noMasters,
          final int noRegionServers, final Class<? extends HMaster> masterClass,
          final Class<? extends HRegionServer> regionServerClass) throws IOException {
        this.conf = conf;

        // Always have masters and regionservers come up on port '0' so we don't
        // clash over default ports.
        conf.set(HConstants.MASTER_PORT, "0");
        conf.set(HConstants.REGIONSERVER_PORT, "0");
        if (conf.getInt(HConstants.REGIONSERVER_INFO_PORT, 0) != -1) {
          conf.set(HConstants.REGIONSERVER_INFO_PORT, "0");
        }

        this.masterClass = (Class<? extends HMaster>)
            conf.getClass(HConstants.MASTER_IMPL, masterClass);
        // Start the HMasters.
        for (int i = 0; i < noMasters; i++) {
          addMaster(new Configuration(conf), i);
        }
        // Start the HRegionServers.
        this.regionServerClass =
            (Class<? extends HRegionServer>) conf.getClass(HConstants.REGION_SERVER_IMPL,
                regionServerClass);

        for (int i = 0; i < noRegionServers; i++) {
          addRegionServer(new Configuration(conf), i);
        }
      }

      public JVMClusterUtil.RegionServerThread addRegionServer()
          throws IOException {
        return addRegionServer(new Configuration(conf), this.regionThreads.size());
      }
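LocalHBaseCluster sets every port to "0" so the OS assigns a free ephemeral port and concurrently started clusters never clash over the default ports. The same trick in plain Java, sketched with a hypothetical helper:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

// Binding to port 0 asks the OS to pick any free ephemeral port; the actual
// port is then read back from the socket. This is the mechanism behind
// LocalHBaseCluster's conf.set(MASTER_PORT, "0") / REGIONSERVER_PORT "0".
public class EphemeralPort {
    public static int pickFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) { // 0 = OS chooses
            return socket.getLocalPort();                 // the port it chose
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The trade-off is that the chosen port must be discovered after startup (here via getLocalPort(); in the cluster case via the post-construction configuration), since it is not known in advance.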
[49/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index e946682..322bb10 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,24 +5,24 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.6, based on Prawn 1.2.1)
 /Producer (Apache HBase Team)
-/CreationDate (D:20170217144817+00'00')
-/ModDate (D:20170217144817+00'00')
+/CreationDate (D:20170321142325+00'00')
+/ModDate (D:20170321142325+00'00')
 >> endobj
 2 0 obj
 << /Type /Catalog
 /Pages 3 0 R
 /Names 25 0 R
-/Outlines 4044 0 R
-/PageLabels 4252 0 R
+/Outlines 4042 0 R
+/PageLabels 4250 0 R
 /PageMode /UseOutlines
 /ViewerPreferences [/FitWindow]
 >> endobj
 3 0 obj
 << /Type /Pages
-/Count 675
-/Kids [ (regenerated array of several hundred indirect page-object references) ]
[37/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
index 64ed51a..76d7733 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html

Re-rendered "Use" page for Delete; the Table and AsyncTableBase rows swap order. With generics restored, the listed methods are:

- boolean Table.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete)
  Atomically checks if a row/family/qualifier value matches the expected value.
- default CompletableFuture<Boolean> AsyncTableBase.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete)
  Atomically checks if a row/family/qualifier value equals the expected value.
- boolean Table.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete)
- CompletableFuture<Boolean> AsyncTableBase.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete)
- void Table.delete(Delete delete) and CompletableFuture<Void> AsyncTableBase.delete(Delete delete)
  Deletes the specified cells/row.
- void Table.delete(List<Delete> deletes) and List<CompletableFuture<Void>> AsyncTableBase.delete(List<Delete> deletes)
  Deletes the specified cells/rows in bulk.
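Table.checkAndDelete returns a plain boolean while AsyncTableBase's counterpart wraps the same answer in a CompletableFuture<Boolean>. A self-contained sketch of the compare-then-delete semantics over a plain map — CheckAndDelete is hypothetical, not the HBase API; the real operation runs server-side and is atomic per row:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Delete the row only if its current value matches the expected one,
// mirroring the sync (boolean) and async (CompletableFuture<Boolean>)
// flavours of checkAndDelete.
public class CheckAndDelete {
    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    public void put(String row, String value) {
        store.put(row, value);
    }

    /** Synchronous flavour: true if the row matched and was deleted. */
    public boolean checkAndDelete(String row, String expected) {
        return store.remove(row, expected); // atomic compare-and-remove
    }

    /** Async flavour: the same result, delivered via a future. */
    public CompletableFuture<Boolean> checkAndDeleteAsync(String row, String expected) {
        return CompletableFuture.supplyAsync(() -> checkAndDelete(row, expected));
    }
}
```

The ConcurrentMap.remove(key, value) call carries the whole compare-and-delete in one atomic step, which is the property the HBase operation guarantees per row.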
[29/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html b/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
index b55dbd3..400d699 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/CellUtil.html

The hunk removes TagRewriteCell.heapOverhead() (the override that returned the wrapped cell's heapOverhead() plus HEAP_SIZE_OVERHEAD, plus ClassSize.ARRAY when tags != null) and renumbers the lines that follow. The surviving code:

    @Override
    public Cell deepClone() {
      Cell clonedBaseCell = ((ExtendedCell) this.cell).deepClone();
      return new TagRewriteCell(clonedBaseCell, this.tags);
    }
  }

  @InterfaceAudience.Private
  private static class TagRewriteByteBufferCell extends ByteBufferCell implements ExtendedCell {

    protected ByteBufferCell cell;
    protected byte[] tags;
    private static final long HEAP_SIZE_OVERHEAD = ClassSize.OBJECT + 2 * ClassSize.REFERENCE;

    /**
     * @param cell the original ByteBufferCell which it rewrites
     * @param tags the tags bytes; the array is supposed to contain the tag bytes alone
     */
    public TagRewriteByteBufferCell(ByteBufferCell cell, byte[] tags) {
      assert cell instanceof ExtendedCell;
      assert tags != null;
      this.cell = cell;
      this.tags = tags;
      // tag offset will be treated as 0 and length this.tags.length
      if (this.cell instanceof TagRewriteByteBufferCell) {
        // Cleaning the ref so that the byte[] can be GCed
        ((TagRewriteByteBufferCell) this.cell).tags = null;
      }
    }

followed by the simple delegating overrides (getRowArray() through getValueLength(), each forwarding to this.cell) and the tag accessors: getTagsArray() returning this.tags, getTagsOffset() returning 0.
[19/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html b/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
index d9a4a9a..943ce05 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/ResultScanner.html
@@ -35,96 +35,102 @@
027
028import org.apache.hadoop.hbase.classification.InterfaceAudience;
029import org.apache.hadoop.hbase.classification.InterfaceStability;
-030
-031/**
-032 * Interface for client-side scanning. Go to {@link Table} to obtain instances.
-033 */
-034@InterfaceAudience.Public
-035@InterfaceStability.Stable
-036public interface ResultScanner extends Closeable, Iterable<Result> {
-037
-038  @Override
-039  default Iterator<Result> iterator() {
-040    return new Iterator<Result>() {
-041      // The next RowResult, possibly pre-read
-042      Result next = null;
-043
-044      // return true if there is another item pending, false if there isn't.
-045      // this method is where the actual advancing takes place, but you need
-046      // to call next() to consume it. hasNext() will only advance if there
-047      // isn't a pending next().
-048      @Override
-049      public boolean hasNext() {
-050        if (next != null) {
-051          return true;
-052        }
-053        try {
-054          return (next = ResultScanner.this.next()) != null;
-055        } catch (IOException e) {
-056          throw new UncheckedIOException(e);
-057        }
-058      }
-059
-060      // get the pending next item and advance the iterator. returns null if
-061      // there is no next item.
-062      @Override
-063      public Result next() {
-064        // since hasNext() does the real advancing, we call this to determine
-065        // if there is a next before proceeding.
-066        if (!hasNext()) {
-067          return null;
-068        }
-069
-070        // if we get to here, then hasNext() has given us an item to return.
-071        // we want to return the item and then null out the next pointer, so
-072        // we use a temporary variable.
-073        Result temp = next;
-074        next = null;
-075        return temp;
-076      }
-077    };
-078  }
-079
-080  /**
-081   * Grab the next row's worth of values. The scanner will return a Result.
-082   * @return Result object if there is another row, null if the scanner is exhausted.
-083   * @throws IOException e
-084   */
-085  Result next() throws IOException;
-086
-087  /**
-088   * Get nbRows rows. How many RPCs are made is determined by the {@link Scan#setCaching(int)}
-089   * setting (or hbase.client.scanner.caching in hbase-site.xml).
-090   * @param nbRows number of rows to return
-091   * @return Between zero and nbRows rowResults. Scan is done if returned array is of zero-length
-092   *         (We never return null).
-093   * @throws IOException
-094   */
-095  default Result[] next(int nbRows) throws IOException {
-096    List<Result> resultSets = new ArrayList<>(nbRows);
-097    for (int i = 0; i < nbRows; i++) {
-098      Result next = next();
-099      if (next != null) {
-100        resultSets.add(next);
-101      } else {
-102        break;
-103      }
-104    }
-105    return resultSets.toArray(new Result[0]);
-106  }
-107
-108  /**
-109   * Closes the scanner and releases any resources it has allocated
-110   */
-111  @Override
-112  void close();
-113
-114  /**
-115   * Allow the client to renew the scanner's lease on the server.
-116   * @return true if the lease was successfully renewed, false otherwise.
-117   */
-118  boolean renewLease();
-119}
+030import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
+031
+032/**
+033 * Interface for client-side scanning. Go to {@link Table} to obtain instances.
+034 */
+035@InterfaceAudience.Public
+036@InterfaceStability.Stable
+037public interface ResultScanner extends Closeable, Iterable<Result> {
+038
+039  @Override
+040  default Iterator<Result> iterator() {
+041    return new Iterator<Result>() {
+042      // The next RowResult, possibly pre-read
+043      Result next = null;
+044
+045      // return true if there is another item pending, false if there isn't.
+046      // this method is where the actual advancing takes place, but you need
+047      // to call next() to consume it. hasNext() will only advance if there
+048      // isn't a pending next().
+049      @Override
+050      public boolean hasNext() {
+051        if (next != null) {
+052          return true;
+053        }
+054        try {
+055          return (next = ResultScanner.this.next()) != null;
+056        } catch (IOException e) {
+057          throw new UncheckedIOException(e);
+058        }
+059      }
+060
+061      // get the pending next item
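The diff above carries ResultScanner's default iterator(), which pre-reads one element inside hasNext() so that a null return from next() can signal exhaustion. A standalone sketch of that same pre-read pattern over a plain queue (the class and method names here are illustrative, not HBase API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Queue;

public class PreReadIterator {

  // Drain a source that signals exhaustion by returning null, the way
  // ResultScanner.next() does, using the pre-read hasNext()/next() pattern.
  static List<String> drain() {
    Queue<String> rows = new ArrayDeque<>(List.of("r1", "r2", "r3"));

    Iterator<String> it = new Iterator<String>() {
      String next = null; // the next item, possibly pre-read

      @Override
      public boolean hasNext() {
        // the actual advancing happens here; it only advances when no
        // pending pre-read item is waiting to be consumed
        return next != null || (next = rows.poll()) != null;
      }

      @Override
      public String next() {
        if (!hasNext()) {
          return null; // mirrors the scanner's null-at-end convention
        }
        String temp = next;
        next = null; // clear so the following hasNext() advances again
        return temp;
      }
    };

    List<String> out = new ArrayList<>();
    while (it.hasNext()) {
      out.add(it.next());
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(drain()); // [r1, r2, r3]
  }
}
```

The point of the pattern is that the underlying source has no cheap "is there more?" probe, so hasNext() must actually fetch and stash the next item.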
[44/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html -- diff --git a/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html b/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html index 9fe40e4..24c8277 100644 --- a/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html +++ b/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html @@ -854,7 +854,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl SPLIT_POLICY -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String SPLIT_POLICY +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String SPLIT_POLICY See Also: Constant Field Values @@ -867,7 +867,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl MAX_FILESIZE -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MAX_FILESIZE +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MAX_FILESIZE INTERNAL Used by HBase Shell interface to access this metadata attribute which denotes the maximum size of the store file after which a region split occurs @@ -884,7 +884,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl OWNER -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String OWNER +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String OWNER See Also: Constant Field Values @@ -897,7 +897,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl OWNER_KEY -public static 
finalBytes OWNER_KEY +public static finalBytes OWNER_KEY @@ -906,7 +906,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl READONLY -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String READONLY +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String READONLY INTERNAL Used by rest interface to access this metadata attribute which denotes if the table is Read Only @@ -922,7 +922,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl COMPACTION_ENABLED -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String COMPACTION_ENABLED +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String COMPACTION_ENABLED INTERNAL Used by HBase Shell interface to access this metadata attribute which denotes if the table is compaction enabled @@ -938,7 +938,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl MEMSTORE_FLUSHSIZE -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MEMSTORE_FLUSHSIZE +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MEMSTORE_FLUSHSIZE INTERNAL Used by HBase Shell interface to access this metadata attribute which represents the maximum size of the memstore after which its contents are flushed onto the disk @@ -955,7 +955,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl FLUSH_POLICY -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in 
java.lang">String FLUSH_POLICY +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String FLUSH_POLICY See Also: Constant Field Values @@ -968,7 +968,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl IS_ROOT -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String IS_ROOT +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String IS_ROOT INTERNAL Used by rest interface to access this metadata attribute which denotes if the table is a -ROOT- region or not @@ -984,7 +984,7 @@ implements
[42/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/class-use/Cell.html -- diff --git a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html index 4f61fb9..fa5c933 100644 --- a/apidocs/org/apache/hadoop/hbase/class-use/Cell.html +++ b/apidocs/org/apache/hadoop/hbase/class-use/Cell.html @@ -1182,25 +1182,25 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. Result.create(http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellcells, http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true; title="class or interface in java.lang">Booleanexists, booleanstale, - booleanpartial) + booleanmayHaveMoreCellsInRow) -Append -Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) +Delete +Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) -Mutation -Mutation.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) -Method for setting the put's familyMap - +Append +Append.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in 
java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) Put Put.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) -Delete -Delete.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) +Mutation +Mutation.setFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) +Method for setting the put's familyMap + Increment @@ -1222,66 +1222,66 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. Cell -ColumnPrefixFilter.getNextCellHint(Cellcell) +ColumnPaginationFilter.getNextCellHint(Cellcell) Cell -MultipleColumnPrefixFilter.getNextCellHint(Cellcell) +ColumnRangeFilter.getNextCellHint(Cellcell) +Cell +FuzzyRowFilter.getNextCellHint(CellcurrentCell) + + abstract Cell Filter.getNextCellHint(CellcurrentCell) If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is the next key it must seek to. 
- -Cell -FilterList.getNextCellHint(CellcurrentCell) - Cell -MultiRowRangeFilter.getNextCellHint(CellcurrentKV) +FilterList.getNextCellHint(CellcurrentCell) Cell -FuzzyRowFilter.getNextCellHint(CellcurrentCell) +MultipleColumnPrefixFilter.getNextCellHint(Cellcell) Cell -ColumnRangeFilter.getNextCellHint(Cellcell) +TimestampsFilter.getNextCellHint(CellcurrentCell) +Pick the next cell that the scanner should seek to. + Cell -ColumnPaginationFilter.getNextCellHint(Cellcell) +ColumnPrefixFilter.getNextCellHint(Cellcell) Cell -TimestampsFilter.getNextCellHint(CellcurrentCell) -Pick the next cell that the scanner should seek to. - +MultiRowRangeFilter.getNextCellHint(CellcurrentKV) -Cell -KeyOnlyFilter.transformCell(Cellcell) +abstract Cell +Filter.transformCell(Cellv) +Give the filter a chance to transform the passed KeyValue. + Cell -WhileMatchFilter.transformCell(Cellv) +FilterList.transformCell(Cellc) Cell -SkipFilter.transformCell(Cellv) +WhileMatchFilter.transformCell(Cellv) -abstract Cell -Filter.transformCell(Cellv) -Give the filter a chance to transform the passed KeyValue. - +Cell +KeyOnlyFilter.transformCell(Cellcell) Cell
[45/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html -- diff --git a/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html b/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html index 5af1bed..3c146ea 100644 --- a/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html +++ b/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html @@ -1033,10 +1033,13 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl BLOCKSIZE -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String BLOCKSIZE +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String BLOCKSIZE Size of storefile/hfile 'blocks'. Default is DEFAULT_BLOCKSIZE. Use smaller block sizes for faster random-access at expense of larger - indices (more memory consumption). + indices (more memory consumption). Note that this is a soft limit and that + blocks have overhead (metadata, CRCs) so blocks will tend to be the size + specified here and then some; i.e. don't expect that setting BLOCKSIZE=4k + means hbase data will align with an SSDs 4k page accesses (TODO). 
See Also: Constant Field Values @@ -1049,7 +1052,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl LENGTH -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String LENGTH +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String LENGTH See Also: Constant Field Values @@ -1062,7 +1065,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl TTL -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String TTL +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String TTL See Also: Constant Field Values @@ -1075,7 +1078,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl BLOOMFILTER -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String BLOOMFILTER +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String BLOOMFILTER See Also: Constant Field Values @@ -1088,7 +1091,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl FOREVER -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String FOREVER +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String FOREVER See Also: Constant Field Values @@ -1101,7 +1104,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl REPLICATION_SCOPE -public static 
finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String REPLICATION_SCOPE +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String REPLICATION_SCOPE See Also: Constant Field Values @@ -1114,7 +1117,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl REPLICATION_SCOPE_BYTES -public static finalbyte[] REPLICATION_SCOPE_BYTES +public static finalbyte[] REPLICATION_SCOPE_BYTES @@ -1123,7 +1126,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl MIN_VERSIONS -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MIN_VERSIONS +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MIN_VERSIONS See Also: Constant Field Values @@ -1136,7 +1139,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl KEEP_DELETED_CELLS -public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String KEEP_DELETED_CELLS +public static finalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String KEEP_DELETED_CELLS Retain all cells across flushes and compactions even if they fall behind a delete tombstone. To see all retained
[33/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html -- diff --git a/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html b/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html index ec3330a..67361c3 100644 --- a/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html +++ b/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html @@ -115,7 +115,7 @@ var activeTableTab = "activeTableTab"; @InterfaceAudience.Public @InterfaceStability.Stable -public class RemoteHTable +public class RemoteHTable extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true; title="class or interface in java.lang">Object implements Table HTable interface to remote tables accessed via REST gateway @@ -559,7 +559,7 @@ implements RemoteHTable -publicRemoteHTable(Clientclient, +publicRemoteHTable(Clientclient, http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringname) Constructor @@ -570,7 +570,7 @@ implements RemoteHTable -publicRemoteHTable(Clientclient, +publicRemoteHTable(Clientclient, org.apache.hadoop.conf.Configurationconf, http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringname) Constructor @@ -582,7 +582,7 @@ implements RemoteHTable -publicRemoteHTable(Clientclient, +publicRemoteHTable(Clientclient, org.apache.hadoop.conf.Configurationconf, byte[]name) Constructor @@ -602,7 +602,7 @@ implements buildRowSpec -protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">StringbuildRowSpec(byte[]row, +protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">StringbuildRowSpec(byte[]row, http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true; 
title="class or interface in java.util">MapfamilyMap, longstartTime, longendTime, @@ -615,7 +615,7 @@ implements buildMultiRowSpec -protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">StringbuildMultiRowSpec(byte[][]rows, +protectedhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">StringbuildMultiRowSpec(byte[][]rows, intmaxVersions) @@ -625,7 +625,7 @@ implements buildResultFromModel -protectedResult[]buildResultFromModel(org.apache.hadoop.hbase.rest.model.CellSetModelmodel) +protectedResult[]buildResultFromModel(org.apache.hadoop.hbase.rest.model.CellSetModelmodel) @@ -634,7 +634,7 @@ implements buildModelFromPut -protectedorg.apache.hadoop.hbase.rest.model.CellSetModelbuildModelFromPut(Putput) +protectedorg.apache.hadoop.hbase.rest.model.CellSetModelbuildModelFromPut(Putput) @@ -643,7 +643,7 @@ implements getTableName -publicbyte[]getTableName() +publicbyte[]getTableName() @@ -652,7 +652,7 @@ implements getName -publicTableNamegetName() +publicTableNamegetName() Description copied from interface:Table Gets the fully qualified table name instance of this table. @@ -667,7 +667,7 @@ implements getConfiguration -publicorg.apache.hadoop.conf.ConfigurationgetConfiguration() +publicorg.apache.hadoop.conf.ConfigurationgetConfiguration() Description copied from interface:Table Returns the Configuration object used by this instance. @@ -685,7 +685,7 @@ implements getTableDescriptor -publicHTableDescriptorgetTableDescriptor() +publicHTableDescriptorgetTableDescriptor() throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException Description copied from interface:Table Gets the table descriptor for this table. 
@@ -703,7 +703,7 @@ implements close -publicvoidclose() +publicvoidclose() throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException Description copied from interface:Table Releases any resources held or pending changes in internal buffers. @@ -725,7 +725,7 @@ implements get -publicResultget(Getget) +publicResultget(Getget) throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException Description copied from interface:Table Extracts certain cells from a given row. @@ -749,7 +749,7 @@ implements
[24/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
index 2a44859..05a7741 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Get.html
@@ -28,64 +28,64 @@
020
021
022import java.io.IOException;
-023import java.util.ArrayList;
-024import java.util.HashMap;
-025import java.util.List;
-026import java.util.Map;
-027import java.util.NavigableSet;
-028import java.util.Set;
-029import java.util.TreeMap;
-030import java.util.TreeSet;
-031
-032import org.apache.commons.logging.Log;
-033import org.apache.commons.logging.LogFactory;
-034import org.apache.hadoop.hbase.HConstants;
-035import org.apache.hadoop.hbase.classification.InterfaceAudience;
-036import org.apache.hadoop.hbase.classification.InterfaceStability;
-037import org.apache.hadoop.hbase.filter.Filter;
-038import org.apache.hadoop.hbase.io.TimeRange;
-039import org.apache.hadoop.hbase.security.access.Permission;
-040import org.apache.hadoop.hbase.security.visibility.Authorizations;
-041import org.apache.hadoop.hbase.util.Bytes;
-042
-043/**
-044 * Used to perform Get operations on a single row.
-045 * <p>
-046 * To get everything for a row, instantiate a Get object with the row to get.
-047 * To further narrow the scope of what to Get, use the methods below.
-048 * <p>
-049 * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
-050 * for each family to retrieve.
-051 * <p>
-052 * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
-053 * for each column to retrieve.
-054 * <p>
-055 * To only retrieve columns within a specific range of version timestamps,
-056 * execute {@link #setTimeRange(long, long) setTimeRange}.
-057 * <p>
-058 * To only retrieve columns with a specific timestamp, execute
-059 * {@link #setTimeStamp(long) setTimestamp}.
-060 * <p>
-061 * To limit the number of versions of each column to be returned, execute
-062 * {@link #setMaxVersions(int) setMaxVersions}.
-063 * <p>
-064 * To add a filter, call {@link #setFilter(Filter) setFilter}.
-065 */
-066@InterfaceAudience.Public
-067@InterfaceStability.Stable
-068public class Get extends Query
-069  implements Row, Comparable<Row> {
-070  private static final Log LOG = LogFactory.getLog(Get.class);
-071
-072  private byte [] row = null;
-073  private int maxVersions = 1;
-074  private boolean cacheBlocks = true;
-075  private int storeLimit = -1;
-076  private int storeOffset = 0;
-077  private boolean checkExistenceOnly = false;
-078  private boolean closestRowBefore = false;
-079  private Map<byte [], NavigableSet<byte []>> familyMap =
-080    new TreeMap<byte [], NavigableSet<byte []>>(Bytes.BYTES_COMPARATOR);
+023import java.nio.ByteBuffer;
+024import java.util.ArrayList;
+025import java.util.HashMap;
+026import java.util.List;
+027import java.util.Map;
+028import java.util.NavigableSet;
+029import java.util.Set;
+030import java.util.TreeMap;
+031import java.util.TreeSet;
+032
+033import org.apache.commons.logging.Log;
+034import org.apache.commons.logging.LogFactory;
+035import org.apache.hadoop.hbase.HConstants;
+036import org.apache.hadoop.hbase.classification.InterfaceAudience;
+037import org.apache.hadoop.hbase.classification.InterfaceStability;
+038import org.apache.hadoop.hbase.filter.Filter;
+039import org.apache.hadoop.hbase.io.TimeRange;
+040import org.apache.hadoop.hbase.security.access.Permission;
+041import org.apache.hadoop.hbase.security.visibility.Authorizations;
+042import org.apache.hadoop.hbase.util.Bytes;
+043
+044/**
+045 * Used to perform Get operations on a single row.
+046 * <p>
+047 * To get everything for a row, instantiate a Get object with the row to get.
+048 * To further narrow the scope of what to Get, use the methods below.
+049 * <p>
+050 * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
+051 * for each family to retrieve.
+052 * <p>
+053 * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
+054 * for each column to retrieve.
+055 * <p>
+056 * To only retrieve columns within a specific range of version timestamps,
+057 * execute {@link #setTimeRange(long, long) setTimeRange}.
+058 * <p>
+059 * To only retrieve columns with a specific timestamp, execute
+060 * {@link #setTimeStamp(long) setTimestamp}.
+061 * <p>
+062 * To limit the number of versions of each column to be returned, execute
+063 * {@link #setMaxVersions(int) setMaxVersions}.
+064 * <p>
+065 * To add a filter, call {@link #setFilter(Filter) setFilter}.
+066 */
+067@InterfaceAudience.Public
+068@InterfaceStability.Stable
+069public class Get extends Query
+070  implements Row, Comparable<Row> {
+071  private static
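Get's familyMap field in the diff above (a TreeMap keyed by family byte[], built with Bytes.BYTES_COMPARATOR) only works because a byte[]-aware comparator is supplied; raw byte[] keys would otherwise compare by identity. A self-contained sketch of that structure, with a hand-rolled unsigned lexicographic comparator standing in for Bytes.BYTES_COMPARATOR and simplified addColumn/addFamily methods that only mimic (not reproduce) the real Get API:

```java
import java.util.Comparator;
import java.util.NavigableSet;
import java.util.TreeMap;
import java.util.TreeSet;

public class FamilyMapSketch {
  // Stand-in for org.apache.hadoop.hbase.util.Bytes.BYTES_COMPARATOR:
  // unsigned lexicographic ordering of byte arrays.
  static final Comparator<byte[]> BYTES = (a, b) -> {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int c = (a[i] & 0xFF) - (b[i] & 0xFF);
      if (c != 0) return c;
    }
    return a.length - b.length;
  };

  final TreeMap<byte[], NavigableSet<byte[]>> familyMap = new TreeMap<>(BYTES);

  // Mimics Get.addColumn(family, qualifier): track qualifiers per family.
  void addColumn(byte[] family, byte[] qualifier) {
    familyMap.computeIfAbsent(family, f -> new TreeSet<>(BYTES)).add(qualifier);
  }

  public static void main(String[] args) {
    FamilyMapSketch g = new FamilyMapSketch();
    // distinct byte[] instances with equal contents hit the same entry,
    // because the TreeMap compares contents via the comparator
    g.addColumn("cf".getBytes(), "q1".getBytes());
    g.addColumn("cf".getBytes(), "q2".getBytes());
    System.out.println(g.familyMap.get("cf".getBytes()).size()); // 2
  }
}
```

The design choice to show: content-based ordering of byte[] keys has to come from the comparator, since arrays do not override equals/hashCode.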
[28/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html b/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
index 568793e..63ebc7f 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/ChoreService.html
@@ -142,8 +142,8 @@
   }

   scheduler.setRemoveOnCancelPolicy(true);
-  scheduledChores = new HashMap<ScheduledChore, ScheduledFuture<?>>();
-  choresMissingStartTime = new HashMap<ScheduledChore, Boolean>();
+  scheduledChores = new HashMap<>();
+  choresMissingStartTime = new HashMap<>();
 }

 /**
@@ -356,7 +356,7 @@
 }

 private void cancelAllChores(final boolean mayInterruptIfRunning) {
-  ArrayList<ScheduledChore> choresToCancel = new ArrayList<ScheduledChore>(scheduledChores.keySet().size());
+  ArrayList<ScheduledChore> choresToCancel = new ArrayList<>(scheduledChores.keySet().size());
   // Build list of chores to cancel so we can iterate through a set that won't change
   // as chores are cancelled. If we tried to cancel each chore while iterating through
   // keySet the results would be undefined because the keySet would be changing
@@ -373,7 +373,7 @@
  * Prints a summary of important details about the chore. Used for debugging purposes
  */
 private void printChoreDetails(final String header, ScheduledChore chore) {
-  LinkedHashMap<String, String> output = new LinkedHashMap<String, String>();
+  LinkedHashMap<String, String> output = new LinkedHashMap<>();
   output.put(header, "");
   output.put("Chore name: ", chore.getName());
   output.put("Chore period: ", Integer.toString(chore.getPeriod()));
@@ -388,7 +388,7 @@
  * Prints a summary of important details about the service. Used for debugging purposes
  */
 private void printChoreServiceDetails(final String header) {
-  LinkedHashMap<String, String> output = new LinkedHashMap<String, String>();
+  LinkedHashMap<String, String> output = new LinkedHashMap<>();
   output.put(header, "");
   output.put("ChoreService corePoolSize: ", Integer.toString(getCorePoolSize()));
   output.put("ChoreService scheduledChores: ", Integer.toString(getNumberOfScheduledChores()));

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html b/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
index 3044d47..2fae44e 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HBaseInterfaceAudience.html
@@ -43,17 +43,19 @@
 public static final String COPROC = "Coprocesssor";
 public static final String REPLICATION = "Replication";
 public static final String PHOENIX = "Phoenix";
-/**
- * Denotes class names that appear in user facing configuration files.
- */
-public static final String CONFIG = "Configuration";
-
-/**
- * Denotes classes used as tools (Used from cmd line). Usually, the compatibility is required
- * for class name, and arguments.
- */
-public static final String TOOLS = "Tools";
-}
+public static final String SPARK = "Spark";
+
+/**
+ * Denotes class names that appear in user facing configuration files.
+ */
+public static final String CONFIG = "Configuration";
+
+/**
+ * Denotes classes used as tools (Used from cmd line). Usually, the compatibility is required
+ * for class name, and arguments.
+ */
+public static final String TOOLS = "Tools";
+}

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html b/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
index 791cb88..a881d8a 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/HColumnDescriptor.html
@@ -111,202 +111,202 @@
 /**
  * Size of storefile/hfile 'blocks'. Default is {@link #DEFAULT_BLOCKSIZE}.
  * Use smaller block sizes for faster random-access at expense of larger
- * indices (more memory consumption).
- */
-public static final String BLOCKSIZE = "BLOCKSIZE";
-
-public static final String LENGTH = "LENGTH";
-public static final String TTL = "TTL";
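The ChoreService hunks above swap constructors that repeat the generic type arguments for the Java 7 diamond operator. A minimal sketch of the two forms, with illustrative class and variable names rather than HBase's:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class DiamondDemo {
    // Pre-Java-7 style: type arguments repeated on the right-hand side.
    static Map<String, Integer> explicit() {
        return new HashMap<String, Integer>();
    }

    // Java 7+ diamond operator: the compiler infers the type arguments
    // from the declared type of the target.
    static Map<String, Integer> diamond() {
        return new HashMap<>();
    }

    public static void main(String[] args) {
        Map<String, Integer> a = explicit();
        Map<String, Integer> b = diamond();
        a.put("chores", 1);
        b.put("chores", 1);
        if (!a.equals(b)) throw new AssertionError("both forms behave identically");
        // LinkedHashMap keeps insertion order with either form.
        LinkedHashMap<String, String> output = new LinkedHashMap<>();
        output.put("header", "");
        output.put("Chore name: ", "demo");
        if (output.size() != 2) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The two forms compile to identical bytecode; the diamond simply removes the redundancy.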
[13/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
index 3e217b2..99a73b9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.Context.html
@@ -541,56 +541,55 @@
 }
 }

-static final Map<Pair<String,String>,KeyProvider> keyProviderCache =
-  new ConcurrentHashMap<Pair<String,String>,KeyProvider>();
-
-public static KeyProvider getKeyProvider(Configuration conf) {
-  String providerClassName = conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY,
-    KeyStoreKeyProvider.class.getName());
-  String providerParameters = conf.get(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "");
-  try {
-    Pair<String,String> providerCacheKey = new Pair<String,String>(providerClassName,
-      providerParameters);
-    KeyProvider provider = keyProviderCache.get(providerCacheKey);
-    if (provider != null) {
-      return provider;
-    }
-    provider = (KeyProvider) ReflectionUtils.newInstance(
-      getClassLoaderForClass(KeyProvider.class).loadClass(providerClassName),
-      conf);
-    provider.init(providerParameters);
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Installed " + providerClassName + " into key provider cache");
-    }
-    keyProviderCache.put(providerCacheKey, provider);
-    return provider;
-  } catch (Exception e) {
-    throw new RuntimeException(e);
-  }
-}
-
-public static void incrementIv(byte[] iv) {
-  incrementIv(iv, 1);
-}
-
-public static void incrementIv(byte[] iv, int v) {
-  int length = iv.length;
-  boolean carry = true;
-  // TODO: Optimize for v > 1, e.g. 16, 32
-  do {
-    for (int i = 0; i < length; i++) {
-      if (carry) {
-        iv[i] = (byte) ((iv[i] + 1) & 0xFF);
-        carry = 0 == iv[i];
-      } else {
-        break;
-      }
-    }
-    v--;
-  } while (v > 0);
-}
-
-}
+static final Map<Pair<String,String>,KeyProvider> keyProviderCache = new ConcurrentHashMap<>();
+
+public static KeyProvider getKeyProvider(Configuration conf) {
+  String providerClassName = conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY,
+    KeyStoreKeyProvider.class.getName());
+  String providerParameters = conf.get(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "");
+  try {
+    Pair<String,String> providerCacheKey = new Pair<>(providerClassName,
+      providerParameters);
+    KeyProvider provider = keyProviderCache.get(providerCacheKey);
+    if (provider != null) {
+      return provider;
+    }
+    provider = (KeyProvider) ReflectionUtils.newInstance(
+      getClassLoaderForClass(KeyProvider.class).loadClass(providerClassName),
+      conf);
+    provider.init(providerParameters);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Installed " + providerClassName + " into key provider cache");
+    }
+    keyProviderCache.put(providerCacheKey, provider);
+    return provider;
+  } catch (Exception e) {
+    throw new RuntimeException(e);
+  }
+}
+
+public static void incrementIv(byte[] iv) {
+  incrementIv(iv, 1);
+}
+
+public static void incrementIv(byte[] iv, int v) {
+  int length = iv.length;
+  boolean carry = true;
+  // TODO: Optimize for v > 1, e.g. 16, 32
+  do {
+    for (int i = 0; i < length; i++) {
+      if (carry) {
+        iv[i] = (byte) ((iv[i] + 1) & 0xFF);
+        carry = 0 == iv[i];
+      } else {
+        break;
+      }
+    }
+    v--;
+  } while (v > 0);
+}
+
+}

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
index 3e217b2..99a73b9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/io/crypto/Encryption.html
@@ -541,56 +541,55 @@
 }
 }

-static final Map<Pair<String,String>,KeyProvider> keyProviderCache =
-  new ConcurrentHashMap<Pair<String,String>,KeyProvider>();
-
-public static KeyProvider getKeyProvider(Configuration conf) {
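The `incrementIv` method in the Encryption diff above treats the IV as a little-endian counter and ripples a carry upward through the bytes. A standalone sketch of that logic (the harness names are mine; unlike the hunk, the carry flag here is reset on every pass of the outer loop so that `v > 1` performs `v` full increments):

```java
public class IvDemo {
    // Increment a byte[] IV v times, treating it as a little-endian
    // counter: byte 0 is least significant, and a wrap to zero carries
    // into the next byte.
    static void incrementIv(byte[] iv, int v) {
        int length = iv.length;
        do {
            boolean carry = true;
            for (int i = 0; i < length; i++) {
                if (carry) {
                    iv[i] = (byte) ((iv[i] + 1) & 0xFF);
                    carry = 0 == iv[i]; // carry only if this byte wrapped
                } else {
                    break;
                }
            }
            v--;
        } while (v > 0);
    }

    public static void main(String[] args) {
        byte[] iv = new byte[] { (byte) 0xFF, 0x00 };
        incrementIv(iv, 1);
        // byte 0 wraps 0xFF -> 0x00, carry sets byte 1 to 0x01
        if (iv[0] != 0 || iv[1] != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```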
[14/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
index e5afc32..32c4a50 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.html
@@ -124,309 +124,309 @@
 } else {
   if (range.contains(rowArr, offset, length)) {
     currentReturnCode = ReturnCode.INCLUDE;
-  } else currentReturnCode = ReturnCode.SEEK_NEXT_USING_HINT;
-}
-} else {
-  currentReturnCode = ReturnCode.INCLUDE;
-}
-return false;
-}
-
-@Override
-public ReturnCode filterKeyValue(Cell ignored) {
-  return currentReturnCode;
-}
-
-@Override
-public Cell getNextCellHint(Cell currentKV) {
-  // skip to the next range's start row
-  return CellUtil.createFirstOnRow(range.startRow, 0,
-    (short) range.startRow.length);
-}
-
-/**
- * @return The filter serialized using pb
- */
-public byte[] toByteArray() {
-  FilterProtos.MultiRowRangeFilter.Builder builder = FilterProtos.MultiRowRangeFilter
-    .newBuilder();
-  for (RowRange range : rangeList) {
-    if (range != null) {
-      FilterProtos.RowRange.Builder rangebuilder = FilterProtos.RowRange.newBuilder();
-      if (range.startRow != null)
-        rangebuilder.setStartRow(UnsafeByteOperations.unsafeWrap(range.startRow));
-      rangebuilder.setStartRowInclusive(range.startRowInclusive);
-      if (range.stopRow != null)
-        rangebuilder.setStopRow(UnsafeByteOperations.unsafeWrap(range.stopRow));
-      rangebuilder.setStopRowInclusive(range.stopRowInclusive);
-      range.isScan = Bytes.equals(range.startRow, range.stopRow) ? 1 : 0;
-      builder.addRowRangeList(rangebuilder.build());
-    }
-  }
-  return builder.build().toByteArray();
-}
-
-/**
- * @param pbBytes A pb serialized instance
- * @return An instance of MultiRowRangeFilter
- * @throws org.apache.hadoop.hbase.exceptions.DeserializationException
- */
-public static MultiRowRangeFilter parseFrom(final byte[] pbBytes)
-throws DeserializationException {
-  FilterProtos.MultiRowRangeFilter proto;
-  try {
-    proto = FilterProtos.MultiRowRangeFilter.parseFrom(pbBytes);
-  } catch (InvalidProtocolBufferException e) {
-    throw new DeserializationException(e);
-  }
-  int length = proto.getRowRangeListCount();
-  List<FilterProtos.RowRange> rangeProtos = proto.getRowRangeListList();
-  List<RowRange> rangeList = new ArrayList<RowRange>(length);
-  for (FilterProtos.RowRange rangeProto : rangeProtos) {
-    RowRange range = new RowRange(rangeProto.hasStartRow() ? rangeProto.getStartRow()
-      .toByteArray() : null, rangeProto.getStartRowInclusive(), rangeProto.hasStopRow() ?
-      rangeProto.getStopRow().toByteArray() : null, rangeProto.getStopRowInclusive());
-    rangeList.add(range);
-  }
-  return new MultiRowRangeFilter(rangeList);
-}
-
-/**
- * @param o the filter to compare
- * @return true if and only if the fields of the filter that are serialized are equal to the
- * corresponding fields in other. Used for testing.
- */
-boolean areSerializedFieldsEqual(Filter o) {
-  if (o == this)
-    return true;
-  if (!(o instanceof MultiRowRangeFilter))
-    return false;
-
-  MultiRowRangeFilter other = (MultiRowRangeFilter) o;
-  if (this.rangeList.size() != other.rangeList.size())
-    return false;
-  for (int i = 0; i < rangeList.size(); ++i) {
-    RowRange thisRange = this.rangeList.get(i);
-    RowRange otherRange = other.rangeList.get(i);
-    if (!(Bytes.equals(thisRange.startRow, otherRange.startRow) && Bytes.equals(
-      thisRange.stopRow, otherRange.stopRow) && (thisRange.startRowInclusive ==
-      otherRange.startRowInclusive) && (thisRange.stopRowInclusive ==
-      otherRange.stopRowInclusive))) {
-      return false;
-    }
-  }
-  return true;
-}
-
-/**
- * calculate the position where the row key in the ranges list.
- *
- * @param rowKey the row key to calculate
- * @return index the position of the row key
- */
-private int getNextRangeIndex(byte[] rowKey) {
-  RowRange temp = new RowRange(rowKey, true, null, true);
-  int index = Collections.binarySearch(rangeList, temp);
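`getNextRangeIndex` in the MultiRowRangeFilter diff leans on `Collections.binarySearch`, which returns `-(insertionPoint) - 1` on a miss. A simplified sketch of that lookup, with int ranges standing in for the `byte[]` row keys HBase actually compares (all names here are illustrative):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RangeLookupDemo {
    // Stand-in for the filter's sorted, non-overlapping row ranges.
    static class Range implements Comparable<Range> {
        final int start, stop;
        Range(int start, int stop) { this.start = start; this.stop = stop; }
        @Override
        public int compareTo(Range o) { return Integer.compare(start, o.start); }
        boolean contains(int k) { return k >= start && k < stop; }
    }

    // On a miss, binarySearch yields the insertion point; the key may still
    // fall inside the range just before it, otherwise the range at the
    // insertion point is the next one to seek to.
    static int nextRangeIndex(List<Range> ranges, int key) {
        int index = Collections.binarySearch(ranges, new Range(key, key));
        if (index >= 0) return index;              // key is exactly a start row
        int insertion = -index - 1;
        if (insertion > 0 && ranges.get(insertion - 1).contains(key)) {
            return insertion - 1;                  // key lies inside previous range
        }
        return insertion;                          // key precedes this range
    }

    public static void main(String[] args) {
        List<Range> ranges = Arrays.asList(new Range(10, 20), new Range(40, 50));
        if (nextRangeIndex(ranges, 15) != 0) throw new AssertionError();
        if (nextRangeIndex(ranges, 25) != 1) throw new AssertionError();
        if (nextRangeIndex(ranges, 40) != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```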
[16/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html b/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
index 3e67195..3a0b315 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/filter/FuzzyRowFilter.html
@@ -91,7 +91,7 @@
 p = fuzzyKeysData.get(i);
 if (p.getFirst().length != p.getSecond().length) {
   Pair<String, String> readable =
-    new Pair<String, String>(Bytes.toStringBinary(p.getFirst()), Bytes.toStringBinary(p
+    new Pair<>(Bytes.toStringBinary(p.getFirst()), Bytes.toStringBinary(p
     .getSecond()));
   throw new IllegalArgumentException("Fuzzy pair lengths do not match: " + readable);
 }
@@ -199,440 +199,439 @@
 private boolean initialized = false;

 RowTracker() {
-  nextRows =
-    new PriorityQueue<Pair<byte[], Pair<byte[], byte[]>>>(fuzzyKeysData.size(),
-      new Comparator<Pair<byte[], Pair<byte[], byte[]>>>() {
-        @Override
-        public int compare(Pair<byte[], Pair<byte[], byte[]>> o1,
-            Pair<byte[], Pair<byte[], byte[]>> o2) {
-          return isReversed()? Bytes.compareTo(o2.getFirst(), o1.getFirst()):
-            Bytes.compareTo(o1.getFirst(), o2.getFirst());
-        }
-      });
-}
-
-byte[] nextRow() {
-  if (nextRows.isEmpty()) {
-    throw new IllegalStateException(
-      "NextRows should not be empty, make sure to call nextRow() after updateTracker() return true");
-  } else {
-    return nextRows.peek().getFirst();
-  }
-}
-
-boolean updateTracker(Cell currentCell) {
-  if (!initialized) {
-    for (Pair<byte[], byte[]> fuzzyData : fuzzyKeysData) {
-      updateWith(currentCell, fuzzyData);
-    }
-    initialized = true;
-  } else {
-    while (!nextRows.isEmpty() && !lessThan(currentCell, nextRows.peek().getFirst())) {
-      Pair<byte[], Pair<byte[], byte[]>> head = nextRows.poll();
-      Pair<byte[], byte[]> fuzzyData = head.getSecond();
-      updateWith(currentCell, fuzzyData);
-    }
-  }
-  return !nextRows.isEmpty();
-}
-
-boolean lessThan(Cell currentCell, byte[] nextRowKey) {
-  int compareResult =
-    CellComparator.COMPARATOR.compareRows(currentCell, nextRowKey, 0, nextRowKey.length);
-  return (!isReversed() && compareResult < 0) || (isReversed() && compareResult > 0);
-}
-
-void updateWith(Cell currentCell, Pair<byte[], byte[]> fuzzyData) {
-  byte[] nextRowKeyCandidate =
-    getNextForFuzzyRule(isReversed(), currentCell.getRowArray(), currentCell.getRowOffset(),
-      currentCell.getRowLength(), fuzzyData.getFirst(), fuzzyData.getSecond());
-  if (nextRowKeyCandidate != null) {
-    nextRows.add(new Pair<byte[], Pair<byte[], byte[]>>(nextRowKeyCandidate, fuzzyData));
-  }
-}
-
-}
-
-@Override
-public boolean filterAllRemaining() {
-  return done;
-}
-
-/**
- * @return The filter serialized using pb
- */
-public byte[] toByteArray() {
-  FilterProtos.FuzzyRowFilter.Builder builder = FilterProtos.FuzzyRowFilter.newBuilder();
-  for (Pair<byte[], byte[]> fuzzyData : fuzzyKeysData) {
-    BytesBytesPair.Builder bbpBuilder = BytesBytesPair.newBuilder();
-    bbpBuilder.setFirst(UnsafeByteOperations.unsafeWrap(fuzzyData.getFirst()));
-    bbpBuilder.setSecond(UnsafeByteOperations.unsafeWrap(fuzzyData.getSecond()));
-    builder.addFuzzyKeysData(bbpBuilder);
-  }
-  return builder.build().toByteArray();
-}
-
-/**
- * @param pbBytes A pb serialized {@link FuzzyRowFilter} instance
- * @return An instance of {@link FuzzyRowFilter} made from <code>bytes</code>
- * @throws DeserializationException
- * @see #toByteArray
- */
-public static FuzzyRowFilter parseFrom(final byte[] pbBytes) throws DeserializationException {
-  FilterProtos.FuzzyRowFilter proto;
-  try {
-    proto = FilterProtos.FuzzyRowFilter.parseFrom(pbBytes);
-  } catch (InvalidProtocolBufferException e) {
-    throw new DeserializationException(e);
-  }
-  int count = proto.getFuzzyKeysDataCount();
-  ArrayList<Pair<byte[], byte[]>> fuzzyKeysData = new ArrayList<Pair<byte[], byte[]>>(count);
-  for (int i = 0; i < count; ++i) {
-    BytesBytesPair current = proto.getFuzzyKeysData(i);
-    byte[]
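The FuzzyRowFilter `RowTracker` above keeps candidate next-row keys in a `PriorityQueue` whose comparator flips for reverse scans, so the scan hint is always at the head. A toy version with `String` keys standing in for the `byte[]` rows HBase compares:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class RowTrackerDemo {
    public static void main(String[] args) {
        // Forward scans want the smallest candidate first; reverse scans
        // would flip the comparison, as RowTracker's comparator does.
        final boolean reversed = false;
        PriorityQueue<String> nextRows = new PriorityQueue<>(
            new Comparator<String>() {
                @Override
                public int compare(String o1, String o2) {
                    return reversed ? o2.compareTo(o1) : o1.compareTo(o2);
                }
            });
        nextRows.add("row-30");
        nextRows.add("row-10");
        nextRows.add("row-20");
        // The hint (peek) is the smallest remaining candidate.
        if (!"row-10".equals(nextRows.peek())) throw new AssertionError();
        nextRows.poll();
        if (!"row-20".equals(nextRows.peek())) throw new AssertionError();
        System.out.println("ok");
    }
}
```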
[22/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
index 610fff9..c58b6a61 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Mutation.html
@@ -100,484 +100,482 @@
 protected Durability durability = Durability.USE_DEFAULT;

 // A Map sorted by column family.
-protected NavigableMap<byte [], List<Cell>> familyMap =
-  new TreeMap<byte [], List<Cell>>(Bytes.BYTES_COMPARATOR);
-
-@Override
-public CellScanner cellScanner() {
-  return CellUtil.createCellScanner(getFamilyCellMap());
-}
-
-/**
- * Creates an empty list if one doesn't exist for the given column family
- * or else it returns the associated list of Cell objects.
- *
- * @param family column family
- * @return a list of Cell objects, returns an empty list if one doesn't exist.
- */
-List<Cell> getCellList(byte[] family) {
-  List<Cell> list = this.familyMap.get(family);
-  if (list == null) {
-    list = new ArrayList<Cell>();
-  }
-  return list;
-}
-
-/*
- * Create a KeyValue with this objects row key and the Put identifier.
- *
- * @return a KeyValue with this objects row key and the Put identifier.
- */
-KeyValue createPutKeyValue(byte[] family, byte[] qualifier, long ts, byte[] value) {
-  return new KeyValue(this.row, family, qualifier, ts, KeyValue.Type.Put, value);
-}
-
-/**
- * Create a KeyValue with this objects row key and the Put identifier.
- * @param family
- * @param qualifier
- * @param ts
- * @param value
- * @param tags - Specify the Tags as an Array
- * @return a KeyValue with this objects row key and the Put identifier.
- */
-KeyValue createPutKeyValue(byte[] family, byte[] qualifier, long ts, byte[] value, Tag[] tags) {
-  KeyValue kvWithTag = new KeyValue(this.row, family, qualifier, ts, value, tags);
-  return kvWithTag;
-}
-
-/*
- * Create a KeyValue with this objects row key and the Put identifier.
- *
- * @return a KeyValue with this objects row key and the Put identifier.
- */
-KeyValue createPutKeyValue(byte[] family, ByteBuffer qualifier, long ts, ByteBuffer value,
-    Tag[] tags) {
-  return new KeyValue(this.row, 0, this.row == null ? 0 : this.row.length,
-    family, 0, family == null ? 0 : family.length,
-    qualifier, ts, KeyValue.Type.Put, value, tags != null ? Arrays.asList(tags) : null);
-}
-
-/**
- * Compile the column family (i.e. schema) information
- * into a Map. Useful for parsing and aggregation by debugging,
- * logging, and administration tools.
- * @return Map
- */
-@Override
-public Map<String, Object> getFingerprint() {
-  Map<String, Object> map = new HashMap<String, Object>();
-  List<String> families = new ArrayList<String>(this.familyMap.entrySet().size());
-  // ideally, we would also include table information, but that information
-  // is not stored in each Operation instance.
-  map.put("families", families);
-  for (Map.Entry<byte [], List<Cell>> entry : this.familyMap.entrySet()) {
-    families.add(Bytes.toStringBinary(entry.getKey()));
-  }
-  return map;
-}
-
-/**
- * Compile the details beyond the scope of getFingerprint (row, columns,
- * timestamps, etc.) into a Map along with the fingerprinted information.
- * Useful for debugging, logging, and administration tools.
- * @param maxCols a limit on the number of columns output prior to truncation
- * @return Map
- */
-@Override
-public Map<String, Object> toMap(int maxCols) {
-  // we start with the fingerprint map and build on top of it.
-  Map<String, Object> map = getFingerprint();
-  // replace the fingerprint's simple list of families with a
-  // map from column families to lists of qualifiers and kv details
-  Map<String, List<Map<String, Object>>> columns =
-    new HashMap<String, List<Map<String, Object>>>();
-  map.put("families", columns);
-  map.put("row", Bytes.toStringBinary(this.row));
-  int colCount = 0;
-  // iterate through all column families affected
-  for (Map.Entry<byte [], List<Cell>> entry : this.familyMap.entrySet()) {
-    // map from this family to details for each cell affected within the family
-    List<Map<String, Object>> qualifierDetails = new ArrayList<Map<String, Object>>();
-    columns.put(Bytes.toStringBinary(entry.getKey()), qualifierDetails);
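`Mutation.getFingerprint()` above collapses the per-family cell map down to just the column family names, for logging and debugging. A simplified sketch of that reduction, with `String` keys in place of `byte[]` (names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FingerprintDemo {
    // Keep only the schema (family names), drop the per-cell detail.
    static Map<String, Object> fingerprint(Map<String, List<String>> familyMap) {
        Map<String, Object> map = new HashMap<>();
        List<String> families = new ArrayList<>(familyMap.size());
        map.put("families", families);
        for (Map.Entry<String, List<String>> entry : familyMap.entrySet()) {
            families.add(entry.getKey());
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, List<String>> familyMap = new HashMap<>();
        familyMap.put("cf1", Arrays.asList("q1", "q2"));
        familyMap.put("cf2", Arrays.asList("q3"));
        Map<String, Object> fp = fingerprint(familyMap);
        @SuppressWarnings("unchecked")
        List<String> families = (List<String>) fp.get("families");
        if (families.size() != 2) throw new AssertionError();
        System.out.println("ok");
    }
}
```

`toMap(int maxCols)` then rebuilds on this fingerprint, swapping the flat family list for a family-to-qualifier-details map.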
[03/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
index 6552d0b..bf243f9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/ExportSnapshot.html
@@ -572,7 +572,7 @@
 final FileSystem fs, final Path snapshotDir) throws IOException {
 SnapshotDescription snapshotDesc = SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDir);

-final List<Pair<SnapshotFileInfo, Long>> files = new ArrayList<Pair<SnapshotFileInfo, Long>>();
+final List<Pair<SnapshotFileInfo, Long>> files = new ArrayList<>();
 final TableName table = TableName.valueOf(snapshotDesc.getTable());

 // Get snapshot files
@@ -599,7 +599,7 @@
 } else {
   size = HFileLink.buildFromHFileLinkPattern(conf, path).getFileStatus(fs).getLen();
 }
-files.add(new Pair<SnapshotFileInfo, Long>(fileInfo, size));
+files.add(new Pair<>(fileInfo, size));
 }
 }
 });
@@ -626,504 +626,503 @@
 });

 // create balanced groups
-List<List<Pair<SnapshotFileInfo, Long>>> fileGroups =
-  new LinkedList<List<Pair<SnapshotFileInfo, Long>>>();
-long[] sizeGroups = new long[ngroups];
-int hi = files.size() - 1;
-int lo = 0;
-
-List<Pair<SnapshotFileInfo, Long>> group;
-int dir = 1;
-int g = 0;
-
-while (hi >= lo) {
-  if (g == fileGroups.size()) {
-    group = new LinkedList<Pair<SnapshotFileInfo, Long>>();
-    fileGroups.add(group);
-  } else {
-    group = fileGroups.get(g);
-  }
-
-  Pair<SnapshotFileInfo, Long> fileInfo = files.get(hi--);
-
-  // add the hi one
-  sizeGroups[g] += fileInfo.getSecond();
-  group.add(fileInfo);
-
-  // change direction when at the end or the beginning
-  g += dir;
-  if (g == ngroups) {
-    dir = -1;
-    g = ngroups - 1;
-  } else if (g < 0) {
-    dir = 1;
-    g = 0;
-  }
-}
-
-if (LOG.isDebugEnabled()) {
-  for (int i = 0; i < sizeGroups.length; ++i) {
-    LOG.debug("export split=" + i + " size=" + StringUtils.humanReadableInt(sizeGroups[i]));
-  }
-}
-
-return fileGroups;
-}
-
-private static class ExportSnapshotInputFormat extends InputFormat<BytesWritable, NullWritable> {
-  @Override
-  public RecordReader<BytesWritable, NullWritable> createRecordReader(InputSplit split,
-      TaskAttemptContext tac) throws IOException, InterruptedException {
-    return new ExportSnapshotRecordReader(((ExportSnapshotInputSplit)split).getSplitKeys());
-  }
-
-  @Override
-  public List<InputSplit> getSplits(JobContext context) throws IOException, InterruptedException {
-    Configuration conf = context.getConfiguration();
-    Path snapshotDir = new Path(conf.get(CONF_SNAPSHOT_DIR));
-    FileSystem fs = FileSystem.get(snapshotDir.toUri(), conf);
-
-    List<Pair<SnapshotFileInfo, Long>> snapshotFiles = getSnapshotFiles(conf, fs, snapshotDir);
-    int mappers = conf.getInt(CONF_NUM_SPLITS, 0);
-    if (mappers == 0 && snapshotFiles.size() > 0) {
-      mappers = 1 + (snapshotFiles.size() / conf.getInt(CONF_MAP_GROUP, 10));
-      mappers = Math.min(mappers, snapshotFiles.size());
-      conf.setInt(CONF_NUM_SPLITS, mappers);
-      conf.setInt(MR_NUM_MAPS, mappers);
-    }
-
-    List<List<Pair<SnapshotFileInfo, Long>>> groups = getBalancedSplits(snapshotFiles, mappers);
-    List<InputSplit> splits = new ArrayList<>(groups.size());
-    for (List<Pair<SnapshotFileInfo, Long>> files: groups) {
-      splits.add(new ExportSnapshotInputSplit(files));
-    }
-    return splits;
-  }
-
-  private static class ExportSnapshotInputSplit extends InputSplit implements Writable {
-    private List<Pair<BytesWritable, Long>> files;
-    private long length;
-
-    public ExportSnapshotInputSplit() {
-      this.files = null;
-    }
-
-    public ExportSnapshotInputSplit(final List<Pair<SnapshotFileInfo, Long>> snapshotFiles) {
-      this.files = new ArrayList<>(snapshotFiles.size());
-      for (Pair<SnapshotFileInfo, Long> fileInfo: snapshotFiles) {
-        this.files.add(new Pair<BytesWritable, Long>(
-          new BytesWritable(fileInfo.getFirst().toByteArray()), fileInfo.getSecond()));
-        this.length += fileInfo.getSecond();
-      }
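`getBalancedSplits` in the ExportSnapshot diff deals out files by size in a zig-zag sweep over the groups (0..n-1, then n-1..0, and so on), always placing the largest remaining file, so the groups end up with similar total sizes. A sketch of that partitioning using plain file sizes (the method and class names here are mine, not HBase's):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BalancedSplitsDemo {
    // Deal sizes largest-first into ngroups, bouncing direction at
    // each end of the group list (a serpentine / zig-zag sweep).
    static List<List<Long>> balancedSplits(List<Long> sizes, int ngroups) {
        List<Long> files = new ArrayList<>(sizes);
        Collections.sort(files);                 // ascending; consume from the top
        List<List<Long>> groups = new ArrayList<>();
        for (int i = 0; i < ngroups; i++) groups.add(new ArrayList<Long>());
        int hi = files.size() - 1, g = 0, dir = 1;
        while (hi >= 0) {
            groups.get(g).add(files.get(hi--));  // place biggest remaining file
            g += dir;
            if (g == ngroups) { dir = -1; g = ngroups - 1; } // bounce at the end
            else if (g < 0)   { dir = 1;  g = 0; }           // bounce at the start
        }
        return groups;
    }

    public static void main(String[] args) {
        List<List<Long>> groups =
            balancedSplits(Arrays.asList(9L, 7L, 5L, 3L, 2L, 1L), 2);
        long a = groups.get(0).stream().mapToLong(Long::longValue).sum();
        long b = groups.get(1).stream().mapToLong(Long::longValue).sum();
        if (Math.abs(a - b) > 2) throw new AssertionError("groups should be balanced");
        System.out.println("ok");
    }
}
```

The bounce at each end means the group that just received the largest file also receives the next one before the sweep turns around, which compensates groups that were reached late in the previous pass.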
[34/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html -- diff --git a/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html b/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html index 7f5fd06..cefe0fa 100644 --- a/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html +++ b/apidocs/org/apache/hadoop/hbase/io/crypto/Encryption.html @@ -788,7 +788,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html? getKeyProvider -public staticKeyProvidergetKeyProvider(org.apache.hadoop.conf.Configurationconf) +public staticKeyProvidergetKeyProvider(org.apache.hadoop.conf.Configurationconf) @@ -797,7 +797,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html? incrementIv -public staticvoidincrementIv(byte[]iv) +public staticvoidincrementIv(byte[]iv) @@ -806,7 +806,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html? incrementIv -public staticvoidincrementIv(byte[]iv, +public staticvoidincrementIv(byte[]iv, intv) http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html -- diff --git a/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html b/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html index d4a6bea..18086c0 100644 --- a/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html +++ b/apidocs/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.html @@ -383,7 +383,7 @@ the order they are declared. values -public staticDataBlockEncoding[]values() +public staticDataBlockEncoding[]values() Returns an array containing the constants of this enum type, in the order they are declared. 
This method may be used to iterate over the constants as follows: @@ -403,7 +403,7 @@ for (DataBlockEncoding c : DataBlockEncoding.values()) valueOf -public staticDataBlockEncodingvalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringname) +public staticDataBlockEncodingvalueOf(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringname) Returns the enum constant of this type with the specified name. The string must match exactly an identifier used to declare an enum constant in this type. (Extraneous whitespace characters are http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html -- diff --git a/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html b/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html index 16cacee..4db2750 100644 --- a/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html +++ b/apidocs/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html @@ -363,7 +363,7 @@ extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat configureIncrementalLoad -public staticvoidconfigureIncrementalLoad(org.apache.hadoop.mapreduce.Jobjob, +public staticvoidconfigureIncrementalLoad(org.apache.hadoop.mapreduce.Jobjob, Tabletable, RegionLocatorregionLocator) throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException @@ -391,7 +391,7 @@ extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat configureIncrementalLoad -public staticvoidconfigureIncrementalLoad(org.apache.hadoop.mapreduce.Jobjob, +public staticvoidconfigureIncrementalLoad(org.apache.hadoop.mapreduce.Jobjob, HTableDescriptortableDescriptor, RegionLocatorregionLocator) throws 
http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException @@ -419,7 +419,7 @@ extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat configureIncrementalLoadMap -public staticvoidconfigureIncrementalLoadMap(org.apache.hadoop.mapreduce.Jobjob, +public staticvoidconfigureIncrementalLoadMap(org.apache.hadoop.mapreduce.Jobjob, HTableDescriptortableDescriptor) throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException
[52/52] hbase-site git commit: Empty commit
Empty commit Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/fadf6d5a Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/fadf6d5a Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/fadf6d5a Branch: refs/heads/asf-site Commit: fadf6d5a0b9a37f275bf7389708bd9b44a5f97bd Parents: 22cff34 Author: Misty Stanley-Jones Authored: Tue Mar 21 09:34:38 2017 -0700 Committer: Misty Stanley-Jones Committed: Tue Mar 21 09:34:38 2017 -0700 -- --
[32/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html -- diff --git a/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html b/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html index b04fd0d..7a77e85 100644 --- a/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html +++ b/apidocs/org/apache/hadoop/hbase/util/EncryptionTest.html @@ -182,7 +182,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html? testKeyProvider -public staticvoidtestKeyProvider(org.apache.hadoop.conf.Configurationconf) +public staticvoidtestKeyProvider(org.apache.hadoop.conf.Configurationconf) throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException Check that the configured key provider can be loaded and initialized, or throw an exception. @@ -200,7 +200,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html? testCipherProvider -public staticvoidtestCipherProvider(org.apache.hadoop.conf.Configurationconf) +public staticvoidtestCipherProvider(org.apache.hadoop.conf.Configurationconf) throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException Check that the configured cipher provider can be loaded and initialized, or throw an exception. @@ -218,7 +218,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html? 
testEncryption -public staticvoidtestEncryption(org.apache.hadoop.conf.Configurationconf, +public staticvoidtestEncryption(org.apache.hadoop.conf.Configurationconf, http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringcipher, byte[]key) throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true; title="class or interface in java.io">IOException http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/util/RegionMover.html -- diff --git a/apidocs/org/apache/hadoop/hbase/util/RegionMover.html b/apidocs/org/apache/hadoop/hbase/util/RegionMover.html index e6a7ab2..4bbe058 100644 --- a/apidocs/org/apache/hadoop/hbase/util/RegionMover.html +++ b/apidocs/org/apache/hadoop/hbase/util/RegionMover.html @@ -176,7 +176,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool Fields inherited from classorg.apache.hadoop.hbase.util.AbstractHBaseTool -cmdLineArgs, conf, EXIT_FAILURE, EXIT_SUCCESS +cmdLineArgs, conf, EXIT_FAILURE, EXIT_SUCCESS, LONG_HELP_OPTION, options, SHORT_HELP_OPTION @@ -237,7 +237,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool Methods inherited from classorg.apache.hadoop.hbase.util.AbstractHBaseTool -addOption, addOptNoArg, addOptNoArg, addOptWithArg, addOptWithArg, addRequiredOption, addRequiredOptWithArg, addRequiredOptWithArg, doStaticMain, getConf, getOptionAsDouble, getOptionAsInt, parseInt, parseLong, printUsage, printUsage, processOldArgs, run, setConf +addOption, addOptNoArg, addOptNoArg, addOptWithArg, addOptWithArg, addRequiredOption, addRequiredOptWithArg, addRequiredOptWithArg, doStaticMain, getConf, getOptionAsDouble, getOptionAsInt, parseArgs, parseInt, parseLong, printUsage, printUsage, processOldArgs, run, setConf @@ -398,7 +398,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool addOptions -protectedvoidaddOptions() +protectedvoidaddOptions() Description copied from 
class:org.apache.hadoop.hbase.util.AbstractHBaseTool Override this to add command-line options using AbstractHBaseTool.addOptWithArg(java.lang.String, java.lang.String) and similar methods. @@ -414,7 +414,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool processOptions -protectedvoidprocessOptions(org.apache.commons.cli.CommandLinecmd) +protectedvoidprocessOptions(org.apache.commons.cli.CommandLinecmd) Description copied from class:org.apache.hadoop.hbase.util.AbstractHBaseTool This method is called to process the options after they have been parsed. @@ -429,7 +429,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool doWork -protectedintdoWork() +protectedintdoWork() throws http://docs.oracle.com/javase/8/docs/api/java/lang/Exception.html?is-external=true; title="class or interface in java.lang">Exception Description copied from class:org.apache.hadoop.hbase.util.AbstractHBaseTool The "main function" of the tool @@ -447,7 +447,7 @@ extends org.apache.hadoop.hbase.util.AbstractHBaseTool main -public
[06/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html index 8803754..bff248e 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/WALPlayer.html @@ -78,9 +78,9 @@ 070public class WALPlayer extends Configured implements Tool { 071 private static final Log LOG = LogFactory.getLog(WALPlayer.class); 072 final static String NAME = "WALPlayer"; -073 final static String BULK_OUTPUT_CONF_KEY = "wal.bulk.output"; -074 final static String TABLES_KEY = "wal.input.tables"; -075 final static String TABLE_MAP_KEY = "wal.input.tablesmap"; +073 public final static String BULK_OUTPUT_CONF_KEY = "wal.bulk.output"; +074 public final static String TABLES_KEY = "wal.input.tables"; +075 public final static String TABLE_MAP_KEY = "wal.input.tablesmap"; 076 077 // This relies on Hadoop Configuration to handle warning about deprecated configs and 078 // to set the correct non-deprecated configs when an old one shows up. @@ -92,271 +92,302 @@ 084 085 private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; 086 -087 protected WALPlayer(final Configuration c) { -088super(c); -089 } -090 -091 /** -092 * A mapper that just writes out KeyValues. 
-093 * This one can be used together with {@link KeyValueSortReducer} -094 */ -095 static class WALKeyValueMapper -096 extends MapperWALKey, WALEdit, ImmutableBytesWritable, KeyValue { -097private byte[] table; -098 -099@Override -100public void map(WALKey key, WALEdit value, -101 Context context) -102throws IOException { -103 try { -104// skip all other tables -105if (Bytes.equals(table, key.getTablename().getName())) { -106 for (Cell cell : value.getCells()) { -107KeyValue kv = KeyValueUtil.ensureKeyValue(cell); -108if (WALEdit.isMetaEditFamily(kv)) continue; -109context.write(new ImmutableBytesWritable(CellUtil.cloneRow(kv)), kv); -110 } -111} -112 } catch (InterruptedException e) { -113e.printStackTrace(); -114 } -115} -116 -117@Override -118public void setup(Context context) throws IOException { -119 // only a single table is supported when HFiles are generated with HFileOutputFormat -120 String[] tables = context.getConfiguration().getStrings(TABLES_KEY); -121 if (tables == null || tables.length != 1) { -122// this can only happen when WALMapper is used directly by a class other than WALPlayer -123throw new IOException("Exactly one table must be specified for bulk HFile case."); -124 } -125 table = Bytes.toBytes(tables[0]); -126} -127 } -128 -129 /** -130 * A mapper that writes out {@link Mutation} to be directly applied to -131 * a running HBase instance. -132 */ -133 protected static class WALMapper -134 extends MapperWALKey, WALEdit, ImmutableBytesWritable, Mutation { -135private MapTableName, TableName tables = new TreeMapTableName, TableName(); -136 -137@Override -138public void map(WALKey key, WALEdit value, Context context) -139throws IOException { -140 try { -141if (tables.isEmpty() || tables.containsKey(key.getTablename())) { -142 TableName targetTable = tables.isEmpty() ? 
-143key.getTablename() : -144 tables.get(key.getTablename()); -145 ImmutableBytesWritable tableOut = new ImmutableBytesWritable(targetTable.getName()); -146 Put put = null; -147 Delete del = null; -148 Cell lastCell = null; -149 for (Cell cell : value.getCells()) { -150// filtering WAL meta entries -151if (WALEdit.isMetaEditFamily(cell)) continue; -152 -153// Allow a subclass filter out this cell. -154if (filter(context, cell)) { -155 // A WALEdit may contain multiple operations (HBASE-3584) and/or -156 // multiple rows (HBASE-5229). -157 // Aggregate as much as possible into a single Put/Delete -158 // operation before writing to the context. -159 if (lastCell == null || lastCell.getTypeByte() != cell.getTypeByte() -160 || !CellUtil.matchingRow(lastCell, cell)) { -161// row or type changed, write out aggregate KVs. -162if (put != null) context.write(tableOut, put); -163if (del != null) context.write(tableOut, del); -164if (CellUtil.isDelete(cell)) { -165 del = new Delete(CellUtil.cloneRow(cell)); -166} else { -167
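The WALMapper code above aggregates as much as possible into a single Put/Delete before writing to the context: a new mutation starts only when the row or the cell's type byte changes (per the HBASE-3584 / HBASE-5229 comments). The batching rule can be sketched in self-contained Java; Cell here is a hypothetical stand-in, not org.apache.hadoop.hbase.Cell.

```java
import java.util.List;

class WalAggregationSketch {
    // Hypothetical stand-in cell: just a row key and a delete/put flag.
    record Cell(String row, boolean delete) {}

    // Returns how many Put/Delete operations the mapper would emit:
    // a pending mutation is flushed whenever the row or the type changes.
    static int countMutations(List<Cell> cells) {
        int written = 0;
        Cell last = null;
        for (Cell c : cells) {
            if (last != null && (!last.row().equals(c.row()) || last.delete() != c.delete())) {
                written++; // row or type changed: write out the aggregate
            }
            last = c;
        }
        if (last != null) written++; // flush the final pending mutation
        return written;
    }

    public static void main(String[] args) {
        List<Cell> cells = List.of(
            new Cell("r1", false),
            new Cell("r1", false), // same row, same type: merged into one Put
            new Cell("r1", true),  // type changed: starts a Delete
            new Cell("r2", false)  // row changed: starts a new Put
        );
        System.out.println(countMutations(cells)); // 3
    }
}
```

Batching this way keeps one WALEdit's many cells from becoming many tiny mutations on the target table.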
[07/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html index 099a926..ca75198 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html @@ -459,7 +459,7 @@ 451job.setMapperClass(mapper); 452Configuration conf = job.getConfiguration(); 453HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf)); -454ListString scanStrings = new ArrayListString(); +454ListString scanStrings = new ArrayList(); 455 456for (Scan scan : scans) { 457 scanStrings.add(convertScanToString(scan)); @@ -815,7 +815,7 @@ 807if (conf == null) { 808 throw new IllegalArgumentException("Must provide a configuration object."); 809} -810SetString paths = new HashSetString(conf.getStringCollection("tmpjars")); +810SetString paths = new HashSet(conf.getStringCollection("tmpjars")); 811if (paths.isEmpty()) { 812 throw new IllegalArgumentException("Configuration contains no tmpjars."); 813} @@ -887,13 +887,13 @@ 879 Class?... classes) throws IOException { 880 881FileSystem localFs = FileSystem.getLocal(conf); -882SetString jars = new HashSetString(); +882SetString jars = new HashSet(); 883// Add jars that are already in the tmpjars variable 884 jars.addAll(conf.getStringCollection("tmpjars")); 885 886// add jars as we find them to a map of contents jar name so that we can avoid 887// creating new jars for classes that have already been packaged. -888MapString, String packagedClasses = new HashMapString, String(); +888MapString, String packagedClasses = new HashMap(); 889 890// Add jars containing the specified classes 891for (Class? 
clazz : classes) { http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html index 3150448..9567688 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html @@ -89,7 +89,7 @@ 081 */ 082 public void restart(byte[] firstRow) throws IOException { 083currentScan = new Scan(scan); -084currentScan.setStartRow(firstRow); +084currentScan.withStartRow(firstRow); 085 currentScan.setScanMetricsEnabled(true); 086if (this.scanner != null) { 087 if (logScannerActivity) { @@ -281,7 +281,7 @@ 273 * @throws IOException 274 */ 275 private void updateCounters() throws IOException { -276ScanMetrics scanMetrics = currentScan.getScanMetrics(); +276ScanMetrics scanMetrics = scanner.getScanMetrics(); 277if (scanMetrics == null) { 278 return; 279} http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html index 2a522e5..21a2475 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html @@ -191,7 +191,7 @@ 183 184 @Override 185 public ListInputSplit getSplits(JobContext job) throws IOException, InterruptedException { -186ListInputSplit results = new ArrayListInputSplit(); +186ListInputSplit results = new ArrayList(); 187for (TableSnapshotInputFormatImpl.InputSplit split : 188 TableSnapshotInputFormatImpl.getSplits(job.getConfiguration())) { 
189 results.add(new TableSnapshotRegionSplit(split)); http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html index 6ab4f9e..0c0f789 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TextSortReducer.html @@ -154,7 +154,7 @@
[40/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/Increment.html -- diff --git a/apidocs/org/apache/hadoop/hbase/client/Increment.html b/apidocs/org/apache/hadoop/hbase/client/Increment.html index 81f7d77..7f424f2 100644 --- a/apidocs/org/apache/hadoop/hbase/client/Increment.html +++ b/apidocs/org/apache/hadoop/hbase/client/Increment.html @@ -611,7 +611,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl toString -publichttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">StringtoString() +publichttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">StringtoString() Description copied from class:Operation Produces a string representation of this Operation. It defaults to a JSON representation, but falls back to a string representation of the @@ -630,7 +630,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl compareTo -publicintcompareTo(Rowi) +publicintcompareTo(Rowi) Specified by: http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true#compareTo-T-; title="class or interface in java.lang">compareToin interfacehttp://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true; title="class or interface in java.lang">ComparableRow @@ -645,7 +645,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl hashCode -publicinthashCode() +publicinthashCode() Overrides: http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#hashCode--; title="class or interface in java.lang">hashCodein classhttp://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true; title="class or interface in java.lang">Object @@ -658,7 +658,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl equals 
-publicbooleanequals(http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true; title="class or interface in java.lang">Objectobj) +publicbooleanequals(http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true; title="class or interface in java.lang">Objectobj) Overrides: http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true#equals-java.lang.Object-; title="class or interface in java.lang">equalsin classhttp://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true; title="class or interface in java.lang">Object @@ -671,7 +671,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl extraHeapSize -protectedlongextraHeapSize() +protectedlongextraHeapSize() Description copied from class:Mutation Subclasses should override this method to add the heap size of their own fields. @@ -688,7 +688,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl setAttribute -publicIncrementsetAttribute(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringname, +publicIncrementsetAttribute(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringname, byte[]value) Description copied from interface:Attributes Sets an attribute. @@ -711,7 +711,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl setId -publicIncrementsetId(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringid) +publicIncrementsetId(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">Stringid) Description copied from class:OperationWithAttributes This method allows you to set an identifier on an operation. 
The original motivation for this was to allow the identifier to be used in slow query @@ -732,7 +732,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl setDurability -publicIncrementsetDurability(Durabilityd) +publicIncrementsetDurability(Durabilityd) Description copied from class:Mutation Set the durability for this mutation @@ -747,7 +747,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/lang/Comparabl setFamilyCellMap -publicIncrementsetFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface in java.util">NavigableMapbyte[],http://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true; title="class or interface in java.util">ListCellmap) +publicIncrementsetFamilyCellMap(http://docs.oracle.com/javase/8/docs/api/java/util/NavigableMap.html?is-external=true; title="class or interface
[31/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html -- diff --git a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html index cb1bcc7..3a27161 100644 --- a/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html +++ b/apidocs/org/apache/hadoop/hbase/util/class-use/PositionedByteRange.html @@ -124,107 +124,107 @@ -http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true; title="class or interface in java.lang">Double -RawDouble.decode(PositionedByteRangesrc) +T +DataType.decode(PositionedByteRangesrc) +Read an instance of T from the buffer src. + -http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true; title="class or interface in java.lang">Long -OrderedInt64.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true; title="class or interface in java.lang">Byte +OrderedInt8.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true; title="class or interface in java.lang">Double -OrderedFloat64.decode(PositionedByteRangesrc) - - http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true; title="class or interface in java.lang">Float -RawFloat.decode(PositionedByteRangesrc) - - -http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true; title="class or interface in java.lang">Short -OrderedInt16.decode(PositionedByteRangesrc) +OrderedFloat32.decode(PositionedByteRangesrc) byte[] OrderedBlobVar.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true; title="class or interface in java.lang">Byte -OrderedInt8.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true; title="class or 
interface in java.lang">Integer +RawInteger.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Byte.html?is-external=true; title="class or interface in java.lang">Byte -RawByte.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true; title="class or interface in java.lang">Float +RawFloat.decode(PositionedByteRangesrc) -T -FixedLengthWrapper.decode(PositionedByteRangesrc) +T +TerminatedWrapper.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true; title="class or interface in java.lang">Long -RawLong.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true; title="class or interface in java.lang">Double +RawDouble.decode(PositionedByteRangesrc) -T -DataType.decode(PositionedByteRangesrc) -Read an instance of T from the buffer src. - +http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true; title="class or interface in java.lang">Short +RawShort.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Number.html?is-external=true; title="class or interface in java.lang">Number -OrderedNumeric.decode(PositionedByteRangesrc) - - http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String OrderedString.decode(PositionedByteRangesrc) - -http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true; title="class or interface in java.lang">Integer -OrderedInt32.decode(PositionedByteRangesrc) - byte[] -OrderedBlob.decode(PositionedByteRangesrc) +RawBytes.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Float.html?is-external=true; title="class or interface in java.lang">Float -OrderedFloat32.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true; 
title="class or interface in java.lang">Long +RawLong.decode(PositionedByteRangesrc) http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true; title="class or interface in java.lang">Object[] Struct.decode(PositionedByteRangesrc) -byte[] -RawBytes.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Short.html?is-external=true; title="class or interface in java.lang">Short +OrderedInt16.decode(PositionedByteRangesrc) -http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true; title="class or interface in java.lang">Integer -RawInteger.decode(PositionedByteRangesrc) +http://docs.oracle.com/javase/8/docs/api/java/lang/Double.html?is-external=true; title="class or interface in java.lang">Double +OrderedFloat64.decode(PositionedByteRangesrc) -T
[20/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html index 1a6b0c2..c93b1f5 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Result.html @@ -32,650 +32,650 @@ 024import java.nio.ByteBuffer; 025import java.util.ArrayList; 026import java.util.Arrays; -027import java.util.Comparator; -028import java.util.List; -029import java.util.Map; -030import java.util.NavigableMap; -031import java.util.TreeMap; -032 -033import org.apache.hadoop.hbase.Cell; -034import org.apache.hadoop.hbase.CellComparator; -035import org.apache.hadoop.hbase.CellScannable; -036import org.apache.hadoop.hbase.CellScanner; -037import org.apache.hadoop.hbase.CellUtil; -038import org.apache.hadoop.hbase.HConstants; -039import org.apache.hadoop.hbase.KeyValue; -040import org.apache.hadoop.hbase.KeyValueUtil; -041import org.apache.hadoop.hbase.classification.InterfaceAudience; -042import org.apache.hadoop.hbase.classification.InterfaceStability; -043import org.apache.hadoop.hbase.util.Bytes; -044 -045/** -046 * Single row result of a {@link Get} or {@link Scan} query.p -047 * -048 * This class is bNOT THREAD SAFE/b.p +027import java.util.Collections; +028import java.util.Comparator; +029import java.util.Iterator; +030import java.util.List; +031import java.util.Map; +032import java.util.NavigableMap; +033import java.util.TreeMap; +034 +035import org.apache.hadoop.hbase.Cell; +036import org.apache.hadoop.hbase.CellComparator; +037import org.apache.hadoop.hbase.CellScannable; +038import org.apache.hadoop.hbase.CellScanner; +039import org.apache.hadoop.hbase.CellUtil; +040import org.apache.hadoop.hbase.HConstants; +041import org.apache.hadoop.hbase.KeyValue; +042import org.apache.hadoop.hbase.KeyValueUtil; +043import 
org.apache.hadoop.hbase.classification.InterfaceAudience; +044import org.apache.hadoop.hbase.classification.InterfaceStability; +045import org.apache.hadoop.hbase.util.Bytes; +046 +047/** +048 * Single row result of a {@link Get} or {@link Scan} query.p 049 * -050 * Convenience methods are available that return various {@link Map} -051 * structures and values directly.p -052 * -053 * To get a complete mapping of all cells in the Result, which can include -054 * multiple families and multiple versions, use {@link #getMap()}.p -055 * -056 * To get a mapping of each family to its columns (qualifiers and values), -057 * including only the latest version of each, use {@link #getNoVersionMap()}. -058 * -059 * To get a mapping of qualifiers to latest values for an individual family use -060 * {@link #getFamilyMap(byte[])}.p -061 * -062 * To get the latest value for a specific family and qualifier use -063 * {@link #getValue(byte[], byte[])}. -064 * -065 * A Result is backed by an array of {@link Cell} objects, each representing -066 * an HBase cell defined by the row, family, qualifier, timestamp, and value.p -067 * -068 * The underlying {@link Cell} objects can be accessed through the method {@link #listCells()}. -069 * This will create a List from the internal Cell []. Better is to exploit the fact that -070 * a new Result instance is a primed {@link CellScanner}; just call {@link #advance()} and -071 * {@link #current()} to iterate over Cells as you would any {@link CellScanner}. -072 * Call {@link #cellScanner()} to reset should you need to iterate the same Result over again -073 * ({@link CellScanner}s are one-shot). 
-074 * -075 * If you need to overwrite a Result with another Result instance -- as in the old 'mapred' -076 * RecordReader next invocations -- then create an empty Result with the null constructor and -077 * in then use {@link #copyFrom(Result)} -078 */ -079@InterfaceAudience.Public -080@InterfaceStability.Stable -081public class Result implements CellScannable, CellScanner { -082 private Cell[] cells; -083 private Boolean exists; // if the query was just to check existence. -084 private boolean stale = false; -085 -086 /** -087 * See {@link #mayHaveMoreCellsInRow()}. And please notice that, The client side implementation -088 * should also check for row key change to determine if a Result is the last one for a row. -089 */ -090 private boolean mayHaveMoreCellsInRow = false; -091 // We're not using java serialization. Transient here is just a marker to say -092 // that this is where we cache row if we're ever asked for it. -093 private transient byte [] row = null; -094 // Ditto for familyMap. It can be composed on fly from passed in kvs. -095 private transient NavigableMapbyte[], NavigableMapbyte[], NavigableMapLong, byte[] -096 familyMap = null; -097 -098 private static ThreadLocalbyte[]
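The Result javadoc above says a new Result is a "primed" CellScanner — call advance() then current() to iterate, and obtain a fresh scanner via cellScanner() because scanners are one-shot. That cursor contract can be sketched with a plain array in place of the real Cell[]; CellScannerSketch is a hypothetical illustration, not the HBase interface.

```java
// Sketch of the one-shot CellScanner contract described in the Result javadoc:
// the cursor starts *before* the first element ("primed"), advance() moves it,
// current() reads it, and an exhausted scanner stays exhausted.
class CellScannerSketch {
    private final String[] cells;
    private int pos = -1; // primed: positioned before the first cell

    CellScannerSketch(String[] cells) { this.cells = cells; }

    boolean advance() { return ++pos < cells.length; }

    String current() {
        if (pos < 0 || pos >= cells.length) throw new IllegalStateException("not positioned on a cell");
        return cells[pos];
    }

    public static void main(String[] args) {
        CellScannerSketch s = new CellScannerSketch(new String[] {"a", "b"});
        StringBuilder seen = new StringBuilder();
        while (s.advance()) {
            seen.append(s.current());
        }
        System.out.println(seen);      // ab
        System.out.println(s.advance()); // false: one-shot, must re-prime to reuse
    }
}
```

This is why the javadoc steers callers toward advance()/current() rather than listCells(): iterating the primed scanner avoids building a List from the internal array.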
[36/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html -- diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html b/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html index 1376754..cd88dcf 100644 --- a/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html +++ b/apidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html @@ -130,42 +130,42 @@ -default ResultScanner -AsyncTable.getScanner(byte[]family) +ResultScanner +Table.getScanner(byte[]family) Gets a scanner on the current table for the given family. -ResultScanner -Table.getScanner(byte[]family) +default ResultScanner +AsyncTable.getScanner(byte[]family) Gets a scanner on the current table for the given family. -default ResultScanner -AsyncTable.getScanner(byte[]family, +ResultScanner +Table.getScanner(byte[]family, byte[]qualifier) Gets a scanner on the current table for the given family and qualifier. -ResultScanner -Table.getScanner(byte[]family, +default ResultScanner +AsyncTable.getScanner(byte[]family, byte[]qualifier) Gets a scanner on the current table for the given family and qualifier. ResultScanner -AsyncTable.getScanner(Scanscan) -Returns a scanner on the current table as specified by the Scan object. +Table.getScanner(Scanscan) +Returns a scanner on the current table as specified by the Scan + object. ResultScanner -Table.getScanner(Scanscan) -Returns a scanner on the current table as specified by the Scan - object. +AsyncTable.getScanner(Scanscan) +Returns a scanner on the current table as specified by the Scan object. 
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html -- diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html index 7d66811..e45a76e 100644 --- a/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html +++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Row.html @@ -179,19 +179,19 @@ int -Get.compareTo(Rowother) +Mutation.compareTo(Rowd) int -Mutation.compareTo(Rowd) +RowMutations.compareTo(Rowi) int -RowMutations.compareTo(Rowi) +Increment.compareTo(Rowi) int -Increment.compareTo(Rowi) +Get.compareTo(Rowother) http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html -- diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html b/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html index 29c4474..4b11626 100644 --- a/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html +++ b/apidocs/org/apache/hadoop/hbase/client/class-use/RowMutations.html @@ -119,8 +119,8 @@ -http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true; title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true; title="class or interface in java.lang">Boolean -AsyncTableBase.checkAndMutate(byte[]row, +boolean +Table.checkAndMutate(byte[]row, byte[]family, byte[]qualifier, CompareFilter.CompareOpcompareOp, @@ -130,8 +130,8 @@ -boolean -Table.checkAndMutate(byte[]row, +http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true; title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true; title="class or interface in java.lang">Boolean 
+AsyncTableBase.checkAndMutate(byte[]row, byte[]family, byte[]qualifier, CompareFilter.CompareOpcompareOp, @@ -141,14 +141,14 @@ -http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true; title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true; title="class or interface in java.lang">Void -AsyncTableBase.mutateRow(RowMutationsmutation) +void +Table.mutateRow(RowMutationsrm) Performs multiple mutations atomically on a single row. -void -Table.mutateRow(RowMutationsrm) +http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true; title="class or interface in java.util.concurrent">CompletableFuturehttp://docs.oracle.com/javase/8/docs/api/java/lang/Void.html?is-external=true; title="class or interface in java.lang">Void
[10/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html index a8ad4e5..4d16641 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.html @@ -108,437 +108,437 @@ 100import com.google.common.collect.Multimap; 101import com.google.common.collect.Multimaps; 102import com.google.common.util.concurrent.ThreadFactoryBuilder; -103 -104/** -105 * Tool to load the output of HFileOutputFormat into an existing table. -106 */ -107@InterfaceAudience.Public -108@InterfaceStability.Stable -109public class LoadIncrementalHFiles extends Configured implements Tool { -110 private static final Log LOG = LogFactory.getLog(LoadIncrementalHFiles.class); -111 private boolean initalized = false; -112 -113 public static final String NAME = "completebulkload"; -114 static final String RETRY_ON_IO_EXCEPTION = "hbase.bulkload.retries.retryOnIOException"; -115 public static final String MAX_FILES_PER_REGION_PER_FAMILY -116= "hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily"; -117 private static final String ASSIGN_SEQ_IDS = "hbase.mapreduce.bulkload.assign.sequenceNumbers"; -118 public final static String CREATE_TABLE_CONF_KEY = "create.table"; -119 public final static String SILENCE_CONF_KEY = "ignore.unmatched.families"; -120 public final static String ALWAYS_COPY_FILES = "always.copy.files"; -121 -122 // We use a '.' prefix which is ignored when walking directory trees -123 // above. It is invalid family name. 
-124 final static String TMP_DIR = ".tmp"; -125 -126 private int maxFilesPerRegionPerFamily; -127 private boolean assignSeqIds; -128 private SetString unmatchedFamilies = new HashSetString(); -129 -130 // Source filesystem -131 private FileSystem fs; -132 // Source delegation token -133 private FsDelegationToken fsDelegationToken; -134 private String bulkToken; -135 private UserProvider userProvider; -136 private int nrThreads; -137 private RpcControllerFactory rpcControllerFactory; -138 private AtomicInteger numRetries; -139 -140 private MapLoadQueueItem, ByteBuffer retValue = null; -141 -142 public LoadIncrementalHFiles(Configuration conf) throws Exception { -143super(conf); -144this.rpcControllerFactory = new RpcControllerFactory(conf); -145initialize(); -146 } -147 -148 private void initialize() throws Exception { -149if (initalized) { -150 return; -151} -152// make a copy, just to be sure we're not overriding someone else's config -153 setConf(HBaseConfiguration.create(getConf())); -154Configuration conf = getConf(); -155// disable blockcache for tool invocation, see HBASE-10500 -156 conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0); -157this.userProvider = UserProvider.instantiate(conf); -158this.fsDelegationToken = new FsDelegationToken(userProvider, "renewer"); -159assignSeqIds = conf.getBoolean(ASSIGN_SEQ_IDS, true); -160maxFilesPerRegionPerFamily = conf.getInt(MAX_FILES_PER_REGION_PER_FAMILY, 32); -161nrThreads = conf.getInt("hbase.loadincremental.threads.max", -162 Runtime.getRuntime().availableProcessors()); -163initalized = true; -164numRetries = new AtomicInteger(1); -165 } -166 -167 private void usage() { -168System.err.println("usage: " + NAME + " /path/to/hfileoutputformat-output tablename" + "\n -D" -169+ CREATE_TABLE_CONF_KEY + "=no - can be used to avoid creation of table by this tool\n" -170+ " Note: if you set this to 'no', then the target table must already exist in HBase\n -D" -171+ SILENCE_CONF_KEY + "=yes - can be used to ignore 
unmatched column families\n" -172+ "\n"); -173 } -174 -175 private interface BulkHFileVisitor<TFamily> { -176TFamily bulkFamily(final byte[] familyName) -177 throws IOException; -178void bulkHFile(final TFamily family, final FileStatus hfileStatus) -179 throws IOException; -180 } -181 -182 /** -183 * Iterate over the bulkDir hfiles. -184 * Skip reference, HFileLink, files starting with "_" and non-valid hfiles. -185 */ -186 private static <TFamily> void visitBulkHFiles(final FileSystem fs, final Path bulkDir, -187final BulkHFileVisitor<TFamily> visitor) throws IOException { -188visitBulkHFiles(fs, bulkDir, visitor, true); -189 } -190 -191 /** -192 * Iterate over the bulkDir hfiles. -193 * Skip reference, HFileLink, files starting with "_". -194 * Check and skip non-valid hfiles by default, or skip this validation by setting -195 * 'hbase.loadincremental.validate.hfile' to false. -196 */
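The `visitBulkHFiles` contract above skips files whose names begin with `"_"` and the `"."` prefix used by the `TMP_DIR` (`.tmp`) working directory. A stdlib-only sketch of that filtering rule follows; the class and method names are illustrative stand-ins, not the HBase API, and the real tool additionally skips references, HFileLinks, and invalid hfiles.

```java
import java.util.ArrayList;
import java.util.List;

public class BulkFileSkipSketch {
  // Mirrors the documented rule: skip files starting with "_" (e.g. _SUCCESS)
  // and the "." prefix used for temporary working directories such as ".tmp".
  static boolean shouldVisit(String fileName) {
    return !fileName.startsWith("_") && !fileName.startsWith(".");
  }

  // Walk a flat list of file names and keep only those a visitor would see.
  static List<String> visit(List<String> names) {
    List<String> visited = new ArrayList<>();
    for (String name : names) {
      if (shouldVisit(name)) {
        visited.add(name);
      }
    }
    return visited;
  }
}
```

The same predicate is why the class comment above notes that the `.tmp` prefix "is ignored when walking directory trees".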
[21/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html b/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html index 057fcd3..782620c 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/OperationWithAttributes.html @@ -52,7 +52,7 @@ 044} 045 046if (attributes == null) { -047 attributes = new HashMap<String, byte[]>(); +047 attributes = new HashMap<>(); 048} 049 050if (value == null) { http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html index 45b8fb8..64126fa 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Put.html @@ -169,9 +169,9 @@ 161 */ 162 public Put(Put putToCopy) { 163this(putToCopy.getRow(), putToCopy.ts); -164this.familyMap = new TreeMap<byte [], List<Cell>>(Bytes.BYTES_COMPARATOR); +164this.familyMap = new TreeMap<>(Bytes.BYTES_COMPARATOR); 165for(Map.Entry<byte [], List<Cell>> entry: putToCopy.getFamilyCellMap().entrySet()) { -166 this.familyMap.put(entry.getKey(), new ArrayList<Cell>(entry.getValue())); +166 this.familyMap.put(entry.getKey(), new ArrayList<>(entry.getValue())); 167} 168this.durability = putToCopy.durability; 169for (Map.Entry<String, byte[]> entry : putToCopy.getAttributesMap().entrySet()) { @@ -472,7 +472,7 @@ 464 * returns an empty list if one doesn't exist for the given family. 
465 */ 466 public List<Cell> get(byte[] family, byte[] qualifier) { -467List<Cell> filteredList = new ArrayList<Cell>(); +467List<Cell> filteredList = new ArrayList<>(); 468for (Cell cell: getCellList(family)) { 469 if (CellUtil.matchingQualifier(cell, qualifier)) { 470filteredList.add(cell); http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html index 8c761aa..127619d 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html @@ -64,7 +64,7 @@ 056 057 /** 058 * Apply the specified server-side filter when performing the Query. -059 * Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests +059 * Only {@link Filter#filterKeyValue(org.apache.hadoop.hbase.Cell)} is called AFTER all tests 060 * for ttl, column match, deletes and max versions have been run. 061 * @param filter filter to run on the server 062 * @return this for invocation chaining
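The `Put` and `OperationWithAttributes` hunks above replace explicit type arguments on the right-hand side (pre-Java-7 style) with the diamond operator. A minimal stdlib sketch of the same cleanup; the field and method names here are illustrative, not HBase code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DiamondSketch {
  // Before: type arguments repeated on both sides of the assignment.
  Map<String, List<Integer>> before = new TreeMap<String, List<Integer>>();

  // After, as in the diff: the compiler infers the arguments from the target type.
  Map<String, List<Integer>> after = new TreeMap<>();

  // Copy constructors need no explicit arguments either, matching the
  // new ArrayList<>(entry.getValue()) change above.
  static List<Integer> copy(List<Integer> src) {
    return new ArrayList<>(src);
  }
}
```

The behavior is unchanged; only the source-level verbosity drops, which is why these site pages show large mechanical diffs.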
[01/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
Repository: hbase-site Updated Branches: refs/heads/asf-site c6ddb98fc -> fadf6d5a0 http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html b/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html index 27b58df..c28cd5c 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/util/RegionMover.html @@ -405,7 +405,7 @@ 397LOG.info("Moving " + regionsToMove.size() + " regions to " + server + " using " 398+ this.maxthreads + " threads.Ack mode:" + this.ack); 399ExecutorService moveRegionsPool = Executors.newFixedThreadPool(this.maxthreads); -400ListFutureBoolean taskList = new ArrayListFutureBoolean(); +400ListFutureBoolean taskList = new ArrayList(); 401int counter = 0; 402while (counter regionsToMove.size()) { 403 HRegionInfo region = regionsToMove.get(counter); @@ -469,7 +469,7 @@ 461 justification="FB is wrong; its size is read") 462 private void unloadRegions(Admin admin, String server, ArrayListString regionServers, 463 boolean ack, ListHRegionInfo movedRegions) throws Exception { -464ListHRegionInfo regionsToMove = new ArrayListHRegionInfo();// FindBugs: DLS_DEAD_LOCAL_STORE +464ListHRegionInfo regionsToMove = new ArrayList();// FindBugs: DLS_DEAD_LOCAL_STORE 465regionsToMove = getRegions(this.conf, server); 466if (regionsToMove.isEmpty()) { 467 LOG.info("No Regions to moveQuitting now"); @@ -489,7 +489,7 @@ 481 + regionServers.size() + " servers using " + this.maxthreads + " threads .Ack Mode:" 482 + ack); 483 ExecutorService moveRegionsPool = Executors.newFixedThreadPool(this.maxthreads); -484 ListFutureBoolean taskList = new ArrayListFutureBoolean(); +484 ListFutureBoolean taskList = new ArrayList(); 485 int serverIndex = 0; 486 while (counter regionsToMove.size()) { 487if (ack) { @@ -644,7 +644,7 @@ 636 } 637 638 private ListHRegionInfo 
readRegionsFromFile(String filename) throws IOException { -639List<HRegionInfo> regions = new ArrayList<HRegionInfo>(); +639List<HRegionInfo> regions = new ArrayList<>(); 640File f = new File(filename); 641if (!f.exists()) { 642 return regions; @@ -766,7 +766,7 @@ 758 * @return List of servers from the exclude file in format 'hostname:port'. 759 */ 760 private ArrayList<String> readExcludes(String excludeFile) throws IOException { -761ArrayList<String> excludeServers = new ArrayList<String>(); +761ArrayList<String> excludeServers = new ArrayList<>(); 762if (excludeFile == null) { 763 return excludeServers; 764} else { @@ -829,184 +829,183 @@ 821 * @throws IOException 822 */ 823 private ArrayList<String> getServers(Admin admin) throws IOException { -824ArrayList<ServerName> serverInfo = -825new ArrayList<ServerName>(admin.getClusterStatus().getServers()); -826ArrayList<String> regionServers = new ArrayList<String>(serverInfo.size()); -827for (ServerName server : serverInfo) { -828 regionServers.add(server.getServerName()); -829} -830return regionServers; -831 } -832 -833 private void deleteFile(String filename) { -834File f = new File(filename); -835if (f.exists()) { -836 f.delete(); -837} -838 } -839 -840 /** -841 * Tries to scan a row from passed region -842 * @param admin -843 * @param region -844 * @throws IOException -845 */ -846 private void isSuccessfulScan(Admin admin, HRegionInfo region) throws IOException { -847Scan scan = new Scan(region.getStartKey()); -848scan.setBatch(1); -849scan.setCaching(1); -850scan.setFilter(new FirstKeyOnlyFilter()); -851try { -852 Table table = admin.getConnection().getTable(region.getTable()); -853 try { -854ResultScanner scanner = table.getScanner(scan); -855try { -856 scanner.next(); -857} finally { -858 scanner.close(); -859} -860 } finally { -861table.close(); -862 } -863} catch (IOException e) { -864 LOG.error("Could not scan region:" + region.getEncodedName(), e); -865 throw e; -866} -867 } -868 -869 /** -870 * Returns true if passed region is 
still on serverName when we look at hbase:meta. -871 * @param admin -872 * @param region -873 * @param serverName -874 * @return true if region is hosted on serverName otherwise false -875 * @throws IOException -876 */ -877 private boolean isSameServer(Admin admin, HRegionInfo region, String serverName) -878 throws IOException { -879String serverForRegion = getServerNameForRegion(admin, region); -880if (serverForRegion != null
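The `RegionMover` code above fans region moves out over `Executors.newFixedThreadPool` and collects `Future<Boolean>` acknowledgements per task. A self-contained sketch of that submit-then-collect pattern; the "region" strings and the success predicate are illustrative stand-ins for the real move RPCs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MoveAckSketch {
  // Submit one task per "region" and count successful acks, the way the
  // mover collects Future<Boolean> results from its fixed-size pool.
  static int moveAll(List<String> regions, int maxThreads) {
    ExecutorService pool = Executors.newFixedThreadPool(maxThreads);
    try {
      List<Future<Boolean>> taskList = new ArrayList<>();
      for (String region : regions) {
        // Stand-in for a real region move; succeeds for non-empty names.
        taskList.add(pool.submit(() -> !region.isEmpty()));
      }
      int acked = 0;
      for (Future<Boolean> f : taskList) {
        try {
          if (f.get()) {
            acked++;
          }
        } catch (InterruptedException | ExecutionException e) {
          throw new RuntimeException(e); // surface task failure to the caller
        }
      }
      return acked;
    } finally {
      pool.shutdown();
    }
  }
}
```

Bounding the pool at `maxthreads` is what keeps a drain of a large server from opening one RPC per region at once.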
[50/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/acid-semantics.html -- diff --git a/acid-semantics.html b/acid-semantics.html index d0b08af..a0c5972 100644 --- a/acid-semantics.html +++ b/acid-semantics.html @@ -7,7 +7,7 @@ - + Apache HBase Apache HBase (TM) ACID Properties @@ -262,9 +262,9 @@ - - - +https://easychair.org/cfp/hbasecon2017; id="bannerLeft"> + + @@ -618,7 +618,7 @@ under the License. --> https://www.apache.org/;>The Apache Software Foundation. All rights reserved. - Last Published: 2017-02-17 + Last Published: 2017-03-21
[25/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html b/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html index 16beebf..24b2bd0 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/ConnectionFactory.html @@ -28,23 +28,23 @@ 020 021import java.io.IOException; 022import java.lang.reflect.Constructor; -023import java.util.concurrent.ExecutorService; -024 -025import org.apache.hadoop.conf.Configuration; -026import org.apache.hadoop.hbase.HBaseConfiguration; -027import org.apache.hadoop.hbase.classification.InterfaceAudience; -028import org.apache.hadoop.hbase.classification.InterfaceStability; -029import org.apache.hadoop.hbase.security.User; -030import org.apache.hadoop.hbase.security.UserProvider; -031import org.apache.hadoop.hbase.util.ReflectionUtils; -032 +023import java.util.concurrent.CompletableFuture; +024import java.util.concurrent.ExecutorService; +025 +026import org.apache.hadoop.conf.Configuration; +027import org.apache.hadoop.hbase.HBaseConfiguration; +028import org.apache.hadoop.hbase.classification.InterfaceAudience; +029import org.apache.hadoop.hbase.classification.InterfaceStability; +030import org.apache.hadoop.hbase.security.User; +031import org.apache.hadoop.hbase.security.UserProvider; +032import org.apache.hadoop.hbase.util.ReflectionUtils; 033 034/** -035 * A non-instantiable class that manages creation of {@link Connection}s. -036 * Managing the lifecycle of the {@link Connection}s to the cluster is the responsibility of -037 * the caller. -038 * From a {@link Connection}, {@link Table} implementations are retrieved -039 * with {@link Connection#getTable(TableName)}. Example: +035 * A non-instantiable class that manages creation of {@link Connection}s. 
Managing the lifecycle of +036 * the {@link Connection}s to the cluster is the responsibility of the caller. From a +037 * {@link Connection}, {@link Table} implementations are retrieved with +038 * {@link Connection#getTable(org.apache.hadoop.hbase.TableName)}. Example: +039 * 040 * pre 041 * Connection connection = ConnectionFactory.createConnection(config); 042 * Table table = connection.getTable(TableName.valueOf("table1")); @@ -58,243 +58,250 @@ 050 * 051 * Similarly, {@link Connection} also returns {@link Admin} and {@link RegionLocator} 052 * implementations. -053 * -054 * @see Connection -055 * @since 0.99.0 -056 */ -057@InterfaceAudience.Public -058@InterfaceStability.Evolving -059public class ConnectionFactory { -060 -061 public static final String HBASE_CLIENT_ASYNC_CONNECTION_IMPL = -062 "hbase.client.async.connection.impl"; -063 -064 /** No public c.tors */ -065 protected ConnectionFactory() { -066 } -067 -068 /** -069 * Create a new Connection instance using default HBaseConfiguration. Connection -070 * encapsulates all housekeeping for a connection to the cluster. All tables and interfaces -071 * created from returned connection share zookeeper connection, meta cache, and connections -072 * to region servers and masters. -073 * br -074 * The caller is responsible for calling {@link Connection#close()} on the returned -075 * connection instance. -076 * -077 * Typical usage: -078 * pre -079 * Connection connection = ConnectionFactory.createConnection(); -080 * Table table = connection.getTable(TableName.valueOf("mytable")); -081 * try { -082 * table.get(...); -083 * ... 
-084 * } finally { -085 * table.close(); -086 * connection.close(); -087 * } -088 * /pre -089 * -090 * @return Connection object for codeconf/code -091 */ -092 public static Connection createConnection() throws IOException { -093return createConnection(HBaseConfiguration.create(), null, null); -094 } -095 -096 /** -097 * Create a new Connection instance using the passed codeconf/code instance. Connection -098 * encapsulates all housekeeping for a connection to the cluster. All tables and interfaces -099 * created from returned connection share zookeeper connection, meta cache, and connections -100 * to region servers and masters. -101 * br -102 * The caller is responsible for calling {@link Connection#close()} on the returned -103 * connection instance. -104 * -105 * Typical usage: -106 * pre -107 * Connection connection = ConnectionFactory.createConnection(conf); -108 * Table table = connection.getTable(TableName.valueOf("mytable")); -109 * try { -110 * table.get(...); -111 * ... -112 * } finally { -113 * table.close(); -114 * connection.close(); -115 * } -116 * /pre -117 * -118 * @param conf configuration -119 * @return Connection object
[48/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apache_hbase_reference_guide.pdfmarks -- diff --git a/apache_hbase_reference_guide.pdfmarks b/apache_hbase_reference_guide.pdfmarks index f8eba53..9d8495c 100644 --- a/apache_hbase_reference_guide.pdfmarks +++ b/apache_hbase_reference_guide.pdfmarks @@ -2,8 +2,8 @@ /Author (Apache HBase Team) /Subject () /Keywords () - /ModDate (D:20170217144957) - /CreationDate (D:20170217144957) + /ModDate (D:20170321142451) + /CreationDate (D:20170321142451) /Creator (Asciidoctor PDF 1.5.0.alpha.6, based on Prawn 1.2.1) /Producer () /DOCINFO pdfmark http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/allclasses-frame.html -- diff --git a/apidocs/allclasses-frame.html b/apidocs/allclasses-frame.html index efd05d0..72c3a4e 100644 --- a/apidocs/allclasses-frame.html +++ b/apidocs/allclasses-frame.html @@ -249,6 +249,8 @@ RawInteger RawLong RawScanResultConsumer +RawScanResultConsumer.ScanController +RawScanResultConsumer.ScanResumer RawShort RawString RawStringFixedLength @@ -309,6 +311,7 @@ ServerName ServerNotRunningYetException ServerTooBusyException +ShortCircuitMasterConnection SimpleByteRange SimpleMutableByteRange SimplePositionedByteRange http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/allclasses-noframe.html -- diff --git a/apidocs/allclasses-noframe.html b/apidocs/allclasses-noframe.html index c64c1a8..6593100 100644 --- a/apidocs/allclasses-noframe.html +++ b/apidocs/allclasses-noframe.html @@ -249,6 +249,8 @@ RawInteger RawLong RawScanResultConsumer +RawScanResultConsumer.ScanController +RawScanResultConsumer.ScanResumer RawShort RawString RawStringFixedLength @@ -309,6 +311,7 @@ ServerName ServerNotRunningYetException ServerTooBusyException +ShortCircuitMasterConnection SimpleByteRange SimpleMutableByteRange SimplePositionedByteRange http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/constant-values.html -- diff --git 
a/apidocs/constant-values.html b/apidocs/constant-values.html index 178d0a2..040fc68 100644 --- a/apidocs/constant-values.html +++ b/apidocs/constant-values.html @@ -119,6 +119,13 @@ "Replication" + + +publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String +SPARK +"Spark" + + publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String @@ -3794,26 +3801,26 @@ "create.table" + + +publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String +IGNORE_UNMATCHED_CF_CONF_KEY +"ignore.unmatched.families" + + publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String MAX_FILES_PER_REGION_PER_FAMILY "hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily" - + publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String NAME "completebulkload" - - - -publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String -SILENCE_CONF_KEY -"ignore.unmatched.families" - @@ -4156,6 +4163,39 @@ + + +org.apache.hadoop.hbase.mapreduce.WALPlayer + +Modifier and Type +Constant Field +Value + + + + + +publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String +BULK_OUTPUT_CONF_KEY +"wal.bulk.output" + + + + +publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String +TABLE_MAP_KEY +"wal.input.tablesmap" + + + + +publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class 
or interface in java.lang">String +TABLES_KEY +"wal.input.tables" + + + + @@ -4714,10 +4754,10 @@ "default" - + publicstaticfinalhttp://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true; title="class or interface in java.lang">String -NAMESPACEDESC_PROP_GROUP +NAMESPACE_DESC_PROP_GROUP "hbase.rsgroup.name" http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/deprecated-list.html -- diff --git a/apidocs/deprecated-list.html b/apidocs/deprecated-list.html index eabb570..987f686 100644 ---
[27/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html b/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html index 45cd59a..66b1cdb 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/HTableDescriptor.html @@ -48,9 +48,9 @@ 040import org.apache.hadoop.hbase.client.Durability; 041import org.apache.hadoop.hbase.client.RegionReplicaUtil; 042import org.apache.hadoop.hbase.exceptions.DeserializationException; -043import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; -044import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema; -045import org.apache.hadoop.hbase.security.User; +043import org.apache.hadoop.hbase.security.User; +044import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +045import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.TableSchema; 046import org.apache.hadoop.hbase.util.Bytes; 047 048/** @@ -72,1519 +72,1516 @@ 064 * includes values like IS_ROOT, IS_META, DEFERRED_LOG_FLUSH, SPLIT_POLICY, 065 * MAX_FILE_SIZE, READONLY, MEMSTORE_FLUSHSIZE etc... 066 */ -067 private final MapBytes, Bytes values = -068 new HashMapBytes, Bytes(); -069 -070 /** -071 * A map which holds the configuration specific to the table. -072 * The keys of the map have the same names as config keys and override the defaults with -073 * table-specific settings. Example usage may be for compactions, etc. 
-074 */ -075 private final MapString, String configuration = new HashMapString, String(); -076 -077 public static final String SPLIT_POLICY = "SPLIT_POLICY"; -078 -079 /** -080 * emINTERNAL/em Used by HBase Shell interface to access this metadata -081 * attribute which denotes the maximum size of the store file after which -082 * a region split occurs -083 * -084 * @see #getMaxFileSize() -085 */ -086 public static final String MAX_FILESIZE = "MAX_FILESIZE"; -087 private static final Bytes MAX_FILESIZE_KEY = -088 new Bytes(Bytes.toBytes(MAX_FILESIZE)); -089 -090 public static final String OWNER = "OWNER"; -091 public static final Bytes OWNER_KEY = -092 new Bytes(Bytes.toBytes(OWNER)); -093 -094 /** -095 * emINTERNAL/em Used by rest interface to access this metadata -096 * attribute which denotes if the table is Read Only -097 * -098 * @see #isReadOnly() -099 */ -100 public static final String READONLY = "READONLY"; -101 private static final Bytes READONLY_KEY = -102 new Bytes(Bytes.toBytes(READONLY)); -103 -104 /** -105 * emINTERNAL/em Used by HBase Shell interface to access this metadata -106 * attribute which denotes if the table is compaction enabled -107 * -108 * @see #isCompactionEnabled() -109 */ -110 public static final String COMPACTION_ENABLED = "COMPACTION_ENABLED"; -111 private static final Bytes COMPACTION_ENABLED_KEY = -112 new Bytes(Bytes.toBytes(COMPACTION_ENABLED)); -113 -114 /** -115 * emINTERNAL/em Used by HBase Shell interface to access this metadata -116 * attribute which represents the maximum size of the memstore after which -117 * its contents are flushed onto the disk -118 * -119 * @see #getMemStoreFlushSize() -120 */ -121 public static final String MEMSTORE_FLUSHSIZE = "MEMSTORE_FLUSHSIZE"; -122 private static final Bytes MEMSTORE_FLUSHSIZE_KEY = -123 new Bytes(Bytes.toBytes(MEMSTORE_FLUSHSIZE)); -124 -125 public static final String FLUSH_POLICY = "FLUSH_POLICY"; -126 -127 /** -128 * emINTERNAL/em Used by rest interface to access this 
metadata -129 * attribute which denotes if the table is a -ROOT- region or not -130 * -131 * @see #isRootRegion() -132 */ -133 public static final String IS_ROOT = "IS_ROOT"; -134 private static final Bytes IS_ROOT_KEY = -135 new Bytes(Bytes.toBytes(IS_ROOT)); -136 -137 /** -138 * emINTERNAL/em Used by rest interface to access this metadata -139 * attribute which denotes if it is a catalog table, either -140 * code hbase:meta /code or code -ROOT- /code -141 * -142 * @see #isMetaRegion() -143 */ -144 public static final String IS_META = "IS_META"; -145 private static final Bytes IS_META_KEY = -146 new Bytes(Bytes.toBytes(IS_META)); -147 -148 /** -149 * emINTERNAL/em Used by HBase Shell interface to access this metadata -150 * attribute which denotes if the deferred log flush option is enabled. -151 * @deprecated Use {@link #DURABILITY} instead. -152 */ -153 @Deprecated -154 public static final String DEFERRED_LOG_FLUSH = "DEFERRED_LOG_FLUSH"; -155 @Deprecated -156 private static final Bytes DEFERRED_LOG_FLUSH_KEY = -157 new Bytes(Bytes.toBytes(DEFERRED_LOG_FLUSH)); -158 -159 /** -160 *
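`HTableDescriptor` above keeps per-table metadata (MAX_FILESIZE, READONLY, MEMSTORE_FLUSHSIZE, …) in a values map keyed by attribute name, with typed accessors such as `getMaxFileSize()` layered on top. A stdlib sketch of that attribute-map pattern; the class name and string encoding here are illustrative, since the real class stores `Bytes` keys and values.

```java
import java.util.HashMap;
import java.util.Map;

public class TableAttrSketch {
  // Like the descriptor's values map: attribute names map to serialized
  // values, and typed getters parse them on demand with a default fallback.
  private final Map<String, String> values = new HashMap<>();

  void set(String key, String value) {
    values.put(key, value);
  }

  long getLong(String key, long defaultValue) {
    String v = values.get(key);
    return v == null ? defaultValue : Long.parseLong(v);
  }
}
```

This layout is why the HBase Shell can set arbitrary attributes by name while the server still reads them through typed accessors.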
[11/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html index 73df1db..14e7609 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/Import.html @@ -479,7 +479,7 @@ 471 } 472 473 private static ArrayListbyte[] toQuotedByteArrays(String... stringArgs) { -474ArrayListbyte[] quotedArgs = new ArrayListbyte[](); +474ArrayListbyte[] quotedArgs = new ArrayList(); 475for (String stringArg : stringArgs) { 476 // all the filters' instantiation methods expected quoted args since they are coming from 477 // the shell, so add them here, though it shouldn't really be needed :-/ @@ -544,7 +544,7 @@ 536 String[] allMappings = allMappingsPropVal.split(","); 537 for (String mapping: allMappings) { 538if(cfRenameMap == null) { -539cfRenameMap = new TreeMapbyte[],byte[](Bytes.BYTES_COMPARATOR); +539cfRenameMap = new TreeMap(Bytes.BYTES_COMPARATOR); 540} 541String [] srcAndDest = mapping.split(":"); 542if(srcAndDest.length != 2) { http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html index 22656ce..69b099e 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/ImportTsv.html @@ -257,7 +257,7 @@ 249public ParsedLine parse(byte[] lineBytes, int length) 250throws BadTsvLineException { 251 // Enumerate separator offsets -252 ArrayListInteger tabOffsets = new ArrayListInteger(maxColumnCount); +252 ArrayListInteger tabOffsets = new ArrayList(maxColumnCount); 253 for (int i = 0; i length; i++) { 254if 
(lineBytes[i] == separatorByte) { 255 tabOffsets.add(i); @@ -456,7 +456,7 @@ 448 + " are less than row key position."); 449} 450 } -451 return new PairInteger, Integer(startPos, endPos - startPos + 1); +451 return new Pair(startPos, endPos - startPos + 1); 452} 453 } 454 @@ -529,7 +529,7 @@ 521boolean noStrict = conf.getBoolean(NO_STRICT_COL_FAMILY, false); 522// if no.strict is false then check column family 523if(!noStrict) { -524 ArrayListString unmatchedFamilies = new ArrayListString(); +524 ArrayListString unmatchedFamilies = new ArrayList(); 525 SetString cfSet = getColumnFamilies(columns); 526 HTableDescriptor tDesc = table.getTableDescriptor(); 527 for (String cf : cfSet) { @@ -538,7 +538,7 @@ 530} 531 } 532 if(unmatchedFamilies.size() 0) { -533ArrayListString familyNames = new ArrayListString(); +533ArrayListString familyNames = new ArrayList(); 534for (HColumnDescriptor family : table.getTableDescriptor().getFamilies()) { 535 familyNames.add(family.getNameAsString()); 536} @@ -634,7 +634,7 @@ 626 } 627 628 private static SetString getColumnFamilies(String[] columns) { -629SetString cfSet = new HashSetString(); +629SetString cfSet = new HashSet(); 630for (String aColumn : columns) { 631 if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn) 632 || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn) http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html index 807f600..560a84d 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.html @@ -48,7 +48,7 @@ 040 protected void reduce(ImmutableBytesWritable row, java.lang.IterableKeyValue kvs, 041 org.apache.hadoop.mapreduce.ReducerImmutableBytesWritable, 
KeyValue, ImmutableBytesWritable, KeyValue.Context context) 042 throws java.io.IOException, InterruptedException { -043TreeSet<KeyValue> map = new TreeSet<KeyValue>(CellComparator.COMPARATOR); +043TreeSet<KeyValue> map = new TreeSet<>(CellComparator.COMPARATOR); 044
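The `ImportTsv` `TsvParser` hunk above enumerates separator offsets in a line before slicing it into columns. A self-contained sketch of that scan; the method name is illustrative and the real parser additionally validates column counts and row-key positions.

```java
import java.util.ArrayList;
import java.util.List;

public class SeparatorScanSketch {
  // Record the offset of every separator byte in the first `length` bytes,
  // as TsvParser.parse does before splitting the line into columns.
  static List<Integer> separatorOffsets(byte[] lineBytes, int length, byte separatorByte) {
    List<Integer> offsets = new ArrayList<>();
    for (int i = 0; i < length; i++) {
      if (lineBytes[i] == separatorByte) {
        offsets.add(i);
      }
    }
    return offsets;
  }
}
```

Collecting offsets first lets the parser hand out `(start, length)` column slices without copying the line.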
[15/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html index e5afc32..32c4a50 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.RowRange.html @@ -124,309 +124,309 @@ 116 } else { 117if (range.contains(rowArr, offset, length)) { 118 currentReturnCode = ReturnCode.INCLUDE; -119} else currentReturnCode = ReturnCode.SEEK_NEXT_USING_HINT; -120 } -121} else { -122 currentReturnCode = ReturnCode.INCLUDE; -123} -124return false; -125 } -126 -127 @Override -128 public ReturnCode filterKeyValue(Cell ignored) { -129return currentReturnCode; -130 } -131 -132 @Override -133 public Cell getNextCellHint(Cell currentKV) { -134// skip to the next range's start row -135return CellUtil.createFirstOnRow(range.startRow, 0, -136(short) range.startRow.length); -137 } -138 -139 /** -140 * @return The filter serialized using pb -141 */ -142 public byte[] toByteArray() { -143 FilterProtos.MultiRowRangeFilter.Builder builder = FilterProtos.MultiRowRangeFilter -144.newBuilder(); -145for (RowRange range : rangeList) { -146 if (range != null) { -147FilterProtos.RowRange.Builder rangebuilder = FilterProtos.RowRange.newBuilder(); -148if (range.startRow != null) -149 rangebuilder.setStartRow(UnsafeByteOperations.unsafeWrap(range.startRow)); -150 rangebuilder.setStartRowInclusive(range.startRowInclusive); -151if (range.stopRow != null) -152 rangebuilder.setStopRow(UnsafeByteOperations.unsafeWrap(range.stopRow)); -153 rangebuilder.setStopRowInclusive(range.stopRowInclusive); -154range.isScan = Bytes.equals(range.startRow, range.stopRow) ? 
1 : 0; -155 builder.addRowRangeList(rangebuilder.build()); -156 } -157} -158return builder.build().toByteArray(); -159 } -160 -161 /** -162 * @param pbBytes A pb serialized instance -163 * @return An instance of MultiRowRangeFilter -164 * @throws org.apache.hadoop.hbase.exceptions.DeserializationException -165 */ -166 public static MultiRowRangeFilter parseFrom(final byte[] pbBytes) -167 throws DeserializationException { -168FilterProtos.MultiRowRangeFilter proto; -169try { -170 proto = FilterProtos.MultiRowRangeFilter.parseFrom(pbBytes); -171} catch (InvalidProtocolBufferException e) { -172 throw new DeserializationException(e); -173} -174int length = proto.getRowRangeListCount(); -175ListFilterProtos.RowRange rangeProtos = proto.getRowRangeListList(); -176ListRowRange rangeList = new ArrayListRowRange(length); -177for (FilterProtos.RowRange rangeProto : rangeProtos) { -178 RowRange range = new RowRange(rangeProto.hasStartRow() ? rangeProto.getStartRow() -179 .toByteArray() : null, rangeProto.getStartRowInclusive(), rangeProto.hasStopRow() ? -180 rangeProto.getStopRow().toByteArray() : null, rangeProto.getStopRowInclusive()); -181 rangeList.add(range); -182} -183return new MultiRowRangeFilter(rangeList); -184 } -185 -186 /** -187 * @param o the filter to compare -188 * @return true if and only if the fields of the filter that are serialized are equal to the -189 * corresponding fields in other. Used for testing. 
-190 */ -191 boolean areSerializedFieldsEqual(Filter o) { -192if (o == this) -193 return true; -194if (!(o instanceof MultiRowRangeFilter)) -195 return false; -196 -197MultiRowRangeFilter other = (MultiRowRangeFilter) o; -198if (this.rangeList.size() != other.rangeList.size()) -199 return false; -200for (int i = 0; i rangeList.size(); ++i) { -201 RowRange thisRange = this.rangeList.get(i); -202 RowRange otherRange = other.rangeList.get(i); -203 if (!(Bytes.equals(thisRange.startRow, otherRange.startRow) Bytes.equals( -204 thisRange.stopRow, otherRange.stopRow) (thisRange.startRowInclusive == -205 otherRange.startRowInclusive) (thisRange.stopRowInclusive == -206 otherRange.stopRowInclusive))) { -207return false; -208 } -209} -210return true; -211 } -212 -213 /** -214 * calculate the position where the row key in the ranges list. -215 * -216 * @param rowKey the row key to calculate -217 * @return index the position of the row key -218 */ -219 private int getNextRangeIndex(byte[] rowKey) { -220RowRange temp = new RowRange(rowKey, true, null, true); -221int index =
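`getNextRangeIndex` above locates where a row key falls in the filter's sorted range list so the filter can emit a seek hint. A simplified stdlib analogue using `Collections.binarySearch` over start keys; the `String` keys stand in for byte-array row keys, and the real method also handles keys before the first range and inclusive/exclusive bounds.

```java
import java.util.Collections;
import java.util.List;

public class RangeIndexSketch {
  // Given range start keys sorted ascending, find the index of the range a
  // row key belongs to, mirroring the filter's binary-search lookup.
  static int nextRangeIndex(List<String> sortedStartKeys, String rowKey) {
    int idx = Collections.binarySearch(sortedStartKeys, rowKey);
    if (idx >= 0) {
      return idx; // rowKey is exactly a range start
    }
    int insertionPoint = -idx - 1; // index of first start key greater than rowKey
    return Math.max(insertionPoint - 1, 0); // range whose start precedes rowKey
  }
}
```

The negative-return convention of `binarySearch` (`-(insertionPoint) - 1`) is what the `-idx - 1` decode relies on.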
[35/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
index 2b8632c..3d377a1 100644
--- a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
+++ b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.ReturnCode.html
@@ -107,115 +107,115 @@
Filter.ReturnCode
-ColumnPrefixFilter.filterColumn(Cell cell)
+MultipleColumnPrefixFilter.filterColumn(Cell cell)
Filter.ReturnCode
-MultipleColumnPrefixFilter.filterColumn(Cell cell)
+ColumnPrefixFilter.filterColumn(Cell cell)
Filter.ReturnCode
-PrefixFilter.filterKeyValue(Cell v)
+FirstKeyOnlyFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-PageFilter.filterKeyValue(Cell ignored)
+FamilyFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-KeyOnlyFilter.filterKeyValue(Cell ignored)
+ColumnPaginationFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-WhileMatchFilter.filterKeyValue(Cell v)
+SingleColumnValueFilter.filterKeyValue(Cell c)
Filter.ReturnCode
-QualifierFilter.filterKeyValue(Cell v)
+PageFilter.filterKeyValue(Cell ignored)
Filter.ReturnCode
-FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cell v)
-Deprecated.
-
+RowFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-SkipFilter.filterKeyValue(Cell v)
+ColumnRangeFilter.filterKeyValue(Cell kv)
Filter.ReturnCode
-ColumnPrefixFilter.filterKeyValue(Cell cell)
+ColumnCountGetFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-ColumnCountGetFilter.filterKeyValue(Cell v)
+FuzzyRowFilter.filterKeyValue(Cell c)
Filter.ReturnCode
-DependentColumnFilter.filterKeyValue(Cell c)
+ValueFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-MultipleColumnPrefixFilter.filterKeyValue(Cell kv)
+DependentColumnFilter.filterKeyValue(Cell c)
+Filter.ReturnCode
+InclusiveStopFilter.filterKeyValue(Cell v)
+
+
abstract Filter.ReturnCode
Filter.filterKeyValue(Cell v)
A way to filter based on the column family, column qualifier and/or the column value.
-
-Filter.ReturnCode
-FamilyFilter.filterKeyValue(Cell v)
-
Filter.ReturnCode
FilterList.filterKeyValue(Cell c)
Filter.ReturnCode
-FirstKeyOnlyFilter.filterKeyValue(Cell v)
+PrefixFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-RowFilter.filterKeyValue(Cell v)
+RandomRowFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-ValueFilter.filterKeyValue(Cell v)
+MultipleColumnPrefixFilter.filterKeyValue(Cell kv)
Filter.ReturnCode
-MultiRowRangeFilter.filterKeyValue(Cell ignored)
+WhileMatchFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-InclusiveStopFilter.filterKeyValue(Cell v)
+KeyOnlyFilter.filterKeyValue(Cell ignored)
Filter.ReturnCode
-FuzzyRowFilter.filterKeyValue(Cell c)
+SkipFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-SingleColumnValueFilter.filterKeyValue(Cell c)
+TimestampsFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-RandomRowFilter.filterKeyValue(Cell v)
+QualifierFilter.filterKeyValue(Cell v)
Filter.ReturnCode
-ColumnRangeFilter.filterKeyValue(Cell kv)
+ColumnPrefixFilter.filterKeyValue(Cell cell)
Filter.ReturnCode
-ColumnPaginationFilter.filterKeyValue(Cell v)
+MultiRowRangeFilter.filterKeyValue(Cell ignored)
Filter.ReturnCode
-TimestampsFilter.filterKeyValue(Cell v)
+FirstKeyValueMatchingQualifiersFilter.filterKeyValue(Cell v)
+Deprecated.
+
static Filter.ReturnCode

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
index 4222c2b..f3179e3 100644
--- a/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
+++ b/apidocs/org/apache/hadoop/hbase/filter/class-use/Filter.html
@@ -160,15 +160,15 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Scan.setFilter(Filter filter)
-Get
-Get.setFilter(Filter filter)
-
-
Query
Query.setFilter(Filter filter)
Apply the specified server-side filter when performing the Query.
+
+Get
+Get.setFilter(Filter filter)
+
@@ -394,75 +394,75 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
static Filter
-PrefixFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
+FirstKeyOnlyFilter.createFilterFromArguments(ArrayList<byte[]> filterArguments)
static Filter
[38/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/client/Scan.html -- diff --git a/apidocs/org/apache/hadoop/hbase/client/Scan.html b/apidocs/org/apache/hadoop/hbase/client/Scan.html index 767c23c..a06d3ee 100644 --- a/apidocs/org/apache/hadoop/hbase/client/Scan.html +++ b/apidocs/org/apache/hadoop/hbase/client/Scan.html @@ -18,7 +18,7 @@ catch(err) { } //--> -var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":42,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":42,"i61":42,"i62":42,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10}; +var methods = {"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":42,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":42,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":10,"i55":10,"i56":10,"i57":10,"i58":10,"i59":10,"i60":42,"i61":42,"i62":42,"i63":10,"i64":10,"i65":10,"i66":10,"i67":10,"i68":10,"i69":10,"i70":10}; var tabs = {65535:["t0","All Methods"],2:["t2","Instance Methods"],8:["t4","Concrete Methods"],32:["t6","Deprecated Methods"]}; var altColor = "altColor"; var rowColor = "rowColor"; @@ -400,7 +400,13 @@ extends org.apache.hadoop.hbase.client.metrics.ScanMetrics -getScanMetrics() +getScanMetrics() 
+Deprecated.
+Use ResultScanner.getScanMetrics() instead. And notice that, please do not
+ use this method and ResultScanner.getScanMetrics() together, the metrics
+ will be messed up.
+
+
byte[]

@@ -472,8 +478,8 @@ extends
Scan
setAllowPartialResults(boolean allowPartialResults)
-Setting whether the caller wants to see the partial results that may be returned from the
- server.
+Setting whether the caller wants to see the partial results when server returns
+ less-than-expected cells.

@@ -496,7 +502,7 @@ extends
Scan
setBatch(int batch)
-Set the maximum number of values to return for each call to next().
+Set the maximum number of cells to return for each call to next().

@@ -816,7 +822,7 @@ public static final String
HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
-public static final String HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
+public static final String HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
Parameter name for client scanner sync/async prefetch toggle. When using async scanner, prefetching data from the server is done at the background. The parameter currently won't have any effect in the case that the user has set
@@ -833,7 +839,7 @@ public static final String
DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
-public static final boolean DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
+public static final boolean DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH
Default value of HBASE_CLIENT_SCANNER_ASYNC_PREFETCH.
See Also:
@@ -855,7 +861,7 @@ public static final String
Scan
-public Scan()
+public Scan()
Create a Scan operation across all rows.
@@ -866,7 +872,7 @@ public static final String
Scan
@Deprecated
-public Scan(byte[] startRow,
+public Scan(byte[] startRow,
            Filter filter)
Deprecated. use new Scan().withStartRow(startRow).setFilter(filter) instead.
@@ -878,7 +884,7 @@ public
Scan
@Deprecated
-public Scan(byte[] startRow)
+public Scan(byte[] startRow)
Deprecated. use new Scan().withStartRow(startRow) instead.
Create a Scan operation starting at the specified row.
@@ -897,7 +903,7 @@ public
Scan
@Deprecated
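The deprecation notes in this hunk migrate callers from the `Scan(byte[], ...)` constructors to chained setters on a default-constructed `Scan`. As an illustration of that fluent pattern only, here is a toy stand-in class (hypothetical, not `org.apache.hadoop.hbase.client.Scan`):

```java
public class FluentScanSketch {
    // Toy stand-in echoing the shape of Scan's fluent setters.
    static class Scan {
        private byte[] startRow = new byte[0];
        private byte[] stopRow = new byte[0];
        private String filter; // stand-in for a Filter instance

        Scan withStartRow(byte[] startRow) { this.startRow = startRow; return this; }
        Scan withStopRow(byte[] stopRow) { this.stopRow = stopRow; return this; }
        Scan setFilter(String filter) { this.filter = filter; return this; }

        @Override public String toString() {
            return new String(startRow) + ".." + new String(stopRow) + " filter=" + filter;
        }
    }

    public static void main(String[] args) {
        // Old style (deprecated per the diff above): new Scan(startRow, filter).
        // New style: chain setters, so any combination of bounds/filter is expressible
        // without a constructor per combination.
        Scan scan = new Scan().withStartRow("row-a".getBytes())
                              .withStopRow("row-z".getBytes())
                              .setFilter("PrefixFilter(row-)");
        System.out.println(scan);
    }
}
```

The design point: fluent setters avoid a combinatorial explosion of constructors and let each returned `this` carry partial configuration.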
[43/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html b/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
index c89bc9e..490b465 100644
--- a/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
+++ b/apidocs/org/apache/hadoop/hbase/KeepDeletedCells.html
@@ -263,7 +263,7 @@ the order they are declared.
values
-public static KeepDeletedCells[] values()
+public static KeepDeletedCells[] values()
Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows:
@@ -283,7 +283,7 @@ for (KeepDeletedCells c : KeepDeletedCells.values())
valueOf
-public static KeepDeletedCells valueOf(String name)
+public static KeepDeletedCells valueOf(String name)
Returns the enum constant of this type with the specified name. The string must match exactly an identifier used to declare an enum constant in this type. (Extraneous whitespace characters are

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html b/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
index a949e6e..c8dbfda 100644
--- a/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
+++ b/apidocs/org/apache/hadoop/hbase/LocalHBaseCluster.html
@@ -356,7 +356,7 @@ extends Object
LOCAL
-public static final String LOCAL
+public static final String LOCAL
local mode
See Also:
@@ -370,7 +370,7 @@ extends Object
LOCAL_COLON
-public static final String LOCAL_COLON
+public static final String LOCAL_COLON
'local:'
See Also:
@@ -392,7 +392,7 @@ extends Object
LocalHBaseCluster
-public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf)
+public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf)
                  throws IOException
Constructor.
@@ -409,7 +409,7 @@ extends Object
LocalHBaseCluster
-public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf,
+public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf,
                         int noRegionServers)
                  throws IOException
Constructor.
@@ -429,7 +429,7 @@ extends Object
LocalHBaseCluster
-public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf,
+public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf,
                         int noMasters,
                         int noRegionServers)
                  throws IOException
@@ -451,7 +451,7 @@ extends Object
LocalHBaseCluster
-public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf,
+public LocalHBaseCluster(org.apache.hadoop.conf.Configuration conf,
                         int noMasters,
                         int noRegionServers,
                         Class<? extends org.apache.hadoop.hbase.master.HMaster> masterClass,
@@ -485,7 +485,7 @@ extends Object
addRegionServer
-public org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread addRegionServer()
+public org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread addRegionServer()
[47/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/index-all.html
--
diff --git a/apidocs/index-all.html b/apidocs/index-all.html
index d3283c2..31012a5 100644
--- a/apidocs/index-all.html
+++ b/apidocs/index-all.html
@@ -84,6 +84,8 @@
abort a procedure
+abortProcedure(RpcController, MasterProtos.AbortProcedureRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
abortProcedureAsync(long, boolean) - Method in interface org.apache.hadoop.hbase.client.Admin
Abort a procedure but does not block and wait for it be completely removed.
@@ -156,7 +158,7 @@
addAllServers(Collection<Address>) - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
-Adds a group of servers.
+Adds the given servers to the group.
addAllTables(Collection<TableName>) - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
@@ -208,6 +210,8 @@
Get the column from the specified family with the specified qualifier.
+addColumn(RpcController, MasterProtos.AddColumnRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
addColumnFamily(TableName, HColumnDescriptor) - Method in interface org.apache.hadoop.hbase.client.Admin
Add a column family to an existing table.
@@ -384,6 +388,8 @@
Add a new replication peer for replicating data to slave cluster
+addReplicationPeer(RpcController, ReplicationProtos.AddReplicationPeerRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
Address - Class in org.apache.hadoop.hbase.net
An immutable type to hold a hostname and port combo, like an Endpoint
@@ -393,7 +399,7 @@
addServer(Address) - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
-Adds the server to the group.
+Adds the given server to the group.
addTable(TableName) - Method in class org.apache.hadoop.hbase.rsgroup.RSGroupInfo
@@ -516,6 +522,8 @@
assign(byte[]) - Method in interface org.apache.hadoop.hbase.client.Admin
+assignRegion(RpcController, MasterProtos.AssignRegionRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
AsyncAdmin - Interface in org.apache.hadoop.hbase.client
The asynchronous administrative API for HBase.
@@ -572,6 +580,8 @@
BadAuthException(String, Throwable) - Constructor for exception org.apache.hadoop.hbase.ipc.BadAuthException
+balance(RpcController, MasterProtos.BalanceRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
balancer() - Method in interface org.apache.hadoop.hbase.client.Admin
Invoke the balancer.
@@ -803,6 +813,8 @@
BULK_OUTPUT_CONF_KEY - Static variable in class org.apache.hadoop.hbase.mapreduce.ImportTsv
+BULK_OUTPUT_CONF_KEY - Static variable in class org.apache.hadoop.hbase.mapreduce.WALPlayer
+
BULKLOAD_DIR_NAME - Static variable in class org.apache.hadoop.hbase.mob.MobConstants
BULKLOAD_MAX_RETRIES_NUMBER - Static variable in class org.apache.hadoop.hbase.HConstants
@@ -1263,6 +1275,8 @@
Closes the scanner and releases any resources it has allocated
+close() - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
close() - Method in interface org.apache.hadoop.hbase.client.Table
Releases any resources held or pending changes in internal buffers.
@@ -2157,7 +2171,7 @@
Takes a compareOperator symbol as a byte array and returns the corresponding CompareOperator
-createCompleteResult(List<Result>) - Static method in class org.apache.hadoop.hbase.client.Result
+createCompleteResult(Iterable<Result>) - Static method in class org.apache.hadoop.hbase.client.Result
Forms a single result from the partial results in the partialResults list.
@@ -2314,6 +2328,12 @@
Create a new namespace.
+createNamespace(NamespaceDescriptor) - Method in interface org.apache.hadoop.hbase.client.AsyncAdmin
+
+Create a new namespace.
+
+createNamespace(RpcController, MasterProtos.CreateNamespaceRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
createNamespaceAsync(NamespaceDescriptor) - Method in interface org.apache.hadoop.hbase.client.Admin
Create a new namespace
@@ -2409,6 +2429,8 @@
Creates a new table with an initial set of empty regions defined by the specified split keys.
+createTable(RpcController, MasterProtos.CreateTableRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
createTable(HTableDescriptor) - Method in class org.apache.hadoop.hbase.rest.client.RemoteAdmin
Creates a new table.
@@ -3230,6 +3252,8 @@
Use Admin.deleteColumnFamily(TableName, byte[])}.
+deleteColumn(RpcController, MasterProtos.DeleteColumnRequest) - Method in class org.apache.hadoop.hbase.client.ShortCircuitMasterConnection
+
deleteColumnFamily(TableName, byte[]) - Method in interface
[23/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
index 0859dfa..88da4c0 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.HTableMultiplexerStatus.html
@@ -177,7 +177,7 @@
169
170    // Create the failed puts list if necessary
171    if (failedPuts == null) {
-172      failedPuts = new ArrayList<Put>();
+172      failedPuts = new ArrayList<>();
173    }
174    // Add the put to the failed puts list
175    failedPuts.add(put);
@@ -296,10 +296,10 @@
288      this.totalFailedPutCounter = 0;
289      this.maxLatency = 0;
290      this.overallAverageLatency = 0;
-291      this.serverToBufferedCounterMap = new HashMap<String, Long>();
-292      this.serverToFailedCounterMap = new HashMap<String, Long>();
-293      this.serverToAverageLatencyMap = new HashMap<String, Long>();
-294      this.serverToMaxLatencyMap = new HashMap<String, Long>();
+291      this.serverToBufferedCounterMap = new HashMap<>();
+292      this.serverToFailedCounterMap = new HashMap<>();
+293      this.serverToAverageLatencyMap = new HashMap<>();
+294      this.serverToMaxLatencyMap = new HashMap<>();
295      this.initialize(serverToFlushWorkerMap);
296    }
297
@@ -420,7 +420,7 @@
412    }
413
414    public synchronized SimpleEntry<Long, Integer> getComponents() {
-415      return new SimpleEntry<Long, Integer>(sum, count);
+415      return new SimpleEntry<>(sum, count);
416    }
417
418    public synchronized void reset() {
@@ -622,7 +622,7 @@
614          failedCount--;
615        } else {
616          if (failed == null) {
-617            failed = new ArrayList<PutStatus>();
+617            failed = new ArrayList<>();
618          }
619          failed.add(processingList.get(i));
620        }

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
index 0859dfa..88da4c0 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/HTableMultiplexer.html
@@ -177,7 +177,7 @@
169
170    // Create the failed puts list if necessary
171    if (failedPuts == null) {
-172      failedPuts = new ArrayList<Put>();
+172      failedPuts = new ArrayList<>();
173    }
174    // Add the put to the failed puts list
175    failedPuts.add(put);
@@ -296,10 +296,10 @@
288      this.totalFailedPutCounter = 0;
289      this.maxLatency = 0;
290      this.overallAverageLatency = 0;
-291      this.serverToBufferedCounterMap = new HashMap<String, Long>();
-292      this.serverToFailedCounterMap = new HashMap<String, Long>();
-293      this.serverToAverageLatencyMap = new HashMap<String, Long>();
-294      this.serverToMaxLatencyMap = new HashMap<String, Long>();
+291      this.serverToBufferedCounterMap = new HashMap<>();
+292      this.serverToFailedCounterMap = new HashMap<>();
+293      this.serverToAverageLatencyMap = new HashMap<>();
+294      this.serverToMaxLatencyMap = new HashMap<>();
295      this.initialize(serverToFlushWorkerMap);
296    }
297
@@ -420,7 +420,7 @@
412    }
413
414    public synchronized SimpleEntry<Long, Integer> getComponents() {
-415      return new SimpleEntry<Long, Integer>(sum, count);
+415      return new SimpleEntry<>(sum, count);
416    }
417
418    public synchronized void reset() {
@@ -622,7 +622,7 @@
614          failedCount--;
615        } else {
616          if (failed == null) {
-617            failed = new ArrayList<PutStatus>();
+617            failed = new ArrayList<>();
618          }
619          failed.add(processingList.get(i));
620        }

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html
b/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html index 89f978d..753dd06 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Increment.html @@ -212,140 +212,139 @@ 204 */ 205
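The hunks above are a mechanical cleanup: explicit type arguments on the right-hand side of generic instantiations are replaced with the Java 7 diamond operator, which lets the compiler infer them. A small self-contained illustration of the before/after (the variable names are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class DiamondSketch {
    public static void main(String[] args) {
        // Pre-Java-7 style, as on the removed ("-") lines of the diff:
        Map<String, Long> before = new HashMap<String, Long>();
        // Diamond style, as on the added ("+") lines; <String, Long> is inferred:
        Map<String, Long> after = new HashMap<>();

        before.put("latency", 5L);
        after.put("latency", 5L);
        // The two forms produce identical runtime behavior; only the source changes.
        System.out.println(before.equals(after)); // true
    }
}
```

Because type erasure makes both forms compile to the same bytecode, such a site-wide regeneration changes only the rendered source pages, not behavior.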
[05/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html b/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html index 1365aec..351faa9 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html @@ -65,381 +65,381 @@ 057import org.apache.hadoop.hbase.client.Table; 058import org.apache.hadoop.hbase.client.coprocessor.Batch; 059import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback; -060import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; -061import org.apache.hadoop.hbase.io.TimeRange; -062import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; -063import org.apache.hadoop.hbase.rest.Constants; -064import org.apache.hadoop.hbase.rest.model.CellModel; -065import org.apache.hadoop.hbase.rest.model.CellSetModel; -066import org.apache.hadoop.hbase.rest.model.RowModel; -067import org.apache.hadoop.hbase.rest.model.ScannerModel; -068import org.apache.hadoop.hbase.rest.model.TableSchemaModel; -069import org.apache.hadoop.hbase.util.Bytes; -070import org.apache.hadoop.util.StringUtils; -071 -072import com.google.protobuf.Descriptors; -073import com.google.protobuf.Message; -074import com.google.protobuf.Service; -075import com.google.protobuf.ServiceException; -076 -077/** -078 * HTable interface to remote tables accessed via REST gateway -079 */ -080@InterfaceAudience.Public -081@InterfaceStability.Stable -082public class RemoteHTable implements Table { -083 -084 private static final Log LOG = LogFactory.getLog(RemoteHTable.class); -085 -086 final Client client; -087 final Configuration conf; -088 final byte[] name; -089 final int maxRetries; -090 final long sleepTime; -091 -092 @SuppressWarnings("rawtypes") -093 protected String buildRowSpec(final byte[] row, final Map 
familyMap,
-094      final long startTime, final long endTime, final int maxVersions) {
-095    StringBuffer sb = new StringBuffer();
-096    sb.append('/');
-097    sb.append(Bytes.toString(name));
-098    sb.append('/');
-099    sb.append(toURLEncodedBytes(row));
-100    Set families = familyMap.entrySet();
-101    if (families != null) {
-102      Iterator i = familyMap.entrySet().iterator();
-103      sb.append('/');
-104      while (i.hasNext()) {
-105        Map.Entry e = (Map.Entry)i.next();
-106        Collection quals = (Collection)e.getValue();
-107        if (quals == null || quals.isEmpty()) {
-108          // this is an unqualified family. append the family name and NO ':'
-109          sb.append(toURLEncodedBytes((byte[])e.getKey()));
-110        } else {
-111          Iterator ii = quals.iterator();
-112          while (ii.hasNext()) {
-113            sb.append(toURLEncodedBytes((byte[])e.getKey()));
-114            sb.append(':');
-115            Object o = ii.next();
-116            // Puts use byte[] but Deletes use KeyValue
-117            if (o instanceof byte[]) {
-118              sb.append(toURLEncodedBytes((byte[])o));
-119            } else if (o instanceof KeyValue) {
-120              sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-121            } else {
-122              throw new RuntimeException("object type not handled");
-123            }
-124            if (ii.hasNext()) {
-125              sb.append(',');
-126            }
-127          }
-128        }
-129        if (i.hasNext()) {
-130          sb.append(',');
-131        }
-132      }
-133    }
-134    if (startTime >= 0 && endTime != Long.MAX_VALUE) {
-135      sb.append('/');
-136      sb.append(startTime);
-137      if (startTime != endTime) {
-138        sb.append(',');
-139        sb.append(endTime);
-140      }
-141    } else if (endTime != Long.MAX_VALUE) {
-142      sb.append('/');
-143      sb.append(endTime);
-144    }
-145    if (maxVersions > 1) {
-146      sb.append("?v=");
-147      sb.append(maxVersions);
-148    }
-149    return sb.toString();
-150  }
-151
-152  protected String buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-153    StringBuilder sb = new StringBuilder();
-154    sb.append('/');
-155    sb.append(Bytes.toString(name));
-156    sb.append("/multiget/");
-157    if (rows == null || rows.length == 0) {
-158      return sb.toString();
-159    }
-160    sb.append("?");
-161    for (int i = 0; i < rows.length; i++) {
-162      byte[] rk = rows[i];
-163      if (i != 0) {
-164        sb.append('&');
-165      }
-166      sb.append("row=");
-167      sb.append(toURLEncodedBytes(rk));
-168    }
-169    sb.append("&v=");
-170    sb.append(maxVersions);
-171
-172    return sb.toString();
-173  }
-174
-175  protected Result[] buildResultFromModel(final CellSetModel model) {
-176    List<Result> results
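`buildMultiRowSpec` above assembles the REST gateway's multiget path of the form `/<table>/multiget/?row=<r1>&row=<r2>&v=<maxVersions>`. A self-contained sketch of the same string-building logic (String keys for brevity; the real code URL-encodes each row key via `toURLEncodedBytes`, omitted here):

```java
public class MultiRowSpecSketch {
    // Simplified echo of RemoteHTable.buildMultiRowSpec.
    // Assumption: row keys are already URL-safe; encoding is left out for clarity.
    static String buildMultiRowSpec(String table, String[] rows, int maxVersions) {
        StringBuilder sb = new StringBuilder();
        sb.append('/').append(table).append("/multiget/");
        if (rows == null || rows.length == 0) {
            return sb.toString(); // no query string when there are no rows
        }
        sb.append('?');
        for (int i = 0; i < rows.length; i++) {
            if (i != 0) {
                sb.append('&'); // separator between repeated row parameters
            }
            sb.append("row=").append(rows[i]);
        }
        sb.append("&v=").append(maxVersions);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildMultiRowSpec("mytable", new String[] {"r1", "r2"}, 3));
        // -> /mytable/multiget/?row=r1&row=r2&v=3
    }
}
```

Repeating the `row` parameter per key is what lets a single REST GET fetch several rows in one round trip.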
[12/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
index 9c09190..07c69da 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html
@@ -145,7 +145,7 @@
137
138  static <V extends Cell> RecordWriter<ImmutableBytesWritable, V>
139      createRecordWriter(final TaskAttemptContext context) throws IOException {
-140    return new HFileRecordWriter<V>(context, null);
+140    return new HFileRecordWriter<>(context, null);
141  }
142
143  protected static class HFileRecordWriter<V extends Cell>
@@ -219,7 +219,7 @@
211        overriddenEncoding = null;
212      }
213
-214      writers = new TreeMap<byte[], WriterLength>(Bytes.BYTES_COMPARATOR);
+214      writers = new TreeMap<>(Bytes.BYTES_COMPARATOR);
215      previousRow = HConstants.EMPTY_BYTE_ARRAY;
216      now = Bytes.toBytes(EnvironmentEdgeManager.currentTime());
217      rollRequested = false;
@@ -426,435 +426,429 @@
418  private static List<ImmutableBytesWritable> getRegionStartKeys(RegionLocator table)
419      throws IOException {
420    byte[][] byteKeys = table.getStartKeys();
-421    ArrayList<ImmutableBytesWritable> ret =
-422        new ArrayList<ImmutableBytesWritable>(byteKeys.length);
-423    for (byte[] byteKey : byteKeys) {
-424      ret.add(new ImmutableBytesWritable(byteKey));
-425    }
-426    return ret;
-427  }
-428
-429  /**
-430   * Write out a {@link SequenceFile} that can be read by
-431   * {@link TotalOrderPartitioner} that contains the split points in startKeys.
-432   */
-433  @SuppressWarnings("deprecation")
-434  private static void writePartitions(Configuration conf, Path partitionsPath,
-435      List<ImmutableBytesWritable> startKeys) throws IOException {
-436    LOG.info("Writing partition information to " + partitionsPath);
-437    if (startKeys.isEmpty()) {
-438      throw new IllegalArgumentException("No regions passed");
-439    }
-440
-441    // We're generating a list of split points, and we don't ever
-442    // have keys < the first region (which has an empty start key)
-443    // so we need to remove it. Otherwise we would end up with an
-444    // empty reducer with index 0
-445    TreeSet<ImmutableBytesWritable> sorted =
-446        new TreeSet<ImmutableBytesWritable>(startKeys);
-447
-448    ImmutableBytesWritable first = sorted.first();
-449    if (!first.equals(HConstants.EMPTY_BYTE_ARRAY)) {
-450      throw new IllegalArgumentException(
-451          "First region of table should have empty start key. Instead has: "
-452          + Bytes.toStringBinary(first.get()));
-453    }
-454    sorted.remove(first);
-455
-456    // Write the actual file
-457    FileSystem fs = partitionsPath.getFileSystem(conf);
-458    SequenceFile.Writer writer = SequenceFile.createWriter(
-459        fs, conf, partitionsPath, ImmutableBytesWritable.class,
-460        NullWritable.class);
-461
-462    try {
-463      for (ImmutableBytesWritable startKey : sorted) {
-464        writer.append(startKey, NullWritable.get());
-465      }
-466    } finally {
-467      writer.close();
-468    }
-469  }
-470
-471  /**
-472   * Configure a MapReduce Job to perform an incremental load into the given
-473   * table. This
-474   * <ul>
-475   *   <li>Inspects the table to configure a total order partitioner</li>
-476   *   <li>Uploads the partitions file to the cluster and adds it to the DistributedCache</li>
-477   *   <li>Sets the number of reduce tasks to match the current number of regions</li>
-478   *   <li>Sets the output key/value class to match HFileOutputFormat2's requirements</li>
-479   *   <li>Sets the reducer up to perform the appropriate sorting (either KeyValueSortReducer or
-480   *       PutSortReducer)</li>
-481   * </ul>
-482   * The user should be sure to set the map output value class to either KeyValue or Put before
-483   * running this function.
-484   */
-485  public static void configureIncrementalLoad(Job job, Table table, RegionLocator regionLocator)
-486      throws IOException {
-487    configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator);
-488  }
-489
-490  /**
-491   * Configure a MapReduce Job to perform an incremental load into the given
-492   * table. This
-493   * <ul>
-494   *   <li>Inspects the table to configure a total order partitioner</li>
-495   *   <li>Uploads the partitions file to the cluster and adds it to the DistributedCache</li>
-496   *   <li>Sets the number of reduce tasks to match the current number of regions</li>
-497   *   <li>Sets the output key/value class to match HFileOutputFormat2's requirements</li>
-498   *   <li>Sets the reducer up to perform the
[09/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
index 39ebcf3..9340f9d 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.html
@@ -73,41 +73,40 @@
     final FileSystem fs = outputDir.getFileSystem(conf);

     // Map of tables to writers
-    final Map<ImmutableBytesWritable, RecordWriter<ImmutableBytesWritable, V>> tableWriters =
-        new HashMap<ImmutableBytesWritable, RecordWriter<ImmutableBytesWritable, V>>();
+    final Map<ImmutableBytesWritable, RecordWriter<ImmutableBytesWritable, V>> tableWriters = new HashMap<>();

     return new RecordWriter<ImmutableBytesWritable, V>() {
       @Override
       public void write(ImmutableBytesWritable tableName, V cell)
           throws IOException, InterruptedException {
         RecordWriter<ImmutableBytesWritable, V> tableWriter = tableWriters.get(tableName);
         // if there is new table, verify that table directory exists
         if (tableWriter == null) {
           // using table name as directory name
           final Path tableOutputDir = new Path(outputDir, Bytes.toString(tableName.copyBytes()));
           fs.mkdirs(tableOutputDir);
           LOG.info("Writing Table '" + tableName.toString() + "' data into following directory"
               + tableOutputDir.toString());

           // Create writer for one specific table
-          tableWriter = new HFileOutputFormat2.HFileRecordWriter<V>(context, tableOutputDir);
+          tableWriter = new HFileOutputFormat2.HFileRecordWriter<>(context, tableOutputDir);
           // Put table into map
           tableWriters.put(tableName, tableWriter);
         }
         // Write <Row, Cell> into tableWriter
         // in the original code, it does not use Row
         tableWriter.write(null, cell);
       }

       @Override
       public void close(TaskAttemptContext c) throws IOException, InterruptedException {
         for (RecordWriter<ImmutableBytesWritable, V> writer : tableWriters.values()) {
           writer.close(c);
         }
       }
     };
   }
 }

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
index f4e33a0..bb2a823 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.html
@@ -100,7 +100,7 @@
       throw new IllegalArgumentException("There must be at least 1 scan configuration set to : "
           + SCANS);
     }
-    List<Scan> scans = new ArrayList<Scan>();
+    List<Scan> scans = new ArrayList<>();

     for (int i = 0; i < rawScans.length; i++) {
       try {
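The MultiHFileOutputFormat code above lazily creates one writer per table on first write and closes them all at the end. The same pattern, sketched with plain JDK types (a StringBuilder stands in for an HFile writer; the class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class PerTableWriterSketch {

  // Map of tables to "writers", as in the RecordWriter above
  final Map<String, StringBuilder> tableWriters = new HashMap<>();

  /** Create the per-table writer on first use; computeIfAbsent replaces
   *  the explicit null-check-then-put of the original. */
  void write(String tableName, String cell) {
    StringBuilder tableWriter =
        tableWriters.computeIfAbsent(tableName, t -> new StringBuilder());
    tableWriter.append(cell);
  }

  /** "Close" every writer, mirroring close(TaskAttemptContext) iterating
   *  tableWriters.values(); returns what each writer accumulated. */
  Map<String, String> closeAll() {
    Map<String, String> out = new HashMap<>();
    tableWriters.forEach((t, w) -> out.put(t, w.toString()));
    tableWriters.clear();
    return out;
  }

  public static void main(String[] args) {
    PerTableWriterSketch sink = new PerTableWriterSketch();
    sink.write("t1", "a");
    sink.write("t2", "b");
    sink.write("t1", "c");
    System.out.println(sink.closeAll()); // one accumulated "file" per table
  }
}
```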
[17/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html b/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
index 7f2ddf1..c6e88dd 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/ScanResultConsumer.html
@@ -27,33 +27,42 @@
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.classification.InterfaceStability;
+import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

 /**
  * Receives {@link Result} for an asynchronous scan.
  */
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public interface ScanResultConsumer {

   /**
    * @param result the data fetched from HBase service.
    * @return {@code false} if you want to terminate the scan process. Otherwise {@code true}
    */
   boolean onNext(Result result);

   /**
    * Indicate that we hit an unrecoverable error and the scan operation is terminated.
    * <p>
    * We will not call {@link #onComplete()} after calling {@link #onError(Throwable)}.
    */
   void onError(Throwable error);

   /**
    * Indicate that the scan operation is completed normally.
    */
   void onComplete();
+
+  /**
+   * If {@code scan.isScanMetricsEnabled()} returns true, then this method will be called prior to
+   * all other methods in this interface to give you the {@link ScanMetrics} instance for this scan
+   * operation. The {@link ScanMetrics} instance will be updated on-the-fly during the scan, you can
+   * store it somewhere to get the metrics at any time if you want.
+   */
+  default void onScanMetricsCreated(ScanMetrics scanMetrics) {
+  }
 }

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html b/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
index f4ced21..11f3dbd 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
@@ -135,7 +135,7 @@
     final List<HRegionInfo> restoredRegions = meta.getRegionsToAdd();

     htd = meta.getTableDescriptor();
-    regions = new ArrayList<HRegionInfo>(restoredRegions.size());
+    regions = new ArrayList<>(restoredRegions.size());
     for (HRegionInfo hri: restoredRegions) {
       if (CellUtil.overlappingKeys(scan.getStartRow(), scan.getStopRow(),
           hri.getStartKey(), hri.getEndKey())) {

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html b/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
index b2f1221..b9503b7 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.html
@@ -26,49 +26,49 @@
  */
 package org.apache.hadoop.hbase.client.replication;

-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.Lists;
-
-import java.io.Closeable;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.TreeMap;
-import java.util.Map.Entry;
-import
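The ScanResultConsumer change above adds onScanMetricsCreated as a default method, so existing consumer implementations keep compiling without overriding it. A minimal illustration of that interface-evolution pattern (a toy interface, not the HBase one):

```java
public class DefaultMethodSketch {

  /** A v1 callback interface, later extended with a default method. */
  interface Consumer {
    boolean onNext(String result);

    void onComplete();

    // Added later, like onScanMetricsCreated: old implementations need no change
    // because the default body is empty.
    default void onMetrics(long rowsScanned) {
    }
  }

  public static void main(String[] args) {
    // An implementation written against v1 still works as-is.
    Consumer old = new Consumer() {
      @Override public boolean onNext(String result) { return result != null; }
      @Override public void onComplete() { }
    };
    old.onMetrics(42); // falls through to the empty default body
    System.out.println(old.onNext("row1"));
  }
}
```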
[08/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
index 7003255..a2b0cb0 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
@@ -145,523 +145,522 @@

/** The reverse DNS lookup cache mapping: IPAddress => HostName */
private HashMap<InetAddress, String> reverseDNSCacheMap =
    new HashMap<InetAddress, String>();

/**
 * Builds a {@link TableRecordReader}. If no {@link TableRecordReader} was provided, uses
 * the default.
 *
 * @param split  The split to work with.
 * @param context  The current context.
 * @return The newly created record reader.
 * @throws IOException When creating the reader fails.
 * @see org.apache.hadoop.mapreduce.InputFormat#createRecordReader(
 *      org.apache.hadoop.mapreduce.InputSplit,
 *      org.apache.hadoop.mapreduce.TaskAttemptContext)
 */
@Override
public RecordReader<ImmutableBytesWritable, Result> createRecordReader(
    InputSplit split, TaskAttemptContext context)
    throws IOException {
  // Just in case a subclass is relying on JobConfigurable magic.
  if (table == null) {
    initialize(context);
  }
  // null check in case our child overrides getTable to not throw.
  try {
    if (getTable() == null) {
      // initialize() must not have been implemented in the subclass.
      throw new IOException(INITIALIZATION_ERROR);
    }
  } catch (IllegalStateException exception) {
    throw new IOException(INITIALIZATION_ERROR, exception);
  }
  TableSplit tSplit = (TableSplit) split;
  LOG.info("Input split length: " + StringUtils.humanReadableInt(tSplit.getLength()) + " bytes.");
  final TableRecordReader trr =
      this.tableRecordReader != null ? this.tableRecordReader : new TableRecordReader();
  Scan sc = new Scan(this.scan);
  sc.setStartRow(tSplit.getStartRow());
  sc.setStopRow(tSplit.getEndRow());
  trr.setScan(sc);
  trr.setTable(getTable());
  return new RecordReader<ImmutableBytesWritable, Result>() {

    @Override
    public void close() throws IOException {
      trr.close();
      closeTable();
    }

    @Override
    public ImmutableBytesWritable getCurrentKey() throws IOException, InterruptedException {
      return trr.getCurrentKey();
    }

    @Override
    public Result getCurrentValue() throws IOException, InterruptedException {
      return trr.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
      return trr.getProgress();
    }

    @Override
    public void initialize(InputSplit inputsplit, TaskAttemptContext context) throws IOException,
        InterruptedException {
      trr.initialize(inputsplit, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
      return trr.nextKeyValue();
    }
  };
}

protected Pair<byte[][], byte[][]> getStartEndKeys() throws IOException {
  return getRegionLocator().getStartEndKeys();
}

/**
 * Calculates the splits that will serve as input for the map tasks. The
 * number of splits matches the number of regions in a table.
 *
 * @param context  The current job context.
 * @return The list of input splits.
 * @throws IOException When creating the list of splits fails.
 * @see org.apache.hadoop.mapreduce.InputFormat#getSplits(
 *      org.apache.hadoop.mapreduce.JobContext)
 */
@Override
public List<InputSplit> getSplits(JobContext context) throws IOException {
  boolean closeOnFinish = false;

  // Just in case a subclass is relying on JobConfigurable magic.
  if (table == null) {
    initialize(context);
    closeOnFinish = true;
  }

  // null check in case our child overrides getTable to not throw.
  try {
    if (getTable() == null) {
      // initialize() must not have been implemented in the subclass.
      throw new IOException(INITIALIZATION_ERROR);
    }
  } catch (IllegalStateException exception) {
    throw new IOException(INITIALIZATION_ERROR, exception);
  }
[02/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
index 608d19a..f42fb90 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/snapshot/SnapshotInfo.html
@@ -606,119 +606,118 @@
  Path snapshotDir = SnapshotDescriptionUtils.getSnapshotsDir(rootDir);
  FileStatus[] snapshots = fs.listStatus(snapshotDir,
      new SnapshotDescriptionUtils.CompletedSnaphotDirectoriesFilter(fs));
  List<SnapshotDescription> snapshotLists =
      new ArrayList<SnapshotDescription>(snapshots.length);
  for (FileStatus snapshotDirStat: snapshots) {
    HBaseProtos.SnapshotDescription snapshotDesc =
        SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDirStat.getPath());
    snapshotLists.add(ProtobufUtil.createSnapshotDesc(snapshotDesc));
  }
  return snapshotLists;
}

/**
 * Gets the store files map for snapshot
 * @param conf the {@link Configuration} to use
 * @param snapshot {@link SnapshotDescription} to get stats from
 * @param exec the {@link ExecutorService} to use
 * @param filesMap {@link Map} the map to put the mapping entries
 * @param uniqueHFilesArchiveSize {@link AtomicLong} the accumulated store file size in archive
 * @param uniqueHFilesSize {@link AtomicLong} the accumulated store file size shared
 * @param uniqueHFilesMobSize {@link AtomicLong} the accumulated mob store file size shared
 * @return the snapshot stats
 */
private static void getSnapshotFilesMap(final Configuration conf,
    final SnapshotDescription snapshot, final ExecutorService exec,
    final ConcurrentHashMap<Path, Integer> filesMap,
    final AtomicLong uniqueHFilesArchiveSize, final AtomicLong uniqueHFilesSize,
    final AtomicLong uniqueHFilesMobSize) throws IOException {
  HBaseProtos.SnapshotDescription snapshotDesc =
      ProtobufUtil.createHBaseProtosSnapshotDesc(snapshot);
  Path rootDir = FSUtils.getRootDir(conf);
  final FileSystem fs = FileSystem.get(rootDir.toUri(), conf);

  Path snapshotDir = SnapshotDescriptionUtils.getCompletedSnapshotDir(snapshotDesc, rootDir);
  SnapshotManifest manifest = SnapshotManifest.open(conf, fs, snapshotDir, snapshotDesc);
  SnapshotReferenceUtil.concurrentVisitReferencedFiles(conf, fs, manifest, exec,
      new SnapshotReferenceUtil.SnapshotVisitor() {
        @Override public void storeFile(final HRegionInfo regionInfo, final String family,
            final SnapshotRegionManifest.StoreFile storeFile) throws IOException {
          if (!storeFile.hasReference()) {
            HFileLink link = HFileLink.build(conf, snapshot.getTableName(),
                regionInfo.getEncodedName(), family, storeFile.getName());
            long size;
            Integer count;
            Path p;
            AtomicLong al;
            int c = 0;

            if (fs.exists(link.getArchivePath())) {
              p = link.getArchivePath();
              al = uniqueHFilesArchiveSize;
              size = fs.getFileStatus(p).getLen();
            } else if (fs.exists(link.getMobPath())) {
              p = link.getMobPath();
              al = uniqueHFilesMobSize;
              size = fs.getFileStatus(p).getLen();
            } else {
              p = link.getOriginPath();
              al = uniqueHFilesSize;
              size = link.getFileStatus(fs).getLen();
            }

            // If it has been counted, do not double count
            count = filesMap.get(p);
            if (count != null) {
              c = count.intValue();
            } else {
              al.addAndGet(size);
            }

            filesMap.put(p, ++c);
          }
        }
      });
}

/**
 * Returns the map of store files based on path for all snapshots
 * @param conf the {@link Configuration} to use
 * @param uniqueHFilesArchiveSize pass out the size for store files in archive
 * @param uniqueHFilesSize pass out the size for store files shared
 * @param uniqueHFilesMobSize pass out the size for mob store files shared
 * @return the map of store files
 */
public static Map<Path, Integer> getSnapshotsFilesMap(final Configuration conf,
    AtomicLong uniqueHFilesArchiveSize, AtomicLong
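The getSnapshotFilesMap visitor above counts each store-file path once across snapshots: the first sighting adds the file's size to the matching accumulator, later sightings only bump the reference count. That bookkeeping, reduced to plain JDK types (a single accumulator instead of the three archive/mob/shared ones; names illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class DedupSizeSketch {

  final Map<String, Integer> filesMap = new HashMap<>(); // path -> reference count
  final AtomicLong uniqueSize = new AtomicLong();        // summed once per distinct path

  /** Mirror the visitor body: add the size only the first time a path is seen. */
  void visit(String path, long size) {
    Integer count = filesMap.get(path);
    int c = 0;
    if (count != null) {
      c = count; // already counted: do not double count the size
    } else {
      uniqueSize.addAndGet(size);
    }
    filesMap.put(path, ++c);
  }

  public static void main(String[] args) {
    DedupSizeSketch s = new DedupSizeSketch();
    s.visit("/archive/f1", 100);
    s.visit("/archive/f1", 100); // shared by a second snapshot: count 2, size once
    s.visit("/data/f2", 50);
    System.out.println(s.uniqueSize.get() + " " + s.filesMap);
  }
}
```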
[04/52] [partial] hbase-site git commit: Published site at 1cfd22bf43c9b64afae35d9bf16f764d0da80cab.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/22cff34f/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html b/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
index 96deb2d..83f9fed 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/rsgroup/RSGroupInfo.html
@@ -42,152 +42,143 @@
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
 public class RSGroupInfo {
-
-  public static final String DEFAULT_GROUP = "default";
-  public static final String NAMESPACEDESC_PROP_GROUP = "hbase.rsgroup.name";
-
-  private String name;
-  // Keep servers in a sorted set so has an expected ordering when displayed.
-  private SortedSet<Address> servers;
-  // Keep tables sorted too.
-  private SortedSet<TableName> tables;
+  public static final String DEFAULT_GROUP = "default";
+  public static final String NAMESPACE_DESC_PROP_GROUP = "hbase.rsgroup.name";
+
+  private final String name;
+  // Keep servers in a sorted set so has an expected ordering when displayed.
+  private final SortedSet<Address> servers;
+  // Keep tables sorted too.
+  private final SortedSet<TableName> tables;

   public RSGroupInfo(String name) {
     this(name, new TreeSet<Address>(), new TreeSet<TableName>());
   }

   RSGroupInfo(String name, SortedSet<Address> servers, SortedSet<TableName> tables) {
     this.name = name;
-    this.servers = servers == null? new TreeSet<Address>(): servers;
+    this.servers = servers == null? new TreeSet<>(): servers;
     this.servers.addAll(servers);
     this.tables = new TreeSet<>(tables);
   }

   public RSGroupInfo(RSGroupInfo src) {
     this(src.getName(), src.servers, src.tables);
   }

   /**
    * Get group name.
-   *
-   * @return group name
    */
   public String getName() {
     return name;
   }

   /**
-   * Adds the server to the group.
-   *
-   * @param hostPort the server
+   * Adds the given server to the group.
    */
   public void addServer(Address hostPort){
     servers.add(hostPort);
   }

   /**
-   * Adds a group of servers.
-   *
-   * @param hostPort the servers
+   * Adds the given servers to the group.
    */
   public void addAllServers(Collection<Address> hostPort){
     servers.addAll(hostPort);
   }

   /**
    * @param hostPort hostPort of the server
    * @return true, if a server with hostPort is found
    */
   public boolean containsServer(Address hostPort) {
     return servers.contains(hostPort);
   }

   /**
    * Get list of servers.
-   *
-   * @return set of servers
    */
   public Set<Address> getServers() {
     return servers;
   }

   /**
-   * Remove a server from this group.
-   *
-   * @param hostPort HostPort of the server to remove
+   * Remove given server from the group.
    */
   public boolean removeServer(Address hostPort) {
     return servers.remove(hostPort);
   }

   /**
-   * Set of tables that are members of this group
-   * @return set of tables
+   * Get set of tables that are members of the group.
    */
   public SortedSet<TableName> getTables() {
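The RSGroupInfo cleanup keeps the server and table members in final TreeSets, which is what the "expected ordering when displayed" comment relies on: insertion order never affects display order. That guarantee in isolation (plain strings stand in for Address):

```java
import java.util.SortedSet;
import java.util.TreeSet;

public class SortedGroupSketch {

  /** Servers kept in a sorted set "so has an expected ordering when displayed". */
  static SortedSet<String> group(String... servers) {
    SortedSet<String> s = new TreeSet<>();
    for (String server : servers) {
      s.add(server);
    }
    return s;
  }

  public static void main(String[] args) {
    // Added out of order, but iteration order is deterministic and sorted.
    System.out.println(group("rs3:16020", "rs1:16020", "rs2:16020"));
  }
}
```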
hbase git commit: HBASE-17798 RpcServer.Listener.Reader can abort due to CancelledKeyException (Guangxu Cheng)
Repository: hbase
Updated Branches:
  refs/heads/branch-1 b973d3fd4 -> 9726c7168

HBASE-17798 RpcServer.Listener.Reader can abort due to CancelledKeyException (Guangxu Cheng)

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/9726c716
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/9726c716
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/9726c716
Branch: refs/heads/branch-1
Commit: 9726c71681c0b8b22e83b056102803646b8d50c2
Parents: b973d3f
Author: tedyu
Authored: Tue Mar 21 08:06:56 2017 -0700
Committer: tedyu
Committed: Tue Mar 21 08:06:56 2017 -0700
--
 .../src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/9726c716/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
index 871ea65..b5cd3e2 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
@@ -721,7 +721,8 @@ public class RpcServer implements RpcServerInterface, ConfigurationObserver {
         }
       } catch (InterruptedException e) {
         LOG.debug("Interrupted while sleeping");
-        return;
+      } catch (CancelledKeyException e) {
+        LOG.error(getName() + ": CancelledKeyException in Reader", e);
       } catch (IOException ex) {
         LOG.info(getName() + ": IOException in Reader", ex);
       }
hbase git commit: HBASE-17798 RpcServer.Listener.Reader can abort due to CancelledKeyException (Guangxu Cheng)
Repository: hbase
Updated Branches:
  refs/heads/master 8f4ae0a0d -> 1cfd22bf4

HBASE-17798 RpcServer.Listener.Reader can abort due to CancelledKeyException (Guangxu Cheng)

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1cfd22bf
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1cfd22bf
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1cfd22bf
Branch: refs/heads/master
Commit: 1cfd22bf43c9b64afae35d9bf16f764d0da80cab
Parents: 8f4ae0a
Author: tedyu
Authored: Tue Mar 21 06:59:29 2017 -0700
Committer: tedyu
Committed: Tue Mar 21 06:59:29 2017 -0700
--
 .../main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/1cfd22bf/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java
index 9e1e81e..5f90d50 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java
@@ -302,7 +302,8 @@ public class SimpleRpcServer extends RpcServer {
           if (running) { // unexpected -- log it
             LOG.info(Thread.currentThread().getName() + " unexpectedly interrupted", e);
           }
-          return;
+        } catch (CancelledKeyException e) {
+          LOG.error(getName() + ": CancelledKeyException in Reader", e);
         } catch (IOException ex) {
           LOG.info(getName() + ": IOException in Reader", ex);
         }
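The fix on both branches keeps the Reader thread alive: a CancelledKeyException from the selector is now caught and logged instead of propagating out of (or, via the removed return, exiting) the read loop. The control-flow change can be sketched without any real networking, raising the exception artificially (the loop shape and names are illustrative, not the actual Reader):

```java
import java.nio.channels.CancelledKeyException;

public class ReaderLoopSketch {

  /** Run n iterations; iteration 'bad' throws CancelledKeyException.
   *  Returns how many iterations completed: with the catch in place, all of them. */
  static int runLoop(int n, int bad) {
    int completed = 0;
    for (int i = 0; i < n; i++) {
      try {
        if (i == bad) {
          throw new CancelledKeyException(); // a key cancelled under the selector
        }
      } catch (CancelledKeyException e) {
        // log and keep reading, as the patched Reader does
        System.out.println("CancelledKeyException in Reader (iteration " + i + ")");
      }
      completed++;
    }
    return completed;
  }

  public static void main(String[] args) {
    System.out.println(runLoop(5, 2)); // prints 5: the loop survives the exception
  }
}
```

Without the catch, the exception (or the old `return`) would end the loop early and silently stop servicing connections assigned to that Reader.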
hbase git commit: HBASE-17813 backport HBASE-16983 to branch-1.3
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 a60792425 -> ab335bf9d

HBASE-17813 backport HBASE-16983 to branch-1.3

This fixes the "Unable to create region directory" issue in TestMultiTableSnapshotInputFormat

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ab335bf9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ab335bf9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ab335bf9
Branch: refs/heads/branch-1.3
Commit: ab335bf9d3d82100a875c796eea8e9532b9d2d7b
Parents: a607924
Author: Yu Li
Authored: Tue Mar 21 21:18:32 2017 +0800
Committer: Yu Li
Committed: Tue Mar 21 21:18:32 2017 +0800
--
 .../hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java | 3 +--
 .../org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java   | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab335bf9/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java
index 93bb820..1d55957 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java
@@ -63,8 +63,7 @@ public class TestMultiTableSnapshotInputFormat extends MultiTableInputFormatTest

   @Before
   public void setUp() throws Exception {
-    this.restoreDir = new Path("/tmp");
-
+    this.restoreDir = TEST_UTIL.getRandomDir();
   }

   @Override

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab335bf9/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
index 5aa96c1..c5200fb 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
@@ -66,7 +66,7 @@ public class TestStoreFileInfo {
   @Test
   public void testEqualsWithLink() throws IOException {
     Path origin = new Path("/origin");
-    Path tmp = new Path("/tmp");
+    Path tmp = TEST_UTIL.getDataTestDir();
     Path archive = new Path("/archive");
     HFileLink link1 = new HFileLink(new Path(origin, "f1"), new Path(tmp, "f1"),
         new Path(archive, "f1"));
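Both hunks replace a shared, hardcoded /tmp with a per-test directory, so concurrent or repeated test runs cannot collide on the same region paths. The same idea with only the JDK (Files.createTempDirectory plays the role of the HBase test utility's getRandomDir/getDataTestDir; the method name here is hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RandomTestDirSketch {

  /** Each call yields a fresh, uniquely named directory,
   *  unlike a fixed new Path("/tmp"). */
  static Path randomRestoreDir() {
    try {
      return Files.createTempDirectory("restore-");
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    Path a = randomRestoreDir();
    Path b = randomRestoreDir();
    // Two "tests" never share a restore directory.
    System.out.println(!a.equals(b));
  }
}
```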
hbase git commit: HBASE-17070 backport HBASE-17020 (keylen in midkey() dont computed correctly) to 1.3.1
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 693b51d81 -> a60792425

HBASE-17070 backport HBASE-17020 (keylen in midkey() dont computed correctly) to 1.3.1

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/a6079242
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/a6079242
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/a6079242

Branch: refs/heads/branch-1.3
Commit: a60792425a50de48d6af88ff2737b5e32413de8a
Parents: 693b51d
Author: Yu Li
Authored: Tue Mar 21 21:15:16 2017 +0800
Committer: Yu Li
Committed: Tue Mar 21 21:15:16 2017 +0800

--
 .../hadoop/hbase/io/hfile/HFileBlockIndex.java |  2 +-
 .../hbase/io/hfile/TestHFileBlockIndex.java    | 68
 .../hbase/io/hfile/TestHFileWriterV2.java      | 29 +
 3 files changed, 98 insertions(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/a6079242/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
index e24b4f1..0d0a42c 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
@@ -355,7 +355,7 @@ public class HFileBlockIndex {
       int numDataBlocks = b.getInt();
       int keyRelOffset = b.getInt(Bytes.SIZEOF_INT * (midKeyEntry + 1));
       int keyLen = b.getInt(Bytes.SIZEOF_INT * (midKeyEntry + 2)) -
-          keyRelOffset;
+          keyRelOffset - SECONDARY_INDEX_ENTRY_OVERHEAD;
       int keyOffset = Bytes.SIZEOF_INT * (numDataBlocks + 2) + keyRelOffset
           + SECONDARY_INDEX_ENTRY_OVERHEAD;
       targetMidKey = ByteBufferUtils.toBytes(b, keyOffset, keyLen);

http://git-wip-us.apache.org/repos/asf/hbase/blob/a6079242/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
index b3e0ade..6372713 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.fs.HFileSystem;
 import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
 import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.HFileBlockIndex.BlockIndexChunk;
 import org.apache.hadoop.hbase.io.hfile.HFileBlockIndex.BlockIndexReader;
@@ -502,6 +503,73 @@ public class TestHFileBlockIndex {
   }

   /**
+   * to check if looks good when midKey on a leaf index block boundary
+   * @throws IOException
+   */
+  @Test
+  public void testMidKeyOnLeafIndexBlockBoundary() throws IOException {
+    Path hfilePath = new Path(TEST_UTIL.getDataTestDir(),
+        "hfile_for_midkey");
+    int maxChunkSize = 512;
+    conf.setInt(HFileBlockIndex.MAX_CHUNK_SIZE_KEY, maxChunkSize);
+    // should open hfile.block.index.cacheonwrite
+    conf.setBoolean(CacheConfig.CACHE_INDEX_BLOCKS_ON_WRITE_KEY, true);
+
+    CacheConfig cacheConf = new CacheConfig(conf);
+    BlockCache blockCache = cacheConf.getBlockCache();
+    // Evict all blocks that were cached-on-write by the previous invocation.
+    blockCache.evictBlocksByHfileName(hfilePath.getName());
+    // Write the HFile
+    {
+      HFileContext meta = new HFileContextBuilder()
+          .withBlockSize(SMALL_BLOCK_SIZE)
+          .withCompression(Algorithm.NONE)
+          .withDataBlockEncoding(DataBlockEncoding.NONE)
+          .build();
+      HFile.Writer writer =
+          HFile.getWriterFactory(conf, cacheConf)
+              .withPath(fs, hfilePath)
+              .withFileContext(meta)
+              .create();
+      Random rand = new Random(19231737);
+      byte[] family = Bytes.toBytes("f");
+      byte[] qualifier = Bytes.toBytes("q");
+      int kvNumberToBeWritten = 16;
+      // the new generated hfile will contain 2 leaf-index blocks and 16 data blocks,
+      // midkey is just on the boundary of the first leaf-index block
+      for (int i = 0; i < kvNumberToBeWritten; ++i) {
+        byte[] row
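The one-line fix above is easier to see against the non-root index layout it reads: the block holds a count, then (numEntries + 1) relative offsets, then the entries themselves, each prefixed by a fixed 12-byte header (8-byte block offset plus 4-byte on-disk size). The sketch below is standalone code, not HBase source; the constant name only mirrors `HFileBlockIndex`. It shows why the key length must subtract that per-entry overhead — omitting it (the old bug) overshoots by 12 bytes into the next entry's header.

```java
import java.nio.ByteBuffer;

public class MidKeyLenSketch {
  // long block offset (8) + int on-disk size (4), as in HFileBlockIndex
  static final int SECONDARY_INDEX_ENTRY_OVERHEAD = 8 + 4;

  // Key length of entry i = distance from its start to the next entry's
  // start, minus the fixed per-entry header that precedes each key.
  static int keyLen(ByteBuffer b, int i) {
    int keyRelOffset = b.getInt(4 * (i + 1));     // secondary index slot i
    int nextRelOffset = b.getInt(4 * (i + 2));    // secondary index slot i+1
    return nextRelOffset - keyRelOffset - SECONDARY_INDEX_ENTRY_OVERHEAD;
  }

  public static void main(String[] args) {
    // Two entries with keys of length 20 and 7, so the relative offsets
    // are 0, 0 + 12 + 20 = 32, and 32 + 12 + 7 = 51.
    ByteBuffer b = ByteBuffer.allocate(16);
    b.putInt(2).putInt(0).putInt(32).putInt(51);
    System.out.println(keyLen(b, 0)); // 20
    System.out.println(keyLen(b, 1)); // 7
  }
}
```

Without the subtraction, `keyLen(b, 0)` would report 32 and the midkey bytes copied out would include the next entry's offset/size header, which is exactly what the new boundary test exercises.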
hbase git commit: HBASE-17655 Removing MemStoreScanner and SnapshotScanner
Repository: hbase
Updated Branches:
  refs/heads/master cc59fe4e9 -> 8f4ae0a0d

HBASE-17655 Removing MemStoreScanner and SnapshotScanner

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8f4ae0a0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8f4ae0a0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8f4ae0a0

Branch: refs/heads/master
Commit: 8f4ae0a0dcb658c4fe669bc4cdc68ad8e6219daf
Parents: cc59fe4
Author: eshcar
Authored: Tue Mar 21 12:32:59 2017 +0200
Committer: eshcar
Committed: Tue Mar 21 12:35:47 2017 +0200

--
 .../example/ZooKeeperScanPolicyObserver.java    |   4 +-
 .../hbase/coprocessor/RegionObserver.java       |  35 +-
 .../hbase/mob/DefaultMobStoreFlusher.java       |   2 +-
 .../hbase/regionserver/AbstractMemStore.java    |  14 +
 .../hbase/regionserver/CompactingMemStore.java  |  21 +-
 .../regionserver/CompositeImmutableSegment.java |  33 +-
 .../hbase/regionserver/DefaultMemStore.java     |  15 +-
 .../hbase/regionserver/DefaultStoreFlusher.java |   2 +-
 .../hbase/regionserver/ImmutableSegment.java    |  12 +-
 .../hbase/regionserver/MemStoreCompactor.java   |   2 +-
 .../MemStoreCompactorSegmentsIterator.java      |  17 +-
 .../MemStoreMergerSegmentsIterator.java         |  52 ++-
 .../hbase/regionserver/MemStoreScanner.java     | 334 ---
 .../regionserver/MemStoreSegmentsIterator.java  |  23 +-
 .../hbase/regionserver/MemStoreSnapshot.java    |  15 +-
 .../regionserver/RegionCoprocessorHost.java     |   7 +-
 .../hadoop/hbase/regionserver/Segment.java      |   8 +-
 .../hbase/regionserver/SegmentScanner.java      |  13 +-
 .../hbase/regionserver/SnapshotScanner.java     | 105 --
 .../hadoop/hbase/regionserver/StoreFlusher.java |   8 +-
 .../hbase/regionserver/StripeStoreFlusher.java  |   2 +-
 .../hbase/coprocessor/SimpleRegionObserver.java |   2 +-
 .../TestRegionObserverScannerOpenHook.java      |   6 +-
 .../regionserver/NoOpScanPolicyObserver.java    |   4 +-
 .../regionserver/TestCompactingMemStore.java    |  30 +-
 .../TestCompactingToCellArrayMapMemStore.java   |  32 +-
 .../hbase/regionserver/TestDefaultMemStore.java |  20 +-
 .../regionserver/TestMemStoreChunkPool.java     |  14 +-
 .../regionserver/TestReversibleScanners.java    |  66 +++-
 .../hbase/util/TestCoprocessorScanPolicy.java   |   5 +-
 30 files changed, 262 insertions(+), 641 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/8f4ae0a0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
--
diff --git a/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java b/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
index 2343c1d..b7df9b4 100644
--- a/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
+++ b/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
@@ -188,7 +188,7 @@ public class ZooKeeperScanPolicyObserver implements RegionObserver {

   @Override
   public InternalScanner preFlushScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c,
-      Store store, KeyValueScanner memstoreScanner, InternalScanner s) throws IOException {
+      Store store, List<KeyValueScanner> scanners, InternalScanner s) throws IOException {
     ScanInfo scanInfo = getScanInfo(store, c.getEnvironment());
     if (scanInfo == null) {
       // take default action
@@ -196,7 +196,7 @@ public class ZooKeeperScanPolicyObserver implements RegionObserver {
     }
     Scan scan = new Scan();
     scan.setMaxVersions(scanInfo.getMaxVersions());
-    return new StoreScanner(store, scanInfo, scan, Collections.singletonList(memstoreScanner),
+    return new StoreScanner(store, scanInfo, scan, scanners,
         ScanType.COMPACT_RETAIN_DELETES, store.getSmallestReadPoint(), HConstants.OLDEST_TIMESTAMP);
   }

http://git-wip-us.apache.org/repos/asf/hbase/blob/8f4ae0a0/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index a3db3b1..e36feea 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -22,6 +22,7 @@ package org.apache.hadoop.hbase.coprocessor;

 import com.google.common.collect.ImmutableList;
 import
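The visible part of the API change is the `preFlushScannerOpen` signature: the hook now receives a list of scanners instead of a single memstore scanner, so implementations forward the list to `StoreScanner` rather than wrapping one scanner. The toy sketch below uses a hypothetical `KVScanner` stand-in, not the real `KeyValueScanner`; only the before/after shape of the parameter is taken from the diff.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Stand-in for KeyValueScanner; the real HBase type is not needed here.
interface KVScanner {}

public class FlushHookSketch {

  // Old hook shape: exactly one memstore scanner arrived, and the
  // implementation had to wrap it before handing it on.
  static List<KVScanner> oldStyle(KVScanner memstoreScanner) {
    return Collections.singletonList(memstoreScanner);
  }

  // New hook shape: the caller supplies the scanner list directly (a
  // compacting memstore snapshot may span several segments, hence several
  // scanners), so it is forwarded as-is.
  static List<KVScanner> newStyle(List<KVScanner> scanners) {
    return scanners;
  }

  public static void main(String[] args) {
    KVScanner s = new KVScanner() {};
    System.out.println(oldStyle(s).size());                                   // 1
    System.out.println(newStyle(Arrays.asList(s, new KVScanner() {})).size()); // 2
  }
}
```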
hbase git commit: HBASE-17060 backport HBASE-16570 (Compute region locality in parallel at startup) to 1.3.1
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 446a21fed -> 693b51d81

HBASE-17060 backport HBASE-16570 (Compute region locality in parallel at startup) to 1.3.1

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/693b51d8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/693b51d8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/693b51d8

Branch: refs/heads/branch-1.3
Commit: 693b51d81af0c446b305af69fe130faee07581a6
Parents: 446a21f
Author: Yu Li
Authored: Tue Mar 21 15:14:31 2017 +0800
Committer: Yu Li
Committed: Tue Mar 21 15:14:31 2017 +0800

--
 .../hbase/master/balancer/BaseLoadBalancer.java | 11 +++--
 .../master/balancer/RegionLocationFinder.java   | 47 ++--
 .../balancer/TestRegionLocationFinder.java      | 21 +
 3 files changed, 71 insertions(+), 8 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/693b51d8/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
index c2529a8..2df4fbe 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
@@ -1231,7 +1231,7 @@ public abstract class BaseLoadBalancer implements LoadBalancer {
       return assignments;
     }

-    Cluster cluster = createCluster(servers, regions);
+    Cluster cluster = createCluster(servers, regions, false);
     List<HRegionInfo> unassignedRegions = new ArrayList<HRegionInfo>();

     roundRobinAssignment(cluster, regions, unassignedRegions,
@@ -1278,7 +1278,10 @@ public abstract class BaseLoadBalancer implements LoadBalancer {
   }

   protected Cluster createCluster(List<ServerName> servers,
-      Collection<HRegionInfo> regions) {
+      Collection<HRegionInfo> regions, boolean forceRefresh) {
+    if (forceRefresh) {
+      regionFinder.refreshAndWait(regions);
+    }
     // Get the snapshot of the current assignments for the regions in question, and then create
     // a cluster out of it. Note that we might have replicas already assigned to some servers
     // earlier. So we want to get the snapshot to see those assignments, but this will only contain
@@ -1352,7 +1355,7 @@ public abstract class BaseLoadBalancer implements LoadBalancer {
     }

     List<HRegionInfo> regions = Lists.newArrayList(regionInfo);
-    Cluster cluster = createCluster(servers, regions);
+    Cluster cluster = createCluster(servers, regions, false);
     return randomAssignment(cluster, regionInfo, servers);
   }

@@ -1427,7 +1430,7 @@ public abstract class BaseLoadBalancer implements LoadBalancer {
     int numRandomAssignments = 0;
     int numRetainedAssigments = 0;

-    Cluster cluster = createCluster(servers, regions.keySet());
+    Cluster cluster = createCluster(servers, regions.keySet(), true);

     for (Map.Entry<HRegionInfo, ServerName> entry : regions.entrySet()) {
       HRegionInfo region = entry.getKey();

http://git-wip-us.apache.org/repos/asf/hbase/blob/693b51d8/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
index a6724ee..6c5cb19 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
@@ -21,6 +21,7 @@ import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import com.google.common.collect.Lists;
+import com.google.common.util.concurrent.Futures;
 import com.google.common.util.concurrent.ListenableFuture;
 import com.google.common.util.concurrent.ListeningExecutorService;
 import com.google.common.util.concurrent.MoreExecutors;
@@ -63,11 +64,13 @@ import java.util.concurrent.TimeUnit;
 class RegionLocationFinder {
   private static final Log LOG = LogFactory.getLog(RegionLocationFinder.class);
   private static final long CACHE_TIME = 240 * 60 * 1000;
+  private static final HDFSBlocksDistribution EMPTY_BLOCK_DISTRIBUTION = new HDFSBlocksDistribution();
   private Configuration conf;
   private volatile ClusterStatus status;
   private MasterServices services;
   private final ListeningExecutorService
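The `regionFinder.refreshAndWait(regions)` call added above is a fork/join step: issue one asynchronous HDFS block-distribution lookup per region, then block until all complete, so startup pays one round of parallel lookups instead of one serial lookup per region. The sketch below shows that shape with plain `java.util.concurrent`; the real class uses Guava's `ListenableFuture`, and `locality()` here is a made-up stand-in for the HDFS query, with region names as plain strings.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelLocalitySketch {

  // Stand-in for the slow per-region HDFS block-distribution query.
  static float locality(String region) {
    return region.length() % 2 == 0 ? 1.0f : 0.5f;
  }

  // Fan out one lookup per region, then wait for every future -- the same
  // submit-all-then-join shape as refreshAndWait().
  static Map<String, Float> refreshAndWait(Collection<String> regions) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
      Map<String, Future<Float>> futures = new LinkedHashMap<>();
      for (String r : regions) {
        futures.put(r, pool.submit(() -> locality(r))); // fan out
      }
      Map<String, Float> localityByRegion = new LinkedHashMap<>();
      for (Map.Entry<String, Future<Float>> e : futures.entrySet()) {
        localityByRegion.put(e.getKey(), e.getValue().get()); // join all
      }
      return localityByRegion;
    } finally {
      pool.shutdown();
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(refreshAndWait(Arrays.asList("r1", "abc")));
  }
}
```

Note the `forceRefresh` flag in the diff: only the retained-assignment path (the startup case) passes `true`, so the ordinary balancing paths keep reading the cache.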
hbase git commit: HBASE-17059 backport HBASE-17039 (SimpleLoadBalancer schedules large amount of invalid region moves) to 1.3.1
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 98b5d2cd4 -> 446a21fed

HBASE-17059 backport HBASE-17039 (SimpleLoadBalancer schedules large amount of invalid region moves) to 1.3.1

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/446a21fe
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/446a21fe
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/446a21fe

Branch: refs/heads/branch-1.3
Commit: 446a21fedd1282c15939eb4c46d13c859beedd7a
Parents: 98b5d2c
Author: Yu Li
Authored: Tue Mar 21 14:25:58 2017 +0800
Committer: Yu Li
Committed: Tue Mar 21 14:27:05 2017 +0800

--
 .../hadoop/hbase/master/balancer/SimpleLoadBalancer.java | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/hbase/blob/446a21fe/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
index 4325585..a354e40 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
@@ -273,14 +273,10 @@ public class SimpleLoadBalancer extends BaseLoadBalancer {
         serversByLoad.entrySet()) {
       if (maxToTake == 0) break; // no more to take
       int load = server.getKey().getLoad();
-      if (load >= min && load > 0) {
+      if (load >= min) {
         continue; // look for other servers which haven't reached min
       }
       int regionsToPut = min - load;
-      if (regionsToPut == 0)
-      {
-        regionsToPut = 1;
-      }
       maxToTake -= regionsToPut;
       underloadedServers.put(server.getKey().getServerName(), regionsToPut);
     }
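The removed lines are the source of the invalid moves: with the old `load > 0` guard, a server already at the per-server minimum (in particular when min is 0) was still treated as underloaded, and the `regionsToPut == 0` case was bumped to 1, scheduling a move with nothing to gain. Below is a sketch of the corrected bookkeeping under illustrative names (a plain map of server name to region count, not the HBase API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UnderloadSketch {

  // Returns, per server, how many regions it should receive to reach min.
  static Map<String, Integer> underloaded(Map<String, Integer> loadByServer, int min) {
    Map<String, Integer> result = new LinkedHashMap<>();
    for (Map.Entry<String, Integer> e : loadByServer.entrySet()) {
      int load = e.getValue();
      if (load >= min) {
        continue; // fixed check: at or above min means nothing to put
      }
      // min - load is always > 0 here, so no artificial bump to 1 is needed
      result.put(e.getKey(), min - load);
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, Integer> load = new LinkedHashMap<>();
    load.put("rs1", 0);
    load.put("rs2", 3);
    // With min == 0 no server is underloaded; the old guard plus the
    // 0 -> 1 bump would have produced {rs1=1} here, an invalid move.
    System.out.println(underloaded(load, 0)); // {}
    System.out.println(underloaded(load, 2)); // {rs1=2}
  }
}
```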