[jira] [Created] (HBASE-17691) Add ScanMetrics support for async scan
Duo Zhang created HBASE-17691:

Summary: Add ScanMetrics support for async scan
Key: HBASE-17691
URL: https://issues.apache.org/jira/browse/HBASE-17691
Project: HBase
Issue Type: Sub-task
Reporter: Duo Zhang

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-17690) Clean up MOB code
Jingcheng Du created HBASE-17690:

Summary: Clean up MOB code
Key: HBASE-17690
URL: https://issues.apache.org/jira/browse/HBASE-17690
Project: HBase
Issue Type: Improvement
Components: mob
Reporter: Jingcheng Du
Assignee: Jingcheng Du

Clean up the code in MOB:
# Fix incorrect descriptions in comments.
# Fix warnings and remove redundant references in code.
# Correct the code used in unit tests.
# Add a throughput controller for DefaultMobStoreFlusher and DefaultMobStoreCompactor.
[jira] [Created] (HBASE-17689) hbase thrift2 THBaseservice support table.existsAll
Yechao Chen created HBASE-17689:

Summary: hbase thrift2 THBaseService support table.existsAll
Key: HBASE-17689
URL: https://issues.apache.org/jira/browse/HBASE-17689
Project: HBase
Issue Type: Improvement
Components: Thrift
Reporter: Yechao Chen

hbase thrift2 should support the Java client's batch existence check, existsAll(List<Get> gets) throws IOException. hbase.thrift would add a method to the THBaseService service like this:

list<bool> existsAll(
  1: required binary table,
  2: required list<TGet> tgets
) throws (1: TIOError io)
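As a sketch of the semantics the proposed call would expose: one boolean per requested row, in request order, mirroring Table.existsAll on the Java client side. The class below is illustrative only, using an in-memory set of row keys in place of a real HBase table; the name ExistsAllSketch is not HBase API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only: models the batch-existence semantics the proposed
// THBaseService.existsAll would provide, against an in-memory row-key set.
public class ExistsAllSketch {
    private final Set<String> storedRows;

    public ExistsAllSketch(Set<String> storedRows) {
        this.storedRows = storedRows;
    }

    // One boolean per requested row, in request order.
    public List<Boolean> existsAll(List<String> rowKeys) {
        List<Boolean> result = new ArrayList<>(rowKeys.size());
        for (String row : rowKeys) {
            result.add(storedRows.contains(row));
        }
        return result;
    }

    public static void main(String[] args) {
        ExistsAllSketch table =
            new ExistsAllSketch(new HashSet<>(Arrays.asList("r1", "r3")));
        System.out.println(table.existsAll(Arrays.asList("r1", "r2", "r3")));
    }
}
```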
[jira] [Created] (HBASE-17688) MultiRowRangeFilter not working correctly if given same start and stop RowKey
Ravi Ahuj created HBASE-17688:

Summary: MultiRowRangeFilter not working correctly if given same start and stop RowKey
Key: HBASE-17688
URL: https://issues.apache.org/jira/browse/HBASE-17688
Project: HBase
Issue Type: Bug
Affects Versions: 1.1.2
Reporter: Ravi Ahuj
Priority: Minor

try (final Connection conn = ConnectionFactory.createConnection(conf);
     final Table scanTable = conn.getTable(table)) {
  List<MultiRowRangeFilter.RowRange> rowRangesList = new ArrayList<>();
  String startRowkey = "abc";
  String stopRowkey = "abc";
  rowRangesList.add(new MultiRowRangeFilter.RowRange(startRowkey, true, stopRowkey, true));
  Scan scan = new Scan();
  scan.setFilter(new MultiRowRangeFilter(rowRangesList));
  ResultScanner scanner = scanTable.getScanner(scan);
  for (Result result : scanner) {
    String rowkey = new String(result.getRow());
    System.out.println(rowkey);
  }
}

In the HBase Java API, we want to do multiple scans of a table using MultiRowRangeFilter. When we give multiple filters of startRowKey and stopRowKey, it does not work properly when a range's StartRowKey equals its StopRowKey. Ideally it should return only the one row with that row key, but instead it returns all rows starting from that row key in the HBase table.
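The behavior the reporter expected can be stated as a plain inclusive-range check: with both bounds inclusive and start equal to stop, exactly one row key can match. The sketch below is not the actual filter internals, just the intended comparison, using the unsigned lexicographic ordering HBase applies to row keys.

```java
// Illustrative only: the inclusive-range check a row-range filter should
// apply. With startInc and stopInc both true and start == stop, exactly
// one row key can match -- the behavior the reporter expected.
public class RowRangeCheck {
    static boolean inRange(byte[] row, byte[] start, boolean startInc,
                           byte[] stop, boolean stopInc) {
        int cmpStart = compare(row, start);
        int cmpStop = compare(row, stop);
        boolean afterStart = startInc ? cmpStart >= 0 : cmpStart > 0;
        boolean beforeStop = stopInc ? cmpStop <= 0 : cmpStop < 0;
        return afterStart && beforeStop;
    }

    // Lexicographic, unsigned byte comparison, as HBase orders row keys.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```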
[jira] [Resolved] (HBASE-17687) hive on hbase table and phoenix table can't be selected
[ https://issues.apache.org/jira/browse/HBASE-17687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashish Singhi resolved HBASE-17687.
Resolution: Invalid
Hadoop Flags: (was: Incompatible change)

> hive on hbase table and phoenix table can't be selected
>
> Key: HBASE-17687
> URL: https://issues.apache.org/jira/browse/HBASE-17687
> Project: HBase
> Issue Type: Improvement
> Components: hbase
> Affects Versions: 1.0.2
> Reporter: yunliangchen
> [...]
[jira] [Created] (HBASE-17687) hive on hbase table and phoenix table can't be selected
yunliangchen created HBASE-17687:

Summary: hive on hbase table and phoenix table can't be selected
Key: HBASE-17687
URL: https://issues.apache.org/jira/browse/HBASE-17687
Project: HBase
Issue Type: Improvement
Components: hbase
Affects Versions: 1.0.2
Environment: hadoop 2.7.2, hbase 1.0.2, phoenix 4.4, hive 1.3, all based on huawei FusionInsight HD (FusionInsight V100R002C60U10SPC001)
Reporter: yunliangchen

First, I created a table in Phoenix:

DROP TABLE IF EXISTS bidwd_test01 CASCADE;
CREATE TABLE IF NOT EXISTS bidwd_test01 (
  rk VARCHAR,
  c1 integer,
  c2 VARCHAR,
  c3 VARCHAR,
  c4 VARCHAR
  constraint bidwd_test01_pk primary key(rk)
)
COMPRESSION='SNAPPY';

Then I upserted two rows into the table:

upsert into bidwd_test01 values('001',1,'zhangsan','20170217','2017-02-17 12:34:22');
upsert into bidwd_test01 values('002',2,'lisi','20170216','2017-02-16 12:34:22');

Finally, I scanned the table:

select * from bidwd_test01;

It was OK up to this point. But I wanted to create a Hive-on-HBase table mapping to the Phoenix table, with a script like this:

USE BIDWD;
DROP TABLE test01;
CREATE EXTERNAL TABLE test01
(
  rk string,
  id int,
  name string,
  datekey string,
  time_stamp string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,0:C1,0:C2,0:C3,0:C4")
TBLPROPERTIES ("hbase.table.name" = "BIDWD_TEST01");

I then inserted some data into this table and scanned it:

set hive.execution.engine=mr;
insert into test01 values('003',3,'lisi2','20170215','2017-02-15 12:34:22');
select * from test01;

But there are problems with the result:

+------------+------------+--------------+-----------------+----------------------+
| test01.rk  | test01.id  | test01.name  | test01.datekey  | test01.time_stamp    |
+------------+------------+--------------+-----------------+----------------------+
| 001        | NULL       | zhangsan     | 20170217        | 2017-02-17 12:34:22  |
| 002        | NULL       | lisi         | 20170216        | 2017-02-16 12:34:22  |
| 003        | 3          | lisi2        | 20170215        | 2017-02-15 12:34:22  |
+------------+------------+--------------+-----------------+----------------------+

The column "id"'s value is NULL; only the last row is OK. And when I scan the data in Phoenix, there are errors like this:

Error: ERROR 201 (22000): Illegal data. Expected length of at least 115 bytes, but had 31 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at least 115 bytes, but had 31
        at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:389)
        at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
        at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
        at org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:113)
        at org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
        at org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:591)
        at sqlline.Rows$Row.<init>(Rows.java:183)
        at sqlline.BufferedRows.<init>(BufferedRows.java:38)
        at sqlline.SqlLine.print(SqlLine.java:1546)
        at sqlline.Commands.execute(Commands.java:833)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:702)
        at sqlline.SqlLine.begin(SqlLine.java:575)
        at sqlline.SqlLine.start(SqlLine.java:292)
        at sqlline.SqlLine.main(SqlLine.java:194)

I don't know why. How can I solve this problem?
Re: [VOTE] First release candidate for HBase 1.1.9 (RC0) is available
Thanks Nick. Plan to look tomorrow.

On Thu, Feb 23, 2017 at 6:31 PM, Nick Dimiduk wrote:
> Reminder: T-minus 3 days remain on the voting period.
>
> Thanks,
> -n
>
> On Mon, Feb 20, 2017 at 11:44 PM Nick Dimiduk wrote:
> > I'm happy to announce the first release candidate of HBase 1.1.9
> > (HBase-1.1.9RC0) is available for download at
> > https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.9RC0/
> >
> > Maven artifacts are also available in the staging repository
> > https://repository.apache.org/content/repositories/orgapachehbase-1163
> >
> > Artifacts are signed with my code signing subkey 0xAD9039071C3489BD,
> > available in the Apache keys directory
> > https://people.apache.org/keys/committer/ndimiduk.asc
> >
> > There's also a signed tag for this release at
> > https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=commit;h=0d1feabed5295495ed2257d31fab9e6553e8a9d7
> >
> > The detailed source and binary compatibility report vs 1.1.8 has been
> > published for your review at
> > http://home.apache.org/~ndimiduk/1.1.8_1.1.9RC0_compat_report.html
> >
> > HBase 1.1.9 is the ninth patch release in the HBase 1.1 line, continuing
> > on the theme of bringing a stable, reliable database to the Hadoop and
> > NoSQL communities. This release includes nearly 20 bug fixes since the
> > 1.1.8 release. Notable correctness fixes include HBASE-17238, HBASE-17587,
> > HBASE-17275, and HBASE-17265.
> >
> > The full list of fixes included in this release is available at
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12338734
> > and in the CHANGES.txt file included in the distribution.
> >
> > Please try out this candidate and vote +/-1 by 23:59 Pacific time on
> > Sunday, 2017-02-26 as to whether we should release these artifacts as
> > HBase 1.1.9.
> >
> > Thanks,
> > Nick

--
Best regards,
- Andy

If you are given a choice, you believe you have acted freely. - Raymond Teller (via Peter Watts)
Re: [VOTE] First release candidate for HBase 1.1.9 (RC0) is available
Reminder: T-minus 3 days remain on the voting period.

Thanks,
-n

On Mon, Feb 20, 2017 at 11:44 PM Nick Dimiduk wrote:
> I'm happy to announce the first release candidate of HBase 1.1.9
> (HBase-1.1.9RC0) is available for download at
> https://dist.apache.org/repos/dist/dev/hbase/hbase-1.1.9RC0/
>
> Maven artifacts are also available in the staging repository
> https://repository.apache.org/content/repositories/orgapachehbase-1163
>
> Artifacts are signed with my code signing subkey 0xAD9039071C3489BD,
> available in the Apache keys directory
> https://people.apache.org/keys/committer/ndimiduk.asc
>
> There's also a signed tag for this release at
> https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=commit;h=0d1feabed5295495ed2257d31fab9e6553e8a9d7
>
> The detailed source and binary compatibility report vs 1.1.8 has been
> published for your review at
> http://home.apache.org/~ndimiduk/1.1.8_1.1.9RC0_compat_report.html
>
> HBase 1.1.9 is the ninth patch release in the HBase 1.1 line, continuing
> on the theme of bringing a stable, reliable database to the Hadoop and
> NoSQL communities. This release includes nearly 20 bug fixes since the
> 1.1.8 release. Notable correctness fixes include HBASE-17238, HBASE-17587,
> HBASE-17275, and HBASE-17265.
>
> The full list of fixes included in this release is available at
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12338734
> and in the CHANGES.txt file included in the distribution.
>
> Please try out this candidate and vote +/-1 by 23:59 Pacific time on
> Sunday, 2017-02-26 as to whether we should release these artifacts as
> HBase 1.1.9.
>
> Thanks,
> Nick
Re: [DISCUSS] Tracking HBase Snapshots for Space Quotas
Oops! Thanks, Ted. Will flip that now.

Ted Yu wrote:
> Josh:
> The design doc in [1] is View only. Can you give viewers permission to comment?
>
> On Wed, Feb 22, 2017 at 2:04 PM, Josh Elser wrote:
> > Hiya folks,
> >
> > As we're wrapping up on the current set of features listed in HBASE-16961
> > for tracking and limiting the HDFS space used by HBase tables and
> > namespaces, I wanted to present a doc that outlines an approach to tracking
> > snapshots in the context of space quotas.
> >
> > As most operators know, snapshots can be a cause of the "why is my HDFS
> > full??" category of errors. Obviously, this makes it a logical next step to
> > track. The following docs present (what I think is) a fairly simple
> > extension to the existing design/implementation as outlined in HBASE-16961.
> >
> > For those with the cycles to give it some thought/feedback, I'd be
> > over-joyed to receive it! Thanks in advance to our Clay, Enis, and Devaraj
> > for the feedback they've provided already.
> >
> > As always, your choice of Google doc [1] and PDF [2] are available.
> >
> > - Josh
> >
> > [1] https://docs.google.com/document/d/1f7utThEBYRXYHvp3e5fOhQBv2K1aeuzGHGEfNNE3Clc/edit?usp=sharing
> > [2] http://home.apache.org/~elserj/hbase/FileSystemQuotasforApacheHBase-Snapshots.pdf
[jira] [Created] (HBASE-17686) Improve Javadoc comments in Observer Interfaces
Zach York created HBASE-17686:

Summary: Improve Javadoc comments in Observer Interfaces
Key: HBASE-17686
URL: https://issues.apache.org/jira/browse/HBASE-17686
Project: HBase
Issue Type: Improvement
Components: Coprocessors
Affects Versions: 2.0.0
Reporter: Zach York
Assignee: Zach York
Priority: Minor

Based on comments from https://issues.apache.org/jira/browse/HBASE-17312, we should improve the Javadoc comments in the Observer interfaces. This JIRA includes adding @return tags to clarify what is being returned (and why), and either improving @param/@throws tags or removing them if there is no way to provide meaningful information.
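As a sketch of the style being proposed, compare a @return tag that merely restates the type with one that explains the contract. The interface and method below are hypothetical, not part of the HBase Observer API:

```java
// Hypothetical interface, illustrating the proposed Javadoc style: @return
// explains what the value means and why, rather than restating its type.
public interface ExampleObserver {
    /**
     * Called before a flush request is acted on.
     *
     * @param requestedSize the size in bytes that triggered the flush request
     * @return {@code true} to let the flush proceed, {@code false} to veto it;
     *         a veto only defers the flush, it does not discard pending writes
     */
    default boolean preFlushRequest(long requestedSize) {
        // Default behavior for this sketch: allow any positive-size flush.
        return requestedSize > 0;
    }
}
```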
[jira] [Created] (HBASE-17685) Tools/Admin API to dump the replica load of server(s)
Thiruvel Thirumoolan created HBASE-17685:

Summary: Tools/Admin API to dump the replica load of server(s)
Key: HBASE-17685
URL: https://issues.apache.org/jira/browse/HBASE-17685
Project: HBase
Issue Type: Sub-task
Components: FavoredNodes
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan

RPM has an option to dump the favored node distribution. We need an API to get the replica load from the master.
[jira] [Created] (HBASE-17684) Tools/API to read favored nodes for region(s)
Thiruvel Thirumoolan created HBASE-17684:

Summary: Tools/API to read favored nodes for region(s)
Key: HBASE-17684
URL: https://issues.apache.org/jira/browse/HBASE-17684
Project: HBase
Issue Type: Sub-task
Components: FavoredNodes
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan

We need APIs to read FN from the master. This will help in troubleshooting when regions are in RIT due to all FN being dead, etc. For small clusters, we could just read from SnapshotOfRegionAssignmentFromMeta, but for large clusters that takes 4-5 minutes.
[jira] [Created] (HBASE-17683) Admin API to update favored nodes in Master
Thiruvel Thirumoolan created HBASE-17683:

Summary: Admin API to update favored nodes in Master
Key: HBASE-17683
URL: https://issues.apache.org/jira/browse/HBASE-17683
Project: HBase
Issue Type: Sub-task
Components: FavoredNodes
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan

For troubleshooting, decommissioning nodes, or replacing nodes, we need an API to update the FN for a set of regions in the master.
Re: [DISCUSS] Tracking HBase Snapshots for Space Quotas
Josh:
The design doc in [1] is View only. Can you give viewers permission to comment?

On Wed, Feb 22, 2017 at 2:04 PM, Josh Elser wrote:
> Hiya folks,
>
> As we're wrapping up on the current set of features listed in HBASE-16961
> for tracking and limiting the HDFS space used by HBase tables and
> namespaces, I wanted to present a doc that outlines an approach to tracking
> snapshots in the context of space quotas.
>
> As most operators know, snapshots can be a cause of the "why is my HDFS
> full??" category of errors. Obviously, this makes it a logical next step to
> track. The following docs present (what I think is) a fairly simple
> extension to the existing design/implementation as outlined in HBASE-16961.
>
> For those with the cycles to give it some thought/feedback, I'd be
> over-joyed to receive it! Thanks in advance to our Clay, Enis, and Devaraj
> for the feedback they've provided already.
>
> As always, your choice of Google doc [1] and PDF [2] are available.
>
> - Josh
>
> [1] https://docs.google.com/document/d/1f7utThEBYRXYHvp3e5fOhQBv2K1aeuzGHGEfNNE3Clc/edit?usp=sharing
> [2] http://home.apache.org/~elserj/hbase/FileSystemQuotasforApacheHBase-Snapshots.pdf
Successful: HBase Generate Website
Build status: Successful

If successful, the website and docs have been generated. To update the live site, follow the instructions below. If failed, skip to the bottom of this email.

Use the following commands to download the patch and apply it to a clean branch based on origin/asf-site. If you prefer to keep the hbase-site repo around permanently, you can skip the clone step.

git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
cd hbase-site
wget -O- https://builds.apache.org/job/hbase_generate_website/497/artifact/website.patch.zip | funzip > 8fb44fae35123ab7ddeecfbb80ae5e051c07e111.patch
git fetch
git checkout -b asf-site-8fb44fae35123ab7ddeecfbb80ae5e051c07e111 origin/asf-site
git am --whitespace=fix 8fb44fae35123ab7ddeecfbb80ae5e051c07e111.patch

At this point, you can preview the changes by opening index.html or any of the other HTML pages in your local asf-site-8fb44fae35123ab7ddeecfbb80ae5e051c07e111 branch.

There are lots of spurious changes, such as timestamps and CSS styles in tables, so a generic git diff is not very useful. To see a list of files that have been added, deleted, renamed, changed type, or are otherwise interesting, use the following command:

git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using these commands:

git commit --allow-empty -m "Empty commit" # to work around a current ASF INFRA bug
git push origin asf-site-8fb44fae35123ab7ddeecfbb80ae5e051c07e111:asf-site
git checkout asf-site
git branch -D asf-site-8fb44fae35123ab7ddeecfbb80ae5e051c07e111

Changes take a couple of minutes to be propagated. You can verify whether they have been propagated by looking at the Last Published date at the bottom of http://hbase.apache.org/. It should match the date in the index.html on the asf-site branch in Git.
As a courtesy, reply-all to this email to let other committers know you pushed the site.

If failed, see https://builds.apache.org/job/hbase_generate_website/497/console
[jira] [Created] (HBASE-17682) Region stuck in merging_new state indefinitely
Abhishek Singh Chouhan created HBASE-17682:

Summary: Region stuck in merging_new state indefinitely
Key: HBASE-17682
URL: https://issues.apache.org/jira/browse/HBASE-17682
Project: HBase
Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan

Ran into this while tinkering with a chaos monkey that did splits, merges, and kills exclusively, which resulted in regions getting stuck in transition in the MERGING_NEW state indefinitely. I think this happens when the RS is killed during the merge but before the point of no return, in which case the new region's state in the master is MERGING_NEW. When the RS dies at this point, the master executes RegionStates.serverOffline() for the RS, which does:

for (RegionState state : regionsInTransition.values()) {
  HRegionInfo hri = state.getRegion();
  if (assignedRegions.contains(hri)) {
    // Region is open on this region server, but in transition.
    // This region must be moving away from this server, or splitting/merging.
    // SSH will handle it, either skip assigning, or re-assign.
    LOG.info("Transitioning " + state + " will be handled by ServerCrashProcedure for " + sn);
  } else if (sn.equals(state.getServerName())) {
    // Region is in transition on this region server, and this
    // region is not open on this server. So the region must be
    // moving to this server from another one (i.e. opening or
    // pending open on this server, was open on another one.
    // Offline state is also kind of pending open if the region is in
    // transition. The region could be in failed_close state too if we have
    // tried several times to open it while this region server is not reachable)
    if (state.isPendingOpenOrOpening() || state.isFailedClose() || state.isOffline()) {
      LOG.info("Found region in " + state + " to be reassigned by ServerCrashProcedure for " + sn);
      rits.add(hri);
    } else if (state.isSplittingNew()) {
      regionsToCleanIfNoMetaEntry.add(state.getRegion());
    } else {
      LOG.warn("THIS SHOULD NOT HAPPEN: unexpected " + state);
    }
  }
}

We do not handle MERGING_NEW here and end up with "THIS SHOULD NOT HAPPEN: unexpected ...". After this we are left with a new region that does not have any data and is stuck, which prevents the balancer from running. I think we should handle MERGING_NEW the same way as SPLITTING_NEW.
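The proposed fix above can be sketched as follows. This is a simplified, self-contained model of the branch in question, not the actual RegionStates code: the enum, method, and class names are illustrative. The point is only that both "new" region states, SPLITTING_NEW and MERGING_NEW, mark regions created by a split or merge the dead server never completed, so both should be queued for meta-entry cleanup.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a simplified model of the serverOffline() branch, with
// MERGING_NEW handled the same way as SPLITTING_NEW, as the report proposes.
public class ServerOfflineSketch {
    enum State { PENDING_OPEN, OPENING, FAILED_CLOSE, OFFLINE, SPLITTING_NEW, MERGING_NEW, OPEN }

    // Returns the regions whose meta entries should be checked for cleanup
    // after the hosting region server dies.
    static List<String> regionsToCleanIfNoMetaEntry(List<String> regions, List<State> states) {
        List<String> toClean = new ArrayList<>();
        for (int i = 0; i < regions.size(); i++) {
            State s = states.get(i);
            // Both "new" states mean the region was being created by a
            // split or merge that the dead server never completed.
            if (s == State.SPLITTING_NEW || s == State.MERGING_NEW) {
                toClean.add(regions.get(i));
            }
        }
        return toClean;
    }
}
```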