[jira] [Created] (HBASE-11406) ExportSnapshot#run better catch Throwable for ExportSnapshot#runCopyJob
Qiang Tian created HBASE-11406:
-------------------------------

             Summary: ExportSnapshot#run better catch Throwable for ExportSnapshot#runCopyJob
                 Key: HBASE-11406
                 URL: https://issues.apache.org/jira/browse/HBASE-11406
             Project: HBase
          Issue Type: Improvement
          Components: snapshots
            Reporter: Qiang Tian
            Assignee: Qiang Tian
            Priority: Trivial

ExportSnapshot#run should catch Throwable around ExportSnapshot#runCopyJob. Since runCopyJob involves Hadoop and MapReduce, errors other than Exception can be thrown; catching them like below in the first place is helpful for error handling and problem identification.

14/06/24 18:39:30 ERROR snapshot.ExportSnapshot: Snapshot export failed
java.lang.NoSuchMethodError: org/apache/hadoop/mapred/JobTracker.getAddress(Lorg/apache/hadoop/conf/Configuration;)Ljava/net/InetSocketAddress;
	at org.apache.hadoop.mapred.JobTrackerClientProtocolProvider.create(JobTrackerClientProtocolProvider.java:44)
	at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:95)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
	at org.apache.hadoop.hbase.mapreduce.JobUtil.getStagingDir(JobUtil.java:54)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.getInputFolderPath(ExportSnapshot.java:541)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:609)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:776)
	at org.apache.hadoop.hbase.backup.BackupCopier.copy(BackupCopier.java:248)
	at org.apache.hadoop.hbase.backup.BackupHandler.snapshotCopy(BackupHandler.java:559)
	at org.apache.hadoop.hbase.backup.BackupHandler.call(BackupHandler.java:141)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
	at java.util.concurrent.FutureTask.run(FutureTask.java:149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
	at java.lang.Thread.run(Thread.java:738)

--
This message was sent by Atlassian JIRA
(v6.2#6252)
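A minimal sketch of the pattern the issue proposes, not the actual HBase patch: widen the catch around the copy-job call so that Errors such as NoSuchMethodError, raised here by a hadoop/mapreduce version mismatch, are reported instead of escaping the export path. Class and method names below are illustrative stand-ins.

```java
public class ExportRunner {
    // Stand-in for ExportSnapshot#runCopyJob, which touches hadoop and
    // mapreduce classes and can therefore raise LinkageErrors, not just
    // checked Exceptions.
    void runCopyJob() {
        throw new NoSuchMethodError("org/apache/hadoop/mapred/JobTracker.getAddress");
    }

    // Stand-in for ExportSnapshot#run: catch Throwable rather than Exception,
    // so even an Error is logged with its cause and turned into an exit code
    // instead of killing the calling thread with no context.
    public int run() {
        try {
            runCopyJob();
            return 0;
        } catch (Throwable t) {
            System.err.println("Snapshot export failed: " + t);
            return 1;
        }
    }
}
```

With a plain `catch (Exception e)`, the NoSuchMethodError above would propagate out of run(); catching Throwable is what makes the failure visible at the export entry point.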
[jira] [Created] (HBASE-11407) hbase-client should not require Jackson for pure HBase queries be executed
Sergey Beryozkin created HBASE-11407:
-------------------------------------

             Summary: hbase-client should not require Jackson for pure HBase queries be executed
                 Key: HBASE-11407
                 URL: https://issues.apache.org/jira/browse/HBASE-11407
             Project: HBase
          Issue Type: Improvement
          Components: Client
            Reporter: Sergey Beryozkin
            Priority: Minor

Including the hbase-client module dependency while excluding the Jackson dependencies causes a pure HBase query (run with HTableInterface) to fail with a Jackson ObjectMapper ClassNotFoundException. This is because org.apache.hadoop.hbase.client.Operation statically initializes an ObjectMapper. Moving the ObjectMapper to a dedicated utility would help. The patch will be attached.
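A hedged sketch of the idea behind the fix, not the actual patch: instead of initializing the mapper in a static field of a class every client loads, keep it behind a holder class that the JVM loads only on first use. `JsonMapperHolder` and `createMapper()` are illustrative names, and `Object` stands in for Jackson's ObjectMapper so the sketch is self-contained.

```java
public final class JsonMapperHolder {
    private JsonMapperHolder() {}

    // The JVM loads Holder (and thus constructs the mapper) only when get()
    // is first called, so a client that never serializes to JSON never
    // triggers class loading of the mapper's dependencies.
    private static final class Holder {
        static final Object MAPPER = createMapper();
    }

    private static Object createMapper() {
        return new Object(); // stand-in for `new ObjectMapper()`
    }

    public static Object get() {
        return Holder.MAPPER;
    }
}
```

The lazy-holder idiom gives thread-safe, on-demand initialization for free via JVM class-loading guarantees, which is why it is a common shape for this kind of dependency-isolation fix.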
[jira] [Created] (HBASE-11408) multiple SLF4J bindings warning messages when running HBase shell
Duo Xu created HBASE-11408:
---------------------------

             Summary: multiple SLF4J bindings warning messages when running HBase shell
                 Key: HBASE-11408
                 URL: https://issues.apache.org/jira/browse/HBASE-11408
             Project: HBase
          Issue Type: Bug
    Affects Versions: 0.98.3, 0.98.2
            Reporter: Duo Xu

When running the HBase shell, we saw warnings like this:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/apps/dist/hbase-0.98.0.2.1.3.0-1928-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/apps/dist/hadoop-2.4.0.2.1.3.0-1928/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
[jira] [Created] (HBASE-11409) Add more flexibility for input directory structure to LoadIncrementalHFiles
churro morales created HBASE-11409:
-----------------------------------

             Summary: Add more flexibility for input directory structure to LoadIncrementalHFiles
                 Key: HBASE-11409
                 URL: https://issues.apache.org/jira/browse/HBASE-11409
             Project: HBase
          Issue Type: Bug
    Affects Versions: 0.94.20
            Reporter: churro morales

Use case: we were trying to combine two very large tables into a single table. We ran jobs in one datacenter that populated certain column families and jobs in another datacenter that populated other column families, took snapshots, and exported them to their respective datacenters. We then wanted to simply take the HDFS-restored snapshot and use LoadIncrementalHFiles to merge the data.

It would be nice to add support for running LoadIncrementalHFiles on a directory where the depth of store files is something other than two (the current behavior). With snapshots, it would be nice if you could pass in a restored HDFS snapshot's directory and have the tool run. I am attaching a patch that parameterizes the bulk-load timeout as well as the default store file depth.
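The requested flexibility can be pictured with a small sketch, assuming nothing about the actual LoadIncrementalHFiles internals: collect candidate store files at a configurable depth below the input root instead of hard-coding the two-level family/hfile layout. `StoreFileFinder` and its method are illustrative names, and java.nio stands in for the Hadoop FileSystem API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StoreFileFinder {
    // Collect regular files at most maxDepth levels below root. A depth of 2
    // corresponds to the current assumption (family dir -> hfile); a larger
    // depth would accept a restored snapshot's deeper directory layout.
    public static List<Path> storeFiles(Path root, int maxDepth) throws IOException {
        try (Stream<Path> walk = Files.walk(root, maxDepth)) {
            return walk.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }
}
```

Parameterizing the depth, as the attached patch reportedly does, means the same traversal serves both the classic bulk-load layout and a restored snapshot directory.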
[jira] [Resolved] (HBASE-11385) [Visibility Controller]Check covering permission of the user issuing Delete for deletes without Cell Visibility
[ https://issues.apache.org/jira/browse/HBASE-11385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell resolved HBASE-11385.
------------------------------------
       Resolution: Duplicate
    Fix Version/s:     (was: 0.98.4)
                       (was: 0.99.0)
         Assignee:     (was: ramkrishna.s.vasudevan)

Deletes without visibility expressions should match only cells without visibility labels. The work here ends up being equivalent to HBASE-11384, so closing as a duplicate.

[Visibility Controller]Check covering permission of the user issuing Delete for deletes without Cell Visibility
---------------------------------------------------------------------------------------------------------------

                 Key: HBASE-11385
                 URL: https://issues.apache.org/jira/browse/HBASE-11385
             Project: HBase
          Issue Type: Sub-task
    Affects Versions: 0.98.3
            Reporter: ramkrishna.s.vasudevan

In cases where the delete does have the Cell visibility passed for exact label matching, check the user issuing the delete and use that user's authorizations to find the covering cells. Will prepare a patch once the base issue gets checked in.
Planning to roll the 0.98.4 RC on 6/30
Planning to roll the 0.98.4 RC on Monday 6/30. I should have done it this week. Sorry, got a little sidetracked.

--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
[jira] [Resolved] (HBASE-9804) Startup option for holding user table deployment
[ https://issues.apache.org/jira/browse/HBASE-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell resolved HBASE-9804.
-----------------------------------
    Resolution: Later

Startup option for holding user table deployment
------------------------------------------------

                 Key: HBASE-9804
                 URL: https://issues.apache.org/jira/browse/HBASE-9804
             Project: HBase
          Issue Type: New Feature
    Affects Versions: 0.98.0
            Reporter: Andrew Purtell
            Priority: Minor

Introduce a boolean configuration option, false by default, that if set to 'true' will cause the master to set all user tables to the disabled state at startup. From there, individual tables can be onlined as normal. Add a new admin method HBA#enableAll for convenience, and a new HBA#disableAll for symmetry. Add shell support for sending those new admin commands.
[jira] [Created] (HBASE-11410) A tool for deleting data of a column using BulkDeleteEndpoint
Liu Shaohui created HBASE-11410:
--------------------------------

             Summary: A tool for deleting data of a column using BulkDeleteEndpoint
                 Key: HBASE-11410
                 URL: https://issues.apache.org/jira/browse/HBASE-11410
             Project: HBase
          Issue Type: Improvement
          Components: Coprocessors
            Reporter: Liu Shaohui
            Assignee: Liu Shaohui
            Priority: Minor

Sometimes we need a tool to delete unused or wrongly formatted data in certain columns, so we added a tool using BulkDeleteEndpoint.

Usage: delete column f1:c1 in table t1
{quote}
./hbase org.apache.hadoop.hbase.coprocessor.example.BulkDeleteTool t1 f1:c1
{quote}