[jira] [Resolved] (HBASE-5054) hadoop's classpath takes precedence over hbase's classpath

2012-01-17 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-5054.


Resolution: Duplicate

This should already be fixed via HBASE-4854.

 hadoop's classpath takes precedence over hbase's classpath
 --

 Key: HBASE-5054
 URL: https://issues.apache.org/jira/browse/HBASE-5054
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.90.4
Reporter: Steve Hoffman

 Since hbase shares the metrics framework with core hadoop, and both use the 
 'hadoop-metrics.properties' file on the classpath for configuration, the 
 ordering causes hbase's directories to be shadowed by hadoop's. What this 
 means is that to set hbase's hadoop-metrics.properties I have to do it in 
 /etc/hadoop/conf, since the one in /etc/hbase/conf comes later in the 
 classpath.
 Running hbase classpath confirms the ordering:
 {quote}
 % hbase classpath
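
 (The classpath listing above is truncated in the archive.) As an illustration of the shadowing, here is a minimal Java sketch -- a hypothetical probe class, not part of HBase -- that, when run with the same classpath that 'hbase classpath' prints, shows which copy of hadoop-metrics.properties the JVM resolves first and lists every copy in search order:
 {code}
// Hypothetical probe class (not part of HBase).
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class MetricsConfigProbe {
  public static void main(String[] args) throws Exception {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    // getResource() returns the first match in classpath order, i.e. the copy
    // that shadows all the others.
    URL effective = cl.getResource("hadoop-metrics.properties");
    System.out.println("Effective config: " + effective);
    // getResources() lists every copy, in search order.
    List<URL> all = Collections.list(cl.getResources("hadoop-metrics.properties"));
    for (URL u : all) {
      System.out.println("On classpath: " + u);
    }
  }
}
 {code}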
 

[jira] [Resolved] (HBASE-755) Add a method to retrieve the size and other information of a table

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-755.
---

Resolution: Won't Fix

You should be able to retrieve 'dus'-style results with Hadoop's FileSystem 
API itself. I think the number of splits is a useful metric to show, but it 
doesn't warrant a dedicated API. We can open a new JIRA if that's still not 
shown today.
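
For illustration, a minimal sketch of the FileSystem-based approach suggested above; the table path /hbase/mytable is an assumption and should be replaced with the actual hbase.rootdir and table name:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TableSizeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Assumed layout: <hbase.rootdir>/<table name>; adjust for your cluster.
    Path tableDir = new Path("/hbase/mytable");
    // Equivalent to 'hadoop fs -dus' on the table directory.
    ContentSummary summary = fs.getContentSummary(tableDir);
    System.out.println("Bytes used: " + summary.getLength());
    System.out.println("File count: " + summary.getFileCount());
  }
}
{code}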

 Add a method to retrieve the size and other information of a table
 --

 Key: HBASE-755
 URL: https://issues.apache.org/jira/browse/HBASE-755
 Project: HBase
  Issue Type: New Feature
  Components: client
Affects Versions: 0.1.3
Reporter: Daniel Yu

 It would be good to have a method to obtain the size of a table - the total 
 bytes of all HStore files in HDFS (hdfs://xxx/table name/column 
 name/xxx/data) - and maybe other information like the number of splits, etc.
 I think it's a simple but (probably) important metric for evaluating a table.


[jira] [Resolved] (HBASE-878) cell purge feature

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-878.
---

Resolution: Won't Fix

This is tricky to do since (I believe) you can't know when the deletion is 
really done. I think a delete should just mean delete, as it does in any other 
FS/DB/system.

HBASE-848 should help a bit in achieving something like this, but recovering 
cells shouldn't be a regular part of what HTable carries.

A 'trash' for cells may make sense, though; we can pursue that in a new JIRA 
if needed (so far it isn't?).

 cell purge feature
 --

 Key: HBASE-878
 URL: https://issues.apache.org/jira/browse/HBASE-878
 Project: HBase
  Issue Type: New Feature
  Components: regionserver
Affects Versions: 0.2.0
Reporter: Michael Bieniosek

 Sometimes cells get inserted by accident, and we want to delete them so that 
 the cells behind them become visible. A delete just inserts a deleted cell 
 at a newer timestamp, which makes the entire column disappear; but in some 
 cases it's preferable to make the next-newest value the current value.
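
 For reference, with the modern client API a single accidental version can be removed so the older value becomes current again. This is a minimal sketch; the table, row, family, qualifier, and timestamp values are all assumptions:
 {code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PurgeOneVersion {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("mytable"))) {
      long badTimestamp = 1325376000000L; // timestamp of the accidental write (assumed)
      Delete d = new Delete(Bytes.toBytes("row1"));
      // Deleting with an explicit timestamp marks only that version as deleted;
      // older versions of the cell stay visible.
      d.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), badTimestamp);
      table.delete(d);
    }
  }
}
 {code}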


[jira] [Resolved] (HBASE-753) Safe copy of tables using hdfs copy (WAS - Can't replace the data of a particular table by copying its files on HDFS)

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-753.
---

Resolution: Not A Problem

HBase presently has utilities that aid in copying tables, among other 
things (live replication, etc.). These should be sufficient, I'd think.
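
For example, the stock CopyTable MapReduce tool can copy a single table between clusters; a minimal sketch, with the table names and the peer ZooKeeper address being assumptions:
{code}
import org.apache.hadoop.hbase.mapreduce.CopyTable;

public class CopySingleTable {
  public static void main(String[] args) throws Exception {
    // Same as running 'hbase org.apache.hadoop.hbase.mapreduce.CopyTable ...'
    // from the shell; table names and the peer ZK address are assumptions.
    CopyTable.main(new String[] {
        "--new.name=dev_table",              // destination table name
        "--peer.adr=devzk:2181:/hbase",      // destination cluster's ZK quorum key
        "prod_table"                         // source table name
    });
  }
}
{code}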

 Safe copy of tables using hdfs copy (WAS - Can't replace the data of a 
 particular table by copying its files on HDFS)
 --

 Key: HBASE-753
 URL: https://issues.apache.org/jira/browse/HBASE-753
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.2.0
Reporter: Sebastien Rainville
Priority: Minor

 I have 2 instances of hbase running. One is *production* and the other one is 
 *development*. I want to be able to replace the content of a single table 
 (not all of them) in development with the content from production. Both of my 
 environments are running hbase-trunk (a snapshot of July 9th). In hbase-0.1.x 
 we used to be able to do that by simply stopping both hbases, copying the 
 files of the required table directly from one HDFS to the other, and then 
 restarting hbase.
 It doesn't work anymore. In the hbase shell I do see the table, but it's 
 empty. There are no errors. I looked at the master's log and the 
 regionservers' logs as well, all in DEBUG mode... but I saw nothing 
 interesting. I do see that the regions for that table are being assigned, so 
 if there's more than 1 region it means hbase knows the table isn't empty.
 So I have to copy all the tables, and then it's fine. That's not practical, 
 though.
 My guess is that .META. is holding old information about that table that 
 doesn't get updated when I replace the table's data.


[jira] [Resolved] (HBASE-31) Add means of getting the timestamps for all cell versions: e.g. long [] getVersions(row, column)

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-31.
--

Resolution: Not A Problem

This can be resolved, given Bryan's comments earlier.

bq. I think that perhaps this issue won't be of as much importance once gets 
return the relevant timestamp with the values returned.

This is available now.
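
As an illustration with the current client API (table, row, and column names are assumptions), the timestamps of every stored version can be read straight off the returned cells:
{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CellTimestamps {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("mytable"))) {
      Get get = new Get(Bytes.toBytes("row1"));
      get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
      get.readVersions(Integer.MAX_VALUE); // request every stored version
      Result result = table.get(get);
      for (Cell cell : result.rawCells()) {
        // Each returned cell carries its own timestamp, i.e. the version list
        // that a getVersions(row, column) API would have produced.
        System.out.println("version @ " + cell.getTimestamp());
      }
    }
  }
}
{code}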

 Add means of getting the timestamps for all cell versions: e.g. long [] 
 getVersions(row, column)
 

 Key: HBASE-31
 URL: https://issues.apache.org/jira/browse/HBASE-31
 Project: HBase
  Issue Type: Sub-task
  Components: client
Affects Versions: 0.2.0
Reporter: stack
Assignee: Doğacan Güney
Priority: Trivial

 There should be a means of asking hbase for a list of all the timestamps 
 associated with a particular cell. The brute-force way would be adding a 
 getVersions method, but perhaps we can come up with something more elegant 
 than this?


[jira] [Resolved] (HBASE-1122) Leveraging HBase control layer to build a distributed text index

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-1122.


Resolution: Incomplete

(I know this is very, very late, but…)

Thanks for sharing this with the HBase community!

It doesn't look like a code contribution was involved. I'm marking this as 
'Incomplete' since I don't see an action item; this would probably have been 
better off as an email on the lists.

 Leveraging HBase control layer to build a distributed text index
 

 Key: HBASE-1122
 URL: https://issues.apache.org/jira/browse/HBASE-1122
 Project: HBase
  Issue Type: New Feature
Reporter: Jun Rao
 Attachments: usenix09.pdf


 Hi,
 A few of us at IBM Almaden Research Center built a distributed text index 
 prototype called HIndex. The key design point of HIndex is to build the index 
 by leveraging the distributed control layer in HBase, for availability, 
 elasticity and load balancing. In our prototype, we used Lucene to implement 
 a new type of region for storing the text index. Attached is a research paper 
 that we wrote and submitted to USENIX 2009. It covers the design of HIndex 
 and a performance evaluation (some of the results are applicable to HBase 
 too).
 We are grateful to the HBase community. We welcome comments and suggestions.
 Jun


[jira] [Resolved] (HBASE-1133) Master does not reload META on IOE

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-1133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-1133.


Resolution: Duplicate

Seems to have been fixed via HBASE-1084 (and upwards, towards HDFS, where the 
issue lay). Please reopen if I am wrong.

 Master does not reload META on IOE
 --

 Key: HBASE-1133
 URL: https://issues.apache.org/jira/browse/HBASE-1133
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell

 This never recalibrates:
 2009-01-18 01:35:30,906 WARN org.apache.hadoop.hbase.master.BaseScanner: Scan one META region: {regionname: .META.,,1, startKey: , server: 10.30.94.35:60020}
 java.io.IOException: java.io.IOException: All datanodes 10.30.94.34:50010 are bad. Aborting...
 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2443)
 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:1995)
 at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2159)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:95)
 at org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
 ...


[jira] [Resolved] (HBASE-848) API to inspect cell deletions

2012-01-01 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-848.
---

Resolution: Duplicate

Resolving as a duplicate. Please see HBASE-4536 and HBASE-4981 for possible 
ways of doing this.
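
As a rough illustration of what those issues enable (names here are assumptions, and the column family would also need KEEP_DELETED_CELLS enabled to retain the versions that predate a deletion), a raw scan exposes the delete markers themselves:
{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class InspectDeletes {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("mytable"))) {
      Scan scan = new Scan();
      scan.setRaw(true);          // include delete markers in the results
      scan.readAllVersions();     // and every stored version
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          for (Cell cell : result.rawCells()) {
            if (CellUtil.isDelete(cell)) {
              System.out.println("delete marker @ " + cell.getTimestamp());
            }
          }
        }
      }
    }
  }
}
{code}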

 API to inspect cell deletions
 -

 Key: HBASE-848
 URL: https://issues.apache.org/jira/browse/HBASE-848
 Project: HBase
  Issue Type: New Feature
  Components: client
Affects Versions: 0.2.0
Reporter: Michael Bieniosek

 If a cell gets deleted, I'd like to have some API that gives me the deletion 
 timestamp, as well as any versions that predate the deletion.
 One possibility might be to add a boolean flag to HTable.get.


[jira] [Resolved] (HBASE-4705) HBase won't initialize if /hbase is not present

2011-12-11 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-4705.


Resolution: Later

Not a problem right now; the triggering patch was reverted.

 HBase won't initialize if /hbase is not present
 ---

 Key: HBASE-4705
 URL: https://issues.apache.org/jira/browse/HBASE-4705
 Project: HBase
  Issue Type: Bug
Reporter: Harsh J

 {code}
 2011-10-31 00:09:09,549 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
 java.io.FileNotFoundException: File does not exist: hdfs://C3S31:9000/hbase
 at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:731)
 at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:163)
 at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:458)
 at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:301)
 at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:127)
 at org.apache.hadoop.hbase.master.MasterFileSystem.init(MasterFileSystem.java:112)
 at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:426)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:309)
 at java.lang.Thread.run(Thread.java:662)
 2011-10-31 00:09:09,551 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
 2011-10-31 00:09:09,551 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
 {code}
 Trunk won't start HBase unless /hbase is already present, after HBASE-4680 
 (and the silly error I made in HBASE-4510).


[jira] [Resolved] (HBASE-4680) FSUtils.isInSafeMode() checks should operate on HBase root dir, where we have permissions

2011-12-11 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-4680.


Resolution: Later

Not a problem right now.

 FSUtils.isInSafeMode() checks should operate on HBase root dir, where we have 
 permissions
 -

 Key: HBASE-4680
 URL: https://issues.apache.org/jira/browse/HBASE-4680
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.92.0, 0.94.0
Reporter: Gary Helmling
Assignee: Gary Helmling
 Attachments: HBASE-4680.patch


 The HDFS safe mode check workaround introduced by HBASE-4510 performs a 
 {{FileSystem.setPermission()}} operation on the root directory (/) when 
 attempting to trigger a {{SafeModeException}}.  As a result, it requires 
 superuser privileges when running with DFS permission checking enabled.  
 Changing the operations to act on the HBase root directory should be safe, 
 since the master process must have write access to it.
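
 To illustrate the approach described above, a minimal sketch -- a hypothetical helper method with simplified exception handling, not the actual FSUtils code -- of probing safe mode with a no-op write against the HBase root directory rather than /:
 {code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SafeModeProbe {
  // Returns true if the NameNode rejects a write-type operation because it is
  // in safe mode. Probing the HBase root dir only needs the master's own
  // permissions, unlike probing "/".
  static boolean isInSafeMode(FileSystem fs, Path hbaseRootDir) throws IOException {
    try {
      FileStatus status = fs.getFileStatus(hbaseRootDir);
      // Re-applying the existing permission is effectively a no-op write,
      // but the NameNode still rejects it while in safe mode.
      fs.setPermission(hbaseRootDir, status.getPermission());
      return false;
    } catch (IOException e) {
      // Safe mode surfaces as a SafeModeException, possibly wrapped in a
      // RemoteException; checking the message keeps this sketch simple.
      if (e.getMessage() != null && e.getMessage().contains("safe mode")) {
        return true;
      }
      throw e;
    }
  }
}
 {code}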


[jira] [Resolved] (HBASE-4834) CopyTable: Cannot have ZK source to destination

2011-11-20 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-4834.


Resolution: Duplicate

This was fixed by HBASE-3497. Resolving as a duplicate.

Apologies for the noise, and for the confusion, Linden!

Regards,
Harsh

 CopyTable: Cannot have ZK source to destination
 ---

 Key: HBASE-4834
 URL: https://issues.apache.org/jira/browse/HBASE-4834
 Project: HBase
  Issue Type: Bug
  Components: zookeeper
Affects Versions: 0.90.1
Reporter: Linden Hillenbrand
Priority: Critical

 During a CopyTable run involving --peer.adr, we found the following block of 
 code:
 if (address != null) {
   ZKUtil.applyClusterKeyToConf(this.conf, address);
 }
 When we set the ZK conf in the setConf method, it also gets called on the 
 client side when MR initializes TOF (TableOutputFormat), so there's no way to 
 have two ZK endpoints for a single job, because the source gets reset before 
 the job is submitted.
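
 As an illustration of what the HBASE-3497 fix makes possible (the class, table, and ZooKeeper quorum values here are assumptions), the job's output side can be pointed at the peer cluster via the quorumAddress argument, leaving the source ZK configuration untouched:
 {code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class CrossClusterOutput {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create(); // source cluster config
    Job job = Job.getInstance(conf, "copy-to-peer");
    // quorumAddress uses the same cluster-key format as --peer.adr:
    // <zk quorum>:<client port>:<znode parent>
    TableMapReduceUtil.initTableReducerJob(
        "dest_table",               // destination table on the peer cluster
        null,                       // no custom reducer class
        job,
        null,                       // default partitioner
        "zk1,zk2,zk3:2181:/hbase",  // destination cluster's ZooKeeper
        null, null);                // default server class/impl
  }
}
 {code}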
