[jira] [Updated] (HBASE-4654) [replication] Add a check to make sure we don't replicate to ourselves

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-4654:
--

Fix Version/s: (was: 0.90.6)
   0.92.1
   0.90.7

Moving to 0.90.7 and 0.92.1. Please pull back if you think differently.

> [replication] Add a check to make sure we don't replicate to ourselves
> --
>
> Key: HBASE-4654
> URL: https://issues.apache.org/jira/browse/HBASE-4654
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
> Fix For: 0.90.7, 0.92.1
>
> Attachments: 4654-trunk.txt
>
>
> It's currently possible to add a peer for replication and point it to the 
> local cluster, which I believe could very well happen for those like us that 
> use only one ZK ensemble per DC so that only the root znode changes when you 
> want to set up replication intra-DC.
> I don't think comparing just the cluster ID would be enough because you would 
> normally use a different one for another cluster and nothing will block you 
> from pointing elsewhere.
> Comparing the ZK ensemble address doesn't work either when you have multiple 
> DNS entries that point at the same place.
> I think this could be resolved by looking up the master address in the 
> relevant znode as it should be exactly the same thing in the case where you 
> have the same cluster.
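
As a rough illustration of that last suggestion, a check along these lines could compare the master address recorded in the local cluster's master znode with the one read via the peer's ZooKeeper quorum. This is a hedged sketch only, not the attached patch; readMasterAddress(...) is a hypothetical helper standing in for reading the master znode from a given ensemble.

{code}
// Sketch: refuse to add a replication peer whose master znode resolves to the
// same master as the local cluster (i.e. the peer *is* this cluster).
void checkNotReplicatingToSelf(ZooKeeperWatcher localZk,
    ZooKeeperWatcher peerZk) throws IOException {
  String localMaster = readMasterAddress(localZk);  // hypothetical: "host:port" from the master znode
  String peerMaster = readMasterAddress(peerZk);    // same lookup against the peer ensemble/root znode
  if (localMaster != null && localMaster.equals(peerMaster)) {
    throw new IOException("Replication peer points back at the local cluster "
        + "(master " + localMaster + "); refusing to add it");
  }
}
{code}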

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4762) ROOT and META region never be assigned if IOE throws in verifyRootRegionLocation

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-4762:
--

Fix Version/s: (was: 0.90.6)
   0.90.7
 Hadoop Flags: Reviewed

Moving to 0.90.7

> ROOT and META region never be assigned if IOE throws in 
> verifyRootRegionLocation
> 
>
> Key: HBASE-4762
> URL: https://issues.apache.org/jira/browse/HBASE-4762
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.90.4
>Reporter: mingjian
>Assignee: mingjian
> Fix For: 0.90.7
>
>
> The patch in HBASE-3914 fixed the root region being assigned to two regionservers. But it 
> seems the root region will never be assigned if verifyRootRegionLocation throws 
> an IOE.
> Like following master logs:
> {noformat}
> 2011-10-19 19:13:34,873 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
> Caught throwable while processing event M_META_SERVER_SHUTDOWN
> org.apache.hadoop.ipc.RemoteException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningException: Server is not running 
> yet
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1090)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:771)
> at 
> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:256)
> at $Proxy7.getRegionInfo(Unknown Source)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:471)
> at 
> org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:90)
> at 
> org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:126)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {noformat}
> After this, -ROOT-'s region won't be assigned, like this:
> {noformat}
> 2011-10-19 19:18:40,000 DEBUG 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
> locateRegionInMeta parent
> Table=-ROOT-, metaLocation=address: dw79.kgb.sqa.cm4:60020, regioninfo: 
> -ROOT-,,0.70236052, attempt=0 of 10 failed; retrying after sleep of 1000 
> because: org.apache.hadoop.hbase.NotServingRegionException: 
> Region is not online: -ROOT-,,0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2771)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1802)
> at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:569)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1091)
> {noformat}
> So we should rewrite the verifyRootRegionLocation method.
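
One hedged sketch of such a rewrite (not necessarily the committed fix): catch the connection-level IOException inside the verification and report the location as unverified instead of letting the exception escape, so verifyAndAssignRoot can go on and reassign -ROOT-. Helper names below (getRootServerConnection, verifyRegionLocation) are illustrative of CatalogTracker's shape, not exact signatures.

{code}
public boolean verifyRootRegionLocation(final long timeout)
    throws InterruptedException {
  try {
    HRegionInterface rootServer = getRootServerConnection(timeout);  // illustrative lookup
    return rootServer != null
        && verifyRegionLocation(rootServer, HRegionInfo.ROOT_REGIONINFO);
  } catch (IOException ioe) {
    // e.g. ServerNotRunningException while the RS is still starting up:
    // report "not verified" so the caller can retry/reassign -ROOT-.
    LOG.info("Failed verifying -ROOT- location; treating as unverified", ioe);
    return false;
  }
}
{code}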





[jira] [Updated] (HBASE-5004) Better manage standalone setups on Ubuntu, the 127.0.1.1 issue

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5004:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

Moving to 0.90.7

> Better manage standalone setups on Ubuntu, the 127.0.1.1 issue
> --
>
> Key: HBASE-5004
> URL: https://issues.apache.org/jira/browse/HBASE-5004
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
> Fix For: 0.94.0, 0.90.7
>
>
> Numerous times users have run into issues setting up HBase on Ubuntu because 
> its /etc/hosts has the 127.0.1.1 line messing everything up. Here's an example:
> {quote}
> 2011-12-10 00:18:24,312 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as 
> localhost,33371,1323476299775, RPC listening on /127.0.1.1:33371, 
> sessionid=0x1342555adc90002
> ...
> 2011-12-10 00:18:27,135 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
> -ROOT-,,0.70236052 to localhost,33371,1323476299775
> 2011-12-10 00:18:27,135 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> New connection to localhost,33371,1323476299775
> 2011-12-10 00:18:27,155 INFO org.apache.hadoop.ipc.HbaseRPC: Server at 
> /127.0.0.1:33371 could not be reached after 1 tries, giving up.
> 2011-12-10 00:18:27,156 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to serverName=localhost,33371,1323476299775, 
> load=(requests=0, regions=0, usedHeap=23, maxHeap=983), trying to assign 
> elsewhere instead; retry=0
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up 
> proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to 
> /127.0.0.1:33371 after attempts=1
> {quote}
> We should have a special check in standalone mode to make sure we won't fall 
> into that trap and then print a useful error message that would hopefully 
> appear on the command line.
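
A minimal sketch of what such a standalone-mode check could look like (not the actual patch): if the machine's hostname resolves to 127.0.1.1, the advertised and connectable addresses diverge, so fail fast with an actionable message.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public final class LoopbackCheck {
  // Sketch only: fail fast in standalone mode when the Ubuntu 127.0.1.1
  // /etc/hosts convention would make the region server unreachable.
  public static void checkStandaloneHostname() throws UnknownHostException {
    InetAddress addr = InetAddress.getLocalHost();
    if ("127.0.1.1".equals(addr.getHostAddress())) {
      throw new RuntimeException("Your hostname " + addr.getHostName()
          + " resolves to 127.0.1.1; HBase standalone mode needs it to resolve"
          + " to 127.0.0.1. Please fix /etc/hosts.");
    }
  }
}
{code}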





[jira] [Updated] (HBASE-4462) Properly treating SocketTimeoutException

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-4462:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

> Properly treating SocketTimeoutException
> 
>
> Key: HBASE-4462
> URL: https://issues.apache.org/jira/browse/HBASE-4462
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.4
>Reporter: Jean-Daniel Cryans
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.90.7
>
> Attachments: HBASE-4462_0.90.x.patch
>
>
> SocketTimeoutException is currently treated like any IOE inside of 
> HCM.getRegionServerWithRetries and I think this is a problem. This method 
> should only do retries in cases where we are pretty sure the operation will 
> complete, but with STE we already waited for (by default) 60 seconds and 
> nothing happened.
> I found this while debugging Douglas Campbell's problem on the mailing list 
> where it seemed like he was using the same scanner from multiple threads, but 
> actually it was just the same client doing retries while the first run didn't 
> even finish yet (that's another problem). You could see the first scanner, 
> then up to two other handlers waiting for it to finish in order to run 
> (because of the synchronization on RegionScanner).
> So what should we do? We could treat STE as a DoNotRetryException and let the 
> client deal with it, or we could retry only once.
> There's also the option of having a different behavior for get/put/icv/scan, 
> the issue with operations that modify a cell is that you don't know if the 
> operation completed or not (same when a RS dies hard after completing let's 
> say a Put but just before returning to the client).
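
To make the first option concrete, here is a hedged sketch of a generic retry wrapper that treats SocketTimeoutException as non-retryable while other IOEs keep retrying; it is a stand-in, not the actual HCM.getRegionServerWithRetries code.

{code}
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

public final class RetryingCaller {
  public static <T> T callWithRetries(Callable<T> op, int maxTries) throws Exception {
    IOException last = new IOException("retries exhausted");
    for (int tries = 0; tries < maxTries; tries++) {
      try {
        return op.call();
      } catch (SocketTimeoutException ste) {
        // We already waited the full socket timeout (60s by default);
        // re-queueing the same slow call only piles up blocked handlers.
        throw ste;
      } catch (IOException ioe) {
        last = ioe;                       // other IOEs: keep retrying as today
        Thread.sleep(1000L * (tries + 1));
      }
    }
    throw last;
  }
}
{code}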





[jira] [Updated] (HBASE-4064) Two concurrent unassigning of the same region caused the endless loop of "Region has been PENDING_CLOSE for too long..."

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-4064:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

> Two concurrent unassigning of the same region caused the endless loop of 
> "Region has been PENDING_CLOSE for too long..."
> 
>
> Key: HBASE-4064
> URL: https://issues.apache.org/jira/browse/HBASE-4064
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.90.3
>Reporter: Jieshan Bean
> Fix For: 0.90.7
>
> Attachments: HBASE-4064-v1.patch, HBASE-4064_branch90V2.patch, 
> disableflow.png
>
>
> 1. If there is a "rubbish" RegionState object with "PENDING_CLOSE" in 
> regionsInTransition (the RegionState was left behind by some exception and 
> should have been removed, which is why I call it a "rubbish" object), but the 
> region is not currently assigned anywhere, TimeoutMonitor will fall into an 
> endless loop:
> 2011-06-27 10:32:21,326 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
> out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
> state=PENDING_CLOSE, ts=1309141555301
> 2011-06-27 10:32:21,326 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
> PENDING_CLOSE for too long, running forced unassign again on 
> region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
> 2011-06-27 10:32:21,438 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
> region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
> (offlining)
> 2011-06-27 10:32:21,441 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
> region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
> not currently assigned anywhere
> 2011-06-27 10:32:31,207 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
> out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
> state=PENDING_CLOSE, ts=1309141555301
> 2011-06-27 10:32:31,207 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
> PENDING_CLOSE for too long, running forced unassign again on 
> region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
> 2011-06-27 10:32:31,215 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
> region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
> (offlining)
> 2011-06-27 10:32:31,215 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
> region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
> not currently assigned anywhere
> 2011-06-27 10:32:41,164 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
> out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
> state=PENDING_CLOSE, ts=1309141555301
> 2011-06-27 10:32:41,164 INFO 
> org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
> PENDING_CLOSE for too long, running forced unassign again on 
> region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
> 2011-06-27 10:32:41,172 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
> region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
> (offlining)
> 2011-06-27 10:32:41,172 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
> region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
> not currently assigned anywhere
> .
> 2. In the following scenario, two concurrent unassign calls on the same 
> region may lead to the above problem:
> The first unassign call sends its RPC successfully; the master watches the 
> "RS_ZK_REGION_CLOSED" event and, while processing it, creates a 
> ClosedRegionHandler to remove the region's state in the master.
> While that ClosedRegionHandler is running in an 
> "hbase.master.executor.closeregion.threads" thread (A), another unassign call 
> for the same region runs in another thread (B).
> When thread B evaluates "if (!regions.containsKey(region))", this.regions still 
> holds the region info; the CPU then switches to thread A.
> Thread A removes the region from "this.regions" and 
> "regionsInTransition", then execution switches back to thread B. Thread B 
> continues and throws an exception with the message "Server null returned 
> java.lang.NullPointerException: Passed server is null for 
> 9a6e26d40293663a79523c58315b930f", but without removing the newly added 
> RegionState from "regionsInTransition", so it can never be removed.
>  public void unassign(HRegionInfo region, boolean force) {
> L
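
As a hedged sketch of one way to close the race described above (not Jieshan's attached patch): make the "is it assigned?" check and the RegionState bookkeeping atomic, and drop the stale RegionState when the region turns out not to be assigned anywhere, so TimeoutMonitor cannot loop on it forever.

{code}
public void unassign(HRegionInfo region, boolean force) {
  synchronized (this.regions) {
    if (!this.regions.containsKey(region)) {
      // Region already fully closed by the concurrent ClosedRegionHandler:
      // remove any leftover RegionState instead of leaving it PENDING_CLOSE.
      synchronized (this.regionsInTransition) {
        this.regionsInTransition.remove(region.getEncodedName());
      }
      return;
    }
    // ... existing unassign path (set PENDING_CLOSE, send close RPC, etc.)
  }
}
{code}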

[jira] [Updated] (HBASE-4094) improve hbck tool to fix more hbase problem

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-4094:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

Moving to 0.90.7.  HBASE-5128 is also related to improving the hbck tool.

> improve hbck tool to fix more hbase problem
> ---
>
> Key: HBASE-4094
> URL: https://issues.apache.org/jira/browse/HBASE-4094
> Project: HBase
>  Issue Type: New Feature
>  Components: master
>Affects Versions: 0.90.3
>Reporter: feng xu
> Fix For: 0.90.7
>
> Attachments: HbaseFsck_TableChain.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>






[jira] [Updated] (HBASE-4083) If Enable table is not completed and is partial, then scanning of the table is not working

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-4083:
--

Fix Version/s: (was: 0.90.6)
   0.92.0
   0.90.7

Not fixed in 0.90, hence not resolving the issue.  But it is committed in trunk and 
0.92.


> If Enable table is not completed and is partial, then scanning of the table 
> is not working 
> ---
>
> Key: HBASE-4083
> URL: https://issues.apache.org/jira/browse/HBASE-4083
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.90.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.90.7, 0.92.0
>
> Attachments: HBASE-4083-1.patch, HBASE-4083_0.90.patch, 
> HBASE-4083_0.90_1.patch, HBASE-4083_trunk.patch, HBASE-4083_trunk_1.patch
>
>
> Consider the following scenario
> Start the Master, Backup master and RegionServer.
> Create a table which in turn creates a region.
> Disable the table.
> Enable the table again. 
> Kill the Active master exactly at the point before the actual region 
> assignment is started.
> Restart or switch master.
> Scan the table.
> NotServingRegionException is thrown.





[jira] [Updated] (HBASE-5197) [replication] Handle socket timeouts in ReplicationSource to prevent DDOS

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5197:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

Updating the fix version to 0.90.7.

> [replication] Handle socket timeouts in ReplicationSource to prevent DDOS
> -
>
> Key: HBASE-5197
> URL: https://issues.apache.org/jira/browse/HBASE-5197
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.5
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
> Fix For: 0.94.0, 0.90.7, 0.92.1
>
>
> Kind of like HBASE-4462 but for replication. If while replicating you get a 
> socket timeout, the last thing you want to do is to retry it right away. 
> Since we can't fail the replication thread, the best I can think of is to 
> sleep a really long amount of time.
> Planning to bring this to all branches.
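
A hedged sketch of that back-off, not the committed patch: it assumes surrounding ReplicationSource state (a LOG, a configured sleepForRetries, and a shipEdits() stand-in for the replicateLogEntries RPC); only the long, capped sleep on timeout is the point.

{code}
private void shipWithBackoff(int attempt) throws InterruptedException {
  try {
    shipEdits();  // stand-in for shipping the batch to the peer sink
  } catch (java.net.SocketTimeoutException ste) {
    // The sink already held us for a full RPC timeout; sleep a long, capped
    // interval instead of immediately hammering the struggling peer again.
    long backoff = Math.min(sleepForRetries * 10L * (attempt + 1), 300000L);
    LOG.warn("Peer timed out, sleeping " + backoff + " ms before retrying", ste);
    Thread.sleep(backoff);
  }
}
{code}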





[jira] [Updated] (HBASE-3917) Separate the Avro schema definition file from the code

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-3917:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

> Separate the Avro schema definition file from the code
> --
>
> Key: HBASE-3917
> URL: https://issues.apache.org/jira/browse/HBASE-3917
> Project: HBase
>  Issue Type: Improvement
>  Components: avro
>Affects Versions: 0.90.3
>Reporter: Lars George
>Assignee: Alex Newman
>Priority: Trivial
>  Labels: noob
> Fix For: 0.90.7
>
> Attachments: 
> 0001-HBASE-3917.-Separate-the-Avro-schema-definition-file.patch
>
>
> The Avro schema files are in the src/main/java path, but should be in 
> src/main/resources, just like Hbase.thrift is. That keeps the separation 
> consistent and cleaner.





[jira] [Updated] (HBASE-5157) Backport HBASE-4880- Region is on service before openRegionHandler completes, may cause data loss

2012-01-26 Thread ramkrishna.s.vasudevan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5157:
--

Fix Version/s: (was: 0.90.6)
   0.90.7

Moving to 0.90.7.  Needs some more code rewrite to make this fit in 0.90.

> Backport HBASE-4880- Region is on service before openRegionHandler completes, 
> may cause data loss
> -
>
> Key: HBASE-5157
> URL: https://issues.apache.org/jira/browse/HBASE-5157
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.90.7
>
> Attachments: HBASE-4880_branch90_1.patch
>
>
> Backporting to 0.90.6 considering the importance of the issue.





[jira] [Resolved] (HBASE-5276) PerformanceEvaluation does not set the correct classpath for MR because it lives in the test jar

2012-01-26 Thread Tim Robertson (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Robertson resolved HBASE-5276.
--

   Resolution: Fixed
Fix Version/s: 0.92.1

This was an issue discovered in the CDH distribution that was already fixed in the 
Apache code stream.  Closing this issue and opening one in the CDH JIRA.

> PerformanceEvaluation does not set the correct classpath for MR because it 
> lives in the test jar
> 
>
> Key: HBASE-5276
> URL: https://issues.apache.org/jira/browse/HBASE-5276
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.90.4
>Reporter: Tim Robertson
>Priority: Minor
> Fix For: 0.92.1
>
>
> Note: This was discovered running the CDH version hbase-0.90.4-cdh3u2
> Running the PerformanceEvaluation as follows:
>   $HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation scan 5
> fails because the MR tasks do not get the HBase jar on the CP, and thus hit 
> ClassNotFoundExceptions.
> The job gets the following only:
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2-tests.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> The RowCounter etc all work because they live in the HBase jar, not the test 
> jar, and they get the following 
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/guava-r06.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> Presumably this relates to 
>   job.setJarByClass(PerformanceEvaluation.class);
>   ...
>   TableMapReduceUtil.addDependencyJars(job);
> A (cowboy) workaround to run PE is to unpack the jars, and copy the 
> PerformanceEvaluation* classes building a patched jar.
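
A less cowboy client-side workaround could be to ship the missing jars explicitly via TableMapReduceUtil. This is a hedged sketch, not necessarily the eventual fix; it assumes the overload addDependencyJars(Configuration, Class...) is available and simply names one class from each jar that needs to travel with the job.

{code}
Job job = new Job(conf, "PerformanceEvaluation scan");
job.setJarByClass(PerformanceEvaluation.class);       // still only ships the tests jar
TableMapReduceUtil.addDependencyJars(job);
TableMapReduceUtil.addDependencyJars(job.getConfiguration(),
    org.apache.hadoop.hbase.HConstants.class,          // pulls in the hbase jar
    org.apache.zookeeper.ZooKeeper.class,              // pulls in the zookeeper jar
    com.google.common.collect.ImmutableSet.class);     // pulls in the guava jar
{code}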





[jira] [Created] (HBASE-5281) Should a failure in creating an unassigned node abort the master?

2012-01-26 Thread Harsh J (Created) (JIRA)
Should a failure in creating an unassigned node abort the master?
-

 Key: HBASE-5281
 URL: https://issues.apache.org/jira/browse/HBASE-5281
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.90.5
Reporter: Harsh J
 Fix For: 0.94.0, 0.92.1


In {{AssignmentManager}}'s {{CreateUnassignedAsyncCallback}}, we have the 
following condition:

{code}
if (rc != 0) {
  // Thisis resultcode.  If non-zero, need to resubmit.
  LOG.warn("rc != 0 for " + path + " -- retryable connectionloss -- " +
    "FIX see http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A2");
  this.zkw.abort("Connectionloss writing unassigned at " + path +
    ", rc=" + rc, null);
  return;
}
{code}
{code}

A similar structure inside {{ExistsUnassignedAsyncCallback}} (which the 
above is linked to) does not have such a forced abort.

Do we really require the abort statement here, or can we make do without?
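
For illustration, a hedged sketch of the alternative being asked about (not a committed change): on a recoverable result code, resubmit the async create instead of aborting, mirroring what ExistsUnassignedAsyncCallback effectively does. resubmitCreateUnassigned(...) below is a hypothetical retry hook.

{code}
if (rc != 0) {
  LOG.warn("rc != 0 for " + path + " -- retryable connectionloss, resubmitting");
  // Hypothetical: re-issue the async create for this unassigned znode
  // rather than calling zkw.abort(...) and taking down the master.
  resubmitCreateUnassigned(path, ctx);
  return;
}
{code}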





[jira] [Reopened] (HBASE-5276) PerformanceEvaluation does not set the correct classpath for MR because it lives in the test jar

2012-01-26 Thread Jonathan Hsieh (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh reopened HBASE-5276:
---

  Assignee: Jonathan Hsieh

> PerformanceEvaluation does not set the correct classpath for MR because it 
> lives in the test jar
> 
>
> Key: HBASE-5276
> URL: https://issues.apache.org/jira/browse/HBASE-5276
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.90.4
>Reporter: Tim Robertson
>Assignee: Jonathan Hsieh
>Priority: Minor
> Fix For: 0.92.1
>
>
> Note: This was discovered running the CDH version hbase-0.90.4-cdh3u2
> Running the PerformanceEvaluation as follows:
>   $HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation scan 5
> fails because the MR tasks do not get the HBase jar on the CP, and thus hit 
> ClassNotFoundExceptions.
> The job gets the following only:
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2-tests.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> The RowCounter etc all work because they live in the HBase jar, not the test 
> jar, and they get the following 
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/guava-r06.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> Presumably this relates to 
>   job.setJarByClass(PerformanceEvaluation.class);
>   ...
>   TableMapReduceUtil.addDependencyJars(job);
> A (cowboy) workaround to run PE is to unpack the jars, and copy the 
> PerformanceEvaluation* classes building a patched jar.





[jira] [Resolved] (HBASE-5276) PerformanceEvaluation does not set the correct classpath for MR because it lives in the test jar

2012-01-26 Thread Jonathan Hsieh (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh resolved HBASE-5276.
---

   Resolution: Won't Fix
Fix Version/s: (was: 0.92.1)

This is essentially a dupe of HBASE-4688, which is fixed in Apache HBase 0.92. 
A new backport request issue specific to CDH is filed here: 
https://issues.cloudera.org/browse/DISTRO-369

> PerformanceEvaluation does not set the correct classpath for MR because it 
> lives in the test jar
> 
>
> Key: HBASE-5276
> URL: https://issues.apache.org/jira/browse/HBASE-5276
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.90.4
>Reporter: Tim Robertson
>Assignee: Jonathan Hsieh
>Priority: Minor
>
> Note: This was discovered running the CDH version hbase-0.90.4-cdh3u2
> Running the PerformanceEvaluation as follows:
>   $HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation scan 5
> fails because the MR tasks do not get the HBase jar on the CP, and thus hit 
> ClassNotFoundExceptions.
> The job gets the following only:
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2-tests.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> The RowCounter etc all work because they live in the HBase jar, not the test 
> jar, and they get the following 
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/guava-r06.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> Presumably this relates to 
>   job.setJarByClass(PerformanceEvaluation.class);
>   ...
>   TableMapReduceUtil.addDependencyJars(job);
> A (cowboy) workaround to run PE is to unpack the jars, and copy the 
> PerformanceEvaluation* classes building a patched jar.





[jira] [Reopened] (HBASE-5276) PerformanceEvaluation does not set the correct classpath for MR because it lives in the test jar

2012-01-26 Thread Jonathan Hsieh (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh reopened HBASE-5276:
---


> PerformanceEvaluation does not set the correct classpath for MR because it 
> lives in the test jar
> 
>
> Key: HBASE-5276
> URL: https://issues.apache.org/jira/browse/HBASE-5276
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.90.4
>Reporter: Tim Robertson
>Assignee: Jonathan Hsieh
>Priority: Minor
>
> Note: This was discovered running the CDH version hbase-0.90.4-cdh3u2
> Running the PerformanceEvaluation as follows:
>   $HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation scan 5
> fails because the MR tasks do not get the HBase jar on the CP, and thus hit 
> ClassNotFoundExceptions.
> The job gets the following only:
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2-tests.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> The RowCounter etc all work because they live in the HBase jar, not the test 
> jar, and they get the following 
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/guava-r06.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> Presumably this relates to 
>   job.setJarByClass(PerformanceEvaluation.class);
>   ...
>   TableMapReduceUtil.addDependencyJars(job);
> A (cowboy) workaround to run PE is to unpack the jars, and copy the 
> PerformanceEvaluation* classes building a patched jar.





[jira] [Resolved] (HBASE-5276) PerformanceEvaluation does not set the correct classpath for MR because it lives in the test jar

2012-01-26 Thread Jonathan Hsieh (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh resolved HBASE-5276.
---

Resolution: Duplicate

> PerformanceEvaluation does not set the correct classpath for MR because it 
> lives in the test jar
> 
>
> Key: HBASE-5276
> URL: https://issues.apache.org/jira/browse/HBASE-5276
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.90.4
>Reporter: Tim Robertson
>Assignee: Jonathan Hsieh
>Priority: Minor
>
> Note: This was discovered running the CDH version hbase-0.90.4-cdh3u2
> Running the PerformanceEvaluation as follows:
>   $HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation scan 5
> fails because the MR tasks do not get the HBase jar on the CP, and thus hit 
> ClassNotFoundExceptions.
> The job gets the following only:
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2-tests.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> The RowCounter etc all work because they live in the HBase jar, not the test 
> jar, and they get the following 
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/guava-r06.jar
>   
> file:/Users/tim/dev/hadoop/hadoop-0.20.2-cdh3u2/hadoop-core-0.20.2-cdh3u2.jar
>   file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/hbase-0.90.4-cdh3u2.jar
>   
> file:/Users/tim/dev/hadoop/hbase-0.90.4-cdh3u2/lib/zookeeper-3.3.3-cdh3u2.jar
> Presumably this relates to 
>   job.setJarByClass(PerformanceEvaluation.class);
>   ...
>   TableMapReduceUtil.addDependencyJars(job);
> A (cowboy) workaround to run PE is to unpack the jars, and copy the 
> PerformanceEvaluation* classes building a patched jar.





[jira] [Commented] (HBASE-5278) HBase shell script refers to removed "migrate" functionality

2012-01-26 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193773#comment-13193773
 ] 

Hudson commented on HBASE-5278:
---

Integrated in HBase-0.92-security #89 (See 
[https://builds.apache.org/job/HBase-0.92-security/89/])
HBASE-5278 HBase shell script refers to removed 'migrate' functionality

stack : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/bin/hbase


> HBase shell script refers to removed "migrate" functionality
> 
>
> Key: HBASE-5278
> URL: https://issues.apache.org/jira/browse/HBASE-5278
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Shaneal Manek
>Assignee: Shaneal Manek
>Priority: Trivial
> Fix For: 0.94.0, 0.92.1
>
> Attachments: hbase-5278.patch
>
>
> $ hbase migrate
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/hbase/util/Migrate
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.util.Migrate
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: org.apache.hadoop.hbase.util.Migrate. Program 
> will exit.
> The 'hbase' shell script has docs referring to a 'migrate' command which no 
> longer exists.





[jira] [Commented] (HBASE-5231) Backport HBASE-3373 (per-table load balancing) to 0.92

2012-01-26 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193774#comment-13193774
 ] 

Hudson commented on HBASE-5231:
---

Integrated in HBase-0.92-security #89 (See 
[https://builds.apache.org/job/HBase-0.92-security/89/])
HBASE-5231 revert - need to add unit test for per table load balancing

tedyu : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java


> Backport HBASE-3373 (per-table load balancing) to 0.92
> --
>
> Key: HBASE-5231
> URL: https://issues.apache.org/jira/browse/HBASE-5231
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zhihong Yu
> Fix For: 0.92.1
>
> Attachments: 5231-v2.txt, 5231.addendum, 5231.txt
>
>
> This JIRA backports per-table load balancing to 0.92.





[jira] [Created] (HBASE-5282) Possible file handle leak with truncate HLog file.

2012-01-26 Thread Jonathan Hsieh (Created) (JIRA)
Possible file handle leak with truncate HLog file.
--

 Key: HBASE-5282
 URL: https://issues.apache.org/jira/browse/HBASE-5282
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0, 0.90.5, 0.94.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh


When debugging hbck, found that the code responsible for this exception can 
leak open file handles.

{code}
12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from hdfs://haus01.
sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
.edits/3211315; minSequenceid=3214658
12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of region=test5,8
\x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
113e.
java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:250)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
at org.apache.hadoop.io.Text.readString(Text.java:400)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
at 
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
at 
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
at 
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
at 
org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
at 
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
at 
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
{code}





[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

Summary: Possible file handle leak with truncated HLog file.  (was: 
Possible file handle leak with truncate HLog file.)

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}





[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

Attachment: hbase-5282.patch

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}





[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193839#comment-13193839
 ] 

Jonathan Hsieh commented on HBASE-5282:
---


When debugging, the region open path was attempting to open either a truncated or 
0-size HLog file (which throws an IOException out of getReader), and leaking 
a file handle on every open attempt.

Patch applies on 0.92 and trunk.
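
For readers following along, the general shape of the fix is closing the reader unconditionally; a hedged sketch (the attached patch may differ in detail), using the existing HLog.getReader and the fs/edits/conf already in scope in HRegion#replayRecoveredEdits:

{code}
HLog.Reader reader = null;
try {
  reader = HLog.getReader(fs, edits, conf);
  // ... replay edits from the reader as before ...
} finally {
  if (reader != null) {
    reader.close();   // release the underlying DFS input stream / file handle
  }
}
{code}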

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}





[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193885#comment-13193885
 ] 

Zhihong Yu commented on HBASE-5282:
---

reader.close() may throw IOE.
I think we should protect the execution of status.cleanup().

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}





[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

Status: Patch Available  (was: Open)

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5, 0.94.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193902#comment-13193902
 ] 

Jonathan Hsieh commented on HBASE-5282:
---

True, but #replayRecoveredEdits is only used in one place, wrapped with a 
{{try/catch}} that checks for IOE, which seems like reasonable behavior:

#replayRecoverededitsIfAny(...)
{code}
  try {
seqid = replayRecoveredEdits(edits, seqid, reporter);
  } catch (IOException e) {
boolean skipErrors = conf.getBoolean("hbase.skip.errors", false);
if (skipErrors) {
  Path p = HLog.moveAsideBadEditsFile(fs, edits);
  LOG.error("hbase.skip.errors=true so continuing. Renamed " + edits +
" as " + p, e);
} else {
  throw e;
}
  }
{code}

What do you mean by protecting status.cleanup()? Checking for {{status == null}}? 
(It cannot be null there.)
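
For readers following along, here is a self-contained sketch of the close-in-finally 
pattern at issue: the reader opened for a (possibly truncated) edits file must be 
closed on every path, otherwise an EOFException raised mid-read leaks the handle. 
This uses plain java.io stand-ins and made-up names; it is not the HRegion code or 
the attached patch.
{code}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public class ReplayCloseSketch {
  // Reads the "edits" file as a sequence of longs; each readLong() stands in
  // for replaying one WAL entry.
  static long replay(String editsFile, long minSeqId) throws IOException {
    long seqId = minSeqId;
    DataInputStream reader = new DataInputStream(new FileInputStream(editsFile));
    try {
      while (true) {
        seqId = reader.readLong();   // a truncated file throws EOFException here
      }
    } catch (EOFException eof) {
      return seqId;                  // end of (possibly truncated) file reached
    } finally {
      reader.close();                // the handle is released on every path
    }
  }
}
{code}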

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193912#comment-13193912
 ] 

Zhihong Yu commented on HBASE-5282:
---

What I meant is that if close() throws IOE, status.cleanup() would be skipped.
status.cleanup() can be placed before the call to close().
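
In other words (illustrative helper only, with made-up names; not the patch):
{code}
import java.io.Closeable;
import java.io.IOException;

public class CleanupOrderSketch {
  interface Status { void cleanup(); }

  // Run cleanup() before close(): an IOException thrown by close() can then
  // no longer cause the cleanup step to be skipped.
  static void finishReplay(Status status, Closeable reader) throws IOException {
    status.cleanup();
    reader.close();
  }
}
{code}
Wrapping close() in its own try/catch would have the same effect, but reordering 
the two calls is the smaller change.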

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193917#comment-13193917
 ] 

Jonathan Hsieh commented on HBASE-5282:
---

Ah, got it.  Good catch.  

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

Status: Open  (was: Patch Available)

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5, 0.94.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

Attachment: hbase-5282.v2.patch

Updated to call status.cleanup() before close.

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

Status: Patch Available  (was: Open)

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0, 0.90.5, 0.94.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193934#comment-13193934
 ] 

Hadoop QA commented on HBASE-5282:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12511978/hbase-5282.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated -140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 161 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/853//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/853//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/853//console

This message is automatically generated.

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/j

[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193960#comment-13193960
 ] 

Hadoop QA commented on HBASE-5282:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12511993/hbase-5282.v2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated -140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 161 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/854//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/854//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/854//console

This message is automatically generated.

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.or

[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193966#comment-13193966
 ] 

Zhihong Yu commented on HBASE-5282:
---

+1 on patch v2.

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193967#comment-13193967
 ] 

Jonathan Hsieh commented on HBASE-5282:
---

I'll commit later today.  

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5283) Request counters may become negative for heavily loaded regions

2012-01-26 Thread Zhihong Yu (Created) (JIRA)
Request counters may become negative for heavily loaded regions
---

 Key: HBASE-5283
 URL: https://issues.apache.org/jira/browse/HBASE-5283
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu


Requests counter showing negative count, example under 'Requests' column: 
-645470239
{code}
Name                                                                                Region Server   Start Key                 End Key                   Requests
usertable,user2037516127892189021,1326756873774.16833e4566d1daef109b8fdcd1f4b5a6.  xxx.com:60030   user2037516127892189021  user2296868939942738705   -645470239
{code}
RegionLoad.readRequestsCount and RegionLoad.writeRequestsCount are of int type. 
Our Ops team has been running lots of heavy-load operations, and 
RegionLoad.getRequestsCount() overflows Integer.MAX_VALUE; the counter ends up at 
0xD986E7E1. In table.jsp, RegionLoad.getRequestsCount() is assigned to a long, and 
the sign-extended value of 0xD986E7E1 is -645470239 in decimal.

Suggested fix is to make readRequestsCount and writeRequestsCount of long type. 
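
A short, self-contained illustration of the wrap-around described above (the hex 
value is taken from the report; the class and variable names are made up):
{code}
public class RequestCounterOverflow {
  public static void main(String[] args) {
    // The per-region counters are 32-bit ints, so heavy load can wrap them
    // past Integer.MAX_VALUE; 0xD986E7E1 is the wrapped value from the report.
    int requestsCount = 0xD986E7E1;        // already negative as an int
    long shownInTableJsp = requestsCount;  // widening sign-extends, still negative
    System.out.println(shownInTableJsp);   // prints -645470239

    // With the counter declared as long from the start, the same bits read as
    // a positive count instead.
    long asLongCounter = 0xD986E7E1L;      // 3649497057
    System.out.println(asLongCounter);
  }
}
{code}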

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5283) Request counters may become negative for heavily loaded regions

2012-01-26 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5283:
--

Affects Version/s: 0.92.0
Fix Version/s: 0.92.1
   0.94.0

> Request counters may become negative for heavily loaded regions
> ---
>
> Key: HBASE-5283
> URL: https://issues.apache.org/jira/browse/HBASE-5283
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0
>Reporter: Zhihong Yu
> Fix For: 0.94.0, 0.92.1
>
>
> Requests counter showing negative count, example under 'Requests' column: 
> -645470239
> {code}
> Name  Region Server   Start Key   End Key Requests
> usertable,user2037516127892189021,1326756873774.16833e4566d1daef109b8fdcd1f4b5a6.
>  xxx.com:60030   user2037516127892189021 user2296868939942738705  
>-645470239
> {code}
> RegionLoad.readRequestsCount and RegionLoad.writeRequestsCount are of int 
> type. Our Ops has been running lots of heavy load operation. 
> RegionLoad.getRequestsCount() overflows int.MAX_VALUE. It is set to D986E7E1. 
> In table.jsp, RegionLoad.getRequestsCount() is assigned to long type. 
> D986E7E1 is converted to long D986E7E1 which is -645470239 in decimal.
> Suggested fix is to make readRequestsCount and writeRequestsCount long type. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194012#comment-13194012
 ] 

Zhihong Yu commented on HBASE-5271:
---

Will integrate if there is no objection.

> Result.getValue and Result.getColumnLatest return the wrong column.
> ---
>
> Key: HBASE-5271
> URL: https://issues.apache.org/jira/browse/HBASE-5271
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.5
>Reporter: Ghais Issa
>Assignee: Ghais Issa
> Fix For: 0.94.0, 0.90.7, 0.92.1
>
> Attachments: 5271-90.txt, 5271-v2.txt, 
> fixKeyValueMatchingColumn.diff, testGetValue.diff
>
>
> In the following example result.getValue returns the wrong column
> KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"), 
> Bytes.toBytes("2"), Bytes.toBytes(7L));
> Result result = new Result(new KeyValue[] { kv });
> System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes("2"), 
> Bytes.toBytes("2")))); // prints 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194015#comment-13194015
 ] 

Lars Hofhansl commented on HBASE-5282:
--

+1 on v2

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.

2012-01-26 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194023#comment-13194023
 ] 

Lars Hofhansl commented on HBASE-5271:
--

+1

This is actually pretty bad. Imagine you pass a long family byte[] to 
matchingColumn: we could potentially compare past the end of the KeyValue 
backing array.
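
To make the concern concrete, a minimal sketch of the kind of length guard being 
discussed (plain Java with made-up names; not the actual KeyValue#matchingColumn 
code):
{code}
public class MatchingColumnSketch {
  // The stored family bytes live inside one backing array at (offset, length);
  // comparing lengths first keeps a long 'family' argument from ever driving
  // the comparison past the end of that backing array.
  static boolean familyMatches(byte[] backing, int famOffset, int famLength,
                               byte[] family) {
    if (famLength != family.length) {
      return false;                 // different lengths can never match
    }
    for (int i = 0; i < famLength; i++) {
      if (backing[famOffset + i] != family[i]) {
        return false;
      }
    }
    return true;
  }
}
{code}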

> Result.getValue and Result.getColumnLatest return the wrong column.
> ---
>
> Key: HBASE-5271
> URL: https://issues.apache.org/jira/browse/HBASE-5271
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.5
>Reporter: Ghais Issa
>Assignee: Ghais Issa
> Fix For: 0.94.0, 0.90.7, 0.92.1
>
> Attachments: 5271-90.txt, 5271-v2.txt, 
> fixKeyValueMatchingColumn.diff, testGetValue.diff
>
>
> In the following example result.getValue returns the wrong column
> KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"), 
> Bytes.toBytes("2"), Bytes.toBytes(7L));
> Result result = new Result(new KeyValue[] { kv });
> System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes("2"), 
> Bytes.toBytes("2")))); // prints 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5284) TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT

2012-01-26 Thread Roman Shaposhnik (Created) (JIRA)
TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT
--

 Key: HBASE-5284
 URL: https://issues.apache.org/jira/browse/HBASE-5284
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.92.0
Reporter: Roman Shaposhnik


Here's how to reproduce:

{noformat}
$ mvn clean -DskipTests -Dhadoop.profile=23 -Dinstall site assembly:assembly 
-Dmaven.repo.local=/home/rvs/.m2/repository

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.0.2:testCompile 
(default-testCompile) on project hbase: Compilation failure
[ERROR] 
/home/rvs/src/bigtop/output/hbase/hbase-0.92.0/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java:[341,33]
 cannot find symbol
[ERROR] symbol  : variable dnRegistration
[ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataNode
[ERROR] -> [Help 1]
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5284) TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT

2012-01-26 Thread Roman Shaposhnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194035#comment-13194035
 ] 

Roman Shaposhnik commented on HBASE-5284:
-

Perhaps, given that the tests in this class are predicated on HDFS-826, it might 
make sense to disable them for the 0.23 profile.

> TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT
> --
>
> Key: HBASE-5284
> URL: https://issues.apache.org/jira/browse/HBASE-5284
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.92.0
>Reporter: Roman Shaposhnik
>
> Here's how to reproduce:
> {noformat}
> $ mvn clean -DskipTests -Dhadoop.profile=23 -Dinstall site assembly:assembly 
> -Dmaven.repo.local=/home/rvs/.m2/repository
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.0.2:testCompile 
> (default-testCompile) on project hbase: Compilation failure
> [ERROR] 
> /home/rvs/src/bigtop/output/hbase/hbase-0.92.0/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java:[341,33]
>  cannot find symbol
> [ERROR] symbol  : variable dnRegistration
> [ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataNode
> [ERROR] -> [Help 1]
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5285) runtime exception -- cached an already cached block -- during compaction

2012-01-26 Thread Simon Dircks (Created) (JIRA)
runtime exception -- cached an already cached block -- during compaction


 Key: HBASE-5285
 URL: https://issues.apache.org/jira/browse/HBASE-5285
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.0
 Environment: hadoop-1.0 and hbase-0.92
18 node cluster, dedicated namenode, zookeeper, hbasemaster, and YCSB client 
machine. 
latest YCSB

Reporter: Simon Dircks
Priority: Trivial


#On YCSB client machine:
/usr/local/bin/java -cp "build/ycsb.jar:db/hbase/lib/*:db/hbase/conf/" 
com.yahoo.ycsb.Client -load -db com.yahoo.ycsb.db.HBaseClient -P 
workloads/workloada -p columnfamily=family1 -p recordcount=500 -s > load.dat

Loaded 5 million records, which created 8 regions (all balanced onto the same RS).

/usr/local/bin/java -cp "build/ycsb.jar:db/hbase/lib/*:db/hbase/conf/" 
com.yahoo.ycsb.Client -t -db com.yahoo.ycsb.db.HBaseClient -P 
workloads/workloada -p columnfamily=family1 -p operationcount=500 -threads 
10 -s > transaction.dat



#On RS that was holding the 8 regions above. 
2012-01-25 23:23:51,556 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
regionserver:60020-0x134f70a343101a0 Successfully transitioned node 
162702503c650e551130e5fb588b3ec2 from RS_ZK_REGION_SPLIT to RS_ZK_REGION_SPLIT
2012-01-25 23:23:51,616 ERROR 
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.lang.RuntimeException: Cached an already cached block
at 
org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:268)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:276)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:487)
at 
org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekTo(HalfStoreFileReader.java:168)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:181)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2861)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1432)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1424)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1400)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3688)
at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3581)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1771)
at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1325)
2012-01-25 23:23:51,656 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
regionserver:60020-0x134f70a343101a0 Attempting to transition node 
162702503c650e551130e5fb588b3ec2 from RS_ZK_REGION_SPLIT to RS_ZK_REGION_SPLIT

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4890) fix possible NPE in HConnectionManager

2012-01-26 Thread Simon Dircks (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194057#comment-13194057
 ] 

Simon Dircks commented on HBASE-4890:
-

I was also able to reproduce this:

hadoop-1.0 and hbase-0.92 with YCSB. 

2012/01/25 15:19:24 WARN client.HConnectionManager$HConnectionImplementation: 
Failed all from 
region=usertable,user3076346045817661344,1327530607222.bab55fba6adb17bc8757eb6cdee99a91.,
 hostname=datatask6.hadoop.telescope.tv, port=60020
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
java.lang.NullPointerException

Got this error on the LOAD part of YCSB

/usr/local/bin/java -cp "build/ycsb.jar:db/hbase/lib/*:db/hbase/conf/" 
com.yahoo.ycsb.Client -load -db com.yahoo.ycsb.db.HBaseClient -P 
workloads/workloada -p columnfamily=family1 -p recordcount=500 -s > load.dat



> fix possible NPE in HConnectionManager
> --
>
> Key: HBASE-4890
> URL: https://issues.apache.org/jira/browse/HBASE-4890
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0
>Reporter: Jonathan Hsieh
>
> I was running YCSB against a 0.92 branch and encountered this error message:
> {code}
> 11/11/29 08:47:16 WARN client.HConnectionManager$HConnectionImplementation: 
> Failed all from 
> region=usertable,user3917479014967760871,1322555655231.f78d161e5724495a9723bcd972f97f41.,
>  hostname=c0316.hal.cloudera.com, port=57020
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.lang.NullPointerException
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1501)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1353)
> at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:898)
> at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:775)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:750)
> at com.yahoo.ycsb.db.HBaseClient.update(Unknown Source)
> at com.yahoo.ycsb.DBWrapper.update(Unknown Source)
> at com.yahoo.ycsb.workloads.CoreWorkload.doTransactionUpdate(Unknown 
> Source)
> at com.yahoo.ycsb.workloads.CoreWorkload.doTransaction(Unknown Source)
> at com.yahoo.ycsb.ClientThread.run(Unknown Source)
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1315)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1327)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1325)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:158)
> at $Proxy4.multi(Unknown Source)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1330)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1328)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1309)
> ... 7 more
> {code}
> It looks like the NPE is caused by server being null in the MultiResponse 
> call() method.
> {code}
>  public MultiResponse call() throws IOException {
>  return getRegionServerWithoutRetries(
>  new ServerCallable(connection, tableName, null) {
>public MultiResponse call() throws IOException {
>  return server.multi(multi);
>}
>@Override
>public void connect(boolean reload) throws IOException {
>  server =
>connection.getHRegionConnection(loc.getHostname(), 
> loc.getPort());
>}
>  }
>  );
> {code}
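
Not the committed fix, just an illustration: a minimal sketch of the kind of 
guard that would turn the NPE into a clearer error, reusing the server, loc and 
multi fields from the snippet quoted above.

{code}
// Hypothetical sketch only: fail fast with a descriptive IOException when the
// region server proxy was never initialized, instead of hitting an NPE in multi().
public MultiResponse call() throws IOException {
  if (server == null) {
    throw new IOException("Region server proxy not initialized for "
        + loc.getHostname() + ":" + loc.getPort());
  }
  return server.multi(multi);
}
{code}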

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HBASE-5284) TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194063#comment-13194063
 ] 

Zhihong Yu commented on HBASE-5284:
---

I think this is the third JIRA logged on compilation against 0.23. The first was 
HBASE-5191 and the second was HBASE-5212.

The trick is that while we make the code compile against 0.23, we have to make 
the tests pass for hadoop 1.
See some more details in HBASE-5191.
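
For context, one pattern such fixes sometimes use is reflection, so the same 
test source compiles whether or not a field like dnRegistration exists in the 
Hadoop version on the classpath. This is only a sketch under that assumption, 
not the actual HBASE-5191/HBASE-5212 change:

{code}
import java.lang.reflect.Field;
import org.apache.hadoop.hdfs.server.datanode.DataNode;

public class DnRegistrationAccessor {
  /** Returns DataNode.dnRegistration if this Hadoop version still has it, else null. */
  public static Object getDnRegistration(DataNode dn) {
    try {
      Field f = DataNode.class.getDeclaredField("dnRegistration");
      f.setAccessible(true);
      return f.get(dn);
    } catch (NoSuchFieldException e) {
      return null; // field removed/renamed in newer Hadoop (e.g. 0.23)
    } catch (IllegalAccessException e) {
      throw new RuntimeException(e);
    }
  }
}
{code}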

> TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT
> --
>
> Key: HBASE-5284
> URL: https://issues.apache.org/jira/browse/HBASE-5284
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.92.0
>Reporter: Roman Shaposhnik
>
> Here's how to reproduce:
> {noformat}
> $ mvn clean -DskipTests -Dhadoop.profile=23 -Dinstall site assembly:assembly 
> -Dmaven.repo.local=/home/rvs/.m2/repository
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.0.2:testCompile 
> (default-testCompile) on project hbase: Compilation failure
> [ERROR] 
> /home/rvs/src/bigtop/output/hbase/hbase-0.92.0/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java:[341,33]
>  cannot find symbol
> [ERROR] symbol  : variable dnRegistration
> [ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataNode
> [ERROR] -> [Help 1]
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5286) bin/hbase's logic of adding Hadoop jar files to the classpath is fragile when presented with split packaged Hadoop 0.23 installation

2012-01-26 Thread Roman Shaposhnik (Created) (JIRA)
bin/hbase's logic of adding Hadoop jar files to the classpath is fragile when 
presented with split packaged Hadoop 0.23 installation


 Key: HBASE-5286
 URL: https://issues.apache.org/jira/browse/HBASE-5286
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.92.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik


Here's the bit from bin/hbase that might need TLC now that Hadoop can be 
spotted in the wild in a split-package configuration:

{noformat}
#If avail, add Hadoop to the CLASSPATH and to the JAVA_LIBRARY_PATH
if [ ! -z $HADOOP_HOME ]; then
  HADOOPCPPATH=""
  if [ -z $HADOOP_CONF_DIR ]; then
HADOOPCPPATH=$(append_path "${HADOOPCPPATH}" "${HADOOP_HOME}/conf")
  else
HADOOPCPPATH=$(append_path "${HADOOPCPPATH}" "${HADOOP_CONF_DIR}")
  fi
  if [ "`echo ${HADOOP_HOME}/hadoop-core*.jar`" != 
"${HADOOP_HOME}/hadoop-core*.jar" ] ; then
HADOOPCPPATH=$(append_path "${HADOOPCPPATH}" `ls 
${HADOOP_HOME}/hadoop-core*.jar | head -1`)
  else
HADOOPCPPATH=$(append_path "${HADOOPCPPATH}" `ls 
${HADOOP_HOME}/hadoop-common*.jar | head -1`)
HADOOPCPPATH=$(append_path "${HADOOPCPPATH}" `ls 
${HADOOP_HOME}/hadoop-hdfs*.jar | head -1`)
HADOOPCPPATH=$(append_path "${HADOOPCPPATH}" `ls 
${HADOOP_HOME}/hadoop-mapred*.jar | head -1`)
  fi
{noformat}

There are a couple of issues with the above code:
   0. HADOOP_HOME is now deprecated in Hadoop 0.23
   1. the list of jar files added to the classpath should be revised
   2. we need to figure out a more robust way to get the required jar files onto 
the classpath (things like hadoop-mapred*.jar tend to match src/test jars as well)

Better yet, it would be useful to look into whether we can transition HBase's 
bin/hbase to using bin/hadoop as a launcher script instead of direct java 
invocations (Pig, Hive, Sqoop and Mahout already do that).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194076#comment-13194076
 ] 

Zhihong Yu commented on HBASE-5271:
---

Integrated to 0.90, 0.92 and TRUNK.

Thanks for the patch Ghais.

Thanks for the review Lars.

> Result.getValue and Result.getColumnLatest return the wrong column.
> ---
>
> Key: HBASE-5271
> URL: https://issues.apache.org/jira/browse/HBASE-5271
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.5
>Reporter: Ghais Issa
>Assignee: Ghais Issa
> Fix For: 0.94.0, 0.90.7, 0.92.1
>
> Attachments: 5271-90.txt, 5271-v2.txt, 
> fixKeyValueMatchingColumn.diff, testGetValue.diff
>
>
> In the following example result.getValue returns the wrong column
> KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"), 
> Bytes.toBytes("2"), Bytes.toBytes(7L));
> Result result = new Result(new KeyValue[] { kv });
> System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes("2"), 
> Bytes.toBytes("2"))))); //prints 7.
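
For readers skimming the thread, a JUnit-style restatement of the expected 
behaviour from the example above (a sketch, not the attached testGetValue.diff): 
the only cell lives under family "24", so a lookup under family "2" should 
return null rather than 7.

{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Test;

public class TestGetValueExpectation {
  @Test
  public void getValueMatchesFamilyAndQualifierExactly() {
    // Same cell as in the report: family "24", qualifier "2", value 7L.
    KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"),
        Bytes.toBytes("2"), Bytes.toBytes(7L));
    Result result = new Result(new KeyValue[] { kv });
    // Family "2" is not present in this row, so nothing should be returned.
    assertNull(result.getValue(Bytes.toBytes("2"), Bytes.toBytes("2")));
    // The value is only reachable under its actual family "24".
    assertEquals(7L,
        Bytes.toLong(result.getValue(Bytes.toBytes("24"), Bytes.toBytes("2"))));
  }
}
{code}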

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194079#comment-13194079
 ] 

Zhihong Yu commented on HBASE-4991:
---

One of our teams is asking for this feature as well.

> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5284) TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT

2012-01-26 Thread Roman Shaposhnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194083#comment-13194083
 ] 

Roman Shaposhnik commented on HBASE-5284:
-

FWIW: I believe this is also related to HDFS-1670

> TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT
> --
>
> Key: HBASE-5284
> URL: https://issues.apache.org/jira/browse/HBASE-5284
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.92.0
>Reporter: Roman Shaposhnik
>
> Here's how to reproduce:
> {noformat}
> $ mvn clean -DskipTests -Dhadoop.profile=23 -Dinstall site assembly:assembly 
> -Dmaven.repo.local=/home/rvs/.m2/repository
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.0.2:testCompile 
> (default-testCompile) on project hbase: Compilation failure
> [ERROR] 
> /home/rvs/src/bigtop/output/hbase/hbase-0.92.0/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java:[341,33]
>  cannot find symbol
> [ERROR] symbol  : variable dnRegistration
> [ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataNode
> [ERROR] -> [Help 1]
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5284) TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT

2012-01-26 Thread Andrew Purtell (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194086#comment-13194086
 ] 

Andrew Purtell commented on HBASE-5284:
---

This is being addressed as part of HBASE-5212, but I don't think we need to 
close this as a dup. Let's split handling of the two problems into these 
separate issues.

> TestLogRolling.java doesn't compile against the latest 0.23.1-SNAPSHOT
> --
>
> Key: HBASE-5284
> URL: https://issues.apache.org/jira/browse/HBASE-5284
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.92.0
>Reporter: Roman Shaposhnik
>
> Here's how to reproduce:
> {noformat}
> $ mvn clean -DskipTests -Dhadoop.profile=23 -Dinstall site assembly:assembly 
> -Dmaven.repo.local=/home/rvs/.m2/repository
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:2.0.2:testCompile 
> (default-testCompile) on project hbase: Compilation failure
> [ERROR] 
> /home/rvs/src/bigtop/output/hbase/hbase-0.92.0/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java:[341,33]
>  cannot find symbol
> [ERROR] symbol  : variable dnRegistration
> [ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataNode
> [ERROR] -> [Help 1]
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Mubarak Seyed (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194103#comment-13194103
 ] 

Mubarak Seyed commented on HBASE-4991:
--

Do we need to add a command under tools in the hbase shell (with a public API 
for deleting a named region)?

How about this?

hbase(main)> delete_region 

compact and major_compact support a region name as an argument; can we use the 
same approach? Thanks.

> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194107#comment-13194107
 ] 

Zhihong Yu commented on HBASE-4991:
---

The above syntax makes sense.

> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5287) fsync can go into an infinite loop

2012-01-26 Thread Prakash Khemani (Created) (JIRA)
fsync can go into an infinite loop
--

 Key: HBASE-5287
 URL: https://issues.apache.org/jira/browse/HBASE-5287
 Project: HBase
  Issue Type: Bug
Reporter: Prakash Khemani


HBaseFsckRepair.prompt() should check for a -1 return value from System.in.read().

This only affects the 0.89 release.
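
The prompt() code itself is not quoted here, but the requested check is the 
standard EOF guard on System.in.read(); a minimal, hypothetical sketch of the 
idea (not the actual HBaseFsckRepair code):

{code}
import java.io.IOException;

public class PromptSketch {
  /**
   * Reads one line from stdin. Returns null on EOF (read() == -1) so the
   * caller can stop prompting instead of spinning forever on a closed stream.
   */
  static String readLineOrNull() throws IOException {
    StringBuilder sb = new StringBuilder();
    int c;
    while ((c = System.in.read()) != '\n') {
      if (c == -1) {               // EOF: stdin is closed, bail out
        return sb.length() == 0 ? null : sb.toString();
      }
      sb.append((char) c);
    }
    return sb.toString();
  }
}
{code}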

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5288) Security source code dirs missing from 0.92.0 release tarballs.

2012-01-26 Thread Jonathan Hsieh (Created) (JIRA)
Security source code dirs missing from 0.92.0 release tarballs.
---

 Key: HBASE-5288
 URL: https://issues.apache.org/jira/browse/HBASE-5288
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.92.0, 0.94.0
Reporter: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.94.0, 0.92.1


The release tarballs have a compiled version of the hbase jars, and the security 
tarball seems to have the compiled security bits. However, the source code and 
resources for the security implementation are missing from the release tarballs 
in both distributions. They should be included in both.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5289) NullPointerException in resetZooKeeperTrackers in HConnectionManager / HConnectionImplementation

2012-01-26 Thread Krystian Nowak (Created) (JIRA)
NullPointerException in resetZooKeeperTrackers in HConnectionManager / 
HConnectionImplementation


 Key: HBASE-5289
 URL: https://issues.apache.org/jira/browse/HBASE-5289
 Project: HBase
  Issue Type: Bug
  Components: client
Affects Versions: 0.90.5
Reporter: Krystian Nowak


This can happen under heavy load, when HBase is lagging and one HConnection is 
shared by multiple threads:

{noformat}
2012-01-26 13:59:38,396 ERROR [http://*:8080-251-EventThread] 
zookeeper.ClientCnxn$EventThread(532): Error while calling watcher
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:533)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1536)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:344)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:262)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
{noformat}

The following code is not protected against NPE:

{code}
private synchronized void resetZooKeeperTrackers()
throws ZooKeeperConnectionException {
  LOG.info("Trying to reconnect to zookeeper");
  masterAddressTracker.stop();
  masterAddressTracker = null;
  rootRegionTracker.stop();
  rootRegionTracker = null;
  clusterId = null;
  this.zooKeeper = null;
  setupZookeeperTrackers();
}
{code}

As the log snippet above shows, in some cases either masterAddressTracker or 
rootRegionTracker can be null.
Because of the NPE, the code never reaches the setupZookeeperTrackers() call.

This should be fixed at least in the way shown in one of the patches in 
HBASE-5153:

{code}
  LOG.info("Trying to reconnect to zookeeper.");
  if (this.masterAddressTracker != null) {
this.masterAddressTracker.stop();
this.masterAddressTracker = null;
  }
  if (this.rootRegionTracker != null) {
this.rootRegionTracker.stop();
this.rootRegionTracker = null;
  }
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5289) NullPointerException in resetZooKeeperTrackers in HConnectionManager / HConnectionImplementation

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194175#comment-13194175
 ] 

Zhihong Yu commented on HBASE-5289:
---

Thanks for reporting this case, Krystian.

Do you want to upload a patch ?


> NullPointerException in resetZooKeeperTrackers in HConnectionManager / 
> HConnectionImplementation
> 
>
> Key: HBASE-5289
> URL: https://issues.apache.org/jira/browse/HBASE-5289
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.5
>Reporter: Krystian Nowak
> Fix For: 0.94.0, 0.92.1
>
>
> This can happen under heavy load, when HBase is lagging and one HConnection is 
> shared by multiple threads:
> {noformat}
> 2012-01-26 13:59:38,396 ERROR [http://*:8080-251-EventThread] 
> zookeeper.ClientCnxn$EventThread(532): Error while calling watcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:533)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1536)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:344)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:262)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
> {noformat}
> The following code is not protected against NPE:
> {code}
> private synchronized void resetZooKeeperTrackers()
> throws ZooKeeperConnectionException {
>   LOG.info("Trying to reconnect to zookeeper");
>   masterAddressTracker.stop();
>   masterAddressTracker = null;
>   rootRegionTracker.stop();
>   rootRegionTracker = null;
>   clusterId = null;
>   this.zooKeeper = null;
>   setupZookeeperTrackers();
> }
> {code}
> As the log snippet above shows, in some cases either masterAddressTracker or 
> rootRegionTracker can be null.
> Because of the NPE, the code never reaches the setupZookeeperTrackers() call.
> This should be fixed at least in the way shown in one of the patches in 
> HBASE-5153:
> {code}
>   LOG.info("Trying to reconnect to zookeeper.");
>   if (this.masterAddressTracker != null) {
> this.masterAddressTracker.stop();
> this.masterAddressTracker = null;
>   }
>   if (this.rootRegionTracker != null) {
> this.rootRegionTracker.stop();
> this.rootRegionTracker = null;
>   }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5289) NullPointerException in resetZooKeeperTrackers in HConnectionManager / HConnectionImplementation

2012-01-26 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-5289:
--

Fix Version/s: 0.92.1
   0.94.0

> NullPointerException in resetZooKeeperTrackers in HConnectionManager / 
> HConnectionImplementation
> 
>
> Key: HBASE-5289
> URL: https://issues.apache.org/jira/browse/HBASE-5289
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.5
>Reporter: Krystian Nowak
> Fix For: 0.94.0, 0.92.1
>
>
> This can happen under heavy load, when HBase is lagging and one HConnection is 
> shared by multiple threads:
> {noformat}
> 2012-01-26 13:59:38,396 ERROR [http://*:8080-251-EventThread] 
> zookeeper.ClientCnxn$EventThread(532): Error while calling watcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.resetZooKeeperTrackers(HConnectionManager.java:533)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.abort(HConnectionManager.java:1536)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:344)
> at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:262)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
> {noformat}
> The following code is not protected against NPE:
> {code}
> private synchronized void resetZooKeeperTrackers()
> throws ZooKeeperConnectionException {
>   LOG.info("Trying to reconnect to zookeeper");
>   masterAddressTracker.stop();
>   masterAddressTracker = null;
>   rootRegionTracker.stop();
>   rootRegionTracker = null;
>   clusterId = null;
>   this.zooKeeper = null;
>   setupZookeeperTrackers();
> }
> {code}
> As the log snippet above shows, in some cases either masterAddressTracker or 
> rootRegionTracker can be null.
> Because of the NPE, the code never reaches the setupZookeeperTrackers() call.
> This should be fixed at least in the way shown in one of the patches in 
> HBASE-5153:
> {code}
>   LOG.info("Trying to reconnect to zookeeper.");
>   if (this.masterAddressTracker != null) {
> this.masterAddressTracker.stop();
> this.masterAddressTracker = null;
>   }
>   if (this.rootRegionTracker != null) {
> this.rootRegionTracker.stop();
> this.rootRegionTracker = null;
>   }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Krystian Nowak (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194176#comment-13194176
 ] 

Krystian Nowak commented on HBASE-5153:
---

FYI: [^HBASE-5153-V6-90-minorchange.patch] also fixes HBASE-5289 (linked)

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection gets aborted in one 
> thread, the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> It solves the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection in the 
> HTable has to be recreated.
> Actually, there are two approaches to solving this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.

2012-01-26 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194186#comment-13194186
 ] 

Hudson commented on HBASE-5271:
---

Integrated in HBase-0.92 #263 (See 
[https://builds.apache.org/job/HBase-0.92/263/])
HBASE-5271  Result.getValue and Result.getColumnLatest return the wrong 
column (Ghais Issa)

tedyu : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java


> Result.getValue and Result.getColumnLatest return the wrong column.
> ---
>
> Key: HBASE-5271
> URL: https://issues.apache.org/jira/browse/HBASE-5271
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.5
>Reporter: Ghais Issa
>Assignee: Ghais Issa
> Fix For: 0.94.0, 0.90.7, 0.92.1
>
> Attachments: 5271-90.txt, 5271-v2.txt, 
> fixKeyValueMatchingColumn.diff, testGetValue.diff
>
>
> In the following example result.getValue returns the wrong column
> KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"), 
> Bytes.toBytes("2"), Bytes.toBytes(7L));
> Result result = new Result(new KeyValue[] { kv });
> System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes("2"), 
> Bytes.toBytes("2"))))); //prints 7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194189#comment-13194189
 ] 

Lars Hofhansl commented on HBASE-5153:
--

Also it occurred to me that another nice change would be to be able to specify 
the retry count for resetZooKeeperTrackersWithRetries separately from the other 
operations. 
The thinking is this:
While the ZK is not reachable the HConnection (and any other HConnection) is 
essentially not usable. In some settings it might be good to have the 
connection just sit there, and retry until the connection is bad. Maybe for 
another jira.

Where are we with this generally?
Is it just TestMergeTool hanging? If so I'll have a look at it today.

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection gets aborted in one 
> thread, the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> It solves the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection in the 
> HTable has to be recreated.
> Actually, there are two approaches to solving this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194193#comment-13194193
 ] 

Zhihong Yu commented on HBASE-5153:
---

There were two failed tests:
https://builds.apache.org/job/HBase-0.92-security/81/

If you can resolve the hanging TestMergeTool, that would be great.

I am on-call this week, FYI

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection gets aborted in one 
> thread, the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> It solves the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection in the 
> HTable has to be recreated.
> Actually, there are two approaches to solving this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5274:
---

Attachment: D1473.1.patch

mbautin requested code review of "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".
Reviewers: Liyin, JIRA, lhofhansl, Kannan

  This is a followup for D1017 to make it similar to D909 (89-fb). The fix for 
89-fb used the TTL-based scanner filtering logic on both normal scanners and 
compactions, while the trunk fix D1017 did not. This is just the delta between 
the two diffs that brings filtering expired store files on compaction to trunk.

TEST PLAN
  Unit tests

REVISION DETAIL
  https://reviews.facebook.net/D1473

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
  src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java
  
src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
  src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/3063/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.
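
For readers outside the Phabricator review, the idea can be sketched with 
hypothetical names (StoreFileInfo and maxTimestamp below are illustrative, not 
the classes touched by D1473): during a compaction, skip building a scanner for 
any store file whose newest cell is already older than the family's TTL.

{code}
import java.util.ArrayList;
import java.util.List;

class StoreFileInfo {
  final String path;
  final long maxTimestamp;   // newest cell timestamp recorded in the file
  StoreFileInfo(String path, long maxTimestamp) {
    this.path = path;
    this.maxTimestamp = maxTimestamp;
  }
}

public class ExpiredFileFilter {
  /** Keeps only the files that can still contain unexpired cells. */
  static List<StoreFileInfo> selectUnexpired(List<StoreFileInfo> files,
                                             long ttlMillis, long now) {
    long oldestUnexpiredTs = now - ttlMillis;
    List<StoreFileInfo> result = new ArrayList<StoreFileInfo>();
    for (StoreFileInfo f : files) {
      if (f.maxTimestamp >= oldestUnexpiredTs) {
        result.add(f);   // may still hold live data, keep it
      }
      // else: every cell in the file is past the TTL, so opening it during
      // compaction (or a scan) gains nothing.
    }
    return result;
  }
}
{code}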

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194213#comment-13194213
 ] 

Phabricator commented on HBASE-5274:


Liyin has accepted the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

  LGTM. Thanks Mikhail !

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194220#comment-13194220
 ] 

Phabricator commented on HBASE-5274:


tedyu has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

  Looks good.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java:218 
isCompaction is not needed, can pass false directly.

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194224#comment-13194224
 ] 

Phabricator commented on HBASE-5274:


Kannan has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

  +1

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5290) [FindBugs] Synchronization on boxed primitive

2012-01-26 Thread Liyin Tang (Created) (JIRA)
[FindBugs] Synchronization on boxed primitive
-

 Key: HBASE-5290
 URL: https://issues.apache.org/jira/browse/HBASE-5290
 Project: HBase
  Issue Type: Bug
Reporter: Liyin Tang
Assignee: Liyin Tang
Priority: Minor


This bug is reported by the findBugs tool, which is a static analysis tool.

Bug: Synchronization on Integer in 
org.apache.hadoop.hbase.regionserver.compactions.CompactSelection.emptyFileList()
The code synchronizes on a boxed primitive constant, such as an Integer.

private static Integer count = 0;
...
  synchronized (count) {
    count++;
  }
...
Since Integer objects can be cached and shared, this code could be 
synchronizing on the same object as other, unrelated code, leading to 
unresponsiveness and possible deadlock.

See CERT CON08-J. Do not synchronize on objects that may be reused for more 
information.

Confidence: Normal, Rank: Troubling (14)
Pattern: DL_SYNCHRONIZATION_ON_BOXED_PRIMITIVE 
Type: DL, Category: MT_CORRECTNESS (Multithreaded correctness)
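
For reference, the two usual remedies for this FindBugs pattern, sketched here 
independently of the actual CompactSelection change (names are illustrative):

{code}
import java.util.concurrent.atomic.AtomicInteger;

public class CounterFixSketch {
  // Option 1: synchronize on a dedicated, private lock object rather than on
  // the (possibly shared) boxed Integer itself.
  private static final Object countLock = new Object();
  private static int count = 0;

  static void incrementWithLock() {
    synchronized (countLock) {
      count++;
    }
  }

  // Option 2: drop explicit locking and use an atomic counter.
  private static final AtomicInteger atomicCount = new AtomicInteger(0);

  static void incrementAtomically() {
    atomicCount.incrementAndGet();
  }
}
{code}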

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194238#comment-13194238
 ] 

Phabricator commented on HBASE-5274:


mbautin has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

  Ted: replying to your comment inline. Please let me know if this is OK to be 
committed.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java:218 
@tedyu: this is just for clarity. Boolean parameters are inherently confusing, 
and this is the equivalent of a comment saying that "false" means "isCompaction".

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194246#comment-13194246
 ] 

Phabricator commented on HBASE-5274:


tedyu has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java:218 That 
should be fine.
  Another approach is to use a comment directly.

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194249#comment-13194249
 ] 

Phabricator commented on HBASE-5274:


lhofhansl has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

  +1

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java:975 Minor 
comment: Is there no way to move the tests into the same package and leave this 
protected?

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194263#comment-13194263
 ] 

Jonathan Hsieh commented on HBASE-4991:
---

When you are deleting regions, do you intend to just get rid of all the data in 
the region, or do you mean to create a hole in a region and then merge it with 
a preceding or succeeding region?



> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194265#comment-13194265
 ] 

Zhihong Yu commented on HBASE-4991:
---

I think both of them should be done.

> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194268#comment-13194268
 ] 

Phabricator commented on HBASE-5274:


mbautin has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

  Lars: please see my response inline.

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java:975 In that 
case I would have to make LruBlockCache.getCachedFileNamesForTest public. In 
addition, this patch makes the HBASE-5010 implementation consistent between 
89-fb and trunk, and moving the unit test around might create confusion.

  Please let me know if this is OK to commit.

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb, and it is the building 
> block for HBASE-5199 as well. Compacting fully expired store files is supposed 
> to be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194267#comment-13194267
 ] 

Jonathan Hsieh commented on HBASE-4991:
---

Oops -- wasn't looking at the comment tab.

There is similar code in OnlineMerge and uber hbck.

The code in uber hbck creates a new empty region, closes the old regions, moves 
the data into the new empty region, and then activates the new, now-populated 
region.

Beware -- I found that just closing a region seems to have left data around in 
the HMaster's memory, which caused disabling to have problems in the 0.90.x 
version. I'm currently in the process of porting to trunk/0.92 and am finding 
out whether there are similar or different problems. I think I saw something 
else in closeRegion recently that I need to try out -- I don't remember which 
version that is in, however.


> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194278#comment-13194278
 ] 

Jonathan Hsieh commented on HBASE-5282:
---

First code commit! Thanks for the review Ted, Lars!

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, I found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}
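
The committed patch is not quoted here, but the general remedy for this kind of 
leak is the close-on-failure pattern; a minimal, hypothetical sketch:

{code}
import java.io.IOException;
import java.io.InputStream;

public class CloseOnInitFailure {
  interface LogReader { void init(InputStream in) throws IOException; }

  static LogReader openReader(InputStream in, LogReader reader) throws IOException {
    boolean initialized = false;
    try {
      reader.init(in);           // may throw EOFException on a truncated log
      initialized = true;
      return reader;
    } finally {
      if (!initialized) {
        in.close();              // release the file handle on the failure path
      }
    }
  }
}
{code}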

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4991) Provide capability to delete named region

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194282#comment-13194282
 ] 

Zhihong Yu commented on HBASE-4991:
---

BTW OnlineMerger is in src/main/java/org/apache/hadoop/hbase/util/HMerge.java

I think for this case we don't need to create an empty region because we would 
end up closing at least two regions. That may increase the downtime for the 
underlying table.

> Provide capability to delete named region
> -
>
> Key: HBASE-4991
> URL: https://issues.apache.org/jira/browse/HBASE-4991
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> Users may want to quickly dispose of out-of-date records by deleting specific 
> regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194286#comment-13194286
 ] 

Phabricator commented on HBASE-5274:


lhofhansl has commented on the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/regionserver/Store.java:975 Fair 
enough. +1 :)

REVISION DETAIL
  https://reviews.facebook.net/D1473


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb and is a building 
> block for HBASE-5199 as well. Compacting the expired store files should 
> effectively be a no-op.
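For illustration only, a minimal sketch of the idea (not the actual D1473 patch): skip store files whose newest entry is already past the column family's TTL, so the compaction scanner never opens them. The class, field, and method names below are hypothetical.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class ExpiredStoreFileFilterSketch {

  /** Hypothetical summary of a store file: only the newest timestamp it contains. */
  public static class FileSummary {
    final String path;
    final long maxTimestamp;
    public FileSummary(String path, long maxTimestamp) {
      this.path = path;
      this.maxTimestamp = maxTimestamp;
    }
  }

  /**
   * Returns only the files that may still contain live cells.
   * A file is expired when even its newest cell is older than (now - ttlMs).
   */
  public static List<FileSummary> filterExpired(List<FileSummary> files,
                                                long ttlMs, long now) {
    long oldestAllowed = now - ttlMs;
    List<FileSummary> live = new ArrayList<FileSummary>();
    for (FileSummary f : files) {
      if (f.maxTimestamp >= oldestAllowed) {
        live.add(f);   // still has potentially live data; include it in the compaction scan
      }
      // otherwise drop it: compacting it would be a no-op anyway
    }
    return live;
  }
}
{code}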

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Jonathan Hsieh (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-5282:
--

   Resolution: Fixed
Fix Version/s: 0.92.1
   0.94.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.94.0, 0.92.1
>
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}
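A minimal sketch of the kind of fix involved, assuming a reader that must be closed even when replay fails part-way through a truncated log; the names below are hypothetical and this is not the actual HRegion code.

{code:java}
import java.io.Closeable;
import java.io.IOException;

public class ReplayEditsSketch {

  interface WalReader extends Closeable {
    boolean next() throws IOException;   // may throw EOFException on a truncated log
  }

  /**
   * Replays edits from a recovered-edits file, making sure the reader is closed
   * even if an EOFException (truncated file) or any other IOException is thrown.
   */
  static long replayRecoveredEdits(WalReader reader) throws IOException {
    long editsReplayed = 0;
    try {
      while (reader.next()) {
        editsReplayed++;            // the real code applies the edit here
      }
      return editsReplayed;
    } finally {
      reader.close();               // without this, a truncated HLog leaks the file handle
    }
  }
}
{code}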

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2012-01-26 Thread Andrew Purtell (Created) (JIRA)
Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
-

 Key: HBASE-5291
 URL: https://issues.apache.org/jira/browse/HBASE-5291
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver, security
Reporter: Andrew Purtell


Like HADOOP-7119, the same motivations:

{quote}
Hadoop RPC already supports Kerberos authentication. 
{quote}

As does the HBase secure RPC engine.

{quote}
Kerberos enables single sign-on.

Popular browsers (Firefox and Internet Explorer) have support for Kerberos HTTP 
SPNEGO.

Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide a 
unified authentication mechanism and single sign-on for web UI and RPC.
{quote}

Also like HADOOP-7119, the same solution:

A servlet filter is configured in front of all Hadoop web consoles for 
authentication.

This filter checks whether the incoming request is already authenticated by the 
presence of a signed HTTP cookie. If the cookie is present, its signature is 
valid, and its value has not expired, the request continues on to the page it 
invoked. If the cookie is missing, invalid, or expired, the request is delegated 
to an authenticator handler, which is responsible for requesting and validating 
the user credentials from the user-agent. This may require one or more 
additional interactions between the authenticator handler and the user-agent 
(i.e., multiple HTTP requests). Once the authenticator handler verifies the 
credentials and generates an authentication token, a signed cookie is returned 
to the user-agent for use on all subsequent invocations.

The authenticator handler is pluggable, and two implementations are provided out 
of the box: pseudo/simple and kerberos.

1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
pseudo/simple authentication. It trusts the value of the user.name query string 
parameter. The pseudo/simple authenticator handler supports an anonymous mode 
which accepts any request without requiring the user.name query string 
parameter to create the token. This is the default behavior, preserving the 
behavior of the HBase web consoles before this patch.

2. The kerberos authenticator handler implements Kerberos HTTP SPNEGO. This 
authenticator handler will generate a token only if a successful Kerberos HTTP 
SPNEGO interaction is performed between the user-agent and the authenticator. 
Browsers like Firefox and Internet Explorer support Kerberos HTTP SPNEGO.

We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
matter of wiring up the filter to our infoservers in a similar manner. 

And from 
https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086

{quote}
Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
authentication for webapps via a filter. You should consider using it. You 
don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
{quote}
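To illustrate the mechanism described above, here is a minimal sketch of the cookie-checking front filter, with hypothetical class, cookie, and method names; it is not the hadoop-auth implementation, only the shape of it.

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical sketch of the cookie-checking front filter described above. */
public class SpnegoStyleFilterSketch implements Filter {

  public void init(FilterConfig conf) {}
  public void destroy() {}

  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    HttpServletResponse httpResp = (HttpServletResponse) resp;
    if (hasValidAuthCookie(httpReq)) {
      chain.doFilter(req, resp);               // already authenticated, let it through
    } else {
      // Delegate to the authenticator handler; for Kerberos this is where the
      // SPNEGO challenge/response round trips with the browser would happen.
      httpResp.setHeader("WWW-Authenticate", "Negotiate");
      httpResp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }
  }

  /** Placeholder: a real implementation verifies the cookie signature and expiry. */
  private boolean hasValidAuthCookie(HttpServletRequest req) {
    Cookie[] cookies = req.getCookies();
    if (cookies == null) return false;
    for (Cookie c : cookies) {
      if ("hbase.auth".equals(c.getName())) {   // hypothetical cookie name
        return true;
      }
    }
    return false;
  }
}
{code}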

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2012-01-26 Thread Alejandro Abdelnur (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194317#comment-13194317
 ] 

Alejandro Abdelnur commented on HBASE-5291:
---

You could copycat hadoop-httpfs AuthFilter (this would enable reading the 
security related config from hbase config files)
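To make that suggestion concrete, a rough sketch of the kind of helper an HBase AuthFilter could use to pull security-related properties out of the HBase configuration, along the lines of the httpfs AuthFilter; the property prefix and class name are assumptions, not existing HBase code.

{code:java}
import java.util.Map;
import java.util.Properties;
import org.apache.hadoop.conf.Configuration;

/** Hypothetical helper: collect "hbase.http.authentication.*" keys into the
 *  Properties format that the hadoop-auth servlet filter consumes. */
public final class HttpAuthConfigSketch {

  // The prefix is an assumption for illustration; it mirrors the httpfs approach.
  private static final String PREFIX = "hbase.http.authentication.";

  public static Properties extract(Configuration conf) {
    Properties props = new Properties();
    for (Map.Entry<String, String> e : conf) {
      if (e.getKey().startsWith(PREFIX)) {
        // e.g. hbase.http.authentication.type -> type
        props.setProperty(e.getKey().substring(PREFIX.length()), e.getValue());
      }
    }
    return props;
  }

  private HttpAuthConfigSketch() {}
}
{code}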



> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter checks whether the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid, and its value has not expired, the request continues on to the page it 
> invoked. If the cookie is missing, invalid, or expired, the request is 
> delegated to an authenticator handler, which is responsible for requesting and 
> validating the user credentials from the user-agent. This may require one or 
> more additional interactions between the authenticator handler and the 
> user-agent (i.e., multiple HTTP requests). Once the authenticator handler 
> verifies the credentials and generates an authentication token, a signed 
> cookie is returned to the user-agent for use on all subsequent invocations.
> The authenticator handler is pluggable, and two implementations are provided 
> out of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements Kerberos HTTP SPNEGO. This 
> authenticator handler will generate a token only if a successful Kerberos HTTP 
> SPNEGO interaction is performed between the user-agent and the authenticator. 
> Browsers like Firefox and Internet Explorer support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-5292) getsize per-CF metric incorrectly counts compaction related reads as well

2012-01-26 Thread Kannan Muthukkaruppan (Created) (JIRA)
getsize per-CF metric incorrectly counts compaction related reads as well 
--

 Key: HBASE-5292
 URL: https://issues.apache.org/jira/browse/HBASE-5292
 Project: HBase
  Issue Type: Bug
Reporter: Kannan Muthukkaruppan


The per-CF "getsize" metric's intent was to track bytes returned per-CF. [Note: 
We already have metrics to track # of HFileBlock's read for compaction vs. 
non-compaction cases -- e.g., compactionblockreadcnt vs. fsblockreadcnt.]

However, currently the metric gets updated both for client-initiated Get/Scan 
operations and for compaction-related reads. The metric is updated in 
StoreScanner.java:next() when the Scan query matcher returns an INCLUDE* code 
via a:

 HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());

We should not do the above in case of compactions.
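A minimal sketch of the kind of guard being proposed, with hypothetical names rather than the actual StoreScanner code: only bump the per-CF getsize metric when the scan was initiated by a client, not by a compaction.

{code:java}
/** Hypothetical sketch: per-CF "getsize" accounting that ignores compaction reads. */
public class GetSizeMetricSketch {

  private long getSizeBytes = 0;   // stands in for the real per-CF metric

  /** Called for every KeyValue the scanner returns with an INCLUDE* match code. */
  public void onIncludedKeyValue(int kvLength, boolean isCompactionScan) {
    if (isCompactionScan) {
      return;            // compaction reads should not count as bytes returned to clients
    }
    getSizeBytes += kvLength;
  }

  public long getSizeBytes() {
    return getSizeBytes;
  }
}
{code}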


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5292) getsize per-CF metric incorrectly counts compaction related reads as well

2012-01-26 Thread Kannan Muthukkaruppan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kannan Muthukkaruppan updated HBASE-5292:
-

Description: 
The per-CF "getsize" metric's intent was to track bytes returned (to HBase 
clients) per-CF. [Note: We already have metrics to track # of HFileBlock's read 
for compaction vs. non-compaction cases -- e.g., compactionblockreadcnt vs. 
fsblockreadcnt.]

However, currently, the metric gets updated for both client initiated Get/Scan 
operations as well for compaction related reads. The metric is updated in 
StoreScanner.java:next() when the Scan query matcher returns an INCLUDE* code 
via a:

 HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());

We should not do the above in case of compactions.


  was:
The per-CF "getsize" metric's intent was to track bytes returned per-CF. [Note: 
We already have metrics to track # of HFileBlock's read for compaction vs. 
non-compaction cases -- e.g., compactionblockreadcnt vs. fsblockreadcnt.]

However, currently, the metric gets updated for both client initiated Get/Scan 
operations as well for compaction related reads. The metric is updated in 
StoreScanner.java:next() when the Scan query matcher returns an INCLUDE* code 
via a:

 HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());

We should not do the above in case of compactions.



> getsize per-CF metric incorrectly counts compaction related reads as well 
> --
>
> Key: HBASE-5292
> URL: https://issues.apache.org/jira/browse/HBASE-5292
> Project: HBase
>  Issue Type: Bug
>Reporter: Kannan Muthukkaruppan
>
> The per-CF "getsize" metric's intent was to track bytes returned (to HBase 
> clients) per-CF. [Note: We already have metrics to track # of HFileBlock's 
> read for compaction vs. non-compaction cases -- e.g., compactionblockreadcnt 
> vs. fsblockreadcnt.]
> However, currently, the metric gets updated for both client initiated 
> Get/Scan operations as well for compaction related reads. The metric is 
> updated in StoreScanner.java:next() when the Scan query matcher returns an 
> INCLUDE* code via a:
>  HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());
> We should not do the above in case of compactions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5292) getsize per-CF metric incorrectly counts compaction related reads as well

2012-01-26 Thread Kannan Muthukkaruppan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kannan Muthukkaruppan updated HBASE-5292:
-

Description: 
The per-CF "getsize" metric's intent was to track bytes returned (to HBase 
clients) per-CF. [Note: We already have metrics to track # of HFileBlock's read 
for compaction vs. non-compaction cases -- e.g., compactionblockreadcnt vs. 
fsblockreadcnt.]

Currently, the "getsize" metric gets updated for both client initiated Get/Scan 
operations as well for compaction related reads. The metric is updated in 
StoreScanner.java:next() when the Scan query matcher returns an INCLUDE* code 
via a:

 HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());

We should not do the above in case of compactions.


  was:
The per-CF "getsize" metric's intent was to track bytes returned (to HBase 
clients) per-CF. [Note: We already have metrics to track # of HFileBlock's read 
for compaction vs. non-compaction cases -- e.g., compactionblockreadcnt vs. 
fsblockreadcnt.]

However, currently, the metric gets updated for both client initiated Get/Scan 
operations as well for compaction related reads. The metric is updated in 
StoreScanner.java:next() when the Scan query matcher returns an INCLUDE* code 
via a:

 HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());

We should not do the above in case of compactions.



> getsize per-CF metric incorrectly counts compaction related reads as well 
> --
>
> Key: HBASE-5292
> URL: https://issues.apache.org/jira/browse/HBASE-5292
> Project: HBase
>  Issue Type: Bug
>Reporter: Kannan Muthukkaruppan
>
> The per-CF "getsize" metric's intent was to track bytes returned (to HBase 
> clients) per-CF. [Note: We already have metrics to track # of HFileBlock's 
> read for compaction vs. non-compaction cases -- e.g., compactionblockreadcnt 
> vs. fsblockreadcnt.]
> Currently, the "getsize" metric gets updated for both client initiated 
> Get/Scan operations as well for compaction related reads. The metric is 
> updated in StoreScanner.java:next() when the Scan query matcher returns an 
> INCLUDE* code via a:
>  HRegion.incrNumericMetric(this.metricNameGetsize, copyKv.getLength());
> We should not do the above in case of compactions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194344#comment-13194344
 ] 

Phabricator commented on HBASE-5274:


mbautin has committed the revision "[jira] [HBASE-5274] Filter out expired 
scanners on compaction as well".

REVISION DETAIL
  https://reviews.facebook.net/D1473

COMMIT
  https://reviews.facebook.net/rHBASE1236483


> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Liyin Tang
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb and is a building 
> block for HBASE-5199 as well. Compacting the expired store files should 
> effectively be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194347#comment-13194347
 ] 

Lars Hofhansl commented on HBASE-5153:
--

So is this change in 0.90 now? I'm confused. Should revert it from there too, I 
guess.
I will see what's up with TestMergeTool in trunk now.

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection is aborted in one thread, 
> the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> That fix solved the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection inside 
> HTable should be recreated.
> Actually, there are two approaches to solve this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194357#comment-13194357
 ] 

Lars Hofhansl commented on HBASE-5153:
--

So here's the problem. This is hanging while validating that HBase is not 
running via HBaseAdmin.checkHBaseAvailable, which just attempts to create a new 
HBaseAdmin after setting hbase.client.retries.number to 1. However, 
HConnectionImpl caches hbase.client.retries.number in numRetries, and hence if 
ZK is not running, resetZooKeeperTrackersWithRetries will retry for a while.
The simplest fix would be for resetZooKeeperTrackersWithRetries to ignore the 
cached value and retrieve it again from the configuration. While I am at it, 
I'll also add a separate option for a different number of retries here.
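A minimal sketch of what that could look like, assuming the configuration is re-read on the ZK-reset path; the dedicated key name is purely illustrative, not an existing HBase property.

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Hypothetical sketch of the proposed fix: do not trust the cached numRetries,
 *  re-read the retry count from the configuration each time, and allow a
 *  dedicated key for the ZK-reset path. */
public class ZkResetRetriesSketch {

  static int zkResetRetries(Configuration conf) {
    // Fall back to the generic client retry count if no dedicated value is set.
    int clientRetries = conf.getInt("hbase.client.retries.number", 10);
    // "hbase.client.zkreset.retries.number" is an illustrative, made-up key.
    return conf.getInt("hbase.client.zkreset.retries.number", clientRetries);
  }
}
{code}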

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection is aborted in one thread, 
> the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> That fix solved the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection inside 
> HTable should be recreated.
> Actually, there are two approaches to solve this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5010) Filter HFiles based on TTL

2012-01-26 Thread Mikhail Bautin (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Bautin updated HBASE-5010:
--

   Resolution: Fixed
Fix Version/s: 0.94.0
 Assignee: Mikhail Bautin  (was: Zhihong Yu)
   Status: Resolved  (was: Patch Available)

A follow-up fix was submitted as part of HBASE-5274 to bring the trunk fix for 
this issue to parity with the 89-fb fix. Resolving.

> Filter HFiles based on TTL
> --
>
> Key: HBASE-5010
> URL: https://issues.apache.org/jira/browse/HBASE-5010
> Project: HBase
>  Issue Type: Bug
>Reporter: Mikhail Bautin
>Assignee: Mikhail Bautin
> Fix For: 0.94.0
>
> Attachments: 5010.patch, D1017.1.patch, D1017.2.patch, D909.1.patch, 
> D909.2.patch, D909.3.patch, D909.4.patch, D909.5.patch, D909.6.patch
>
>
> In ScanWildcardColumnTracker we have
> {code:java}
>  
>   this.oldestStamp = EnvironmentEdgeManager.currentTimeMillis() - ttl;
>   ...
>   private boolean isExpired(long timestamp) {
> return timestamp < oldestStamp;
>   }
> {code}
> but this time range filtering does not participate in HFile selection. In one 
> real case this caused next() calls to time out because all KVs in a table got 
> expired, but next() had to iterate over the whole table to find that out. We 
> should be able to filter out those HFiles right away. I think a reasonable 
> approach is to add a "default timerange filter" to every scan for a CF with a 
> finite TTL and utilize existing filtering in 
> StoreFile.Reader.passesTimerangeFilter.
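A minimal sketch of the proposed selection check, assuming the newest cell timestamp of each HFile is known; this is not the actual StoreFile code, only the predicate it would need.

{code:java}
/** Hypothetical sketch: with a finite CF TTL, an HFile whose newest timestamp is
 *  already expired can be skipped entirely instead of being iterated by next(). */
public class TtlHFileFilterSketch {

  /**
   * @param fileMaxTimestamp newest cell timestamp recorded for the HFile
   * @param ttlMs            the column family TTL in milliseconds
   * @param now              current time in milliseconds
   * @return true if the file may still hold unexpired cells and must be scanned
   */
  public static boolean passesTtlFilter(long fileMaxTimestamp, long ttlMs, long now) {
    long oldestAllowed = now - ttlMs;   // same cut-off ScanWildcardColumnTracker uses
    return fileMaxTimestamp >= oldestAllowed;
  }
}
{code}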

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HBASE-5274) Filter out the expired store file scanner during the compaction

2012-01-26 Thread Mikhail Bautin (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Bautin resolved HBASE-5274.
---

Resolution: Fixed
  Assignee: Mikhail Bautin  (was: Liyin Tang)

Fix committed to trunk.

> Filter out the expired store file scanner during the compaction
> ---
>
> Key: HBASE-5274
> URL: https://issues.apache.org/jira/browse/HBASE-5274
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liyin Tang
>Assignee: Mikhail Bautin
> Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, 
> D1407.1.patch, D1407.1.patch, D1473.1.patch
>
>
> During compaction, HBase generates a store scanner that scans a list of store 
> files. It would be more efficient to filter out the expired store files, since 
> there is no need to read any key values from them.
> This optimization has already been implemented on 89-fb and is a building 
> block for HBASE-5199 as well. Compacting the expired store files should 
> effectively be a no-op.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-01-26 Thread Teruyoshi Zenmyo (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teruyoshi Zenmyo updated HBASE-3134:


Labels: replication  (was: )
Status: Patch Available  (was: Open)

> [replication] Add the ability to enable/disable streams
> ---
>
> Key: HBASE-3134
> URL: https://issues.apache.org/jira/browse/HBASE-3134
> Project: HBase
>  Issue Type: New Feature
>  Components: replication
>Reporter: Jean-Daniel Cryans
>Priority: Minor
>  Labels: replication
> Attachments: HBASE-3134.patch
>
>
> This jira was initially in the scope of HBASE-2201, but was pushed out since 
> it has low value compared to the required effort (and when want to ship 
> 0.90.0 rather soonish).
> We need to design a way to enable/disable replication streams in a 
> determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194373#comment-13194373
 ] 

Zhihong Yu commented on HBASE-5153:
---

Thanks for tracking down the issue, Lars.
If you can upload the latest 5153-trunk.txt to reviewboard first, followed by 
your new patch, that would make it easier for us to see your changes.

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection is aborted in one thread, 
> the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> That fix solved the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection inside 
> HTable should be recreated.
> Actually, there are two approaches to solve this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-01-26 Thread Mubarak Seyed (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mubarak Seyed updated HBASE-4720:
-

Attachment: HBASE-4720.trunk.v7.patch

The attached file (HBASE-4720.trunk.v7.patch) addresses option #1, adding the 
query param /table/row?check=put or /table/row?check=delete.

@Andrew
Can you please review the changes?
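For illustration only, a rough sketch of how a client might invoke the proposed check-and-put endpoint. The URL shape follows the comment above; the host, table, row, payload format, and expected response handling are assumptions, not taken from the patch.

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestCheckAndPutSketch {
  public static void main(String[] args) throws Exception {
    // URL shape from the comment above; host, table, and row are made up.
    URL url = new URL("http://localhost:8080/mytable/myrow?check=put");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "text/xml");
    // Placeholder payload; the real cell XML expected by the patch is not shown here.
    byte[] body = "<CellSet>...</CellSet>".getBytes("UTF-8");
    OutputStream out = conn.getOutputStream();
    out.write(body);
    out.close();
    // How a failed check is reported (status code vs. body) is an assumption.
    System.out.println("HTTP status: " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}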

> Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
> client/server 
> 
>
> Key: HBASE-4720
> URL: https://issues.apache.org/jira/browse/HBASE-4720
> Project: HBase
>  Issue Type: Improvement
>Reporter: Daniel Lord
>Assignee: Mubarak Seyed
> Fix For: 0.94.0
>
> Attachments: HBASE-4720.trunk.v1.patch, HBASE-4720.trunk.v2.patch, 
> HBASE-4720.trunk.v3.patch, HBASE-4720.trunk.v4.patch, 
> HBASE-4720.trunk.v5.patch, HBASE-4720.trunk.v6.patch, 
> HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, HBASE-4720.v3.patch
>
>
> I have several large application/HBase clusters where an application node 
> will occasionally need to talk to HBase from a different cluster.  In order 
> to help ensure some of my consistency guarantees I have a sentinel table that 
> is updated atomically as users interact with the system.  This works quite 
> well for the "regular" hbase client but the REST client does not implement 
> the checkAndPut and checkAndDelete operations.  This exposes the application 
> to some race conditions that have to be worked around.  It would be ideal if 
> the same checkAndPut/checkAndDelete operations could be supported by the REST 
> client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.

2012-01-26 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194381#comment-13194381
 ] 

Hudson commented on HBASE-5282:
---

Integrated in HBase-0.92 #265 (See 
[https://builds.apache.org/job/HBase-0.92/265/])
HBASE-5282 Possible file handle leak with truncated HLog file

jmhsieh : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Possible file handle leak with truncated HLog file.
> ---
>
> Key: HBASE-5282
> URL: https://issues.apache.org/jira/browse/HBASE-5282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.90.5, 0.92.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.94.0, 0.92.1
>
> Attachments: hbase-5282.patch, hbase-5282.v2.patch
>
>
> When debugging hbck, found that the code responsible for this exception can 
> leak open file handles.
> {code}
> 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from 
> hdfs://haus01.
> sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered
> .edits/3211315; minSequenceid=3214658
> 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of 
> region=test5,8
> \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840
> 113e.
> java.io.EOFException
> at java.io.DataInputStream.readByte(DataInputStream.java:250)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
> at org.apache.hadoop.io.Text.readString(Text.java:400)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1437)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1424)
> at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1419)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:57)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194385#comment-13194385
 ] 

Zhihong Yu commented on HBASE-3134:
---

{code}
+  ZKUtil.deleteNode(this.zookeeper, getPeerStateZNode(id));
{code}
This might be confusing, because whether the peer is enabled or disabled is 
represented by the mere presence of the peer state znode. A better way is to 
store the state as data in the corresponding peer state znode.

I also see similarity between enablePeer() and disablePeer(). Is it possible to 
create a single method, changePeerState(String id, ChangeType ct), where 
ChangeType is an enum indicating what to change?

Uploading the patch onto reviewboard would allow other people to give more 
precise reviews.

Thanks
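A small sketch of the single-method shape being suggested, assuming the peer state is written as data into the peer-state znode rather than signalled by the znode's presence; the ZK plumbing is omitted and the names are hypothetical.

{code:java}
/** Hypothetical sketch of the suggested API shape for enabling/disabling a peer. */
public class PeerStateSketch {

  /** What to change the peer to; stored as the znode's data, not by add/delete. */
  public enum ChangeType { ENABLE, DISABLE }

  public void changePeerState(String id, ChangeType ct) {
    byte[] data = ct.name().getBytes();        // e.g. "ENABLE" or "DISABLE"
    writePeerStateZNode(id, data);             // single write instead of create/delete
  }

  // Placeholder for the actual ZKUtil call that sets data on the peer-state znode.
  private void writePeerStateZNode(String id, byte[] data) {
    // e.g. set the data on getPeerStateZNode(id) in the real code.
  }
}
{code}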

> [replication] Add the ability to enable/disable streams
> ---
>
> Key: HBASE-3134
> URL: https://issues.apache.org/jira/browse/HBASE-3134
> Project: HBase
>  Issue Type: New Feature
>  Components: replication
>Reporter: Jean-Daniel Cryans
>Priority: Minor
>  Labels: replication
> Attachments: HBASE-3134.patch
>
>
> This jira was initially in the scope of HBASE-2201, but was pushed out since 
> it has low value compared to the required effort (and when want to ship 
> 0.90.0 rather soonish).
> We need to design a way to enable/disable replication streams in a 
> determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-01-26 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194386#comment-13194386
 ] 

Hadoop QA commented on HBASE-3134:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12511955/HBASE-3134.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 161 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.replication.TestReplicationPeer
  org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/855//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/855//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/855//console

This message is automatically generated.

> [replication] Add the ability to enable/disable streams
> ---
>
> Key: HBASE-3134
> URL: https://issues.apache.org/jira/browse/HBASE-3134
> Project: HBase
>  Issue Type: New Feature
>  Components: replication
>Reporter: Jean-Daniel Cryans
>Priority: Minor
>  Labels: replication
> Attachments: HBASE-3134.patch
>
>
> This jira was initially in the scope of HBASE-2201, but was pushed out since 
> it has low value compared to the required effort (and when want to ship 
> 0.90.0 rather soonish).
> We need to design a way to enable/disable replication streams in a 
> determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Lars Hofhansl (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194388#comment-13194388
 ] 

Lars Hofhansl commented on HBASE-5153:
--

Sure... There's a bit more to this too. resetZooKeeperTrackersWithRetries, on 
its last try, calls setupZookeeperTrackers with aborts allowed, which will call 
resetZooKeeperTrackersWithRetries again, leading to an endless loop. Need to 
think about how to refactor this.

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection is aborted in one thread, 
> the other threads will get a 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> That fix solved the problem of the stale connection not being removed, but the 
> original HTable instance cannot continue to be used; the connection inside 
> HTable should be recreated.
> Actually, there are two approaches to solve this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround.
> 2. On the HBase client side, catch this exception and re-create the connection.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-01-26 Thread Zhihong Yu (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu reassigned HBASE-3134:
-

Assignee: Teruyoshi Zenmyo

> [replication] Add the ability to enable/disable streams
> ---
>
> Key: HBASE-3134
> URL: https://issues.apache.org/jira/browse/HBASE-3134
> Project: HBase
>  Issue Type: New Feature
>  Components: replication
>Reporter: Jean-Daniel Cryans
>Assignee: Teruyoshi Zenmyo
>Priority: Minor
>  Labels: replication
> Attachments: HBASE-3134.patch
>
>
> This jira was initially in the scope of HBASE-2201, but was pushed out since 
> it has low value compared to the required effort (and when want to ship 
> 0.90.0 rather soonish).
> We need to design a way to enable/disable replication streams in a 
> determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3134) [replication] Add the ability to enable/disable streams

2012-01-26 Thread Zhihong Yu (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-3134:
--

Fix Version/s: 0.94.0

> [replication] Add the ability to enable/disable streams
> ---
>
> Key: HBASE-3134
> URL: https://issues.apache.org/jira/browse/HBASE-3134
> Project: HBase
>  Issue Type: New Feature
>  Components: replication
>Reporter: Jean-Daniel Cryans
>Assignee: Teruyoshi Zenmyo
>Priority: Minor
>  Labels: replication
> Fix For: 0.94.0
>
> Attachments: HBASE-3134.patch
>
>
> This jira was initially in the scope of HBASE-2201, but was pushed out since 
> it has low value compared to the required effort (and when want to ship 
> 0.90.0 rather soonish).
> We need to design a way to enable/disable replication streams in a 
> determinate fashion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5186) Add metrics to ThriftServer

2012-01-26 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HBASE-5186:
---

Attachment: HBASE-5186.D1461.5.patch

sc updated the revision "HBASE-5186 [jira] Add metrics to ThriftServer".
Reviewers: dhruba, tedyu, JIRA, heyongqiang

  Remove unnecessary locking in ThriftMetrics

REVISION DETAIL
  https://reviews.facebook.net/D1461

AFFECTED FILES
  src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
  src/main/java/org/apache/hadoop/hbase/thrift/CallQueue.java
  src/main/java/org/apache/hadoop/hbase/thrift/HbaseHandlerMetricsProxy.java
  src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java
  src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java
  src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java


> Add metrics to ThriftServer
> ---
>
> Key: HBASE-5186
> URL: https://issues.apache.org/jira/browse/HBASE-5186
> Project: HBase
>  Issue Type: Improvement
>Reporter: Scott Chen
>Assignee: Scott Chen
> Attachments: HBASE-5186.D1461.1.patch, HBASE-5186.D1461.2.patch, 
> HBASE-5186.D1461.3.patch, HBASE-5186.D1461.4.patch, HBASE-5186.D1461.5.patch
>
>
> It will be useful to have some metrics (queue length, waiting time, 
> processing time ...) similar to Hadoop RPC server. This allows us to monitor 
> system health also provide a tool to diagnose the problem where thrift calls 
> are slow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194399#comment-13194399
 ] 

Zhihong Yu commented on HBASE-4720:
---

@Mubarak:
Thanks for your persistence.

Please also describe the scenarios that you tested in your cluster.

> Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
> client/server 
> 
>
> Key: HBASE-4720
> URL: https://issues.apache.org/jira/browse/HBASE-4720
> Project: HBase
>  Issue Type: Improvement
>Reporter: Daniel Lord
>Assignee: Mubarak Seyed
> Fix For: 0.94.0
>
> Attachments: HBASE-4720.trunk.v1.patch, HBASE-4720.trunk.v2.patch, 
> HBASE-4720.trunk.v3.patch, HBASE-4720.trunk.v4.patch, 
> HBASE-4720.trunk.v5.patch, HBASE-4720.trunk.v6.patch, 
> HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, HBASE-4720.v3.patch
>
>
> I have several large application/HBase clusters where an application node 
> will occasionally need to talk to HBase from a different cluster.  In order 
> to help ensure some of my consistency guarantees I have a sentinel table that 
> is updated atomically as users interact with the system.  This works quite 
> well for the "regular" hbase client but the REST client does not implement 
> the checkAndPut and checkAndDelete operations.  This exposes the application 
> to some race conditions that have to be worked around.  It would be ideal if 
> the same checkAndPut/checkAndDelete operations could be supported by the REST 
> client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-01-26 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194401#comment-13194401
 ] 

Hadoop QA commented on HBASE-4720:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12512066/HBASE-4720.trunk.v7.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated -140 warning 
messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 findbugs.  The patch appears to introduce 161 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
  org.apache.hadoop.hbase.mapred.TestTableMapReduce
  org.apache.hadoop.hbase.io.hfile.TestHFileBlock
  org.apache.hadoop.hbase.mapreduce.TestImportTsv

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/856//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/856//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/856//console

This message is automatically generated.

> Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
> client/server 
> 
>
> Key: HBASE-4720
> URL: https://issues.apache.org/jira/browse/HBASE-4720
> Project: HBase
>  Issue Type: Improvement
>Reporter: Daniel Lord
>Assignee: Mubarak Seyed
> Fix For: 0.94.0
>
> Attachments: HBASE-4720.trunk.v1.patch, HBASE-4720.trunk.v2.patch, 
> HBASE-4720.trunk.v3.patch, HBASE-4720.trunk.v4.patch, 
> HBASE-4720.trunk.v5.patch, HBASE-4720.trunk.v6.patch, 
> HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, HBASE-4720.v3.patch
>
>
> I have several large application/HBase clusters where an application node 
> will occasionally need to talk to HBase from a different cluster.  In order 
> to help ensure some of my consistency guarantees I have a sentinel table that 
> is updated atomically as users interact with the system.  This works quite 
> well for the "regular" hbase client but the REST client does not implement 
> the checkAndPut and checkAndDelete operations.  This exposes the application 
> to some race conditions that have to be worked around.  It would be ideal if 
> the same checkAndPut/checkAndDelete operations could be supported by the REST 
> client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5153) Add retry logic in HConnectionImplementation#resetZooKeeperTrackers

2012-01-26 Thread Jieshan Bean (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194406#comment-13194406
 ] 

Jieshan Bean commented on HBASE-5153:
-

It should not lead to an endless loop unless every retry gets a ZooKeeper 
connection-loss exception. If that exception keeps occurring for a long time, 
ZooKeeper must have some problem, so creating a new ZooKeeper instance would 
already throw an exception. So it won't be an endless loop:
{noformat}
if ((t instanceof KeeperException.SessionExpiredException)
  || (t instanceof KeeperException.ConnectionLossException)) {
try {
  LOG.info("This client just lost it's session with ZooKeeper, trying" +
  " to reconnect.");
  resetZooKeeperTrackersWithRetries();
  LOG.info("Reconnected successfully. This disconnect could have been" +
  " caused by a network partition or a long-running GC pause," +
  " either way it's recommended that you verify your environment.");
  return;
} catch (ZooKeeperConnectionException e) {
  LOG.error("Could not reconnect to ZooKeeper after session" +
  " expiration, aborting");
  t = e;
}
  }
  if (t != null) LOG.fatal(msg, t);
  else LOG.fatal(msg);
  HConnectionManager.deleteStaleConnection(this);
{noformat}

> Add retry logic in HConnectionImplementation#resetZooKeeperTrackers
> ---
>
> Key: HBASE-5153
> URL: https://issues.apache.org/jira/browse/HBASE-5153
> Project: HBase
>  Issue Type: Bug
>  Components: client
>Affects Versions: 0.90.4
>Reporter: Jieshan Bean
>Assignee: Jieshan Bean
> Fix For: 0.94.0, 0.90.6, 0.92.1
>
> Attachments: 5153-92.txt, 5153-trunk.txt, 5153-trunk.txt, 
> HBASE-5153-V2.patch, HBASE-5153-V3.patch, HBASE-5153-V4-90.patch, 
> HBASE-5153-V5-90.patch, HBASE-5153-V6-90-minorchange.patch, 
> HBASE-5153-V6-90.txt, HBASE-5153-trunk-v2.patch, HBASE-5153-trunk.patch, 
> HBASE-5153.patch, TestResults-hbase5153.out
>
>
> HBASE-4893 is related to this issue. From that issue we know that if multiple 
> threads share the same connection and the connection gets aborted in one thread, 
> the other threads will get an 
> "HConnectionManager$HConnectionImplementation@18fb1f7 closed" exception.
> It solves the problem of the stale connection not being removed, but the original 
> HTable instance can no longer be used; the connection inside the HTable has to be 
> recreated.
> There are actually two approaches to solve this:
> 1. In user code, once an IOE is caught, close the connection and re-create the 
> HTable instance. We can use this as a workaround (see the sketch after this 
> description).
> 2. On the HBase client side, catch this exception and re-create the connection.
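
A minimal sketch of workaround 1 (user-side), assuming the 0.90-era client API; the wrapper class and its names are illustrative and not part of any attached patch:
{noformat}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

// Illustrative wrapper: on an IOE, drop the (possibly aborted) cached connection
// and rebuild the HTable before retrying the write once.
public class ReconnectingWriter {
  private final Configuration conf = HBaseConfiguration.create();
  private final String tableName;
  private HTable table;

  public ReconnectingWriter(String tableName) throws IOException {
    this.tableName = tableName;
    this.table = new HTable(conf, tableName);
  }

  public void putWithReconnect(Put put) throws IOException {
    try {
      table.put(put);
    } catch (IOException e) {
      // The shared connection may have been aborted in another thread.
      HConnectionManager.deleteConnection(conf, true);
      table = new HTable(conf, tableName);
      table.put(put);
    }
  }
}
{noformat}
Approach 2 corresponds to the client-side change this issue tracks.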

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)

2012-01-26 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13194408#comment-13194408
 ] 

Zhihong Yu commented on HBASE-4218:
---

TestHFileBlock was reported as failing by Hadoop QA (@26/Jan/12 02:58) before 
the checkin.

Now the test failure appears in every TRUNK build and every Hadoop QA report.

> Data Block Encoding of KeyValues  (aka delta encoding / prefix compression)
> ---
>
> Key: HBASE-4218
> URL: https://issues.apache.org/jira/browse/HBASE-4218
> Project: HBase
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.94.0
>Reporter: Jacek Migdal
>Assignee: Mikhail Bautin
>  Labels: compression
> Fix For: 0.94.0
>
> Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch, 
> 0001-Delta-encoding.patch, 4218-2012-01-14.txt, 4218-v16.txt, 4218.txt, 
> D447.1.patch, D447.10.patch, D447.11.patch, D447.12.patch, D447.13.patch, 
> D447.14.patch, D447.15.patch, D447.16.patch, D447.17.patch, D447.18.patch, 
> D447.19.patch, D447.2.patch, D447.20.patch, D447.21.patch, D447.22.patch, 
> D447.23.patch, D447.24.patch, D447.25.patch, D447.26.patch, D447.3.patch, 
> D447.4.patch, D447.5.patch, D447.6.patch, D447.7.patch, D447.8.patch, 
> D447.9.patch, Data-block-encoding-2011-12-23.patch, 
> Delta-encoding-2012-01-17_11_09_09.patch, 
> Delta-encoding-2012-01-25_00_45_29.patch, 
> Delta-encoding-2012-01-25_16_32_14.patch, 
> Delta-encoding.patch-2011-12-22_11_52_07.patch, 
> Delta-encoding.patch-2012-01-05_15_16_43.patch, 
> Delta-encoding.patch-2012-01-05_16_31_44.patch, 
> Delta-encoding.patch-2012-01-05_16_31_44_copy.patch, 
> Delta-encoding.patch-2012-01-05_18_50_47.patch, 
> Delta-encoding.patch-2012-01-07_14_12_48.patch, 
> Delta-encoding.patch-2012-01-13_12_20_07.patch, 
> Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression for keys. Keys are sorted in the HFile and they are usually very 
> similar, so it is possible to design better compression than general-purpose 
> algorithms provide.
> It is an additional step designed to be used in memory. It aims to save 
> memory in the cache as well as to speed up seeks within HFileBlocks. It should 
> improve performance a lot if key lengths are larger than value lengths; for 
> example, it makes a lot of sense to use it when the value is a counter.
> Initial tests on real data (key length ~ 90 bytes, value length = 8 bytes) 
> show that I could achieve a decent level of compression:
>  key compression ratio: 92%
>  total compression ratio: 85%
>  LZO on the same data: 85%
>  LZO after delta encoding: 91%
> while having much better performance (20-80% faster decompression than LZO). 
> Moreover, it should allow far more efficient seeking, which should improve 
> performance a bit.
> It seems that simple compression algorithms are good enough. Most of the 
> savings are due to prefix compression, int128 encoding, timestamp diffs and 
> bitfields to avoid duplication (a toy prefix-compression sketch follows this 
> description). That way, comparisons of compressed data can be much faster than 
> a byte comparator (thanks to prefix compression and bitfields).
> In order to implement it in HBase, two important design changes will be needed:
> - solidify the interface to the HFileBlock / HFileReader scanner to provide 
> seeking and iterating; accessing the uncompressed buffer in HFileBlock will 
> have bad performance
> - extend comparators to support comparison assuming that the first N bytes are 
> equal (or that some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression
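
To make the prefix-compression part of this concrete, here is a toy sketch (not the actual HBase data block encoder API) showing how sorted keys can be stored as a shared-prefix length plus the remaining suffix:
{noformat}
import java.util.ArrayList;
import java.util.List;

// Toy illustration only: each sorted key is described by how many leading
// bytes it shares with the previous key plus its non-shared suffix, which is
// the core idea behind the key savings quoted above.
public class PrefixCompressionSketch {
  static int commonPrefixLength(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    int i = 0;
    while (i < n && a[i] == b[i]) i++;
    return i;
  }

  public static void main(String[] args) {
    List<byte[]> sortedKeys = new ArrayList<byte[]>();
    sortedKeys.add("row0001/family:qual".getBytes());
    sortedKeys.add("row0002/family:qual".getBytes());
    sortedKeys.add("row0002/family:qual2".getBytes());

    byte[] previous = new byte[0];
    for (byte[] key : sortedKeys) {
      int shared = commonPrefixLength(previous, key);
      // A real encoder would emit (shared, suffix length, suffix bytes)
      // instead of the full key.
      System.out.println("shared=" + shared + " suffix="
          + new String(key, shared, key.length - shared));
      previous = key;
    }
  }
}
{noformat}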

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



