[jira] [Commented] (HBASE-7897) Add support for tags to Cell Interface

2013-05-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649022#comment-13649022
 ] 

ramkrishna.s.vasudevan commented on HBASE-7897:
---

I have things in place, Stack.  I may be able to put up a patch here so that I 
can incorporate any comments on it.

> Add support for tags to Cell Interface
> --
>
> Key: HBASE-7897
> URL: https://issues.apache.org/jira/browse/HBASE-7897
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.95.1
>
>
> Cell Interface has support for mvcc.  The only thing we'd add to Cell in 
> the near future is support for tags, it would seem.  Should be easy to add.  
> Should add it now.  See backing discussion here: 
> https://issues.apache.org/jira/browse/HBASE-7233?focusedCommentId=13573784&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13573784
> Matt outlines what the additions to Cell might look like here:
> https://issues.apache.org/jira/browse/HBASE-7233?focusedCommentId=13531619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13531619
> Would be good to get these in now.
> Marking as 0.96.  Can move later.
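
Purely as a sketch (not from this issue or the linked comments), the kind of 
accessors a tags addition might introduce on Cell could look like:
{code}
// Hypothetical signatures, following the array/offset/length style of the
// existing Cell accessors.
byte[] getTagsArray();   // backing array holding any tags for this cell
int getTagsOffset();     // offset of the first tag byte in that array
int getTagsLength();     // total length of the tags in that array
{code}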

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-8355) BaseRegionObserver#preCompactScannerOpen returns null

2013-05-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649018#comment-13649018
 ] 

Lars Hofhansl edited comment on HBASE-8355 at 5/4/13 5:29 AM:
--

Hmm... Yes. So at least I got that part of my original patch right. :)

I seem to recall that I had a good reason for returning null in 
BaseRegionObserver, but since I can't remember why now and there appears to be 
no good reason for it I agree with Jesse.


  was (Author: lhofhansl):
Hmm... Yes. So at least I got that part of my original patch right. :)

I seem to recall that I had a good reason for return null in 
BaseRegionObserver, but since I can't remember why now and there appears to be 
no good reason for for it I agree with Jesse.

  
> BaseRegionObserver#preCompactScannerOpen returns null
> -
>
> Key: HBASE-8355
> URL: https://issues.apache.org/jira/browse/HBASE-8355
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.98.0, 0.94.8, 0.95.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 8355-0.94.patch, 8355.patch
>
>
> As pointed out in https://github.com/forcedotcom/phoenix/pull/131, 
> BaseRegionObserver#preCompactScannerOpen returns null by default, which hoses 
> any coprocessors down the line, making override of this method mandatory. The 
> fix is trivial, patch coming momentarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8355) BaseRegionObserver#preCompactScannerOpen returns null

2013-05-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649018#comment-13649018
 ] 

Lars Hofhansl commented on HBASE-8355:
--

Hmm... Yes. So at least I got that part of my original patch right. :)

I seem to recall that I had a good reason for returning null in 
BaseRegionObserver, but since I can't remember why now and there appears to be 
no good reason for it, I agree with Jesse.


> BaseRegionObserver#preCompactScannerOpen returns null
> -
>
> Key: HBASE-8355
> URL: https://issues.apache.org/jira/browse/HBASE-8355
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.98.0, 0.94.8, 0.95.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 8355-0.94.patch, 8355.patch
>
>
> As pointed out in https://github.com/forcedotcom/phoenix/pull/131, 
> BaseRegionObserver#preCompactScannerOpen returns null by default, which hoses 
> any coprocessors down the line, making override of this method mandatory. The 
> fix is trivial, patch coming momentarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8420) Port HBASE-6874 Implement prefetching for scanners from 0.89-fb

2013-05-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649011#comment-13649011
 ] 

Lars Hofhansl commented on HBASE-8420:
--

I find it hard to convince myself that the changed code behaves exactly like 
the existing code.

We probably should not do this in 0.94:
{code}
-this.caching = conf.getInt("hbase.client.scanner.caching", 1);
+this.caching = conf.getInt("hbase.client.scanner.caching", 100);
{code}

Lastly, in 0.94 we could use scan attributes to indicate this per scanner in a 
backward compatible way.
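
For illustration, such a per-scan attribute on the 0.94 client could look like 
the sketch below. This is not the attached patch; the attribute name is 
invented, and Scan#setAttribute is just the existing generic mechanism for 
passing per-scan hints without changing the RPC interface.
{code}
// Hypothetical: opt a single scan into prefetching without touching the
// global hbase.client.scanner.caching default.
Scan scan = new Scan(startRow, stopRow);
scan.setCaching(1);  // leave the old default behaviour alone
scan.setAttribute("scanner.prefetching.enabled", Bytes.toBytes(true));
ResultScanner scanner = htable.getScanner(scan);
{code}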


> Port  HBASE-6874  Implement prefetching for scanners from 0.89-fb
> -
>
> Key: HBASE-8420
> URL: https://issues.apache.org/jira/browse/HBASE-8420
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: 0.94-8420_v1.patch, trunk-8420_v1.patch
>
>
> This should help scanner performance.  We should have it in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2013-05-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649002#comment-13649002
 ] 

Lars Hofhansl commented on HBASE-7122:
--

Committed to 0.94 as well.

> Proper warning message when opening a log file with no entries (idle cluster)
> -
>
> Key: HBASE-7122
> URL: https://issues.apache.org/jira/browse/HBASE-7122
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.94.2
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: HBase-7122-94.patch, HBase-7122-94-v2.patch, 
> HBase-7122-95.patch, HBase-7122-95-v2.patch, HBase-7122-95-v3.patch, 
> HBase-7122-95-v4.patch, HBase-7122.patch, HBASE-7122.v2.patch
>
>
> In case the cluster is idle and the log has rolled (offset to 0), 
> replicationSource tries to open the log and gets an EOF exception. This gets 
> printed after every 10 sec until an entry is inserted in it.
> {code}
> 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(487)) - Opening log for replication 
> c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
> 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(543)) - 1 Got: 
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at java.io.DataInputStream.readFully(DataInputStream.java:152)
>   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1486)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1475)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1470)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:55)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
> 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
> (ReplicationSource.java:openReader(547)) - Waited too long for this file, 
> considering dumping
> 2012-11-07 15:47:40,927 DEBUG regionserver.ReplicationSource 
> (ReplicationSource.java:sleepForRetries(562)) - Unable to open a reader, 
> sleeping 1000 times 10
> {code}
> We should reduce the log spewing in this case (or some informative message, 
> based on the offset).
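
A sketch of what quieter handling could look like; the field and method names 
below only approximate the 0.94 ReplicationSource code and are not the 
committed patch:
{code}
// Sketch: decide how loudly to log a reader-open failure. An EOF on a
// just-rolled, zero-length log is expected on an idle cluster, so note it
// quietly instead of dumping the full stack on every retry.
private void logReaderOpenFailure(IOException ioe) {
  if (ioe instanceof EOFException && this.repLogReader.getPosition() == 0) {
    LOG.info("Log " + this.currentPath + " has no entries yet, will retry");
  } else {
    LOG.warn(peerClusterZnode + " Got: ", ioe);
  }
}
{code}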

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8362) Possible MultiGet optimization

2013-05-03 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649001#comment-13649001
 ] 

Lars Hofhansl commented on HBASE-8362:
--

It's on my (long'ish) list of things to do.
It is not 100% clear that we would see a large improvement from this.

If the Gets in a batch are far apart (in terms of blocks), reseek won't buy much; 
if, on the other hand, there is a high probability that the Gets end up on the 
same block, we'd expect to see an improvement.

I am not saying it should not be done, just that I do not think this should be 
generally enabled, but be at the discretion of the caller. I.e. a new API.

If ROW bloom filters worked with scans, this might be a different story. 
Another consideration could be a new scanner API (something like reseekExact), 
which would attempt to reseek to an exact position and indicate whether that 
succeeded or not.
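
A rough server-side sketch of the single-scanner idea (not a patch; signatures 
follow the RegionScanner API loosely, and the exact-match caveat is the 
reseekExact point above):
{code}
// Sketch: serve a sorted batch of Gets for one region from a single scanner,
// reseeking per Get instead of opening a fresh scanner each time.
List<List<Cell>> multiGetViaReseek(HRegion region, List<Get> gets) throws IOException {
  Collections.sort(gets, new Comparator<Get>() {
    public int compare(Get a, Get b) {
      return Bytes.compareTo(a.getRow(), b.getRow());  // reseek only moves forward
    }
  });
  List<List<Cell>> results = new ArrayList<List<Cell>>();
  RegionScanner scanner = region.getScanner(new Scan(gets.get(0)));
  try {
    for (Get get : gets) {
      scanner.reseek(get.getRow());    // cheap when Gets land on the same block
      List<Cell> row = new ArrayList<Cell>();
      scanner.next(row);
      // A plain reseek lands on the next existing cell, so the caller must
      // still verify the returned cells really belong to get.getRow().
      results.add(row);
    }
  } finally {
    scanner.close();
  }
  return results;
}
{code}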


> Possible MultiGet optimization
> --
>
> Key: HBASE-8362
> URL: https://issues.apache.org/jira/browse/HBASE-8362
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>
> Currently MultiGets are executed on a RegionServer in a single thread in a 
> loop that handles each Get separately (opening a scanner, seeking, etc).
> It seems we could optimize this (per region at least) by opening a single 
> scanner and issue a reseek for each Get that was requested.
> I have not tested this yet and no patch, but I would like to solicit feedback 
> on this idea.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7006) [MTTR] Study distributed log splitting to see how we can make it faster

2013-05-03 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-7006:
-

Attachment: hbase-7006-combined-v4.patch

v4 patch incorporating Stack's comments.

> [MTTR] Study distributed log splitting to see how we can make it faster
> ---
>
> Key: HBASE-7006
> URL: https://issues.apache.org/jira/browse/HBASE-7006
> Project: HBase
>  Issue Type: Bug
>  Components: MTTR
>Reporter: stack
>Assignee: Jeffrey Zhong
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: hbase-7006-combined.patch, hbase-7006-combined-v1.patch, 
> hbase-7006-combined-v3.patch, hbase-7006-combined-v4.patch, 
> hbase-7006-combined-v4.patch, LogSplitting Comparison.pdf, 
> ProposaltoimprovelogsplittingprocessregardingtoHBASE-7006-v2.pdf
>
>
> Just saw an interesting issue where a cluster went down hard and 30 nodes had 
> 1700 WALs to replay.  Replay took almost an hour.  It looks like it could run 
> faster; much of the time is spent zk'ing and nn'ing.
> Putting in 0.96 so it gets a look at least.  Can always punt.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3457) Auto-tune some GC settings

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-3457:
--

Status: Open  (was: Patch Available)

Unmarking patch available to trigger discussion.

> Auto-tune some GC settings
> --
>
> Key: HBASE-3457
> URL: https://issues.apache.org/jira/browse/HBASE-3457
> Project: HBase
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.90.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hbase-3457.txt, hbase-3457.txt, hbase-env.sh
>
>
> The settings we ship with aren't really optimal for an actual deployment. We 
> can take a look at some things like /proc/cpuinfo and figure out whether to 
> enable parallel GC, turn off CMSIncrementalMode, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-2600) Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-2600:
--

Status: Open  (was: Patch Available)

Patch is stale, all the code moved, unmarking as available.

> Change how we do meta tables; from tablename+STARTROW+randomid to instead, 
> tablename+ENDROW+randomid
> 
>
> Key: HBASE-2600
> URL: https://issues.apache.org/jira/browse/HBASE-2600
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Alex Newman
> Attachments: 
> 0001-Changed-regioninfo-format-to-use-endKey-instead-of-s.patch, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen.patch, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v2.patch, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v4.patch, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v6.patch, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v7.2.patch, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8.1, 
> 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v9.patch, 
> 0001-HBASE-2600.v10.patch, 0001-HBASE-2600-v11.patch, 2600-trunk-01-17.txt, 
> HBASE-2600+5217-Sun-Mar-25-2012-v3.patch, 
> HBASE-2600+5217-Sun-Mar-25-2012-v4.patch, hbase-2600-root.dir.tgz, jenkins.pdf
>
>
> This is an idea that Ryan and I have been kicking around on and off for a 
> while now.
> If regionnames were made of tablename+endrow instead of tablename+startrow, 
> then in the metatables, doing a search for the region that contains the 
> wanted row, we'd just have to open a scanner using passed row and the first 
> row found by the scan would be that of the region we need (If offlined 
> parent, we'd have to scan to the next row).
> If we redid the meta tables in this format, we'd be using an access that is 
> natural to hbase, a scan as opposed to the perverse, expensive 
> getClosestRowBefore we currently have that has to walk backward in meta 
> finding a containing region.
> This issue is about changing the way we name regions.
> If we were using scans, prewarming client cache would be near costless (as 
> opposed to what we'll currently have to do which is first a 
> getClosestRowBefore and then a scan from the closestrowbefore forward).
> Converting to the new method, we'd have to run a migration on startup 
> changing the content in meta.
> Up to this, the randomid component of a region name has been the timestamp of 
> region creation.   HBASE-2531 "32-bit encoding of regionnames waaay 
> too susceptible to hash clashes" proposes changing the randomid so that it 
> contains actual name of the directory in the filesystem that hosts the 
> region.  If we had this in place, I think it would help with the migration to 
> this new way of doing the meta because as is, the region name in fs is a hash 
> of regionname... changing the format of the regionname would mean we generate 
> a different hash... so we'd need hbase-2531 to be in place before we could do 
> this change.
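
A small illustrative example of the lookup difference (region boundaries and 
region ids invented for illustration):
{code}
// Table 't' with three regions: [ , b), [b, m), [m, )
//
// STARTROW-keyed meta rows:  t,,id1   t,b,id2   t,m,id3
//   locating row "c" needs getClosestRowBefore("t,c"), i.e. walking backwards
//   in meta until we hit t,b,id2
//
// ENDROW-keyed meta rows:    t,b,id1  t,m,id2   t,,id3
//   locating row "c" is a plain forward scan starting at "t,c"; the first row
//   returned is t,m,id2, which is exactly the region [b, m) that contains "c"
//   (the last region's empty end key would still need special-casing)
{code}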

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6581) Build with hadoop.profile=3.0

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-6581:
--

Status: Open  (was: Patch Available)

Patch went a little stale, unmarking as available.

> Build with hadoop.profile=3.0
> -
>
> Key: HBASE-6581
> URL: https://issues.apache.org/jira/browse/HBASE-6581
> Project: HBase
>  Issue Type: Bug
>Reporter: Eric Charles
> Attachments: HBASE-6581-1.patch, HBASE-6581-2.patch, HBASE-6581.diff, 
> HBASE-6581.diff
>
>
> Building trunk with hadoop.profile=3.0 gives exceptions (see [1]) due to 
> change in the hadoop maven modules naming (and also usage of 3.0-SNAPSHOT 
> instead of 3.0.0-SNAPSHOT in hbase-common).
> I can provide a patch that would move most of hadoop dependencies in their 
> respective profiles and will define the correct hadoop deps in the 3.0 
> profile.
> Please tell me if that's ok to go this way.
> Thx, Eric
> [1]
> $ mvn clean install -Dhadoop.profile=3.0
> [INFO] Scanning for projects...
> [ERROR] The build could not read 3 projects -> [Help 1]
> [ERROR]   
> [ERROR]   The project org.apache.hbase:hbase-server:0.95-SNAPSHOT 
> (/d/hbase.svn/hbase-server/pom.xml) has 3 errors
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-common:jar is missing. @ line 655, column 21
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-annotations:jar is missing. @ line 659, column 21
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 663, column 21
> [ERROR]   
> [ERROR]   The project org.apache.hbase:hbase-common:0.95-SNAPSHOT 
> (/d/hbase.svn/hbase-common/pom.xml) has 3 errors
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-common:jar is missing. @ line 170, column 21
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-annotations:jar is missing. @ line 174, column 21
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 178, column 21
> [ERROR]   
> [ERROR]   The project org.apache.hbase:hbase-it:0.95-SNAPSHOT 
> (/d/hbase.svn/hbase-it/pom.xml) has 3 errors
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-common:jar is missing. @ line 220, column 18
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-annotations:jar is missing. @ line 224, column 21
> [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-minicluster:jar is missing. @ line 228, column 21
> [ERROR] 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3680) Publish more metrics about mslab

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-3680:
--

Status: Open  (was: Patch Available)

Patch doesn't apply and needs to be changed apparently, unmarking as available.

> Publish more metrics about mslab
> 
>
> Key: HBASE-3680
> URL: https://issues.apache.org/jira/browse/HBASE-3680
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.1
>Reporter: Jean-Daniel Cryans
>Assignee: Todd Lipcon
> Fix For: 0.92.3
>
> Attachments: hbase-3680.txt, hbase-3680.txt
>
>
> We have been using mslab on all our clusters for a while now and it seems it 
> tends to OOME or send us into GC loops of death a lot more than it used to. 
> For example, one RS with mslab enabled and 7GB of heap died out of OOME this 
> afternoon; it had .55GB in the block cache and 2.03GB in the memstores which 
> doesn't account for much... but it could be that because of mslab a lot of 
> space was lost in those incomplete 2MB blocks and without metrics we can't 
> really tell. Compactions were running at the time of the OOME and I see block 
> cache activity. The average load on that cluster is 531.
> We should at least publish the total size of all those blocks and maybe even 
> take actions based on that (like force flushing).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3778) HBaseAdmin.create doesn't create empty boundary keys

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-3778:
--

Status: Open  (was: Patch Available)

Patch doesn't apply anymore, unmarking as available.

> HBaseAdmin.create doesn't create empty boundary keys
> 
>
> Key: HBASE-3778
> URL: https://issues.apache.org/jira/browse/HBASE-3778
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.90.2
>Reporter: Ted Dunning
> Attachments: HBASE-3778.patch
>
>
> In my ycsb stuff, I have code that looks like this:
> {code}
> String startKey = "user102000";
> String endKey = "user94000";
> admin.createTable(descriptor, startKey.getBytes(), endKey.getBytes(), 
> regions);
> {code}
> The result, however, is a table where the first and last regions have defined 
> first and last keys rather than empty keys.
> The patch I am about to attach fixes this, I think.  I have some worries 
> about other uses of Bytes.split, however, and would like some eyes on this 
> patch.  Perhaps we need a new dialect of split.
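
For illustration, the boundary behaviour being asked for (interior split keys 
below are made up):
{code}
// admin.createTable(descriptor, "user102000".getBytes(), "user94000".getBytes(), 5)
//
// Expected regions, with empty boundary keys at both ends:
//   [ ""         , user102000 )
//   [ user102000 , userAAAAAA )   <- interior splits from Bytes.split(start, end, n)
//   [ userAAAAAA , userBBBBBB )
//   [ userBBBBBB , user94000  )
//   [ user94000  , ""         )
//
// Observed before the patch: the first region started at user102000 and the
// last ended at user94000, so rows sorting outside [start, end) had no region.
{code}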

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5673) The OOM problem of IPC client call cause all handle block

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5673:
--

Status: Open  (was: Patch Available)

Patch doesn't apply anymore, unmarking as available.

> The OOM problem of IPC client call  cause all handle block
> --
>
> Key: HBASE-5673
> URL: https://issues.apache.org/jira/browse/HBASE-5673
> Project: HBase
>  Issue Type: Bug
> Environment: 0.90.6
>Reporter: xufeng
>Assignee: xufeng
> Fix For: 0.92.3
>
> Attachments: HBASE-5673-90.patch, HBASE-5673-90-V2.patch
>
>
> if HBaseClient meets an "unable to create new native thread" exception, the call 
> will never complete because it gets lost in the calls queue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5860) splitlogmanager should not unnecessarily resubmit tasks when zk unavailable

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5860:
--

Status: Open  (was: Patch Available)

The patch doesn't apply to trunk, unmarking as available.

> splitlogmanager should not unnecessarily resubmit tasks when zk unavailable
> ---
>
> Key: HBASE-5860
> URL: https://issues.apache.org/jira/browse/HBASE-5860
> Project: HBase
>  Issue Type: Improvement
>Reporter: Prakash Khemani
>Assignee: Prakash Khemani
> Attachments: 
> 0001-HBASE-5860-splitlogmanager-should-not-unnecessarily-.patch, 
> 0001-HBASE-5860-splitlogmanager-should-not-unnecessarily-.patch
>
>
> (Doesn't really impact the run time or correctness of log splitting)
> say the master has lost connection to zk. splitlogmanager's timeoutmanager 
> will realize that all the tasks that were submitted are still unassigned. It 
> will resubmit those tasks (i.e. create dummy znodes).
> splitlogmanager should realize that the tasks are unassigned but their znodes 
> have not been created.
> 2012-04-20 13:11:20,516 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
> dead splitlog worker msgstore295.snc4.facebook.com,60020,1334948757026
> 2012-04-20 13:11:20,517 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: 
> Scheduling batch of logs to split
> 2012-04-20 13:11:20,517 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
> started splitting logs in 
> [hdfs://msgstore215.snc4.facebook.com:9000/MSGSTORE215-SNC4-HBASE/.logs/msgstore295.snc4.facebook.com,60020,1334948757026-splitting]
> 2012-04-20 13:11:20,565 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
> connection to server msgstore235.snc4.facebook.com/10.30.222.186:2181
> 2012-04-20 13:11:20,566 INFO org.apache.zookeeper.ClientCnxn: Socket 
> connection established to msgstore235.snc4.facebook.com/10.30.222.186:2181, 
> initiating session
> 2012-04-20 13:11:20,575 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
> total tasks = 4 unassigned = 4
> 2012-04-20 13:11:20,576 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: 
> resubmitting unassigned task(s) after timeout
> 2012-04-20 13:11:21,577 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: 
> resubmitting unassigned task(s) after timeout
> 2012-04-20 13:11:21,683 INFO org.apache.zookeeper.ClientCnxn: Unable to read 
> additional data from server sessionid 0x36ccb0f8010002, likely server has 
> closed socket, closing socket connection and attempting reconnect
> 2012-04-20 13:11:21,683 INFO org.apache.zookeeper.ClientCnxn: Unable to read 
> additional data from server sessionid 0x136ccb0f489, likely server has 
> closed socket, closing socket connection and attempting reconnect
> 2012-04-20 13:11:21,786 WARN 
> org.apache.hadoop.hbase.master.SplitLogManager$CreateAsyncCallback: create rc 
> =CONNECTIONLOSS for 
> /hbase/splitlog/hdfs%3A%2F%2Fmsgstore215.snc4.facebook.com%3A9000%2FMSGSTORE215-SNC4-HBASE%2F.logs%2Fmsgstore295.snc4.facebook.com%2C60020%2C1334948757026-splitting%2F10.30.251.186%253A60020.1334951586677
>  retry=3
> 2012-04-20 13:11:21,786 WARN 
> org.apache.hadoop.hbase.master.SplitLogManager$CreateAsyncCallback: create rc 
> =CONNECTIONLOSS for 
> /hbase/splitlog/hdfs%3A%2F%2Fmsgstore215.snc4.facebook.com%3A9000%2FMSGSTORE215-SNC4-HBASE%2F.logs%2Fmsgstore295.snc4.facebook.com%2C60020%2C1334948757026-splitting%2F10.30.251.186%253A60020.1334951920332
>  retry=3

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5835) [hbck] Catch and handle NotServingRegionException when close region attempt fails

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5835:
--

Attachment: HBASE-5835-v2.patch

Patch wasn't applying cleanly without fuzz anymore, here's a new one for the 
build to try.

> [hbck] Catch and handle NotServingRegionException when close region attempt 
> fails
> -
>
> Key: HBASE-5835
> URL: https://issues.apache.org/jira/browse/HBASE-5835
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.90.7, 0.92.2, 0.94.0, 0.95.2
>Reporter: Jonathan Hsieh
> Attachments: HBASE-5835.patch, HBASE-5835-v2.patch
>
>
> Currently, if hbck attempts to close a region and catches a 
> NotServingRegionException, hbck may hang, outputting a stack trace.  Since the 
> goal is to close the region at a particular server, and since it is not 
> serving the region, the region is closed, and we should just warn and eat 
> this exception.
> {code}
> Exception in thread "main" org.apache.hadoop.ipc.RemoteException: 
> org.apache.hadoop.hbase.NotServingRegionException: Received close for 
>  but we are not serving it
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2162)
> at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:771)
> at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
> at $Proxy5.closeRegion(Unknown Source)
> at 
> org.apache.hadoop.hbase.util.HBaseFsckRepair.closeRegionSilentlyAndWait(HBaseFsckRepair.java:165)
> at org.apache.hadoop.hbase.util.HBaseFsck.closeRegion(HBaseFsck.java:1185)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1302)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1065)
> at 
> org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:351)
> at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:370)
> at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3001)
> {code}
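
A minimal sketch of the kind of handling described above (not the attached 
patch; the surrounding call is approximated):
{code}
// Around the closeRegion RPC in HBaseFsckRepair.closeRegionSilentlyAndWait:
// the target server already does not serve the region, which is the end state
// we wanted, so warn and move on instead of dying on the exception.
try {
  rs.closeRegion(region.getRegionName(), false);
} catch (NotServingRegionException nsre) {
  LOG.warn("Region " + region.getRegionNameAsString() + " is not served by "
      + server + "; nothing to close", nsre);
}
{code}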

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-8476) locateRegionInMeta should check the cache before doing the prefetch

2013-05-03 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha resolved HBASE-8476.


Resolution: Fixed
  Assignee: Himanshu Vashishtha  (was: Amitanand Aiyer)

Folded it into HBASE-8346.

> locateRegionInMeta should check the cache before doing the prefetch
> ---
>
> Key: HBASE-8476
> URL: https://issues.apache.org/jira/browse/HBASE-8476
> Project: HBase
>  Issue Type: Bug
>Reporter: Amitanand Aiyer
>Assignee: Himanshu Vashishtha
>Priority: Minor
> Fix For: 0.89-fb, 0.95.2
>
>
> locateRegionInMeta uses a regionLockObject to synchronize all accesses to 
> prefetch the RegionCache.
> {code}
> synchronized (regionLockObject) {
>   // If the parent table is META, we may want to pre-fetch some
>   // region info into the global region cache for this table.
>   if (Bytes.equals(parentTable, HConstants.META_TABLE_NAME) &&
>       (getRegionCachePrefetch(tableName))) {
>     prefetchRegionCache(tableName, row);
>   }
>   // Check the cache again for a hit in case some other thread made the
>   // same query while we were waiting on the lock. If not supposed to
>   // be using the cache, delete any existing cached location so it won't
>   // interfere.
>   if (useCache) {
>     location = getCachedLocation(tableName, row);
>     if (location != null) {
>       return location;
>     }
>   } else {
>     deleteCachedLocation(tableName, row);
>   }
> {code}
>  
> However, for this to be effective, we need to check the cache as soon as we 
> grab the lock; before doing the prefetch. Checking the cache after doing the 
> prefetch does not help the current thread, in case another thread has done 
> the prefetch.
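
A sketch of the reordering being suggested, reusing the names from the snippet 
above (not a patch):
{code}
synchronized (regionLockObject) {
  // Re-check the cache first: another thread may have prefetched or cached
  // this row's location while we were waiting on the lock.
  if (useCache) {
    location = getCachedLocation(tableName, row);
    if (location != null) {
      return location;
    }
  }
  // Only now pay for the prefetch, since the cache really had no answer.
  if (Bytes.equals(parentTable, HConstants.META_TABLE_NAME) &&
      getRegionCachePrefetch(tableName)) {
    prefetchRegionCache(tableName, row);
  }
  // ... rest of the lookup unchanged ...
}
{code}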

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8346) Prefetching .META. rows in case only when useCache is set to true

2013-05-03 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-8346:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Prefetching .META. rows in case only when useCache is set to true
> -
>
> Key: HBASE-8346
> URL: https://issues.apache.org/jira/browse/HBASE-8346
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.95.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
>Priority: Minor
> Fix For: 0.98.0, 0.95.1
>
> Attachments: HBase-8346-v1.patch, HBase-8346-v2.patch, 
> HBase-8346-v3.patch
>
>
> While doing a .META. lookup (HCM#locateRegionInMeta), we also prefetch some 
> other region's info for that table. The usual call to the meta lookup has 
> useCache variable set to true. 
> Currently, it calls prefetch irrespective of the value of the useCache flag:
> {code}
> if (Bytes.equals(parentTable, HConstants.META_TABLE_NAME) &&
> (getRegionCachePrefetch(tableName)))  {
>   prefetchRegionCache(tableName, row);
> }
> {code}
> Later on, if useCache flag is set to false, it deletes the entry for that row 
> from the cache with a forceDeleteCachedLocation() call. This always results 
> in two calls to the .META. table in this case. The useCache variable is set 
> to false in case we are retrying to find a region (regionserver failover).
> This can be verified from the log statements of a client during a 
> regionserver failover. In the example below, the client was connected to 
> a1217 when a1217 got killed. The region in question moved to a1215. The 
> client got this info from a META scan and cached it, but then deleted it from 
> the cache because it wanted the latest info. 
> The result is that even though META provides the latest info, it is still 
> deleted: the client removes a1215.abc.com from its cache even though it is 
> the correct info.
> {code}
> 13/04/15 09:49:12 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Cached location for 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is 
> a1217.abc.com:40020
> 13/04/15 09:49:12 WARN client.ServerCallable: Received exception, tries=1, 
> numRetries=30 message=Connection refused
> 13/04/15 09:49:12 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Removed all cached region locations that map to 
> a1217.abc.com,40020,1365621947381
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = 
> {NAME => 
> 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', 
> STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 
> 40382355b8c45e1338d620c018f8ff6c,}
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Scanning .META. starting at 
> row=t,user7225973201630273569,00 for max=10 rows using 
> hconnection-0x7786df0f
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = 
> {NAME => 
> 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', 
> STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 
> 40382355b8c45e1338d620c018f8ff6c,}
> 13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Cached location for 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is 
> a1215.abc.com:40020
> 13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Removed a1215.abc.com:40020 as a location of 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. for 
> tableName=t from cache
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = 
> {NAME => 
> 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', 
> STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 
> 40382355b8c45e1338d620c018f8ff6c,}
> 13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Cached location for 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is 
> a1215.abc.com:40020
> 13/04/15 09:49:13 WARN client.ServerCallable: Received exception, tries=2, 
> numRetries=30 
> message=org.apache.hadoop.hbase.exceptions.UnknownScannerException: Name: 
> -6313340536390503703, already closed?
> 13/04/15 09:49:13 DEBUG client.ClientScanner: Advancing internal scanner to 
> startKey at 'user760712450403198900'
> {code}
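
The change described above essentially amounts to guarding the prefetch with 
the useCache flag; a short sketch using the names from the snippet in the 
description:
{code}
// Only prefetch when the caller is willing to use cached locations at all;
// on retries (useCache == false) the prefetched entry would be deleted anyway.
if (useCache && Bytes.equals(parentTable, HConstants.META_TABLE_NAME) &&
    getRegionCachePrefetch(tableName)) {
  prefetchRegionCache(tableName, row);
}
{code}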

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8355) BaseRegionObserver#preCompactScannerOpen returns null

2013-05-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648903#comment-13648903
 ] 

Andrew Purtell commented on HBASE-8355:
---

Thanks for looking at this again [~jesse_yates]. I haven't had time. Reassign 
to yourself?

> BaseRegionObserver#preCompactScannerOpen returns null
> -
>
> Key: HBASE-8355
> URL: https://issues.apache.org/jira/browse/HBASE-8355
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.98.0, 0.94.8, 0.95.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 8355-0.94.patch, 8355.patch
>
>
> As pointed out in https://github.com/forcedotcom/phoenix/pull/131, 
> BaseRegionObserver#preCompactScannerOpen returns null by default, which hoses 
> any coprocessors down the line, making override of this method mandatory. The 
> fix is trivial, patch coming momentarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5912) org.apache.hadoop.hbase.ipc.ProtocolSignature.getFingerprint takes significant CPU

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5912:
--

Status: Open  (was: Patch Available)

Unmarking patch as available, it doesn't apply anymore since the code moved 
into sub modules.

> org.apache.hadoop.hbase.ipc.ProtocolSignature.getFingerprint takes 
> significant CPU
> --
>
> Key: HBASE-5912
> URL: https://issues.apache.org/jira/browse/HBASE-5912
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, Performance
>Affects Versions: 0.94.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hbase-5912.txt
>
>
> I ran oprofile on a YCSB client and found that a large percentage of the CPU 
> time was going to this function:
> {noformat}
> 51991 0.4913  25361.jo java  java.lang.reflect.Method[] java.lang.Class.copyMethods(java.lang.reflect.Method[])
> 51384 0.4856  25361.jo java  int org.apache.hadoop.hbase.ipc.ProtocolSignature.getFingerprint(java.lang.reflect.Method)
> 50428 0.4766  25361.jo java  void java.util.Arrays.sort1(int[], int, int)
> {noformat}
> We should introduce a simple cache to avoid this overhead.
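
One shape such a cache could take (a sketch, not the attached patch; 
computeFingerprint stands in for the existing expensive path):
{code}
// Memoize the per-Method fingerprint so the reflective walk over the class's
// methods and the Arrays.sort happen once per Method, not once per RPC call.
private static final ConcurrentHashMap<Method, Integer> FINGERPRINT_CACHE =
    new ConcurrentHashMap<Method, Integer>();

static int getFingerprint(Method method) {
  Integer cached = FINGERPRINT_CACHE.get(method);
  if (cached != null) {
    return cached.intValue();
  }
  int fingerprint = computeFingerprint(method);  // the current uncached logic
  FINGERPRINT_CACHE.putIfAbsent(method, fingerprint);
  return fingerprint;
}
{code}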

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6192) Document ACL matrix in the book

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-6192:
--

Status: Open  (was: Patch Available)

The "patch" will soon be a year old, unmarking as available.

> Document ACL matrix in the book
> ---
>
> Key: HBASE-6192
> URL: https://issues.apache.org/jira/browse/HBASE-6192
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, security
>Affects Versions: 0.94.1, 0.95.2
>Reporter: Enis Soztutar
>Assignee: Laxman
>  Labels: documentaion, security
> Attachments: HBase Security-ACL Matrix.pdf, HBase Security-ACL 
> Matrix.pdf, HBase Security-ACL Matrix.pdf, HBase Security-ACL Matrix.xls, 
> HBase Security-ACL Matrix.xls, HBase Security-ACL Matrix.xls
>
>
> We have an excellent matrix at 
> https://issues.apache.org/jira/secure/attachment/12531252/Security-ACL%20Matrix.pdf
>  for ACL. Once the changes are done, we can adapt that and put it in the 
> book, also add some more documentation about the new authorization features. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6199) Change PENDING_OPEN scope from pre-rpc open to OPENING to just post-rpc open to OPENING

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-6199:
--

Status: Open  (was: Patch Available)

Unmarking the patch as available, went stale.

> Change PENDING_OPEN scope from pre-rpc open to OPENING to just post-rpc open 
> to OPENING
> ---
>
> Key: HBASE-6199
> URL: https://issues.apache.org/jira/browse/HBASE-6199
> Project: HBase
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Attachments: 6199v4.txt, pending_open2.txt, pending_open3.txt, 
> pending_open.txt
>
>
> PENDING_OPEN currently is a murky state.  It's a master in-memory state with 
> no corresponding znode state that sits between the OFFLINE and OPENING states.
> The OFFLINE state is set by the master when it goes to open a region.  
> OPENING is set by the regionserver after it has assumed control of a region and 
> is moving it through the OPENING process.  PENDING_OPEN currently spans the 
> open rpc invocation.  This state is in place pre-open-rpc-invocation, during 
> open-rpc-invocation, and post-rpc-invocation until we get the OPENING 
> callback. That PENDING_OPEN covers this many different conditions effectively 
> makes it unactionable.
> This issue proposes PENDING_OPEN only be in place post-rpc-invocation.  Now 
> its meaning is clear as the space between rpc-open-invocation and our 
> receiving the callback which sets RegionState to OPENING.  PENDING_OPEN 
> becomes actionable too in that if a regionserver dies post 
> rpc-open-invocation, we know that we can reassign the region.
> See 
> https://issues.apache.org/jira/browse/HBASE-6060?focusedCommentId=13292646&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13292646
>  for more discussion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6205) Support an option to keep data of dropped table for some time

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-6205:
--

Status: Open  (was: Patch Available)

Patch available status went stale, unmarking.

> Support an option to keep data of dropped table for some time
> -
>
> Key: HBASE-6205
> URL: https://issues.apache.org/jira/browse/HBASE-6205
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.94.0, 0.95.2
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-6205.patch, HBASE-6205v2.patch, 
> HBASE-6205v3.patch, HBASE-6205v4.patch, HBASE-6205v5.patch
>
>
> A user may drop a table accidentally because of erroneous code or other 
> uncertain reasons.
> Unfortunately, it happened in our environment because one user mixed up the 
> production cluster and the testing cluster.
> So I'd like to suggest: do we need to support an option to keep the data of a 
> dropped table for some time, e.g. 1 day?
> In the patch:
> We make a new dir named .trashtables in the root dir.
> In the DeleteTableHandler, we move files in the dropped table's dir to the 
> trash table dir instead of deleting them directly.
> And we create a new class TrashCleaner which cleans dropped tables once they 
> time out, using a periodic check.
> Default keep time for dropped tables is 1 day, and the check period is 1 hour.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6217) reduce overhead of maintaing get/next size metric

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-6217:
--

Status: Open  (was: Patch Available)

No patch for trunk is available.

> reduce overhead of maintaing get/next size metric
> -
>
> Key: HBASE-6217
> URL: https://issues.apache.org/jira/browse/HBASE-6217
> Project: HBase
>  Issue Type: Improvement
>Reporter: Kannan Muthukkaruppan
>Assignee: M. Chen
>  Labels: patch
> Attachments: jira-6217.patch
>
>
> [Forked off this specific issue as a separate JIRA from HBASE-6066].
> Reduce overhead of "size metric" maintained in StoreScanner.next().
> {code}
> if (metric != null) {
>  HRegion.incrNumericMetric(this.metricNamePrefix + metric,
>copyKv.getLength());
>   }
>   results.add(copyKv);
> {code}
> A single call to next() might fetch a lot of KVs. We can first add up the 
> size of those KVs in a local variable and then, in a finally clause, increment 
> the metric in one shot, rather than updating AtomicLongs for each KV.
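
A sketch of the accumulation described (structure only; the real loop in 
StoreScanner.next is more involved):
{code}
// Add up the size of all KVs returned by this next() call in a local variable
// and update the shared AtomicLong-backed metric once, in the finally clause.
long totalBytesRead = 0;
try {
  for (KeyValue copyKv : kvsForThisCall) {   // kvsForThisCall: illustrative name
    totalBytesRead += copyKv.getLength();
    results.add(copyKv);
  }
} finally {
  if (metric != null) {
    HRegion.incrNumericMetric(this.metricNamePrefix + metric, totalBytesRead);
  }
}
{code}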

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4364) Filters applied to columns not in the selected column list are ignored

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-4364:
--

Status: Open  (was: Patch Available)

Unmarking patch available, the conversation died.

> Filters applied to columns not in the selected column list are ignored
> --
>
> Key: HBASE-4364
> URL: https://issues.apache.org/jira/browse/HBASE-4364
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.0, 0.92.0, 0.90.4
>Reporter: Todd Lipcon
>Priority: Critical
> Attachments: 
> HBASE-4364-failing-test-with-simplest-custom-filter.patch, 
> hbase-4364_trunk.patch, hbase-4364_trunk-v2.patch
>
>
> For a scan, if you select some set of columns using addColumns(), and then 
> apply a SingleColumnValueFilter that restricts the results based on some 
> other columns which aren't selected, then those filter conditions are ignored.
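
A minimal repro sketch of that combination (family and qualifier names 
invented):
{code}
// Select only cf:selected, but filter on cf:other, which is not selected.
// Expected: rows failing the filter are excluded; the bug is that the
// condition on the unselected column is silently ignored.
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("selected"));
scan.setFilter(new SingleColumnValueFilter(
    Bytes.toBytes("cf"), Bytes.toBytes("other"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("expected-value")));
ResultScanner rs = table.getScanner(scan);
{code}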

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7066) Some HMaster coprocessor exceptions are being swallowed in try catch blocks

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-7066:
--

Status: Open  (was: Patch Available)

Looks like stuff was committed already here in this jira and it's marked patch 
available, unmarking to trigger action.

> Some HMaster coprocessor exceptions are being swallowed in try catch blocks
> ---
>
> Key: HBASE-7066
> URL: https://issues.apache.org/jira/browse/HBASE-7066
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, security
>Affects Versions: 0.94.2, 0.95.2
>Reporter: Francis Liu
>Assignee: Francis Liu
>Priority: Critical
> Attachments: 7066-addendum.txt, 7066-addendum-v2.txt, 
> HBASE-7066_94.patch, HBASE-7066_trunk.patch, HBASE-7066_trunk.patch
>
>
> This is causing HMaster.shutdown() and HMaster.stopMaster() to succeed even 
> when an AccessDeniedException is thrown.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8355) BaseRegionObserver#preCompactScannerOpen returns null

2013-05-03 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648892#comment-13648892
 ] 

Jesse Yates commented on HBASE-8355:


Looking at this more, I think Lars' comment doesn't apply:
{quote}
I don't think this is right. The internal scanner passed is not the right 
scanner to use. I am surprised that this does not break many more tests.

In Compactor.compact the result of preCompactScannerOpen need to be null in 
order to have the default action happening.
{quote}

The code that calls this is:
{code}
  public InternalScanner preCompactScannerOpen(Store store, List scanners,
      ScanType scanType, long earliestPutTs, CompactionRequest request)
      throws IOException {
    ObserverContext ctx = null;
    InternalScanner s = null;
    for (RegionEnvironment env : coprocessors) {
      if (env.getInstance() instanceof RegionObserver) {
        ctx = ObserverContext.createAndPrepare(env, ctx);
        try {
          s = ((RegionObserver) env.getInstance()).preCompactScannerOpen(ctx,
              store, scanners, scanType, earliestPutTs, s, request);
        } catch (Throwable e) {
          handleCoprocessorThrowable(env, e);
        }
        if (ctx.shouldComplete()) {
          break;
        }
      }
    }
    return s;
  }
{code}

It starts by passing in null as the InternalScanner and pulls an internal 
scanner back out. So returning the passed InternalScanner - s - should be the 
correct default.

So back to +1 for Andy's original, unless I'm missing something?
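
For reference, the trivial pass-through default being +1'd would look roughly 
like this in BaseRegionObserver (signature abbreviated to match the snippet 
above):
{code}
// Hand back whatever scanner an earlier coprocessor produced (or null,
// meaning "use the default"), instead of unconditionally returning null.
@Override
public InternalScanner preCompactScannerOpen(ObserverContext c, Store store,
    List scanners, ScanType scanType, long earliestPutTs, InternalScanner s,
    CompactionRequest request) throws IOException {
  return s;   // was: return null;
}
{code}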

This led to two other things (that shouldn't necessarily be tackled here):
1. This same logic is present in preFlushScannerOpen and preCompactScannerOpen, 
with the same consequences.
2. There is no easy solution for nested CPs creating new scanners - creating a 
brand new scanner will ignore any previous one, and setting Context#complete will 
ignore later CPs. I'm working on a first-cut solution and will open a new jira 
(or at least blog about it :) when I figure out what it looks like... unless 
someone has a suggestion?

> BaseRegionObserver#preCompactScannerOpen returns null
> -
>
> Key: HBASE-8355
> URL: https://issues.apache.org/jira/browse/HBASE-8355
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Affects Versions: 0.98.0, 0.94.8, 0.95.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: 8355-0.94.patch, 8355.patch
>
>
> As pointed out in https://github.com/forcedotcom/phoenix/pull/131, 
> BaseRegionObserver#preCompactScannerOpen returns null by default, which hoses 
> any coprocessors down the line, making override of this method mandatory. The 
> fix is trivial, patch coming momentarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5578) NPE when regionserver reported server load, caused rs stop.

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5578:
--

Status: Open  (was: Patch Available)

Unmarking patch available, it's a year old.

> NPE when regionserver reported server load, caused rs stop.
> ---
>
> Key: HBASE-5578
> URL: https://issues.apache.org/jira/browse/HBASE-5578
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.92.0
> Environment: centos6.2 hadoop-1.0.0 hbase-0.92.0
>Reporter: Storm Lee
>Priority: Critical
> Fix For: 0.92.3
>
> Attachments: 5589.txt
>
>
> The regionserver log:
> 2012-03-11 11:55:37,808 FATAL 
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
> data3,60020,1331286604591: Unhandled exception: null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.Store.getTotalStaticIndexSize(Store.java:1788)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionLoad(HRegionServer.java:994)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:800)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:776)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:678)
>   at java.lang.Thread.run(Thread.java:662)
> 2012-03-11 11:55:37,808 FATAL 
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: 
> loaded coprocessors are: []
> 2012-03-11 11:55:37,808 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics: 
> requestsPerSecond=1687, numberOfOnlineRegions=37, numberOfStores=37, 
> numberOfStorefiles=144, storefileIndexSizeMB=2, rootIndexSizeKB=2362, 
> totalStaticIndexSizeKB=229808, totalStaticBloomSizeKB=2166296, 
> memstoreSizeMB=2854, readRequestsCount=1352673, writeRequestsCount=113137586, 
> compactionQueueSize=8, flushQueueSize=3, usedHeapMB=7359, maxHeapMB=12999, 
> blockCacheSizeMB=32.31, blockCacheFreeMB=3867.52, blockCacheCount=38, 
> blockCacheHitCount=87713, blockCacheMissCount=22144560, 
> blockCacheEvictedCount=122, blockCacheHitRatio=0%, 
> blockCacheHitCachingRatio=99%, hdfsBlocksLocalityIndex=100
> 2012-03-11 11:55:37,992 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled 
> exception: null

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4951) master process can not be stopped when it is initializing

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-4951:
--

Status: Open  (was: Patch Available)

Unmarking patch available, more than a year old.

> master process can not be stopped when it is initializing
> -
>
> Key: HBASE-4951
> URL: https://issues.apache.org/jira/browse/HBASE-4951
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.90.3
>Reporter: xufeng
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.90.7
>
> Attachments: HBASE-4951_branch.patch, HBASE-4951.patch
>
>
> It is easy to reproduce with the following steps:
> step 1: start the master process (do not start any regionserver process in the cluster).
> The master will wait for regionservers to check in:
> org.apache.hadoop.hbase.master.ServerManager: Waiting on regionserver(s) to 
> checkin
> step 2: stop the master with the shell command bin/hbase master stop
> result: the master process will never die because the catalogTracker.waitForRoot() 
> method blocks until the root region is assigned.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6368) Upgrade Guava for critical performance bug fix

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-6368:
--

Status: Open  (was: Patch Available)

Not sure what is going on here, but it's definitely not "patch available".

> Upgrade Guava for critical performance bug fix
> --
>
> Key: HBASE-6368
> URL: https://issues.apache.org/jira/browse/HBASE-6368
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 6368-trunk.txt
>
>
> The bug is http://code.google.com/p/guava-libraries/issues/detail?id=1055
> See discussion under 'Upgrade to Guava 12.0.1: Performance bug in 
> CacheBuilder/LoadingCache fixed!'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8404) Extra commas in LruBlockCache.logStats

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648884#comment-13648884
 ] 

Hudson commented on HBASE-8404:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8404 Extra commas in LruBlockCache.logStats (Revision 1478965)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java


> Extra commas in LruBlockCache.logStats
> --
>
> Key: HBASE-8404
> URL: https://issues.apache.org/jira/browse/HBASE-8404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.4, 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: stack
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 8404.txt
>
>
> The Stats log line for the LruBlockCache contains extra commas introduced in 
> HBASE-5616:
> {noformat}
> 2013-04-23 18:40:12,774 DEBUG [LRU Statistics #0] 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=9.23 MB, 
> free=500.69 MB, max=509.92 MB, blocks=95, accesses=322822, hits=107003, 
> hitRatio=33.14%, , cachingAccesses=232794, cachingHits=106994, 
> cachingHitsRatio=45.96%, , evictions=0, evicted=12, evictedPerRun=Infinity
> {noformat}
> Marking as "noob" :)
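
For what it's worth, a hypothetical illustration (not the actual LruBlockCache.logStats 
code) of how a stat segment that is empty in some configurations leaves a stray ", ," 
behind when every segment appends its own separator:

{code}
// Hypothetical illustration of the double-comma symptom above.
StringBuilder sb = new StringBuilder();
sb.append("hitRatio=").append("33.14%").append(", ");
String periodHitRatio = "";              // stat segment that is empty in this setup
sb.append(periodHitRatio).append(", "); // its separator is still appended -> ", ,"
sb.append("cachingAccesses=").append(232794);
// prints: hitRatio=33.14%, , cachingAccesses=232794
System.out.println(sb);
{code}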

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648885#comment-13648885
 ] 

Hudson commented on HBASE-7244:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-7244 Provide a command or argument to startup, that formats znodes if 
provided (Revision 1478962)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/bin/hbase
* /hbase/trunk/bin/hbase-cleanup.sh


> Provide a command or argument to startup, that formats znodes if provided
> -
>
> Key: HBASE-7244
> URL: https://issues.apache.org/jira/browse/HBASE-7244
> Project: HBase
>  Issue Type: New Feature
>  Components: Zookeeper
>Affects Versions: 0.94.0
>Reporter: Harsh J
>Assignee: rajeshbabu
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: HBASE-7244_2.patch, HBASE-7244_3.patch, 
> HBASE-7244_4.patch, HBASE-7244_5.patch, HBASE-7244_6.patch, 
> HBASE-7244_7.patch, HBASE-7244.patch
>
>
> Many a time I've had to stop the cluster, clear out ZK, and restart, and I've 
> seen the same instructions handed out to others.
> While this is only a quick (and painful to master) fix, it is certainly nifty 
> for some smaller cluster users, but the process is far too long. Roughly:
> 1. Stop HBase
> 2. Start zkCli.sh and connect to the right quorum
> 3. Find and confirm the HBase parent znode from the configs (/hbase by 
> default)
> 4. Run an "rmr /hbase" in the zkCli.sh shell, or manually delete each znode 
> if on a lower version of ZK.
> 5. Quit zkCli.sh and start HBase again
> Perhaps it would be useful if start-hbase.sh itself accepted a formatZK 
> parameter, such that when you run {{start-hbase.sh -formatZK}}, it does 
> steps 2-4 automatically for you.
> For safety, we could make the formatter code ensure that no HBase instance is 
> actually active, and skip the format process if one is. This is similar to an 
> HDFS NameNode format, which is disallowed if the name directories are locked.
> Would this be a useful addition for administrators? Bigtop too can provide a 
> service subcommand that could do this.
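
Not the patch itself (that lives in bin/hbase and bin/hbase-cleanup.sh), but a minimal 
sketch of what automating steps 2-4 amounts to, using the plain ZooKeeper client API. 
The connect string and parent znode below are placeholders; in practice both come from 
hbase-site.xml:

{code}
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class FormatZkSketch {
  // Recursively delete a znode and everything under it (the "rmr /hbase" of steps 3-4).
  static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
    List<String> children = zk.getChildren(path, false);
    for (String child : children) {
      deleteRecursively(zk, path + "/" + child);
    }
    zk.delete(path, -1);  // -1 matches any znode version
  }

  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);  // placeholder quorum
    try {
      deleteRecursively(zk, "/hbase");  // placeholder parent znode
    } finally {
      zk.close();
    }
  }
}
{code}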

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8430) Cell decoder/scanner/etc. should not hide exceptions

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648882#comment-13648882
 ] 

Hudson commented on HBASE-8430:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8430 Cell decoder/scanner/etc. should not hide exceptions (Revision 
1478656)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/WrongRowIOException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/DoNotRetryIOException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseIOException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseSnapshotException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/PleaseHoldException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/TableInfoMissingException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/StoppedRpcClientException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestPayloadCarryingRpcController.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellScanner.java
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseIOException.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseEncoder.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CodecException.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/data/TestRowDataSearcherRowMiss.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/data/TestRowDataSimple.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


> Cell decoder/scanner/etc. should not hide exceptions
> 
>
> Key: HBASE-8430
> URL: https://issues.apache.org/jira/browse/HBASE-8430
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC, Protobufs
>Affects Versions: 0.95.0
>Reporter: Sergey Shelukhin
>Assignee: stack
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: 8430.txt, 8430v2.txt, 8430v3.txt, 8430v4trunk.txt, 
> 8430v4.txt
>
>
> Cell scanner, base decoder, etc., hide IOException inside runtime exception. 
> This can lead to unexpected behavior because a lot of code only expects 
> IOException. There's no logical justification behind this hiding, so it should 
> be removed before it's too late (the sooner we do it, the fewer throws 
> declarations need to be added).
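
To make the complaint concrete, here is the pattern in question and the direction of 
the fix, sketched with a hypothetical parseCell() helper rather than the real 
decoder/scanner code:

{code}
// Before: the checked exception is smuggled inside a RuntimeException, so
// callers that only catch IOException never see it.
public boolean advance() {
  try {
    return parseCell();                 // hypothetical helper doing the actual I/O
  } catch (IOException ioe) {
    throw new RuntimeException(ioe);
  }
}

// After: declare the checked exception and let it propagate to the caller.
public boolean advance() throws IOException {
  return parseCell();
}
{code}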

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8346) Prefetching .META. rows in case only when useCache is set to true

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648883#comment-13648883
 ] 

Hudson commented on HBASE-8346:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8346 Prefetching .META. rows in case only when useCache is set to 
true (Himanshu) (Revision 1478585)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java


> Prefetching .META. rows in case only when useCache is set to true
> -
>
> Key: HBASE-8346
> URL: https://issues.apache.org/jira/browse/HBASE-8346
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.95.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
>Priority: Minor
> Fix For: 0.98.0, 0.95.1
>
> Attachments: HBase-8346-v1.patch, HBase-8346-v2.patch, 
> HBase-8346-v3.patch
>
>
> While doing a .META. lookup (HCM#locateRegionInMeta), we also prefetch some 
> other regions' info for that table. The usual call to the meta lookup has the 
> useCache variable set to true. 
> Currently, it calls prefetch irrespective of the value of the useCache flag:
> {code}
> if (Bytes.equals(parentTable, HConstants.META_TABLE_NAME) &&
> (getRegionCachePrefetch(tableName)))  {
>   prefetchRegionCache(tableName, row);
> }
> {code}
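
Presumably the fix is as simple as guarding the block above with the flag; a minimal 
sketch (see the attached patches for the actual change):

{code}
if (useCache && Bytes.equals(parentTable, HConstants.META_TABLE_NAME)
    && getRegionCachePrefetch(tableName)) {
  prefetchRegionCache(tableName, row);
}
{code}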
> Later on, if the useCache flag is set to false, it deletes the entry for that 
> row from the cache with a forceDeleteCachedLocation() call. This always results 
> in two calls to the .META. table in this case. The useCache variable is set 
> to false when we are retrying to find a region (regionserver failover).
> This can be verified from the log statements of a client during a 
> regionserver failover. In the example below, the client was connected to 
> a1217 when a1217 got killed. The region in question moved to a1215. The 
> client got this info from a META scan and cached it, but then deleted it from 
> the cache because it wanted the latest info. 
> The result is that even though META provides the latest info, it is still 
> deleted from the cache. Thus, the client deletes the cached location 
> a1215.abc.com even though it is the correct info.
> {code}
> 13/04/15 09:49:12 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Cached location for 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is 
> a1217.abc.com:40020
> 13/04/15 09:49:12 WARN client.ServerCallable: Received exception, tries=1, 
> numRetries=30 message=Connection refused
> 13/04/15 09:49:12 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Removed all cached region locations that map to 
> a1217.abc.com,40020,1365621947381
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = 
> {NAME => 
> 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', 
> STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 
> 40382355b8c45e1338d620c018f8ff6c,}
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Scanning .META. starting at 
> row=t,user7225973201630273569,00 for max=10 rows using 
> hconnection-0x7786df0f
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = 
> {NAME => 
> 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', 
> STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 
> 40382355b8c45e1338d620c018f8ff6c,}
> 13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Cached location for 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is 
> a1215.abc.com:40020
> 13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Removed a1215.abc.com:40020 as a location of 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. for 
> tableName=t from cache
> 13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = 
> {NAME => 
> 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', 
> STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 
> 40382355b8c45e1338d620c018f8ff6c,}
> 13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: 
> Cached location for 
> t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is 
> a1215.abc.com:40020
> 13/04/15 09:49:13 WARN client.ServerCallable: Received exception, tries=2, 
> numRetries=30 
> message=org.apache.hadoop.hbase.exceptions.UnknownScannerException: Name: 
> -6313340536390503703, already closed?
> 13/04/15 09:49:13 DEBUG client.ClientScanner: Advancing internal scanner 

[jira] [Commented] (HBASE-5746) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (0.96)

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648880#comment-13648880
 ] 

Hudson commented on HBASE-5746:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-5746 HFileDataBlockEncoderImpl uses wrong header size when reading 
HFiles with no checksums (0.96) (Revision 1478966)

 Result = FAILURE
sershe : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DataBlockEncodingTool.java


> HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
> checksums (0.96)
> -
>
> Key: HBASE-5746
> URL: https://issues.apache.org/jira/browse/HBASE-5746
> Project: HBase
>  Issue Type: Sub-task
>  Components: io, regionserver
>Reporter: Lars Hofhansl
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: 5720-trunk-v2.txt, HBASE-5746-v0.patch, 
> HBASE-5746-v1.patch, HBASE-5746-v2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8482) TestHBaseFsck#testCheckTableLocks broke; java.lang.AssertionError: expected:<[]> but was:<[EXPIRED_TABLE_LOCK]>

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648881#comment-13648881
 ] 

Hudson commented on HBASE-8482:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8482 TestHBaseFsck#testCheckTableLocks broke; 
java.lang.AssertionError: expected:<[]> but was:<[EXPIRED_TABLE_LOCK]> 
(Revision 1478556)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/Chore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


> TestHBaseFsck#testCheckTableLocks broke; java.lang.AssertionError: 
> expected:<[]> but was:<[EXPIRED_TABLE_LOCK]>
> ---
>
> Key: HBASE-8482
> URL: https://issues.apache.org/jira/browse/HBASE-8482
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: 8482.txt
>
>
> I've been looking into this test failure because I thought it was particular to 
> my rpc hackery.
> What I see is like the subject:
> {code}
> java.lang.AssertionError: expected:<[]> but was:<[EXPIRED_TABLE_LOCK]>
> {code}
> and later in same unit test:
> {code}
> java.lang.AssertionError: expected:<[EXPIRED_TABLE_LOCK]> but 
> was:<[EXPIRED_TABLE_LOCK, EXPIRED_TABLE_LOCK]>
> {code}
> The test creates a write lock and then expires it.  In the subject failure, we 
> are expiring the lock ahead of the time it should be.  Easier for me to 
> reproduce is the case where the second write lock we put in place is not 
> allowed to happen because of the presence of the first lock, EVEN THOUGH IT 
> HAS BEEN JUDGED EXPIRED:
> {code}
> ERROR: Table lock acquire attempt found:[tableName=foo, 
> lockOwner=localhost,6,1, threadId=387, purpose=testCheckTableLocks, 
> isShared=false, createTime=129898749]
> 2013-05-02 00:34:42,715 INFO  [Thread-183] lock.ZKInterProcessLockBase(431): 
> Lock is held by: write-testing utility00
> ERROR: Table lock acquire attempt found:[tableName=foo, 
> lockOwner=localhost,6,1, threadId=349, purpose=testCheckTableLocks, 
> isShared=false, createTime=28506852]
> {code}
> Above, you see the expired lock and then our hbck lock visitor has it that 
> the second lock is expired because it is held by the first lock.
> I can keep looking at this but input would be appreciated.
> It failed in recent trunk build 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-TRUNK/4090/testReport/junit/org.apache.hadoop.hbase.util/TestHBaseFsck/testCheckTableLocks/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8487) Wrong description about regionservers in 2.4. Example configurations

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648878#comment-13648878
 ] 

Hudson commented on HBASE-8487:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8487 Wrong description about regionservers in 2.4. Example 
configurations (Revision 1478895)

 Result = FAILURE
stack : 
Files : 
* /hbase/trunk/src/main/docbkx/configuration.xml


> Wrong description about regionservers in 2.4. Example configurations
> 
>
> Key: HBASE-8487
> URL: https://issues.apache.org/jira/browse/HBASE-8487
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.95.0
>Reporter: Jingguo Yao
>Priority: Minor
> Attachments: HBASE-8487-v1.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> Wrong description about regionservers in "2.4. Example configurations".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8466) Netty messages in the logs

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648879#comment-13648879
 ] 

Hudson commented on HBASE-8466:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8466  Netty messages in the logs (Revision 1478664)

 Result = FAILURE
nkeywal : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java


> Netty messages in the logs
> --
>
> Key: HBASE-8466
> URL: https://issues.apache.org/jira/browse/HBASE-8466
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.0, 0.95.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 8466.v1.patch, 8466.v2.patch
>
>
> We've got this:
> {noformat}
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a] OPEN
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a] BOUND: 0.0.0.0/0.0.0.0:37250
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a, 0.0.0.0/0.0.0.0:37250 => /226.1.1.3:60100] CONNECTED: 
> /226.1.1.3:60100
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a, 0.0.0.0/0.0.0.0:37250 => /226.1.1.3:60100] WRITTEN_AMOUNT: 129
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a, 0.0.0.0/0.0.0.0:37250 :> /226.1.1.3:60100] DISCONNECTED
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a, 0.0.0.0/0.0.0.0:37250 :> /226.1.1.3:60100] UNBOUND
> ATTENTION: The pipeline contains no upstream handlers; discarding: [id: 
> 0x1f79354a, 0.0.0.0/0.0.0.0:37250 :> /226.1.1.3:60100] CLOSED
> {noformat}
> We can fix this by adding an upstream handler that discards the messages 
> without printing them.
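
A minimal sketch of such a handler, assuming the Netty 3.x API that this code path 
uses; the class name is illustrative:

{code}
import org.jboss.netty.channel.ChannelEvent;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Swallows every upstream event so Netty no longer warns that the pipeline
// contains no upstream handlers.
public class SilentUpstreamHandler extends SimpleChannelUpstreamHandler {
  @Override
  public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) {
    // intentionally discard the event without logging it
  }
}
{code}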

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8485) Retry to open a HLog on more exceptions

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648877#comment-13648877
 ] 

Hudson commented on HBASE-8485:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8485 Retry to open a HLog on more exceptions (Revision 1478880)

 Result = FAILURE
jxiang : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogFactory.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java


> Retry to open a HLog on more exceptions 
> 
>
> Key: HBASE-8485
> URL: https://issues.apache.org/jira/browse/HBASE-8485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.98.0, 0.95.1
>
> Attachments: trunk-8485.patch
>
>
> Currently we only retry opening an HLog file in the case of "Cannot obtain block 
> length" (HBASE-8314). We could also retry in the cases of "Could not obtain the 
> last block locations." and "Blocklist for " + src + " has changed!", which are 
> other possible IOException messages I can find when opening a file.
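
A sketch of the kind of message check this implies (the helper name is illustrative; 
see trunk-8485.patch for the real change in HLogFactory):

{code}
// Decide whether an IOException from opening an HLog looks transient enough to
// retry. Message matching is crude, but these are the strings HDFS gives us.
private static boolean isRetriableOpenException(IOException ioe) {
  String msg = ioe.getMessage();
  if (msg == null) {
    return false;
  }
  return msg.contains("Cannot obtain block length")
      || msg.contains("Could not obtain the last block locations")
      || msg.contains("has changed");   // "Blocklist for <src> has changed!"
}
{code}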

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8214) Remove proxy and engine, rely directly on pb generated Service

2013-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648876#comment-13648876
 ] 

Hudson commented on HBASE-8214:
---

Integrated in HBase-TRUNK #4096 (See 
[https://builds.apache.org/job/HBase-TRUNK/4096/])
HBASE-8214 Remove proxy and engine, rely directly on pb generated Service 
(Revision 1478637)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/IpcProtocol.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterAdminProtocol.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterMonitorProtocol.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/MasterProtocol.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdminProtocol.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientProtocol.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionKey.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterAdminKeepAliveConnection.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterMonitorKeepAliveConnection.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BadAuthException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClientRPC.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/ProtobufRpcClientEngine.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/ReflectionCache.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RegionCoprocessorRpcChannel.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RemoteWithExtrasException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClientEngine.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/StoppedRpcClientException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/UnsupportedCellCodecException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/UnsupportedCompressionCodecException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/WrongVersionException.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/security/KerberosInfo.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecurityInfo.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/security/TokenInfo.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestRebalanceAndKillServersTargeted.java
* 
/hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/Integra

[jira] [Updated] (HBASE-5816) Balancer and ServerShutdownHandler concurrently reassign the same region

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5816:
--

Status: Open  (was: Patch Available)

Removing Patch Available status; the conversation is stale.

> Balancer and ServerShutdownHandler concurrently reassign the same region
> 
>
> Key: HBASE-5816
> URL: https://issues.apache.org/jira/browse/HBASE-5816
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.90.6
>Reporter: Maryann Xue
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Attachments: HBASE-5816.patch
>
>
> The first assign thread exits with success after updating the RegionState to 
> PENDING_OPEN, while the second assign follows immediately into "assign" and 
> fails the RegionState check in setOfflineInZooKeeper(). This causes the 
> master to abort.
> In the below case, the two concurrent assigns occurred when AM tried to 
> assign a region to a dying/dead RS, and meanwhile the ShutdownServerHandler 
> tried to assign this region (from the region plan) spontaneously.
> {code}
> 2012-04-17 05:44:57,648 INFO org.apache.hadoop.hbase.master.HMaster: balance 
> hri=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b., 
> src=hadoop05.sh.intel.com,60020,1334544902186, 
> dest=xmlqa-clv16.sh.intel.com,60020,1334612497253
> 2012-04-17 05:44:57,648 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
> region TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. 
> (offlining)
> 2012-04-17 05:44:57,648 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Sent CLOSE to 
> serverName=hadoop05.sh.intel.com,60020,1334544902186, load=(requests=0, 
> regions=0, usedHeap=0, maxHeap=0) for region 
> TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.
> 2012-04-17 05:44:57,666 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling new unassigned 
> node: /hbase/unassigned/fe38fe31caf40b6e607a3e6bbed6404b 
> (region=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.,
>  server=hadoop05.sh.intel.com,60020,1334544902186, state=RS_ZK_REGION_CLOSING)
> 2012-04-17 05:52:58,984 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
> was=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. 
> state=CLOSED, ts=1334612697672, 
> server=hadoop05.sh.intel.com,60020,1334544902186
> 2012-04-17 05:52:58,984 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> master:6-0x236b912e9b3000e Creating (or updating) unassigned node for 
> fe38fe31caf40b6e607a3e6bbed6404b with OFFLINE state
> 2012-04-17 05:52:59,096 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Using pre-existing plan for 
> region TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.; 
> plan=hri=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.,
>  src=hadoop05.sh.intel.com,60020,1334544902186, 
> dest=xmlqa-clv16.sh.intel.com,60020,1334612497253
> 2012-04-17 05:52:59,096 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
> TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. to 
> xmlqa-clv16.sh.intel.com,60020,1334612497253
> 2012-04-17 05:54:19,159 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
> was=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. 
> state=PENDING_OPEN, ts=1334613179096, 
> server=xmlqa-clv16.sh.intel.com,60020,1334612497253
> 2012-04-17 05:54:59,033 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. to 
> serverName=xmlqa-clv16.sh.intel.com,60020,1334612497253, load=(requests=0, 
> regions=0, usedHeap=0, maxHeap=0), trying to assign elsewhere instead; retry=0
> java.net.SocketTimeoutException: Call to /10.239.47.87:60020 failed on socket 
> timeout exception: java.net.SocketTimeoutException: 12 millis timeout 
> while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/10.239.47.89:41302 
> remote=/10.239.47.87:60020]
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:805)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:778)
> at 
> org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:283)
> at $Proxy7.openRegion(Unknown Source)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:573)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1127)
> at 
> org.apache.hadoop.hbase.master.

[jira] [Commented] (HBASE-8420) Port HBASE-6874 Implement prefetching for scanners from 0.89-fb

2013-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648867#comment-13648867
 ] 

Hadoop QA commented on HBASE-8420:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581739/trunk-8420_v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5553//console

This message is automatically generated.

> Port  HBASE-6874  Implement prefetching for scanners from 0.89-fb
> -
>
> Key: HBASE-8420
> URL: https://issues.apache.org/jira/browse/HBASE-8420
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: 0.94-8420_v1.patch, trunk-8420_v1.patch
>
>
> This should help scanner performance.  We should have it in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8015) Support for Namespaces

2013-05-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648860#comment-13648860
 ] 

Ted Yu commented on HBASE-8015:
---

@Francis:
Can you add new tests for this feature?
The new tests should include both unit tests and integration tests.

> Support for Namespaces
> --
>
> Key: HBASE-8015
> URL: https://issues.apache.org/jira/browse/HBASE-8015
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_draft_94.patch, Namespace Design.pdf
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8015) Support for Namespaces

2013-05-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648848#comment-13648848
 ] 

Ted Yu commented on HBASE-8015:
---

Using dot ('.') as the separator is intuitive. Would using some other character 
make sense so that the migration effort is lower?

For security, should each namespace have its own permission settings?

For enforcing namespace quotas, that would be implemented in a follow-on JIRA, 
right?

> Support for Namespaces
> --
>
> Key: HBASE-8015
> URL: https://issues.apache.org/jira/browse/HBASE-8015
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-8015_draft_94.patch, Namespace Design.pdf
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7958) Statistics per-column family per-region

2013-05-03 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648842#comment-13648842
 ] 

Jonathan Hsieh commented on HBASE-7958:
---

bumping from 0.95.1, read it if makes it in.

> Statistics per-column family per-region
> ---
>
> Key: HBASE-7958
> URL: https://issues.apache.org/jira/browse/HBASE-7958
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.95.2
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: hbase-7958_rough-cut-v0.patch, 
> hbase-7958-v0-parent.patch, hbase-7958-v0.patch
>
>
> Originating from this discussion on the dev list: 
> http://search-hadoop.com/m/coDKU1urovS/Simple+stastics+per+region/v=plain
> Essentially, we should have built-in statistics gathering for HBase tables. 
> This allows clients to have a better understanding of the distribution of 
> keys within a table and a given region. We could also surface this 
> information via the UI.
> There are a couple of different proposals from the email; the overview is this:
> We add in something on compactions that gathers stats about the keys that are 
> written and then we surface them to a table.
> The possible proposals include:
> *How to implement it?*
> # Coprocessors - 
> ** advantage - it easily plugs in and people could pretty easily add their 
> own statistics. 
> ** disadvantage - UI elements would also require this, we get into dependent 
> loading, which leads down the OSGi path. Also, these CPs need to be installed 
> _after_ all the other CPs on compaction to ensure they see exactly what gets 
> written (doable, but a pain)
> # Built into HBase as a custom scanner
> ** advantage - always goes in the right place and no need to muck about with 
> loading CPs etc.
> ** disadvantage - less pluggable, at least for the initial cut
> *Where do we store data?*
> # .META.
> ** advantage - it's an existing table, so we can jam it into another CF there
> ** disadvantage - this would make META much larger, possibly leading to 
> splits AND will make it much harder for other processes to read the info
> # A new stats table
> ** advantage - cleanly separates out the information from META
> ** disadvantage - should use a 'system table' idea to prevent accidental 
> deletion, manipulation by arbitrary clients, but still allow clients to read 
> it.
> Once we have this framework, we can then move to an actual implementation of 
> various statistics.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7958) Statistics per-column family per-region

2013-05-03 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648842#comment-13648842
 ] 

Jonathan Hsieh edited comment on HBASE-7958 at 5/3/13 10:21 PM:


bumping from 0.95.1, re-add it if it makes it in.

  was (Author: jmhsieh):
bumping from 0.95.1, read it if makes it in.
  
> Statistics per-column family per-region
> ---
>
> Key: HBASE-7958
> URL: https://issues.apache.org/jira/browse/HBASE-7958
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.95.2
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: hbase-7958_rough-cut-v0.patch, 
> hbase-7958-v0-parent.patch, hbase-7958-v0.patch
>
>
> Originating from this discussion on the dev list: 
> http://search-hadoop.com/m/coDKU1urovS/Simple+stastics+per+region/v=plain
> Essentially, we should have built-in statistics gathering for HBase tables. 
> This allows clients to have a better understanding of the distribution of 
> keys within a table and a given region. We could also surface this 
> information via the UI.
> There are a couple of different proposals from the email; the overview is this:
> We add in something on compactions that gathers stats about the keys that are 
> written and then we surface them to a table.
> The possible proposals include:
> *How to implement it?*
> # Coprocessors - 
> ** advantage - it easily plugs in and people could pretty easily add their 
> own statistics. 
> ** disadvantage - UI elements would also require this, we get into dependent 
> loading, which leads down the OSGi path. Also, these CPs need to be installed 
> _after_ all the other CPs on compaction to ensure they see exactly what gets 
> written (doable, but a pain)
> # Built into HBase as a custom scanner
> ** advantage - always goes in the right place and no need to muck about with 
> loading CPs etc.
> ** disadvantage - less pluggable, at least for the initial cut
> *Where do we store data?*
> # .META.
> ** advantage - it's an existing table, so we can jam it into another CF there
> ** disadvantage - this would make META much larger, possibly leading to 
> splits AND will make it much harder for other processes to read the info
> # A new stats table
> ** advantage - cleanly separates out the information from META
> ** disadvantage - should use a 'system table' idea to prevent accidental 
> deletion, manipulation by arbitrary clients, but still allow clients to read 
> it.
> Once we have this framework, we can then move to an actual implementation of 
> various statistics.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8089) Add type support

2013-05-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8089:


Fix Version/s: 0.95.2
 Assignee: Nick Dimiduk

> Add type support
> 
>
> Key: HBASE-8089
> URL: https://issues.apache.org/jira/browse/HBASE-8089
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.95.2
>
> Attachments: HBASE-8089-types.txt, HBASE-8089-types.txt, 
> HBASE-8089-types.txt, HBASE-8089-types.txt
>
>
> This proposal outlines an improvement to HBase that provides for a set of 
> types, above and beyond the existing "byte-bucket" strategy. This is intended 
> to reduce user-level duplication of effort, provide better support for 
> 3rd-party integration, and provide an overall improved experience for 
> developers using HBase.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7958) Statistics per-column family per-region

2013-05-03 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-7958:
--

Fix Version/s: (was: 0.95.1)

> Statistics per-column family per-region
> ---
>
> Key: HBASE-7958
> URL: https://issues.apache.org/jira/browse/HBASE-7958
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.95.2
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Attachments: hbase-7958_rough-cut-v0.patch, 
> hbase-7958-v0-parent.patch, hbase-7958-v0.patch
>
>
> Originating from this discussion on the dev list: 
> http://search-hadoop.com/m/coDKU1urovS/Simple+stastics+per+region/v=plain
> Essentially, we should have built-in statistics gathering for HBase tables. 
> This allows clients to have a better understanding of the distribution of 
> keys within a table and a given region. We could also surface this 
> information via the UI.
> There are a couple of different proposals from the email; the overview is this:
> We add in something on compactions that gathers stats about the keys that are 
> written and then we surface them to a table.
> The possible proposals include:
> *How to implement it?*
> # Coprocessors - 
> ** advantage - it easily plugs in and people could pretty easily add their 
> own statistics. 
> ** disadvantage - UI elements would also require this, we get into dependent 
> loading, which leads down the OSGi path. Also, these CPs need to be installed 
> _after_ all the other CPs on compaction to ensure they see exactly what gets 
> written (doable, but a pain)
> # Built into HBase as a custom scanner
> ** advantage - always goes in the right place and no need to muck about with 
> loading CPs etc.
> ** disadvantage - less pluggable, at least for the initial cut
> *Where do we store data?*
> # .META.
> ** advantage - it's an existing table, so we can jam it into another CF there
> ** disadvantage - this would make META much larger, possibly leading to 
> splits AND will make it much harder for other processes to read the info
> # A new stats table
> ** advantage - cleanly separates out the information from META
> ** disadvantage - should use a 'system table' idea to prevent accidental 
> deletion, manipulation by arbitrary clients, but still allow clients to read 
> it.
> Once we have this framework, we can then move to an actual implementation of 
> various statistics.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8084) Sundry mapreduce improvements

2013-05-03 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-8084:


Fix Version/s: (was: 0.95.1)

Moving out of 0.95 for now.

> Sundry mapreduce improvements
> -
>
> Key: HBASE-8084
> URL: https://issues.apache.org/jira/browse/HBASE-8084
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>
> Umbrella issue for a handful of improvements to the mapreduce infrastructure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7910) Dont use reflection for security

2013-05-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-7910:
-

Fix Version/s: (was: 0.95.1)
   0.98.0

> Dont use reflection for security
> 
>
> Key: HBASE-7910
> URL: https://issues.apache.org/jira/browse/HBASE-7910
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: noob
> Fix For: 0.98.0
>
>
> security.User class uses reflection so that HBase can work with older 
> Hadoop's not having security. Now that we require 1.x, or 0.23 or 2.x, all 
> Hadoop versions have security code. We can get rid of most of the User class. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7910) Dont use reflection for security

2013-05-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648834#comment-13648834
 ] 

Enis Soztutar commented on HBASE-7910:
--

moving out of 0.95. 

> Dont use reflection for security
> 
>
> Key: HBASE-7910
> URL: https://issues.apache.org/jira/browse/HBASE-7910
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>  Labels: noob
> Fix For: 0.98.0
>
>
> security.User class uses reflection so that HBase can work with older 
> Hadoop's not having security. Now that we require 1.x, or 0.23 or 2.x, all 
> Hadoop versions have security code. We can get rid of most of the User class. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8143) HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM

2013-05-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648833#comment-13648833
 ] 

Enis Soztutar commented on HBASE-8143:
--

Raising this to critical. Will get back to this for sure. 

> HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM 
> --
>
> Key: HBASE-8143
> URL: https://issues.apache.org/jira/browse/HBASE-8143
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.98.0, 0.94.7, 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: OpenFileTest.java
>
>
> We've run into an issue with HBase 0.94 on Hadoop 2 with SSR turned on: the 
> memory usage of the HBase process grows to 7g on an -Xmx3g heap, and after some 
> time this causes OOM for the RSs. 
> Upon further investigation, I've found that we end up with 200 regions, 
> each having 3-4 store files open. Under Hadoop 2 SSR, BlockReaderLocal 
> allocates DirectBuffers, unlike HDFS 1 where there is no direct 
> buffer allocation. 
> It seems that there are no guards against the memory used by local buffers in 
> HDFS 2, and having a large number of open files causes multiple GB of memory 
> to be consumed by the RS process. 
> This issue is to further investigate what is going on: whether we can limit 
> the memory usage in HDFS or HBase, and/or document the setup. 
> Possible mitigation scenarios are: 
>  - Turn off SSR for Hadoop 2
>  - Ensure that there is enough unallocated memory for the RS based on 
> expected # of store files
>  - Ensure that there is lower number of regions per region server (hence 
> number of open files)
> Stack trace:
> {code}
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1560)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1439)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1380)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:449)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$500(MemStoreFlusher.java:63)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:632)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
> at 
> org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
> at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
> at 
> org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
> at 
> org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2209)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlus

[jira] [Updated] (HBASE-8143) HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM

2013-05-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-8143:
-

Priority: Critical  (was: Major)

> HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM 
> --
>
> Key: HBASE-8143
> URL: https://issues.apache.org/jira/browse/HBASE-8143
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2
>Affects Versions: 0.98.0, 0.94.7, 0.95.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: OpenFileTest.java
>
>
> We've run into an issue with HBase 0.94 on Hadoop 2 with SSR turned on: the 
> memory usage of the HBase process grows to 7g on an -Xmx3g heap, and after some 
> time this causes OOMs for the RSs. 
> Upon further investigation, I've found out that we end up with 200 regions, 
> each having 3-4 store files open. Under hadoop2 SSR, BlockReaderLocal 
> allocates DirectBuffers, which is unlike HDFS 1 where there is no direct 
> buffer allocation. 
> It seems that there are no guards against the memory used by local buffers in 
> hdfs 2, and having a large number of open files causes multiple GB of memory 
> to be consumed from the RS process. 
> This issue is to further investigate what is going on, whether we can limit 
> the memory usage in HDFS or HBase, and/or document the setup. 
> Possible mitigation scenarios are: 
>  - Turn off SSR for Hadoop 2
>  - Ensure that there is enough unallocated memory for the RS based on 
> expected # of store files
>  - Ensure that there is lower number of regions per region server (hence 
> number of open files)
> Stack trace:
> {code}
> org.apache.hadoop.hbase.DroppedSnapshotException: region: 
> IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1560)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1439)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1380)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:449)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$500(MemStoreFlusher.java:63)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:632)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
> at 
> org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
> at 
> org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
> at 
> org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
> at 
> org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
> at 
> org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
> at 
> org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
> at 
> org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
> at 
> org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2209)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1541)
> {code}
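As a rough editorial sketch (not part of the report above), the arithmetic behind the blow-up is easy to reproduce: each open BlockReaderLocal keeps a pooled direct buffer alive, and direct memory is capped by -XX:MaxDirectMemorySize rather than -Xmx, so many open store files quietly add up. The buffer size below is an assumed illustration, not the value HDFS actually uses.

{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Run with e.g. -XX:MaxDirectMemorySize=256m to reproduce the same
// "java.lang.OutOfMemoryError: Direct buffer memory" seen in the stack trace.
public class DirectBufferGrowth {
  public static void main(String[] args) {
    final int bufferSize = 1024 * 1024;          // assume ~1 MB per open reader
    List<ByteBuffer> openReaders = new ArrayList<>();
    for (int openFiles = 1; ; openFiles++) {
      // Each "store file" held open keeps its direct buffer alive, so the
      // footprint grows roughly as (regions x store files) x buffer size.
      openReaders.add(ByteBuffer.allocateDirect(bufferSize));
      if (openFiles % 100 == 0) {
        long mb = (openFiles * (long) bufferSize) / (1024 * 1024);
        System.out.println(openFiles + " buffers ~ " + mb + " MB of direct memory");
      }
    }
  }
}
{code}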

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HBASE-8420) Port HBASE-6874 Implement prefetching for scanners from 0.89-fb

2013-05-03 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-8420:
---

Status: Patch Available  (was: Open)

> Port  HBASE-6874  Implement prefetching for scanners from 0.89-fb
> -
>
> Key: HBASE-8420
> URL: https://issues.apache.org/jira/browse/HBASE-8420
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: 0.94-8420_v1.patch, trunk-8420_v1.patch
>
>
> This should help scanner performance.  We should have it in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8405) Add more custom options to how ClusterManager runs commands

2013-05-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-8405:


Attachment: HBASE-8405-take2-v0.patch

here's the patch

> Add more custom options to how ClusterManager runs commands
> ---
>
> Key: HBASE-8405
> URL: https://issues.apache.org/jira/browse/HBASE-8405
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.94.8, 0.95.1
>
> Attachments: HBASE-8405-take2-v0.patch, HBASE-8405-v0.patch, 
> HBASE-8405-v1.patch
>
>
> You may want to run yet more custom commands (such as su as some local user) 
> depending on test setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8405) Add more custom options to how ClusterManager runs commands

2013-05-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-8405:


Status: Patch Available  (was: Reopened)

> Add more custom options to how ClusterManager runs commands
> ---
>
> Key: HBASE-8405
> URL: https://issues.apache.org/jira/browse/HBASE-8405
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.94.8, 0.95.1
>
> Attachments: HBASE-8405-take2-v0.patch, HBASE-8405-v0.patch, 
> HBASE-8405-v1.patch
>
>
> You may want to run yet more custom commands (such as su as some local user) 
> depending on test setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7997) One last set of class moves before 0.95 goes out

2013-05-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7997:
--

Priority: Major  (was: Critical)

Lowering priority, as suggested by group discussion.

> One last set of class moves before 0.95 goes out
> 
>
> Key: HBASE-7997
> URL: https://issues.apache.org/jira/browse/HBASE-7997
> Project: HBase
>  Issue Type: Task
>  Components: Usability
>Reporter: stack
> Fix For: 0.95.1
>
>
> hbase-server depends on hbase-client.  A lot of exceptions are in hbase-client 
> that are thrown by the hbase-server.  Should these be in a common place 
> instead of in hbase-client explicitly?  Say in hbase-common?  The client move 
> put all of our exceptions into an exception package (apparently this is my 
> fault).  Is this a good idea?  How many of these exceptions can we put beside 
> the place where they are thrown?
> This issue is about spending a few hours looking at class locations before 
> 0.95 goes out.
> Any other ideas?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5923) Cleanup checkAndXXX logic

2013-05-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-5923:
--

Priority: Major  (was: Critical)

Lowering priority, as suggested by group discussion.

> Cleanup checkAndXXX logic
> -
>
> Key: HBASE-5923
> URL: https://issues.apache.org/jira/browse/HBASE-5923
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, regionserver
>Reporter: Lars Hofhansl
> Fix For: 0.95.1
>
> Attachments: 5923-0.94.txt, 5923-trunk.txt
>
>
> 1. the checkAnd{Put|Delete} method that takes a CompareOP is not exposed via 
> HTable[Interface].
> 2. there is unnecessary duplicate code in the check{Put|Delete} code in 
> HRegionServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8405) Add more custom options to how ClusterManager runs commands

2013-05-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648811#comment-13648811
 ] 

Sergey Shelukhin commented on HBASE-8405:
-

I think it should be alright in this jira... I am superseding the current patch 
here completely

> Add more custom options to how ClusterManager runs commands
> ---
>
> Key: HBASE-8405
> URL: https://issues.apache.org/jira/browse/HBASE-8405
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.94.8, 0.95.1
>
> Attachments: HBASE-8405-v0.patch, HBASE-8405-v1.patch
>
>
> You may want to run yet more custom commands (such as su as some local user) 
> depending on test setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8184) Remove thrift2 from 0.95 and trunk

2013-05-03 Thread Lars George (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648810#comment-13648810
 ] 

Lars George commented on HBASE-8184:


Oh, also, removing Thrift2 would imply that no one is using it, which I think 
is unlikely, though I have not asked around. But simply voting here and 
dropping it without checking user impact is bad form. My 2 cents.

> Remove thrift2 from 0.95 and trunk
> --
>
> Key: HBASE-8184
> URL: https://issues.apache.org/jira/browse/HBASE-8184
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: stack
> Attachments: 8184.txt
>
>
> thrift2 is what our thrift interface should be.  It is unfinished though and 
> without an owner.  While in place, it prompts "why a thrift2 and a thrift1?" 
> questions.  Meantime, thrift1 is what folks use and it is getting bug fixes.  
> Suggest we remove thrift2 till it gets carried beyond thrift1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5746) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (0.96)

2013-05-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-5746:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to 95 and trunk

> HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no 
> checksums (0.96)
> -
>
> Key: HBASE-5746
> URL: https://issues.apache.org/jira/browse/HBASE-5746
> Project: HBase
>  Issue Type: Sub-task
>  Components: io, regionserver
>Reporter: Lars Hofhansl
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: 5720-trunk-v2.txt, HBASE-5746-v0.patch, 
> HBASE-5746-v1.patch, HBASE-5746-v2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8184) Remove thrift2 from 0.95 and trunk

2013-05-03 Thread Lars George (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648807#comment-13648807
 ] 

Lars George commented on HBASE-8184:


-1 on removal. I am +1 for deprecating Thrift1 and fixing Thrift2 up. Not much 
is missing. I am willing to investigate and spend time on it, as otherwise we 
carry on with that Thrift1 monster. 

> Remove thrift2 from 0.95 and trunk
> --
>
> Key: HBASE-8184
> URL: https://issues.apache.org/jira/browse/HBASE-8184
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: stack
> Attachments: 8184.txt
>
>
> thrift2 is what our thrift interface should be.  It is unfinished though and 
> without an owner.  While in place, it prompts "why a thrift2 and a thrift1?" 
> questions.  Meantime, thrift1 is what folks use and it is getting bug fixes.  
> Suggest we remove thrift2 till it gets carried beyond thrift1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-8491) Fixing the TestHeapSizes.

2013-05-03 Thread Manukranth Kolloju (JIRA)
Manukranth Kolloju created HBASE-8491:
-

 Summary: Fixing the TestHeapSizes.
 Key: HBASE-8491
 URL: https://issues.apache.org/jira/browse/HBASE-8491
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.89-fb
Reporter: Manukranth Kolloju
Priority: Trivial
 Fix For: 0.89-fb


Accounting for the extra references added. Did an absolute count of non-static 
variables and updated accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7244:
-

   Resolution: Fixed
Fix Version/s: (was: 0.94.8)
   Status: Resolved  (was: Patch Available)

Committed to 0.95 and trunk.  Adding scripts to bin is not responsible for 
hanging tests.

> Provide a command or argument to startup, that formats znodes if provided
> -
>
> Key: HBASE-7244
> URL: https://issues.apache.org/jira/browse/HBASE-7244
> Project: HBase
>  Issue Type: New Feature
>  Components: Zookeeper
>Affects Versions: 0.94.0
>Reporter: Harsh J
>Assignee: rajeshbabu
>Priority: Critical
> Fix For: 0.98.0, 0.95.1
>
> Attachments: HBASE-7244_2.patch, HBASE-7244_3.patch, 
> HBASE-7244_4.patch, HBASE-7244_5.patch, HBASE-7244_6.patch, 
> HBASE-7244_7.patch, HBASE-7244.patch
>
>
> Many a time I've had to, and have seen instructions thrown around, to stop the 
> cluster, clear out ZK, and restart.
> While this is only a quick (and painful to master) fix, it is certainly nifty 
> for some smaller cluster users, but the process is far too long, roughly:
> 1. Stop HBase
> 2. Start zkCli.sh and connect to the right quorum
> 3. Find and ensure the HBase parent znode from the configs (/hbase only by 
> default)
> 4. Run an "rmr /hbase" in the zkCli.sh shell, or manually delete each znode 
> if on a lower version of ZK.
> 5. Quit zkCli.sh and start HBase again
> Perhaps it may be useful if start-hbase.sh itself accepted a formatZK 
> parameter, such that when you do a {{start-hbase.sh -formatZK}} it does 
> steps 2-4 automatically for you.
> For safety, we could make the formatter code ensure that no HBase instance is 
> actually active, and skip the format process if it is. Similar to a HDFS 
> NameNode's format, which would disallow if the name directories are locked.
> Would this be a useful addition for administrators? Bigtop too can provide a 
> service subcommand that could do this.
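As an editorial aside, steps 2-4 boil down to a recursive delete of the parent znode. Below is a minimal sketch with the plain ZooKeeper client API; the quorum string and the /hbase parent are assumptions, and a real formatZK option would also have to verify that no HBase instance is live before deleting anything.

{code}
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class FormatZkSketch {
  // Recursively delete a znode and everything under it (the "rmr /hbase" step).
  static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
    List<String> children = zk.getChildren(path, false);
    for (String child : children) {
      deleteRecursively(zk, path + "/" + child);
    }
    zk.delete(path, -1);  // -1 means "any version"
  }

  public static void main(String[] args) throws Exception {
    // Assumed quorum and parent znode; a real tool would read these from hbase-site.xml
    // and wait for the SyncConnected event before issuing requests.
    ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 30000, new Watcher() {
      @Override public void process(WatchedEvent event) { }
    });
    if (zk.exists("/hbase", false) != null) {
      deleteRecursively(zk, "/hbase");
    }
    zk.close();
  }
}
{code}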

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-8404) Extra commas in LruBlockCache.logStats

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-8404.
--

   Resolution: Fixed
Fix Version/s: (was: 0.94.8)
 Assignee: stack

Committed to 0.95 and trunk. 

> Extra commas in LruBlockCache.logStats
> --
>
> Key: HBASE-8404
> URL: https://issues.apache.org/jira/browse/HBASE-8404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.4, 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: stack
> Fix For: 0.98.0, 0.95.1
>
> Attachments: 8404.txt
>
>
> The Stats log line for the LruBlockCache contains extra commas introduced in 
> HBASE-5616:
> {noformat}
> 2013-04-23 18:40:12,774 DEBUG [LRU Statistics #0] 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=9.23 MB, 
> free=500.69 MB, max=509.92 MB, blocks=95, accesses=322822, hits=107003, 
> hitRatio=33.14%, , cachingAccesses=232794, cachingHits=106994, 
> cachingHitsRatio=45.96%, , evictions=0, evicted=12, evictedPerRun=Infinity
> {noformat}
> Marking as "noob" :)
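For illustration only (this is not the committed patch), one way to avoid the stray ", ," is to join only the populated fields instead of concatenating fixed separators, for example:

{code}
import java.util.StringJoiner;

public class LogStatsSketch {
  // Joining populated fields avoids ever emitting an empty field between commas.
  static String statsLine(long total, long free, long max, long blocks,
                          long accesses, long hits, double hitRatio) {
    StringJoiner line = new StringJoiner(", ", "Stats: ", "");
    line.add("total=" + total + " MB");
    line.add("free=" + free + " MB");
    line.add("max=" + max + " MB");
    line.add("blocks=" + blocks);
    line.add("accesses=" + accesses);
    line.add("hits=" + hits);
    line.add(String.format("hitRatio=%.2f%%", hitRatio));
    return line.toString();
  }

  public static void main(String[] args) {
    System.out.println(statsLine(9, 500, 509, 95, 322822, 107003, 33.14));
  }
}
{code}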

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8462) Custom timestamps should not be allowed to be negative

2013-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648774#comment-13648774
 ] 

Hadoop QA commented on HBASE-8462:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581105/hbase-8462_v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.thrift.TestThriftServer
  org.apache.hadoop.hbase.regionserver.wal.TestHLogFiltering

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5550//console

This message is automatically generated.

> Custom timestamps should not be allowed to be negative
> --
>
> Key: HBASE-8462
> URL: https://issues.apache.org/jira/browse/HBASE-8462
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: hbase-8462_v1.patch
>
>
> Client-supplied timestamps should not be allowed to be negative, otherwise 
> unpredictable results will follow. Especially since we are encoding the ts 
> using Bytes.toBytes(long), negative timestamps are sorted after positive ones. 
> Plus, the new PB messages define timestamps as uint64. 
> Credit goes to Huned Lokhandwala for reporting this.
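To make the sort-order point concrete, here is a small standalone demonstration (using java.nio rather than HBase's Bytes, which I believe encodes longs the same big-endian way): a negative timestamp encodes with the high bit set, so under unsigned lexicographic comparison it sorts after every positive timestamp.

{code}
import java.nio.ByteBuffer;

public class NegativeTimestampOrder {
  // Big-endian encoding of a long, comparable to Bytes.toBytes(long).
  static byte[] encode(long ts) {
    return ByteBuffer.allocate(8).putLong(ts).array();
  }

  // Lexicographic compare treating each byte as unsigned, like HBase key comparison.
  static int compareUnsigned(byte[] a, byte[] b) {
    for (int i = 0; i < a.length; i++) {
      int x = a[i] & 0xff, y = b[i] & 0xff;
      if (x != y) return x - y;
    }
    return 0;
  }

  public static void main(String[] args) {
    byte[] positive = encode(1367550000000L);  // a normal epoch-millis timestamp
    byte[] negative = encode(-1L);             // a negative client-supplied timestamp
    // Prints a positive number: the negative timestamp compares as larger,
    // i.e. it sorts after every positive one.
    System.out.println(compareUnsigned(negative, positive));
  }
}
{code}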

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4050) Update HBase metrics framework to metrics2 framework

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4050:
-

Priority: Major  (was: Critical)

Just waiting on doc.  All else is in.

> Update HBase metrics framework to metrics2 framework
> 
>
> Key: HBASE-4050
> URL: https://issues.apache.org/jira/browse/HBASE-4050
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.90.4
> Environment: Java 6
>Reporter: Eric Yang
>Assignee: Elliott Clark
> Fix For: 0.95.1
>
> Attachments: 4050-metrics-v2.patch, 4050-metrics-v3.patch, 
> HBASE-4050-0.patch, HBASE-4050-1.patch, HBASE-4050-2.patch, 
> HBASE-4050-3.patch, HBASE-4050-5.patch, HBASE-4050-6.patch, 
> HBASE-4050-7.patch, HBASE-4050-8_1.patch, HBASE-4050-8.patch, HBASE-4050.patch
>
>
> Metrics Framework has been marked deprecated in Hadoop 0.20.203+ and 0.22+, 
> and it might get removed in future Hadoop release.  Hence, HBase needs to 
> revise the dependency of MetricsContext to use Metrics2 framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided

2013-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648765#comment-13648765
 ] 

Hadoop QA commented on HBASE-7244:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581655/HBASE-7244_7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5549//console

This message is automatically generated.

> Provide a command or argument to startup, that formats znodes if provided
> -
>
> Key: HBASE-7244
> URL: https://issues.apache.org/jira/browse/HBASE-7244
> Project: HBase
>  Issue Type: New Feature
>  Components: Zookeeper
>Affects Versions: 0.94.0
>Reporter: Harsh J
>Assignee: rajeshbabu
>Priority: Critical
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: HBASE-7244_2.patch, HBASE-7244_3.patch, 
> HBASE-7244_4.patch, HBASE-7244_5.patch, HBASE-7244_6.patch, 
> HBASE-7244_7.patch, HBASE-7244.patch
>
>
> Many a time I've had to, and have seen instructions thrown around, to stop the 
> cluster, clear out ZK, and restart.
> While this is only a quick (and painful to master) fix, it is certainly nifty 
> for some smaller cluster users, but the process is far too long, roughly:
> 1. Stop HBase
> 2. Start zkCli.sh and connect to the right quorum
> 3. Find and ensure the HBase parent znode from the configs (/hbase only by 
> default)
> 4. Run an "rmr /hbase" in the zkCli.sh shell, or manually delete each znode 
> if on a lower version of ZK.
> 5. Quit zkCli.sh and start HBase again
> Perhaps it may be useful if start-hbase.sh itself accepted a formatZK 
> parameter, such that when you do a {{start-hbase.sh -formatZK}} it does 
> steps 2-4 automatically for you.
> For safety, we could make the formatter code ensure that no HBase instance is 
> actually active, and skip the format process if it is. Similar to a HDFS 
> NameNode's format, which would disallow if the name directories are locked.
> Would this be a useful addition for administrators? Bigtop too can provide a 
> service subcommand that could do this.

[jira] [Resolved] (HBASE-57) [hbase] Master should allocate regions to regionservers based upon data locality and rack awareness

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-57?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-57.


Resolution: Duplicate

Fixed by the stochastic balancer in trunk/0.95

> [hbase] Master should allocate regions to regionservers based upon data 
> locality and rack awareness
> ---
>
> Key: HBASE-57
> URL: https://issues.apache.org/jira/browse/HBASE-57
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 0.2.0
>Reporter: stack
>Assignee: Li Chongxin
>  Labels: gsoc
>
> Currently, regions are assigned to regionservers based off a basic loading 
> attribute.  A factor to include in the assignment calculation is the location 
> of the region in hdfs; i.e. servers hosting region replicas.  If the cluster 
> is such that regionservers are being run on the same nodes as those running 
> hdfs, then ideally the regionserver for a particular region should be running 
> on the same server that hosts a region replica.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-4794) Altering a tables that splits can hold the command for the CatalogJanitor sleep time

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-4794.
---

Resolution: Invalid

Table locking takes care of this, closing.

> Altering a tables that splits can hold the command for the CatalogJanitor 
> sleep time
> 
>
> Key: HBASE-4794
> URL: https://issues.apache.org/jira/browse/HBASE-4794
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0
>Reporter: Jean-Daniel Cryans
> Fix For: 0.92.3
>
>
> In AssignmentManager.getReopenStatus, it calls a version of 
> MetaReader.getTableRegions that sets excludeOfflinedSplitParents to false 
> meaning that the offline parents are returned. What this means is that if one 
> of them was already closed before the alter command was issued (and I believe 
> there are a few other cases) then the alter will hang until the 
> CatalogJanitor sweeps the parent .META. row.
> Since the CJ sleep time is 5 minutes, the worst case scenario is an alter 
> that takes almost 5 minutes.
> Here's an example:
> {quote}
> 925/948 regions updated.
> 920/943 regions updated.
> 913/934 regions updated.
> 912/928 regions updated.
> 912/928 regions updated.
> (5 minutes later)
> 912/928 regions updated.
> 912/928 regions updated.
> 905/918 regions updated.
> 897/906 regions updated.
> 891/892 regions updated.
> 891/891 regions updated.
> Done.
> {quote}
> I can confirm with the log that 37 parent regions were cleaned up.
> Also it's pretty nice to see how the number fluctuates up and down :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-5086) Reopening a region on a RS can leave it in PENDING_OPEN

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-5086.
---

Resolution: Invalid

This issue is so old it's probably fixed or failing differently in the latest 
versions, closing.

> Reopening a region on a RS can leave it in PENDING_OPEN
> ---
>
> Key: HBASE-5086
> URL: https://issues.apache.org/jira/browse/HBASE-5086
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0
>Reporter: Jean-Daniel Cryans
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.3
>
>
> I got this twice during the same test.
> If the region servers are slow enough and you run an online alter, it's 
> possible for the RS to change the znode status to CLOSED and have the master 
> send an OPEN before the region server is able to remove the region from it's 
> list of RITs.
> This is what the master sees:
> {quote}
> 011-12-21 22:24:09,498 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
> region test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f. 
> (offlining)
> 2011-12-21 22:24:09,498 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> master:62003-0x134589d3db033f7 Creating unassigned node for 
> 43123e2e3fc83ec25fe2a76b4f09077f in a CLOSING state
> 2011-12-21 22:24:09,524 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Sent CLOSE to 
> sv4r25s44,62023,1324494325099 for region 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f.
> 2011-12-21 22:24:15,656 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Handling 
> transition=RS_ZK_REGION_CLOSED, server=sv4r25s44,62023,1324494325099, 
> region=43123e2e3fc83ec25fe2a76b4f09077f
> 2011-12-21 22:24:15,656 DEBUG 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler: Handling CLOSED 
> event for 43123e2e3fc83ec25fe2a76b4f09077f
> 2011-12-21 22:24:15,656 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
> was=test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f. 
> state=CLOSED, ts=1324506255629, server=sv4r25s44,62023,1324494325099
> 2011-12-21 22:24:15,656 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> master:62003-0x134589d3db033f7 Creating (or updating) unassigned node for 
> 43123e2e3fc83ec25fe2a76b4f09077f with OFFLINE state
> 2011-12-21 22:24:15,663 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Found an existing plan for 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f. destination 
> server is + sv4r25s44,62023,1324494325099
> 2011-12-21 22:24:15,663 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Using pre-existing plan for 
> region test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f.; 
> plan=hri=test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f., 
> src=, dest=sv4r25s44,62023,1324494325099
> 2011-12-21 22:24:15,663 DEBUG 
> org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f. to 
> sv4r25s44,62023,1324494325099
> 2011-12-21 22:24:15,664 ERROR 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment in: 
> sv4r25s44,62023,1324494325099 due to 
> org.apache.hadoop.hbase.regionserver.RegionAlreadyInTransitionException: 
> Received:OPEN for the 
> region:test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f. ,which 
> we are already trying to CLOSE.
> {quote}
> After that the master abandons.
> And the region server:
> {quote}
> 2011-12-21 22:24:09,523 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Received close region: 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f.
> 2011-12-21 22:24:09,523 DEBUG 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler: Processing 
> close of test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f.
> 2011-12-21 22:24:09,524 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Closing test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f.: 
> disabling compactions & flushes
> 2011-12-21 22:24:09,524 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> Running close preflush of 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f.
> 2011-12-21 22:24:09,524 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Started memstore flush for 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f., current 
> region memstore size 40.5m
> 2011-12-21 22:24:09,524 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Finished snapshotting 
> test1,db6db6b4,1324501004642.43123e2e3fc83ec25fe2a76b4f09077f., commencing 
> wait for mvcc, flushsize=42482936
> 2011-12-21 22:24:13,368 DEBUG org.apache.hadoop.hbase.regionserver.Store: 
> Renaming flushed file at 
> hdfs://sv4r11s38:9

[jira] [Commented] (HBASE-5349) Automagically tweak global memstore and block cache sizes based on workload

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648756#comment-13648756
 ] 

Jean-Daniel Cryans commented on HBASE-5349:
---

If anyone is looking for a good jira to solve, this is one.

> Automagically tweak global memstore and block cache sizes based on workload
> ---
>
> Key: HBASE-5349
> URL: https://issues.apache.org/jira/browse/HBASE-5349
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.92.0
>Reporter: Jean-Daniel Cryans
>
> Hypertable does a neat thing where it changes the size given to the CellCache 
> (our MemStores) and Block Cache based on the workload. If you need an image, 
> scroll down at the bottom of this link: 
> http://www.hypertable.com/documentation/architecture/
> That'd be one less thing to configure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-5386) [usability] Soft limit for eager region splitting of young tables

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-5386.
---

Resolution: Duplicate

We actually did this in 0.94 where we split early when a RS has a few regions 
from a table.

> [usability] Soft limit for eager region splitting of young tables
> -
>
> Key: HBASE-5386
> URL: https://issues.apache.org/jira/browse/HBASE-5386
> Project: HBase
>  Issue Type: New Feature
>  Components: Usability
>Reporter: Jean-Daniel Cryans
>
> Coming out of HBASE-2375, we need new functionality much like hypertable's 
> where we would have a lower split size for new tables and it would grow up to 
> a certain hard limit. This helps usability in different ways:
>  - With that we can set the default split size much higher and users will 
> still have good data distribution
>  - No more messing with force splits
>  - Not mandatory to pre-split your table in order to get good out of the box 
> performance
> The way Doug Judd described how it works for them, they start with a low 
> value and then double it every time it splits. For example if we started with 
> a soft size of 32MB and a hard size of 2GB, it wouldn't be until you have 64 
> regions that you hit the ceiling.
> On the implementation side, we could add a new qualifier in .META. that has 
> that soft limit. When that field doesn't exist, this feature doesn't kick in. 
> It would be written by the region servers after a split and by the master 
> when the table is created with 1 region.
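As an editorial aside, the doubling arithmetic from the example is easy to check. A toy calculation with the 32 MB / 2 GB numbers from the description (they are illustrative, not proposed defaults) lands on exactly 64 regions before the ceiling:

{code}
public class EagerSplitSizes {
  public static void main(String[] args) {
    long softMb = 32;         // starting soft split size from the example
    final long hardMb = 2048; // hard limit from the example
    int regions = 1;
    while (softMb < hardMb) {
      // Each round of splits doubles both the effective split size
      // and the number of regions.
      softMb *= 2;
      regions *= 2;
      System.out.println("split size " + softMb + " MB after growing to " + regions + " regions");
    }
    // Ends at 2048 MB with 64 regions, matching the description.
  }
}
{code}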

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-5556) The JRuby jar we're shipping has a readline problem on some OS

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-5556.
---

Resolution: Won't Fix

Haven't seen this issue in a while, closing.

> The JRuby jar we're shipping has a readline problem on some OS
> --
>
> Key: HBASE-5556
> URL: https://issues.apache.org/jira/browse/HBASE-5556
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0
>Reporter: Jean-Daniel Cryans
> Fix For: 0.92.3
>
>
> I started seeing this problem on our Ubuntu servers since 0.92.0: ^H isn't 
> detected correctly anymore in the readline rb version that's shipped with 
> jruby 1.6.5.
> It works when I use the 1.6.0 jar.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable

2013-05-03 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7541:
---

Assignee: (was: Himanshu Vashishtha)

> Convert all tests that use HBaseTestingUtility.createMultiRegions to 
> HBA.createTable
> 
>
> Key: HBASE-7541
> URL: https://issues.apache.org/jira/browse/HBASE-7541
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>
> Like I discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} 
> should disappear and not come back. There's about 25 different places in the 
> code that rely on it that need to be changed the same way I changed 
> TestReplication.
> Perfect for someone that wants to get started with HBase dev :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable

2013-05-03 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648750#comment-13648750
 ] 

Himanshu Vashishtha commented on HBASE-7541:


No, not right now.

> Convert all tests that use HBaseTestingUtility.createMultiRegions to 
> HBA.createTable
> 
>
> Key: HBASE-7541
> URL: https://issues.apache.org/jira/browse/HBASE-7541
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>
> Like I discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} 
> should disappear and not come back. There's about 25 different places in the 
> code that rely on it that need to be changed the same way I changed 
> TestReplication.
> Perfect for someone that wants to get started with HBase dev :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8420) Port HBASE-6874 Implement prefetching for scanners from 0.89-fb

2013-05-03 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648749#comment-13648749
 ] 

Jimmy Xiang commented on HBASE-8420:


The trunk patch was on RB: https://reviews.apache.org/r/10934/

> Port  HBASE-6874  Implement prefetching for scanners from 0.89-fb
> -
>
> Key: HBASE-8420
> URL: https://issues.apache.org/jira/browse/HBASE-8420
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: 0.94-8420_v1.patch, trunk-8420_v1.patch
>
>
> This should help scanner performance.  We should have it in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6417) hbck merges .META. regions if there's an old leftover

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-6417.
---

Resolution: Won't Fix

I'm not working with the clusters that had the issue anymore and it doesn't 
look like people hit this issue anyways. Closing.

> hbck merges .META. regions if there's an old leftover
> -
>
> Key: HBASE-6417
> URL: https://issues.apache.org/jira/browse/HBASE-6417
> Project: HBase
>  Issue Type: Bug
>Reporter: Jean-Daniel Cryans
> Attachments: hbck.log
>
>
> Trying to see what caused HBASE-6310, one of the things I figured is that the 
> bad .META. row is actually one from the time that we were permitting meta 
> splitting and that folder had just been staying there for a while.
> So I tried to recreate the issue with -repair and it merged my good .META. 
> region with the one that's 3 years old that also has the same start key. I 
> ended up with a brand new .META. region!
> I'll be attaching the full log in a separate file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6990) Pretty print TTL

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648738#comment-13648738
 ] 

Jean-Daniel Cryans commented on HBASE-6990:
---

[~kevin.odell], still planning on fixing this?

> Pretty print TTL
> 
>
> Key: HBASE-6990
> URL: https://issues.apache.org/jira/browse/HBASE-6990
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: Kevin Odell
>Priority: Minor
>
> I've seen a lot of users getting confused by the TTL configuration and I 
> think that if we just pretty printed it, it would solve most of the issues. 
> For example, let's say a user wanted to set a TTL of 90 days. That would be 
> 7776000. But let's say that it was typo'd to 77760000 instead; that gives you 
> 900 days!
> So when we print the TTL we could do something like "x days, x hours, x 
> minutes, x seconds (real_ttl_value)". This would also help people when they 
> use ms instead of seconds as they would see really big values in there.
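A rough sketch of the suggested pretty-printing, assuming the TTL value is in seconds as configured; the helper name is made up for illustration:

{code}
public class PrettyPrintTtl {
  // Render a TTL given in seconds as "x days x hours x minutes x seconds (raw)".
  static String humanReadableTtl(long ttlSeconds) {
    long days = ttlSeconds / 86400;
    long hours = (ttlSeconds % 86400) / 3600;
    long minutes = (ttlSeconds % 3600) / 60;
    long seconds = ttlSeconds % 60;
    return days + " days " + hours + " hours " + minutes + " minutes "
        + seconds + " seconds (" + ttlSeconds + ")";
  }

  public static void main(String[] args) {
    System.out.println(humanReadableTtl(7776000L));   // the intended 90 days
    System.out.println(humanReadableTtl(77760000L));  // the typo'd value: 900 days
  }
}
{code}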

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648734#comment-13648734
 ] 

Jean-Daniel Cryans commented on HBASE-7541:
---

[~himan...@cloudera.com] still working on a fix?

> Convert all tests that use HBaseTestingUtility.createMultiRegions to 
> HBA.createTable
> 
>
> Key: HBASE-7541
> URL: https://issues.apache.org/jira/browse/HBASE-7541
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Himanshu Vashishtha
>
> Like I discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} 
> should disappear and not come back. There's about 25 different places in the 
> code that rely on it that need to be changed the same way I changed 
> TestReplication.
> Perfect for someone that wants to get started with HBase dev :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-7613) Default hbase.regionserver.checksum.verify to true now that HDFS-3429 was committed

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans resolved HBASE-7613.
---

Resolution: Duplicate

[~enis] opened HBASE-8322 and did work there, so closing this one.

> Default hbase.regionserver.checksum.verify to true now that HDFS-3429 was 
> committed
> ---
>
> Key: HBASE-7613
> URL: https://issues.apache.org/jira/browse/HBASE-7613
> Project: HBase
>  Issue Type: Bug
>Reporter: Jean-Daniel Cryans
> Attachments: HBASE-7613.patch
>
>
> HDFS-3429 was committed to Hadoop trunk and 2.0 branch, so eventually we'll 
> be able to turn HBase checksums on by default without requiring short circuit 
> reads.
> I don't expect this to be committed for 0.96, this is more like a reminder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8420) Port HBASE-6874 Implement prefetching for scanners from 0.89-fb

2013-05-03 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-8420:
---

Attachment: trunk-8420_v1.patch

The patch is a little different from the original one in 0.89-fb.  The reason 
is that we have coprocessor support and the scanner logic is a little 
different.  We have pb too. 

> Port  HBASE-6874  Implement prefetching for scanners from 0.89-fb
> -
>
> Key: HBASE-8420
> URL: https://issues.apache.org/jira/browse/HBASE-8420
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: 0.94-8420_v1.patch, trunk-8420_v1.patch
>
>
> This should help scanner performance.  We should have it in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8420) Port HBASE-6874 Implement prefetching for scanners from 0.89-fb

2013-05-03 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-8420:
---

Attachment: 0.94-8420_v1.patch

In the 0.94 version, Scan is not touched since that would break compatibility. So 
prefetching can only be globally enabled or disabled.  I have run all unit tests 
with prefetching enabled.  In the patch, it is disabled by default though.

> Port  HBASE-6874  Implement prefetching for scanners from 0.89-fb
> -
>
> Key: HBASE-8420
> URL: https://issues.apache.org/jira/browse/HBASE-8420
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: 0.94-8420_v1.patch
>
>
> This should help scanner performance.  We should have it in trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5826) Improve sync of HLog edits

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5826:
-

Fix Version/s: (was: 0.95.1)

Moving out of 0.95.  Not being worked on.  Later.

> Improve sync of HLog edits
> --
>
> Key: HBASE-5826
> URL: https://issues.apache.org/jira/browse/HBASE-5826
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Todd Lipcon
> Attachments: 5826.txt, 5826-v2.txt, 5826-v3.txt, 5826-v4.txt, 
> 5826-v5.txt
>
>
> HBASE-5782 solved the correctness issue for the sync of HLog edits.
> Todd provided a patch that would achieve higher throughput.
> This JIRA is a continuation of Todd's work submitted there.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5162) Basic client pushback mechanism

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5162:
-

Fix Version/s: (was: 0.95.1)

Important issue but won't be done for 0.95

> Basic client pushback mechanism
> ---
>
> Key: HBASE-5162
> URL: https://issues.apache.org/jira/browse/HBASE-5162
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.92.0
>Reporter: Jean-Daniel Cryans
> Attachments: java_HBASE-5162.patch
>
>
> The current blocking we do when we are close to some limits (memstores over 
> the multiplier factor, too many store files, global memstore memory) is bad, 
> too coarse and confusing. After hitting HBASE-5161, it really becomes obvious 
> that we need something better.
> I did a little brainstorm with Stack, we came up quickly with two solutions:
>  - Send some exception to the client, like OverloadedException, that's thrown 
> when some situation happens like getting past the low memory barrier. It 
> would be thrown when the client gets a handler and does some check while 
> putting or deleting. The client would treat this as a retryable exception but 
> ideally wouldn't check .META. for a new location. It could be fancy and have 
> multiple levels of pushback, like send the exception to 25% of the clients, 
> and then go up if the situation persists. Should be "easy" to implement but 
> we'll be using a lot more IO to send the payload over and over again (but at 
> least it wouldn't sit in the RS's memory).
>  - Send a message alongside a successful put or delete to tell the client to 
> slow down a little, this way we don't have to do back and forth with the 
> payload between the client and the server. It's a cleaner (I think) but more 
> involved solution.
> In every case the RS should do very obvious things to notify the operators of 
> this situation, through logs, web UI, metrics, etc.
> Other ideas?
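As an editorial illustration of the first option only: the exception class and the way a client might catch it below are hypothetical, not existing HBase API. The point is simply that the client backs off and retries against the same location instead of going back to .META.

{code}
public class PushbackRetrySketch {
  // Hypothetical exception the RS would throw once past the low-memory barrier.
  static class OverloadedException extends RuntimeException { }

  interface PutCall { void run() throws OverloadedException; }

  // Retry the same call with exponential backoff; no .META. re-lookup,
  // since the region location has not actually changed.
  static void callWithPushback(PutCall call, int maxRetries) throws InterruptedException {
    long backoffMs = 100;
    for (int attempt = 0; ; attempt++) {
      try {
        call.run();
        return;
      } catch (OverloadedException e) {
        if (attempt >= maxRetries) throw e;
        Thread.sleep(backoffMs);
        backoffMs = Math.min(backoffMs * 2, 10_000);
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    final int[] failuresLeft = {3};
    callWithPushback(() -> {
      if (failuresLeft[0]-- > 0) throw new OverloadedException();
      System.out.println("put succeeded after pushback");
    }, 10);
  }
}
{code}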

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8485) Retry to open a HLog on more exceptions

2013-05-03 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-8485:
---

   Resolution: Fixed
Fix Version/s: 0.95.1
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Jenkins is green locally.  Integrated into trunk and 0.95.  Thanks Stack for 
the review.

> Retry to open a HLog on more exceptions 
> 
>
> Key: HBASE-8485
> URL: https://issues.apache.org/jira/browse/HBASE-8485
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.98.0, 0.95.1
>
> Attachments: trunk-8485.patch
>
>
> Currently we only retry to open a HLog file in case "Cannot obtain block 
> length" (HBASE-8314). We can retry also in case "Could not obtain the last 
> block locations.",  "Blocklist for " + src + " has changed!", which are 
> possible IOException messages I can find when opening a file.
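A hedged sketch of the retry shape being described (the names and the message list come from the description above, not from the actual patch): treat a small whitelist of IOException messages as transient and retry the open a bounded number of times.

{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class RetryableHLogOpenSketch {
  // Messages the description lists as worth retrying on.
  static final List<String> RETRYABLE = Arrays.asList(
      "Cannot obtain block length",
      "Could not obtain the last block locations",
      "has changed!");

  static boolean isRetryable(IOException e) {
    String msg = e.getMessage();
    if (msg == null) return false;
    for (String candidate : RETRYABLE) {
      if (msg.contains(candidate)) return true;
    }
    return false;
  }

  interface LogOpener { void open() throws IOException; }

  // Retry only the whitelisted failures; rethrow anything else immediately.
  static void openWithRetries(LogOpener opener, int maxAttempts, long sleepMs)
      throws IOException, InterruptedException {
    for (int attempt = 1; ; attempt++) {
      try {
        opener.open();
        return;
      } catch (IOException e) {
        if (attempt >= maxAttempts || !isRetryable(e)) throw e;
        Thread.sleep(sleepMs);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    final int[] failuresLeft = {2};
    openWithRetries(() -> {
      if (failuresLeft[0]-- > 0) throw new IOException("Cannot obtain block length");
      System.out.println("log opened");
    }, 5, 100);
  }
}
{code}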

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8404) Extra commas in LruBlockCache.logStats

2013-05-03 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648720#comment-13648720
 ] 

Jean-Daniel Cryans commented on HBASE-8404:
---

+1

> Extra commas in LruBlockCache.logStats
> --
>
> Key: HBASE-8404
> URL: https://issues.apache.org/jira/browse/HBASE-8404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.4, 0.95.0
>Reporter: Jean-Daniel Cryans
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: 8404.txt
>
>
> The Stats log line for the LruBlockCache contains extra commas introduced in 
> HBASE-5616:
> {noformat}
> 2013-04-23 18:40:12,774 DEBUG [LRU Statistics #0] 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=9.23 MB, 
> free=500.69 MB, max=509.92 MB, blocks=95, accesses=322822, hits=107003, 
> hitRatio=33.14%, , cachingAccesses=232794, cachingHits=106994, 
> cachingHitsRatio=45.96%, , evictions=0, evicted=12, evictedPerRun=Infinity
> {noformat}
> Marking as "noob" :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-4814) Starting an online alter when regions are splitting can leave their daughters unaltered

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-4814.
--

Resolution: Invalid

Resolving as no longer valid now that we have locks.  Can open a new issue if 
still a problem (this is an old issue).

> Starting an online alter when regions are splitting can leave their daughters 
> unaltered
> ---
>
> Key: HBASE-4814
> URL: https://issues.apache.org/jira/browse/HBASE-4814
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.0
>Reporter: Jean-Daniel Cryans
> Fix For: 0.95.1
>
>
> I've seen a situation where regions were splitting almost exactly at the same 
> time as an alter command was issued and those regions' daughters were left 
> unaltered. It would even seem that the daughters' daughters also share this 
> situation.
> Reopening all the regions fixes the problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4931) CopyTable instructions could be improved.

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4931:
-

Fix Version/s: (was: 0.95.1)

This doesn't have to be inline w/ 0.95

> CopyTable instructions could be improved.
> -
>
> Key: HBASE-4931
> URL: https://issues.apache.org/jira/browse/HBASE-4931
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, mapreduce
>Affects Versions: 0.90.4, 0.92.0
>Reporter: Jonathan Hsieh
>
> The book and the usage instructions could be improved to include more 
> details and caveats, and to better explain usage.
> One example in particular could be updated to refer to 
> ReplicationRegionInterface and ReplicationRegionServer in their current 
> locations (o.a.h.h.client.replication and o.a.h.h.replication.regionserver), 
> and to better explain why one would use particular arguments.
> {code}
> $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable
> --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface
> --rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer
> --starttime=1265875194289 --endtime=1265878794289
> --peer.adr=server1,server2,server3:2181:/hbase TestTable
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3643) Close the filesystem handle when HRS is aborting

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3643:
-

 Tags: noob
Fix Version/s: (was: 0.95.1)
   Labels: noob  (was: )

Moving out a nice-to-have issue that is not being worked on.

> Close the filesystem handle when HRS is aborting
> 
>
> Key: HBASE-3643
> URL: https://issues.apache.org/jira/browse/HBASE-3643
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.1
>Reporter: Jean-Daniel Cryans
>  Labels: noob
>
> I thought of a way to fix HBASE-3515 that has a very broad impact, so I'm 
> creating this jira to *raise awareness* and gather comments.
> Currently when we call HRS.abort, it's still possible to do HDFS operations 
> like rolling logs and flushing files. It also has the impact that some 
> threads cannot write to ZK (like the situation described in HBASE-3515) but 
> then can still write to HDFS. Since that call is so central, I think we 
> should {color:red} add fs.close() inside the abort method{color}.
> The impact of this is that everything else that happens after the close call, 
> like closing files or appending, will fail in the most horrible ways. On the 
> bright side, this means less disruptive changes on HDFS.
> Todd pointed at HBASE-2231 as related, but I think my solution is still too 
> sloppy as we could still finish a compaction and immediately close the 
> filesystem after that (damage's done).
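For illustration, a minimal sketch of the proposal, assuming the region server keeps a reference to its FileSystem; this is not the actual HRegionServer code:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;

// Illustrative sketch: closing the filesystem handle while aborting makes any
// later HDFS operation (log roll, flush, finishing a compaction) fail fast.
public class AbortSketch {
  private FileSystem fs;

  public void abort(String reason, Throwable cause) {
    // ... existing abort work: log the reason, stop services, notify the master ...
    try {
      if (fs != null) {
        fs.close(); // anything touching HDFS after this point will fail
      }
    } catch (IOException e) {
      // Best effort; we are aborting anyway.
    }
  }
}
{code}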

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4324) Single unassigned directory is very slow when there are many unassigned nodes

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4324:
-

Fix Version/s: (was: 0.95.1)

Not being worked on...Moving out.

> Single unassigned directory is very slow when there are many unassigned nodes
> -
>
> Key: HBASE-4324
> URL: https://issues.apache.org/jira/browse/HBASE-4324
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 0.90.4
>Reporter: Todd Lipcon
>
> Because we use a single znode for /unassigned, and we re-list it every time 
> its contents change, assignment speed per region is O(number of unassigned 
> regions) rather than O(1). Every time something changes about one unassigned 
> region, the master has to re-list the entire contents of the directory inside 
> of AssignmentManager.nodeChildrenChanged().

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-2506) Too easy to OOME a RS

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2506:
-

Fix Version/s: (was: 0.95.1)

> Too easy to OOME a RS
> -
>
> Key: HBASE-2506
> URL: https://issues.apache.org/jira/browse/HBASE-2506
> Project: HBase
>  Issue Type: Bug
>Reporter: Jean-Daniel Cryans
>  Labels: moved_from_0_20_5
>
> Testing a cluster with 1GB heap, I found that we are letting the region 
> servers kill themselves too easily when scanning using pre-fetching. To 
> reproduce, get 10-20M rows using PE and run a count in the shell using CACHE 
> => 3 or any other very high number. For good measure, here's the stack 
> trace:
> {code}
> 2010-04-30 13:20:23,241 FATAL 
> org.apache.hadoop.hbase.regionserver.HRegionServer: OutOfMemoryError, 
> aborting.
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2786)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
> at java.io.DataOutputStream.write(DataOutputStream.java:90)
> at org.apache.hadoop.hbase.client.Result.writeArray(Result.java:478)
> at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.writeObject(HbaseObjectWritable.java:312)
> at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.write(HbaseObjectWritable.java:229)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:941)
> 2010-04-30 13:20:23,241 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics: 
> request=0.0, regions=29, stores=29, storefiles=44, storefileIndexSize=6, 
> memstoreSize=255,
>  compactionQueueSize=0, usedHeap=926, maxHeap=987, blockCacheSize=1700064, 
> blockCacheFree=205393696, blockCacheCount=0, blockCacheHitRatio=0
> {code}
> I guess the same could happen with largish write buffers. We need something 
> better than OOME.
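Until the server can push back, the practical client-side mitigation is to keep scanner caching modest; a small sketch (the value is arbitrary and workload-dependent):

{code}
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachingExample {
  public static Scan conservativeScan() {
    Scan scan = new Scan();
    // Rows fetched per RPC; keep it small enough that the region server can
    // buffer the batch. 1000 is an arbitrary, workload-dependent choice.
    scan.setCaching(1000);
    return scan;
  }
}
{code}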

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-3577) enables Thrift client to get the Region location

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3577:
-

Fix Version/s: (was: 0.95.1)

> enables Thrift client to get the Region location
> 
>
> Key: HBASE-3577
> URL: https://issues.apache.org/jira/browse/HBASE-3577
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Kazuki Ohta
> Attachments: HBASE3577-1.patch, HBASE3577-2.patch
>
>
> The current thrift interface has the getTableRegions() interface like below.
> {code}
>   list<TRegionInfo> getTableRegions(
> /** table name */
> 1:Text tableName)
> throws (1:IOError io)
> {code}
> {code}
> struct TRegionInfo {
>   1:Text startKey,
>   2:Text endKey,
>   3:i64 id,
>   4:Text name,
>   5:byte version
> }
> {code}
> But the method doesn't have the region location information (where the region 
> is located).
> I want to add the Thrift interfaces like below in HTable.java.
> {code}
> public Map getRegionsInfo() throws IOException
> {code}
> {code}
> public HRegionLocation getRegionLocation(final String row)
> {code}
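For context, a small sketch of how the existing Java client exposes this information (roughly what the Thrift client wants mirrored), assuming the HTable API of that era:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.client.HTable;

public class RegionLocationExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "TestTable");
    // Which server hosts the region containing this row?
    HRegionLocation location = table.getRegionLocation("some-row");
    System.out.println(location.getHostname() + ":" + location.getPort());
    System.out.println(location.getRegionInfo().getRegionNameAsString());
    table.close();
  }
}
{code}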

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8404) Extra commas in LruBlockCache.logStats

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8404:
-

Attachment: 8404.txt

Remove comma and white space
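For illustration, the doubled comma comes from a stray literal separator in the concatenated stats message; a simplified before/after, not the actual LruBlockCache.logStats() body:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Simplified illustration of the bug and the fix.
public class LogStatsSketch {
  private static final Log LOG = LogFactory.getLog(LogStatsSketch.class);

  static void logStatsBuggy(double hitRatio, long cachingAccesses) {
    // The stray ", " after the percentage is what produces "%, ," in the output.
    LOG.debug("hitRatio=" + String.format("%.2f%%", hitRatio * 100) + ", " + ", "
        + "cachingAccesses=" + cachingAccesses);
  }

  static void logStatsFixed(double hitRatio, long cachingAccesses) {
    LOG.debug("hitRatio=" + String.format("%.2f%%", hitRatio * 100) + ", "
        + "cachingAccesses=" + cachingAccesses);
  }
}
{code}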

> Extra commas in LruBlockCache.logStats
> --
>
> Key: HBASE-8404
> URL: https://issues.apache.org/jira/browse/HBASE-8404
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.4, 0.95.0
>Reporter: Jean-Daniel Cryans
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: 8404.txt
>
>
> The Stats log line for the LruBlockCache contains extra commas introduced in 
> HBASE-5616:
> {noformat}
> 2013-04-23 18:40:12,774 DEBUG [LRU Statistics #0] 
> org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=9.23 MB, 
> free=500.69 MB, max=509.92 MB, blocks=95, accesses=322822, hits=107003, 
> hitRatio=33.14%, , cachingAccesses=232794, cachingHits=106994, 
> cachingHitsRatio=45.96%, , evictions=0, evicted=12, evictedPerRun=Infinity
> {noformat}
> Marking as "noob" :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7686) TestSplitTransactionOnCluster fails occasionally in trunk builds

2013-05-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7686:
-

Resolution: Later
Status: Resolved  (was: Patch Available)

Let me resolve this.  No action on it.  Hasn't failed lately.  Let's open a new 
one if there is a patch or more digging in.

> TestSplitTransactionOnCluster fails occasionally in trunk builds
> 
>
> Key: HBASE-7686
> URL: https://issues.apache.org/jira/browse/HBASE-7686
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Fix For: 0.95.1
>
> Attachments: HBASE-7686-v0.patch, HBASE-7686-v1.patch
>
>
> From trunk build #3808:
> {code} 
> testShouldFailSplitIfZNodeDoesNotExistDueToPrevRollBack(org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster):
>  test timed out after 2 milliseconds
>   
> testMasterRestartWhenSplittingIsPartial(org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster):
>  test timed out after 30 milliseconds
>   
> testExistingZnodeBlocksSplitAndWeRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster):
>  test timed out after 30 milliseconds
> {code}
> From HBase-TRUNK-on-Hadoop-2.0.0 #378 :
> {code}
> testShutdownSimpleFixup(org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster):
>  Region not moved off .META. server
>   
> testShouldFailSplitIfZNodeDoesNotExistDueToPrevRollBack(org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster):
>  test timed out after 2 milliseconds
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2013-05-03 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-7115:
--

Status: Open  (was: Patch Available)

> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.95.2
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 0.95.2
>
> Attachments: HBASE-7115_trunk.patch, HBASE-7115_trunk.patch
>
>
> HBASE-5428 added this capability to thrift interface but the configuration 
> parameter name is "thrift" specific.
> This patch introduces a more generic parameter "hbase.user.filters" using 
> which the user defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{<property>}}
> {{  <name>hbase.user.filters</name>}}
> {{  <value>
> }}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}{{</value>}}
> {{</property>}}{panel}
> Once this is configured, I can launch HBase shell and use these filters in my 
> {{get}} or {{scan}} just the way I would use a built-in filter.
> {code}
> hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
> SilverBulletFilter(42)"}
> ROW  COLUMN+CELL
>  status  column=cf:a, 
> timestamp=30438552, value=world_peace
> 1 row(s) in 0. seconds
> {code}
> To use this feature in any client, the client needs to make the following 
> function call as part of its initialization.
> {code}
> ParseFilter.registerUserFilters(configuration);
> {code}
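For illustration, a sketch of how the "FilterName:fully.qualified.Class" list in hbase.user.filters could be parsed before registration; this is illustrative, not the patch itself:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

// Illustrative parser for the proposed hbase.user.filters property; the real
// patch hands the parsed names to the filter language parser.
public class UserFilterConfigSketch {
  public static Map<String, String> parseUserFilters(Configuration conf) {
    Map<String, String> filters = new HashMap<String, String>();
    String value = conf.get("hbase.user.filters", "");
    for (String entry : value.split(",")) {
      String trimmed = entry.trim();
      int sep = trimmed.indexOf(':');
      if (sep > 0) {
        // e.g. SuperDuperFilter -> org.apache.hadoop.hbase.filter.custom.SuperDuperFilter
        filters.put(trimmed.substring(0, sep), trimmed.substring(sep + 1));
      }
    }
    return filters;
  }
}
{code}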

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2013-05-03 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-7115:
--

Status: Patch Available  (was: Open)

The test failure is unrelated to this patch and appears to be the issue fixed 
in HBASE-8469.

The test ran fine on my machine with updated code. Resubmitting the patch.

> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.95.2
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 0.95.2
>
> Attachments: HBASE-7115_trunk.patch, HBASE-7115_trunk.patch
>
>
> HBASE-5428 added this capability to thrift interface but the configuration 
> parameter name is "thrift" specific.
> This patch introduces a more generic parameter "hbase.user.filters" using 
> which the user defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{<property>}}
> {{  <name>hbase.user.filters</name>}}
> {{  <value>
> }}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}{{</value>}}
> {{</property>}}{panel}
> Once this is configured, I can launch HBase shell and use these filters in my 
> {{get}} or {{scan}} just the way I would use a built-in filter.
> {code}
> hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
> SilverBulletFilter(42)"}
> ROW  COLUMN+CELL
>  status  column=cf:a, 
> timestamp=30438552, value=world_peace
> 1 row(s) in 0. seconds
> {code}
> To use this feature in any client, the client needs to make the following 
> function call as part of its initialization.
> {code}
> ParseFilter.registerUserFilters(configuration);
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided

2013-05-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648682#comment-13648682
 ] 

Hadoop QA commented on HBASE-7244:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581655/HBASE-7244_7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/5548//console

This message is automatically generated.

> Provide a command or argument to startup, that formats znodes if provided
> -
>
> Key: HBASE-7244
> URL: https://issues.apache.org/jira/browse/HBASE-7244
> Project: HBase
>  Issue Type: New Feature
>  Components: Zookeeper
>Affects Versions: 0.94.0
>Reporter: Harsh J
>Assignee: rajeshbabu
>Priority: Critical
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: HBASE-7244_2.patch, HBASE-7244_3.patch, 
> HBASE-7244_4.patch, HBASE-7244_5.patch, HBASE-7244_6.patch, 
> HBASE-7244_7.patch, HBASE-7244.patch
>
>
> Many a time I've had to, and have seen instructions thrown around, to stop 
> the cluster, clear out ZK and restart.
> While this is only a quick (and painful to master) fix, it is certainly nifty 
> for some smaller cluster users, but the process is far too long, roughly:
> 1. Stop HBase
> 2. Start zkCli.sh and connect to the right quorum
> 3. Find and ensure the HBase parent znode from the configs (/hbase only by 
> default)
> 4. Run an "rmr /hbase" in the zkCli.sh shell, or manually delete each znode 
> if on a lower version of ZK.
> 5. Quit zkCli.sh and start HBase again
> Perhaps it would be useful if start-hbase.sh itself accepted a formatZK 
> parameter, such that when you do a {{start-hbase.sh -formatZK}}, it does 
> steps 2-4 automatically for you.
> For safety, we could make the formatter code ensure that no HBase instance is 
> actually active, and skip the format process if it is. Similar to a HDFS 
> NameNode's format, which would disallow if the name directories are locked.
> Would this be a useful addition for administrators? Bigtop too can provide a 
> service subcom

[jira] [Commented] (HBASE-7897) Add support for tags to Cell Interface

2013-05-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648659#comment-13648659
 ] 

stack commented on HBASE-7897:
--

[~ram_krish] As I see this issue, it is just about adding the needed methods to 
the Cell Interface.  I do not think the functionality has to be implemented 
underneath the Interface for us to close this issue.  This issue is about 
making sure the necessary Interface changes are in place when 0.96 goes out.

Given the above, is this issue close to done at all?  Want me to do it?  I could 
just add in the above suggested methods.

> Add support for tags to Cell Interface
> --
>
> Key: HBASE-7897
> URL: https://issues.apache.org/jira/browse/HBASE-7897
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 0.95.1
>
>
> Cell Interface has suppport for mvcc.   The only thing we'd add to Cell in 
> the near future is support for tags it would seem.  Should be easy to add.  
> Should add it now.  See backing discussion here: 
> https://issues.apache.org/jira/browse/HBASE-7233?focusedCommentId=13573784&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13573784
> Matt outlines what the additions to Cell might look like here:
> https://issues.apache.org/jira/browse/HBASE-7233?focusedCommentId=13531619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13531619
> Would be good to get these in now.
> Marking as 0.96.  Can more later.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5930) Limits the amount of time an edit can live in the memstore.

2013-05-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-5930:
---

Attachment: 5930-0.94-added-addendum.txt

Lars's patch with the addendum that Enis had submitted.

> Limits the amount of time an edit can live in the memstore.
> ---
>
> Key: HBASE-5930
> URL: https://issues.apache.org/jira/browse/HBASE-5930
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Devaraj Das
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: 5930-0.94-added-addendum.txt, 5930-0.94.txt, 
> 5930-1.patch, 5930-2.1.patch, 5930-2.2.patch, 5930-2.3.patch, 5930-2.4.patch, 
> 5930-track-oldest-sample.txt, 5930-wip.patch, HBASE-5930-ADD-0.patch, 
> hbase-5930-addendum2.patch, hbase-5930-test-execution.log
>
>
> A colleague of mine ran into an interesting issue.
> He inserted some data with the WAL disabled, which happened to fit in the 
> aggregate memstore memory.
> Two weeks later he had a problem with the HDFS cluster, which caused the 
> region servers to abort. He found that his data was lost. Looking at the logs 
> we found that the memstores were not flushed at all during those two weeks.
> Should we have an option to flush memstores periodically? There are obvious 
> downsides to this, like many small storefiles, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8488) HBase transitive dependencies not being pulled in when building apps like Flume which depend on HBase

2013-05-03 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648646#comment-13648646
 ] 

Roshan Naik commented on HBASE-8488:


I built a hadoop 2 version of hbase locally (0.97.0-SNAPSHOT) and pointed the 
flume pom to hbase as follows:

{code}
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-common</artifactId>
    <version>${hbaseversion}</version>
  </dependency>

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-common</artifactId>
    <version>${hbaseversion}</version>
    <classifier>tests</classifier>
    <scope>test</scope>
  </dependency>

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbaseversion}</version>
  </dependency>

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>${hbaseversion}</version>
  </dependency>

  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>${hbaseversion}</version>
    <classifier>tests</classifier>
    <scope>test</scope>
  </dependency>
{code}

Then built hbase as follows:
{code}
mvn install -Dhadoop.profile=2.0 -Dhadoop-two.version=2.0.4-alpha -DskipTests
{code}



I built flume as follows:
{code}
mvn -Dhadoop-two.version=2.0.4-alpha -Dhbaseversion=0.97.0-SNAPSHOT -Dhadoop.profile=2 clean package -X -DskipTests
{code}




> HBase transitive dependencies not being pulled in when building apps like 
> Flume which depend on HBase
> -
>
> Key: HBASE-8488
> URL: https://issues.apache.org/jira/browse/HBASE-8488
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.0
>Reporter: Roshan Naik
>
> Here is a snippet of the errors seen when building against Hbase
> {code}
> [WARNING] Invalid POM for org.apache.hbase:hbase-common:jar:0.97.0-SNAPSHOT, 
> transitive dependencies (if any) will not be available, enable debug logging 
> for more details: Some problems were encountered while processing the POMs:
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not 
> match a valid id pattern. @ org.apache.hbase:hbase:0.97.0-SNAPSHOT, 
> /Users/rnaik/.m2/repository/org/apache/hbase/hbase/0.97.0-SNAPSHOT/hbase-0.97.0-SNAPSHOT.pom,
>  line 982, column 21
> [ERROR] 'dependencyManagement.dependencies.dependency.artifactId' for 
> org.apache.hbase:${compat.module}:test-jar with value '${compat.module}' does 
> not match a valid id pattern. @ org.apache.hbase:hbase:0.97.0-SNAPSHOT, 
> /Users/rnaik/.m2/repository/org/apache/hbase/hbase/0.97.0-SNAPSHOT/hbase-0.97.0-SNAPSHOT.pom,
>  line 987, column 21
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

