[jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)

2014-05-20 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14002864#comment-14002864
 ] 

Francis Liu commented on HBASE-11165:
-

{quote}
autotuning memstore sizes, and lazy allocation (as Andy says), or sharing 
memstores
{quote}
If we increase the memstore multiplier to a high number, won't that be a rough 
simulation of this? Also, if the writes are uniformly distributed across 
regions, then sharing is not needed. 

{quote}
make large regions more workable, splits, compactions, etc
{quote}
It seems to me it's in HBase's DNA to have small regions. My gut tells me 
that it would take less effort to support more regions.

{quote}
allow more RAM to be used by region server (off heap memstores)
{quote}
Or support larger heap :-)

{quote}
allow smaller units of computation in M/R
{quote}
We generally need a smarter way of calculating splits, i.e., control the number 
of tasks accessing an RS. If only there were some integration between an NM and an RS.

{quote}
split META? And then colocate with multiple HMasters?
{quote}
IMHO HBase should be horizontally scalable with regard to the number of regions. 
If I have too many regions, I should be able to add more machines (i.e. 
master/regionserver). Currently, at ~68k regions, it's consuming about ~200MB. 
Extrapolating, at 6M regions it's 20GB and at 60M it's 200GB. 
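
That extrapolation can be sanity-checked with simple linear scaling. This is only a back-of-envelope sketch: the ~200MB-at-~68k-regions observation is the single input, and real META memory growth need not be perfectly linear.

```java
// Rough linear extrapolation of META memory from the observed data point:
// ~200 MB at ~68k regions. Real growth need not be linear; this is only a
// back-of-envelope check of the figures quoted above.
public class MetaMemoryEstimate {
    public static void main(String[] args) {
        double perRegionMB = 200.0 / 68_000; // ~0.003 MB of master memory per region
        System.out.printf("6M regions:  ~%.0f GB%n", perRegionMB * 6_000_000 / 1024);
        System.out.printf("60M regions: ~%.0f GB%n", perRegionMB * 60_000_000 / 1024);
    }
}
```

The linear estimate lands in the same ballpark as the ~20GB and ~200GB figures quoted above, which is the heart of the concern: a single process's heap growing linearly with region count.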

 Scaling so cluster can host 1M regions and beyond (50M regions?)
 

 Key: HBASE-11165
 URL: https://issues.apache.org/jira/browse/HBASE-11165
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack

 This discussion issue comes out of Co-locate Meta And Master HBASE-10569 
 and comments on the doc posted there.
 A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M 
 regions maybe even 50M later.  This issue is about discussing how we will do 
 that (or if not 50M on a cluster, how otherwise we can attain same end).
 More detail to follow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11144) Filter to support scan multiple row key ranges

2014-05-20 Thread Li Jiajia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Jiajia updated HBASE-11144:
--

Description: 
HBase is quite efficient when scanning only one small row key range. If a user 
needs to specify multiple row key ranges in one scan, the typical solutions 
are: 1. through FilterList, which is a list of row key Filters, 2. using the SQL 
layer over HBase to join two tables, such as Hive, Phoenix, etc. However, 
both solutions are inefficient: neither can utilize the range info to 
perform fast forwarding during the scan, which is quite time consuming. If the 
number of ranges is quite big (e.g. millions), a join is a proper solution, 
though it is slow. However, there are cases where a user wants to specify a small 
number of ranges to scan (e.g. 1000 ranges), and both solutions can't provide 
satisfactory performance in such a case. 
We provide this filter (MultiRowRangeFilter) to support such a use case (scanning 
multiple row key ranges). It constructs the row key ranges from a user-specified 
sorted list and performs fast-forwarding during the scan, so the scan 
will be quite efficient. 
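
The fast-forwarding idea can be sketched in plain Java without HBase types. The names, the range representation, and the linear scan here are illustrative assumptions, not the patch's actual API (the real filter works through HBase's seek-hint mechanism and can search the sorted ranges more cleverly):

```java
// Core idea behind a multi-range row filter: keep the ranges sorted by start
// key, and for the current row key decide whether it falls inside a range
// (include the row) or before one (emit a seek hint: fast-forward to that
// range's start) or past all ranges (the scan can stop).
public class MultiRangeSketch {
    static final class Range {
        final String start, stop; // half-open interval [start, stop)
        Range(String start, String stop) { this.start = start; this.stop = stop; }
    }

    // Returns the row itself if it is inside a range, the next range's start
    // key to seek to if it falls between ranges, or null if past the last range.
    static String nextHint(Range[] sortedRanges, String row) {
        for (Range r : sortedRanges) {
            if (row.compareTo(r.stop) >= 0) continue;    // already past this range
            if (row.compareTo(r.start) >= 0) return row; // inside: include the row
            return r.start;                              // before: fast-forward
        }
        return null; // beyond all ranges: nothing left to scan
    }

    public static void main(String[] args) {
        Range[] ranges = { new Range("b", "d"), new Range("m", "p") };
        System.out.println(nextHint(ranges, "c")); // inside the first range
        System.out.println(nextHint(ranges, "e")); // between ranges: seek to "m"
        System.out.println(nextHint(ranges, "q")); // past everything: null
    }
}
```

This is why the filter beats a FilterList of row key filters: rows between ranges are skipped by seeking, not examined one by one.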

  was:
HBase is quite efficient when scanning only one small row key range. If user 
needs to specify multiple row key ranges in one scan, the typical solutions 
are: 1. through FilterList which is a list of row key Filters, 2. using the SQL 
layer over HBase to join with two table, such as hive, phoenix etc. However, 
both solutions are inefficient. Both of them can’t utilize the range info to 
perform fast forwarding during scan. Thus, all rows are scanned, which is quite 
time consuming. If the number of ranges are quite big (e.g. millions), join is 
a proper solution though it is slow. However, there are cases that user wants 
to specify a small number of ranges to scan (e.g. 1000 ranges). Both solutions 
can’t provide satisfactory performance in such case. 
We provide this filter (MultiRowRangeFilter) to support such use case (scan 
multiple row key ranges), which can construct the row key ranges from user 
specified sorted list and perform fast-forwarding during scan to skip unwanted 
rows. Thus, the scan will be quite efficient. 


 Filter to support scan multiple row key ranges
 --

 Key: HBASE-11144
 URL: https://issues.apache.org/jira/browse/HBASE-11144
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Reporter: Li Jiajia
 Attachments: MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch


 HBase is quite efficient when scanning only one small row key range. If user 
 needs to specify multiple row key ranges in one scan, the typical solutions 
 are: 1. through FilterList which is a list of row key Filters, 2. using the 
 SQL layer over HBase to join with two table, such as hive, phoenix etc. 
 However, both solutions are inefficient. Both of them can’t utilize the range 
 info to perform fast forwarding during scan which is quite time consuming. If 
 the number of ranges are quite big (e.g. millions), join is a proper solution 
 though it is slow. However, there are cases that user wants to specify a 
 small number of ranges to scan (e.g. 1000 ranges). Both solutions can’t 
 provide satisfactory performance in such case. 
 We provide this filter (MultiRowRangeFilter) to support such use case (scan 
 multiple row key ranges), which can construct the row key ranges from user 
 specified sorted list and perform fast-forwarding during scan. Thus, the scan 
 will be quite efficient. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11199) One-time effort to pretty-print the Docbook XML, to make further patch review easier

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14002932#comment-14002932
 ] 

Hadoop QA commented on HBASE-11199:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645673/HBASE-11199-1.patch
  against trunk revision .
  ATTACHMENT ID: 12645673

{color:red}-1 @author{color}.  The patch appears to contain 2 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 17 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9541//console

This message is automatically generated.

 One-time effort to pretty-print the Docbook XML, to make further patch review 
 easier
 

 Key: HBASE-11199
 URL: https://issues.apache.org/jira/browse/HBASE-11199
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-11199-1.patch, HBASE-11199.patch


 Careful not to pretty-print literal layouts and document (at least in this 
 JIRA) the pretty-printing process for next time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7623) Username is not available for HConnectionManager to use in HConnectionKey

2014-05-20 Thread yogesh bedekar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yogesh bedekar updated HBASE-7623:
--

Attachment: yogesh_bedekar.vcf

Thanks, Jimmy. I sent an email to u...@hbase.apache.org.

Meanwhile I notice the following from the decompiled code:

The class org.apache.zookeeper.ClientCnxn tries to initiate the client 
connection as follows -

ClientCnxn.this.zooKeeperSaslClient = new ZooKeeperSaslClient("zookeeper/" + addr.getHostName());


and ZooKeeperSaslClient has -

String clientSection = System.getProperty("zookeeper.sasl.clientconfig", "Client");

 AppConfigurationEntry[] entries = null;
 SecurityException securityException = null;
 try
 {
   entries = Configuration.getConfiguration().getAppConfigurationEntry(clientSection);
 }
 catch (SecurityException e)
 {
   securityException = e;
 }

I think this is what throws the exception - 
java.lang.IllegalArgumentException: No Configuration was registered that
can handle the configuration named 'Client'.

I am not sure why the property 'zookeeper.sasl.clientconfig' needs to be 
set when we are not using security.
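
For what it's worth, the failing lookup can be reproduced outside ZooKeeper with the standard JAAS API alone. This is a minimal sketch mirroring the decompiled snippet above; the exact way ZooKeeper wraps the failure into an IllegalArgumentException is not shown here:

```java
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// With no JAAS login configuration registered, asking the default
// Configuration for the "Client" section yields null (or a SecurityException
// on some JVMs). ZooKeeper's client then surfaces this as "No Configuration
// was registered that can handle the configuration named Client".
public class JaasLookupDemo {
    public static void main(String[] args) {
        String section = System.getProperty("zookeeper.sasl.clientconfig", "Client");
        AppConfigurationEntry[] entries = null;
        try {
            entries = Configuration.getConfiguration().getAppConfigurationEntry(section);
        } catch (SecurityException e) {
            System.out.println("JAAS lookup failed: " + e.getMessage());
        }
        System.out.println(entries == null
            ? "no JAAS section named '" + section + "' is registered"
            : "found JAAS section '" + section + "'");
    }
}
```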


Thanks,
Yogesh





 Username is not available for HConnectionManager to use in HConnectionKey
 -

 Key: HBASE-7623
 URL: https://issues.apache.org/jira/browse/HBASE-7623
 Project: HBase
  Issue Type: Improvement
  Components: Client, security
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: pom.xml, trunk-7623.patch, yogesh_bedekar.vcf, 
 yogesh_bedekar.vcf, yogesh_bedekar.vcf, yogesh_bedekar.vcf, yogesh_bedekar.vcf


 Sometimes, some non-IOException prevents User.getCurrent() from getting a 
 username.  It makes it impossible to create an HConnection.  We should catch 
 all exceptions here:
 {noformat}
    try {
      User currentUser = User.getCurrent();
      if (currentUser != null) {
        username = currentUser.getName();
      }
    } catch (IOException ioe) {
      LOG.warn("Error obtaining current user, skipping username in HConnectionKey", ioe);
    }
 {noformat}
 Not just IOException, so that the client can move forward.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11201) Enable global procedure members to return values to procedure master

2014-05-20 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14002990#comment-14002990
 ] 

Matteo Bertozzi commented on HBASE-11201:
-

Looks good to me. The only doubt I had was when you get null from 
ZKUtil.getData(), but that seems to be handled by isPBMagicPrefix()

 Enable global procedure members to return values to procedure master
 

 Key: HBASE-11201
 URL: https://issues.apache.org/jira/browse/HBASE-11201
 Project: HBase
  Issue Type: Improvement
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.99.0

 Attachments: HBASE-11201-trunk-v1.patch


 Currently, in the global procedure framework, the procedure coordinator can 
 send data (a procedure argument) to the members when starting a procedure.
 But we don't support getting data returned from the procedure members back to 
 the master.
 Similar to RPC and normal procedure/function calls, in many cases, this is a 
 useful capability.
 The infrastructure is in place. We just need to plug in the holes and make it 
 happen.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11185) Parallelize Snapshot operations

2014-05-20 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11185:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Parallelize Snapshot operations
 ---

 Key: HBASE-11185
 URL: https://issues.apache.org/jira/browse/HBASE-11185
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11185-v0.patch


 When SnapshotInfo or snapshot verification is executed against a remote path, 
 it may take a while, since the code is mainly composed of sequential calls to 
 the fs.
 This patch parallelizes all the snapshot operations using a thread pool to 
 dispatch requests. The size of the pool is tunable via 
 hbase.snapshot.thread.pool.max



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11186) Improve TestExportSnapshot verifications

2014-05-20 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi resolved HBASE-11186.
-

Resolution: Fixed

 Improve TestExportSnapshot verifications
 

 Key: HBASE-11186
 URL: https://issues.apache.org/jira/browse/HBASE-11186
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11186-v0.patch


 * Remove some code by using the utils that we already have in 
 SnapshotTestingUtil
 * Add an Export with references for both v1 and v2 format
 * add the verification on the actual number of files exported



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11200) AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException

2014-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003030#comment-14003030
 ] 

Anoop Sam John commented on HBASE-11200:


No issue in the trunk code? Please update the Fix Version/s.

 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException
 

 Key: HBASE-11200
 URL: https://issues.apache.org/jira/browse/HBASE-11200
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.98.2
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Attachments: HBASE-11200-0.98.patch


 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException because of 
 the following code in AsyncWriter#run():
 {code}
  }
}
if (!hasIdleSyncer) {
 int idx = (int)this.lastWrittenTxid % asyncSyncers.length;
  asyncSyncers[idx].setWrittenTxid(this.lastWrittenTxid);
}
  }
 {code}
 In the above code, (int)this.lastWrittenTxid wraps to a negative value when 
 this.lastWrittenTxid (a long) is bigger than Integer.MAX_VALUE, making the 
 modulo result negative. The attachment gives a quick fix.
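
The overflow is easy to demonstrate in isolation: the cast binds tighter than the remainder operator, so the long wraps to a negative int before the remainder is taken. A minimal sketch (the class and variable names are only for illustration):

```java
public class TxidIndexDemo {
    public static void main(String[] args) {
        long lastWrittenTxid = 2147483648L; // Integer.MAX_VALUE + 1
        int syncerCount = 5;
        // Buggy form: the long is cast to int first, wrapping to
        // Integer.MIN_VALUE, so the remainder is negative (-3 here) and
        // indexing an array with it throws ArrayIndexOutOfBoundsException.
        int buggyIdx = (int) lastWrittenTxid % syncerCount;
        // Fixed form: take the remainder on the long, then cast. For a
        // non-negative txid the result is always in [0, syncerCount).
        int fixedIdx = (int) (lastWrittenTxid % syncerCount);
        System.out.println(buggyIdx + " vs " + fixedIdx); // -3 vs 3
    }
}
```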



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11200) AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException

2014-05-20 Thread Honghua Feng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Honghua Feng updated HBASE-11200:
-

Fix Version/s: 0.98.4
   0.98.3

 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException
 

 Key: HBASE-11200
 URL: https://issues.apache.org/jira/browse/HBASE-11200
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.98.2
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Fix For: 0.98.3, 0.98.4

 Attachments: HBASE-11200-0.98.patch


 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException because of 
 the following code in AsyncWriter#run():
 {code}
  }
}
if (!hasIdleSyncer) {
 int idx = (int)this.lastWrittenTxid % asyncSyncers.length;
  asyncSyncers[idx].setWrittenTxid(this.lastWrittenTxid);
}
  }
 {code}
 In the above code, (int)this.lastWrittenTxid wraps to a negative value when 
 this.lastWrittenTxid (a long) is bigger than Integer.MAX_VALUE, making the 
 modulo result negative. The attachment gives a quick fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11145) Issue with HLog sync

2014-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003034#comment-14003034
 ] 

Anoop Sam John commented on HBASE-11145:


Patch looks good to me, Stack.
{code}
 } catch (Throwable t) {
-  LOG.warn("UNEXPECTED, continuing", t);
+  LOG.error("UNEXPECTED", t);
{code}
We might not reach this UNEXPECTED state now. Still, if it comes, wouldn't it 
be better to do cleanupOutstandingSyncsOnException()?

 Issue with HLog sync
 

 Key: HBASE-11145
 URL: https://issues.apache.org/jira/browse/HBASE-11145
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: stack
Priority: Critical
 Fix For: 0.99.0

 Attachments: 11145.txt


 Got the below Exceptions Log in case of a write heavy test
 {code}
 2014-05-07 11:29:56,417 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.IllegalStateException: Queue full
  at java.util.AbstractQueue.add(Unknown Source)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.offer(FSHLog.java:1227)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1878)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,418 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 5
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,419 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 6
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,419 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 7
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
  {code}
 In FSHLog$SyncRunner.offer we do BlockingQueue.add(), which throws an 
 exception when the queue is full. The problem is that in the catch shown 
 below we do not do any cleanup.
 {code}
   this.syncRunners[index].offer(sequence, this.syncFutures, this.syncFuturesCount);
   attainSafePoint(sequence);
   this.syncFuturesCount = 0;
 } catch (Throwable t) {
   LOG.error("UNEXPECTED!!!", t);
 }
 {code}
 syncFuturesCount is not getting reset to 0, so the subsequent onEvent() 
 handling throws ArrayIndexOutOfBoundsException.
 I think we should do the following:
 1. Handle the exception and call cleanupOutstandingSyncsOnException(), as in 
 other cases of exception handling
 2. Instead of BlockingQueue.add(), use offer() (?)
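
On point 2, the behavioral difference is that add() throws IllegalStateException when a bounded queue is full, while offer() returns false so the caller can handle the overflow itself (e.g. do the cleanup mentioned in point 1). A small self-contained sketch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// add() vs offer() on a full bounded queue: add() throws, offer() reports
// failure via its return value, keeping control flow in the caller's hands.
public class QueueFullDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        q.add(1);                      // fills the queue to capacity
        boolean accepted = q.offer(2); // full: returns false instead of throwing
        System.out.println("offer accepted: " + accepted); // false
        try {
            q.add(3);                  // full: throws IllegalStateException
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e.getMessage()); // "Queue full"
        }
    }
}
```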



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11200) AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException

2014-05-20 Thread Honghua Feng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003037#comment-14003037
 ] 

Honghua Feng commented on HBASE-11200:
--

[~anoop.hbase]: Thanks for the reminder to update the Fix Version/s. The write 
model (and the affected code) was refactored out by [~stack] in trunk, so 
there is no issue there.

 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException
 

 Key: HBASE-11200
 URL: https://issues.apache.org/jira/browse/HBASE-11200
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.98.2
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Fix For: 0.98.3, 0.98.4

 Attachments: HBASE-11200-0.98.patch


 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException because of 
 the following code in AsyncWriter#run():
 {code}
  }
}
if (!hasIdleSyncer) {
 int idx = (int)this.lastWrittenTxid % asyncSyncers.length;
  asyncSyncers[idx].setWrittenTxid(this.lastWrittenTxid);
}
  }
 {code}
 In the above code, (int)this.lastWrittenTxid wraps to a negative value when 
 this.lastWrittenTxid (a long) is bigger than Integer.MAX_VALUE, making the 
 modulo result negative. The attachment gives a quick fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11202) Cleanup on HRegion class

2014-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003045#comment-14003045
 ] 

Anoop Sam John commented on HBASE-11202:


{code}
+  /**
+   * Return HStore instance. Does do do any copy: as the number of store is 
limited, we
+   *  iterate on the list.
+   */
+  private Store getStore(Cell cell) {
{code}
You mean Does not do?

{code}
-  /**
-   * Lock the updates' readLock first, so that we could safely append logs in coprocessors.
-   * @throws RegionTooBusyException
-   * @throws InterruptedIOException
-   */
-  public void updatesLock() throws RegionTooBusyException, InterruptedIOException {
-    lock(updatesLock.readLock());
-  }
-
-  /**
-   * Unlock the updates' readLock after appending logs in coprocessors.
-   * @throws InterruptedIOException
-   */
-  public void updatesUnlock() throws InterruptedIOException {
-    updatesLock.readLock().unlock();
-  }
{code}
Is this removal OK? The comment itself says that it is intended to be used 
from CPs, so what if someone has used this already? I don't know which JIRA 
issue added these two public methods.

 Cleanup on HRegion class
 

 Key: HBASE-11202
 URL: https://issues.apache.org/jira/browse/HBASE-11202
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11202.v1.patch


 This is mostly trivial stuff
  - remove some methods not used
  - typos
  - remove some @param w/o any info
  - change the code that uses deprecated methods
 The only non-trivial change is when we get the store for a cell: instead of 
 using the map, we iterate on the key set. Likely, it would be better to have 
 a sorted array instead of a Map, as the number of stores is fixed.  Could be 
 done in a later patch.
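
The "iterate instead of map lookup" idea amounts to something like the following sketch. The Store stand-in and all names here are assumptions for illustration, not HBase's actual types; the point is that with only a handful of stores per region, a linear scan comparing family bytes is cheap and avoids building a map key from the cell.

```java
import java.util.Arrays;
import java.util.List;

// Linear lookup of a region's store by the cell's column family bytes.
public class StoreLookup {
    // Minimal stand-in for HBase's per-family Store.
    static final class Store {
        final byte[] family;
        Store(byte[] family) { this.family = family; }
    }

    // With a small, fixed number of stores, a linear scan is effectively
    // constant time and needs no allocation for a lookup key.
    static Store getStore(List<Store> stores, byte[] cellFamily) {
        for (Store s : stores) {
            if (Arrays.equals(s.family, cellFamily)) return s;
        }
        return null; // unknown family
    }

    public static void main(String[] args) {
        List<Store> stores = Arrays.asList(
            new Store("cf1".getBytes()), new Store("cf2".getBytes()));
        System.out.println(getStore(stores, "cf2".getBytes()) != null); // true
    }
}
```

A sorted array of stores, as suggested above, would let this become a binary search while keeping the same allocation-free lookup.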



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003047#comment-14003047
 ] 

Nick Dimiduk commented on HBASE-11104:
--

Thanks for clarifying. +1.

 IntegrationTestImportTsv#testRunFromOutputCommitter misses credential 
 initialization
 

 Key: HBASE-11104
 URL: https://issues.apache.org/jira/browse/HBASE-11104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: 11104-v1.txt, HBASE-11104_98_v3.patch, 
 HBASE-11104_trunk.patch, HBASE-11104_trunk.patch, HBASE-11104_trunk_v2.patch, 
 HBASE-11104_trunk_v3.patch


 IntegrationTestImportTsv#testRunFromOutputCommitter runs a parent job that 
 ships the HBase dependencies.
 However, the call to TableMapReduceUtil.initCredentials(job) is missing, 
 making this test fail on a secure cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9507) Promote methods of WALActionsListener to WALObserver

2014-05-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003049#comment-14003049
 ] 

Nick Dimiduk commented on HBASE-9507:
-

You folks think we should keep this one open then? Can we not support the two 
different semantics with the same API, just noting the sync/async 
registration? The async operation still must be queued, and pushing the 
operation to the queue happens inline.

Perhaps wanting to support the two semantics with the same API is overly 
optimistic.

 Promote methods of WALActionsListener to WALObserver
 

 Key: HBASE-9507
 URL: https://issues.apache.org/jira/browse/HBASE-9507
 Project: HBase
  Issue Type: Brainstorming
  Components: Coprocessors, wal
Reporter: Nick Dimiduk
Priority: Minor
 Fix For: 0.99.0


 The interface exposed by WALObserver is quite minimal. To implement anything 
 of significance based on WAL events, WALActionsListener (at a minimum) is 
 required. This is demonstrated by the implementation of the replication 
 feature (not currently possible with coprocessors) and the corresponding 
 interface exploitation that is the [Side-Effect 
 Processor|https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep]. 
 Consider promoting the interface of WALActionsListener into WALObserver. This 
 goes a long way toward being able to refactor replication into a coprocessor. 
 This also removes the duplicate code path for listeners, because they're 
 already available via a coprocessor hook.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11206) Enable automagically tweaking memstore and blockcache sizes

2014-05-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003054#comment-14003054
 ] 

Nick Dimiduk commented on HBASE-11206:
--

You think we should go this route? Last I looked at this implementation, it 
assumes an entirely on-heap world. I think by definition it cannot support a 
mixed deployment (i.e., on-heap memstore, off-heap blockcache). It would be 
cool to support an off-heap version once we have an off-heap MSLAB.

The point is, we can either enable this by default, or BucketCache by default, 
but, as it stands today, not both.

 Enable automagically tweaking memstore and blockcache sizes
 ---

 Key: HBASE-11206
 URL: https://issues.apache.org/jira/browse/HBASE-11206
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Fix For: 0.99.0


 HBASE-5349
 Automagically tweak global memstore and block cache sizes based on workload 
 adds a nice new feature. It is off by default. Let's turn it on for 0.99.  
 Liang Xie is concerned that automatically shifting blockcache and memstore 
 sizes could wreak havoc with GC'ing in low-latency serving situations -- a 
 valid concern -- but let's enable this feature in 0.99 and see how it does. 
 We can always disable it before 1.0 if it's a problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003053#comment-14003053
 ] 

Hadoop QA commented on HBASE-11104:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12645695/HBASE-11104_trunk_v3.patch
  against trunk revision .
  ATTACHMENT ID: 12645695

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9542//console

This message is automatically generated.

 IntegrationTestImportTsv#testRunFromOutputCommitter misses credential 
 initialization
 

 Key: HBASE-11104
 URL: https://issues.apache.org/jira/browse/HBASE-11104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: 11104-v1.txt, HBASE-11104_98_v3.patch, 
 HBASE-11104_trunk.patch, HBASE-11104_trunk.patch, HBASE-11104_trunk_v2.patch, 
 HBASE-11104_trunk_v3.patch


 IntegrationTestImportTsv#testRunFromOutputCommitter runs a parent job that 
 ships the HBase dependencies.
 However, the call to TableMapReduceUtil.initCredentials(job) is missing, 
 making this test fail on a secure cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9965) hbase-0.94.13 fails to start hbase master: java.lang.RuntimeException: Failed suppression of fs shutdown hook

2014-05-20 Thread wengad (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003061#comment-14003061
 ] 

wengad commented on HBASE-9965:
---

I really did get the latest stable version, hbase-0.94.19, and, following the 
Quick Start, ran it as a standalone HBase instance. But it still failed to 
start up :(

 hbase-0.94.13 fails to start hbase master: java.lang.RuntimeException: Failed 
 suppression of fs shutdown hook
 -

 Key: HBASE-9965
 URL: https://issues.apache.org/jira/browse/HBASE-9965
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.13
 Environment: Linux x86_64 (Centos 6.4)
Reporter: Jason Vas Dias

 Having installed the latest stable hbase-0.94.13 version, and following the 
 instructions at hbase-0.94.13/docs/book/quickstart.html,  
 the hbase master fails to start and hbase is unusable, owing to this Java 
 RuntimeException occurring, as shown in the log file :
 2013-11-13 13:52:06,316 INFO 
 org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for 
 /hbase/backup-masters/jvds,52926,1384350725521 from backup master directory
 2013-11-13 13:52:06,318 INFO 
 org.apache.zookeeper.server.PrepRequestProcessor: Got user-level 
 KeeperException when processing sessionid:0x14251bbb3d4 type:delete 
 cxid:0x13 zxid:0xb txntype:-1 reqpath:n/a Error 
 Path:/hbase/backup-masters/jvds,52926,1384350725521 Error:KeeperErrorCode = 
 NoNode for /hbase/backup-masters/jvds,52926,1384350725521
 2013-11-13 13:52:06,320 WARN 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/backup-masters/jvds,52926,1384350725521 already deleted, and this is 
 not a retry
 2013-11-13 13:52:06,320 INFO 
 org.apache.hadoop.hbase.master.ActiveMasterManager: 
 Master=jvds,52926,1384350725521
 2013-11-13 13:52:06,348 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 timeout = 30
 2013-11-13 13:52:06,348 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 unassigned timeout = 18
 2013-11-13 13:52:06,348 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 resubmit threshold = 3
 2013-11-13 13:52:06,352 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 found 0 orphan tasks and 0 rescan nodes
 2013-11-13 13:52:06,385 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded 
 the native-hadoop library
 2013-11-13 13:52:06,385 ERROR 
 org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
 java.lang.RuntimeException: Failed suppression of fs shutdown hook: 
 Thread[Thread-27,5,main]
   at 
 org.apache.hadoop.hbase.regionserver.ShutdownHook.suppressHdfsShutdownHook(ShutdownHook.java:196)
   at 
 org.apache.hadoop.hbase.regionserver.ShutdownHook.install(ShutdownHook.java:83)
   at 
 org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:191)
   at 
 org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
   at 
 org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:149)
   at 
 org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
   at 
 org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2120)
 2013-11-13 13:52:06,386 ERROR org.apache.hadoop.io.nativeio.NativeIO: Unable 
 to initialize NativeIO libraries
 java.lang.NoSuchFieldError: workaroundNonThreadSafePasswdCalls
   at org.apache.hadoop.io.nativeio.NativeIO.initNative(Native Method)
   at org.apache.hadoop.io.nativeio.NativeIO.<clinit>(NativeIO.java:58)
   at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:653)
   at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
   at 
 org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:286)
   at 
 org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:385)
   at 
 org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:364)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:443)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:435)
   at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:475)
   at org.apache.hadoop.hbase.util.FSUtils.setVersion(FSUtils.java:375)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:436)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
   at 
 

[jira] [Updated] (HBASE-11200) AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException

2014-05-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11200:
---

Fix Version/s: (was: 0.98.4)

 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException
 

 Key: HBASE-11200
 URL: https://issues.apache.org/jira/browse/HBASE-11200
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.98.2
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Fix For: 0.98.3

 Attachments: HBASE-11200-0.98.patch


 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException because of 
 the following code in AsyncWriter#run():
 {code}
      }
    }
    if (!hasIdleSyncer) {
      int idx = (int) this.lastWrittenTxid % asyncSyncers.length;
      asyncSyncers[idx].setWrittenTxid(this.lastWrittenTxid);
    }
  }
 {code}
 In the above code, (int) this.lastWrittenTxid % asyncSyncers.length might become 
 negative when this.lastWrittenTxid (a long) is bigger than Integer.MAX_VALUE, 
 because the cast to int happens before the modulo. The attachment gives a quick fix.
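The cast-before-modulo precedence can be shown in isolation. Below is a minimal, standalone sketch (class and method names are mine, not HBase's) of the buggy expression shape and the obvious fix of taking the remainder on the long first:

```java
public class NegativeModDemo {

    // Buggy shape: the (int) cast binds tighter than %, so the long txid is
    // truncated to int *before* the modulo, and the result can be negative.
    static int buggyIndex(long txid, int n) {
        return (int) txid % n;
    }

    // Fixed shape: take the remainder on the long first, then cast the
    // already-small result to int.
    static int fixedIndex(long txid, int n) {
        return (int) (txid % n);
    }

    public static void main(String[] args) {
        long txid = (long) Integer.MAX_VALUE + 1; // 2147483648, just past int range
        System.out.println(buggyIndex(txid, 5));  // -3: would index out of bounds
        System.out.println(fixedIndex(txid, 5));  // 3: always in [0, n) for txid >= 0
    }
}
```

(If negative txids were possible, Math.floorMod(txid, n) would be the safer choice; here txids only grow.)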



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11200) AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException

2014-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003063#comment-14003063
 ] 

Anoop Sam John commented on HBASE-11200:


Thanks.  Ya I have just checked the trunk code after adding the comment.


 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException
 

 Key: HBASE-11200
 URL: https://issues.apache.org/jira/browse/HBASE-11200
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.98.2
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Fix For: 0.98.3

 Attachments: HBASE-11200-0.98.patch


 AsyncWriter of FSHLog might throw ArrayIndexOutOfBoundsException because of 
 the following code in AsyncWriter#run():
 {code}
      }
    }
    if (!hasIdleSyncer) {
      int idx = (int) this.lastWrittenTxid % asyncSyncers.length;
      asyncSyncers[idx].setWrittenTxid(this.lastWrittenTxid);
    }
  }
 {code}
 In the above code, (int) this.lastWrittenTxid % asyncSyncers.length might become 
 negative when this.lastWrittenTxid (a long) is bigger than Integer.MAX_VALUE, 
 because the cast to int happens before the modulo. The attachment gives a quick fix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9806) Add PerfEval tool for BlockCache

2014-05-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003073#comment-14003073
 ] 

Nick Dimiduk commented on HBASE-9806:
-

Moving to the io.hfile package makes sense. Maybe we merge this one into 
HFilePerfEval, as these are closely related?

I'd like to add support for more schema varieties and access distributions on 
PerfEval, similar to what these tools have. The whole business could use some 
refactor toward more code sharing. It all depends on the goal of the tool 
though. I like PerfEval because it's closer to what a user will see. OTOH, it 
makes it difficult for an hbase dev to get a sense for the IO subsystem pieces 
in isolation.

 Add PerfEval tool for BlockCache
 

 Key: HBASE-9806
 URL: https://issues.apache.org/jira/browse/HBASE-9806
 Project: HBase
  Issue Type: Test
  Components: Performance, test
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-9806.00.patch, HBASE-9806.01.patch, 
 HBASE-9806.02.patch, conf_20g.patch, conf_3g.patch, test1_run1_20g.pdf, 
 test1_run1_3g.pdf, test1_run2_20g.pdf, test1_run2_3g.pdf


 We have at least three different block caching layers with myriad 
 configuration settings. Let's add a tool for evaluating memory allocations 
 and configuration combinations with different access patterns.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10573) Use Netty 4

2014-05-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10573:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Use Netty 4
 ---

 Key: HBASE-10573
 URL: https://issues.apache.org/jira/browse/HBASE-10573
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.99.0, hbase-10191
Reporter: Andrew Purtell
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10573.patch, 10573.patch, 10573.v3.patch


 Pull in Netty 4 and sort out the consequences.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10573) Use Netty 4

2014-05-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003081#comment-14003081
 ] 

Nicolas Liochon commented on HBASE-10573:
-

I committed it. I was waiting for hadoop-qa feedback (you know all this code 
analysis...) but it seems it will never come, so...

Thanks for the review!

 Use Netty 4
 ---

 Key: HBASE-10573
 URL: https://issues.apache.org/jira/browse/HBASE-10573
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.99.0, hbase-10191
Reporter: Andrew Purtell
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10573.patch, 10573.patch, 10573.v3.patch


 Pull in Netty 4 and sort out the consequences.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-20 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-11108:


Status: Patch Available  (was: Open)

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well defined interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-20 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-11108:


Status: Open  (was: Patch Available)

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well defined interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-20 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-11108:


Attachment: HBASE-11108.patch

Updated patch, fixed regression bug causing TestHBaseFsck to fail.

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well defined interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11207) [types] usage documentation

2014-05-20 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-11207:


 Summary: [types] usage documentation
 Key: HBASE-11207
 URL: https://issues.apache.org/jira/browse/HBASE-11207
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Nick Dimiduk


Once we finalize the APIs and out-of-the-box types/codes, we'll need some 
user-level documentation describing the API and links off to any examples over 
{{hbase-examples}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11202) Cleanup on HRegion class

2014-05-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003141#comment-14003141
 ] 

Nicolas Liochon commented on HBASE-11202:
-

bq. +1 (if hadoopqa passes or if tests pass for you locally)
Tests are in progress, it seems ok so far...

bq. You mean Does not do?
yes, will fix on commit.

bq. Is this removal ok? The comment itself says that it is intended to be used 
from CPs. So what if someone used this already? I dont know which Jira issue 
added these 2 public methods.
Hum.. Good point. I did check, but found the wrong info. This was added by 
[~jeffreyz] it seems. Jeffrey, should we keep them? 

 Cleanup on HRegion class
 

 Key: HBASE-11202
 URL: https://issues.apache.org/jira/browse/HBASE-11202
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11202.v1.patch


 This is mostly trivial stuff
  - remove some methods not used
  - typos
  - remove some @param w/o any info
  - change the code that uses deprecated methods
 The only non-trivial change is when we get the store from a cell: instead of 
 using the map, we iterate on the key set. Likely, it would be better to have 
 a sorted array instead of a Map, as the number of stores is fixed. Could be 
 done in a later patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11144) Filter to support scan multiple row key ranges

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003150#comment-14003150
 ] 

Hadoop QA commented on HBASE-11144:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12644549/MultiRowRangeFilter2.patch
  against trunk revision .
  ATTACHMENT ID: 12644549

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+FilterProtos.MultiRowRangeFilter.Builder builder = 
FilterProtos.MultiRowRangeFilter.newBuilder();
+if (range.startRow != null) 
rangebuilder.setStartRow(HBaseZeroCopyByteString.wrap(range.startRow));
+if (range.stopRow != null) 
rangebuilder.setStopRow(HBaseZeroCopyByteString.wrap(range.stopRow));
+   * @param pbBytes A pb serialized {@link 
org.apache.hadoop.hbase.filter.ColumnRangeFilter} instance
+   * @return An instance of {@link 
org.apache.hadoop.hbase.filter.ColumnRangeFilter} made from <code>bytes</code>
+  RowKeyRange range = new 
RowKeyRange(rangeProto.hasStartRow()?rangeProto.getStartRow().toByteArray():null,
+  sb.append("There might be overlaps between rowkey ranges or the rowkey 
ranges are not arranged in ascending order.\n");
+  private void generateRows(int numberOfRows, HTable ht, byte[] family, byte[] 
qf, byte[] value) throws IOException {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9543//console

This message is automatically generated.

 Filter to support scan multiple row key ranges
 --

 Key: HBASE-11144
 URL: https://issues.apache.org/jira/browse/HBASE-11144
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Reporter: Li Jiajia
 Attachments: MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch


 HBase is quite efficient when scanning only one small row key range. If a user 
 needs to specify multiple row key ranges in one scan, the typical solutions 
 are: 1. through a FilterList, which is a list of row key Filters; 2. using an 
 SQL layer over HBase to join two tables, such as Hive, Phoenix, etc. 
 However, both solutions are inefficient. Neither can utilize the range 
 info to perform fast forwarding during the scan, which is quite time consuming. If 
 the number of ranges is quite big (e.g. millions), join is a proper solution 
 

[jira] [Commented] (HBASE-11201) Enable global procedure members to return values to procedure master

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003161#comment-14003161
 ] 

Hadoop QA commented on HBASE-11201:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12645618/HBASE-11201-trunk-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12645618

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9545//console

This message is automatically generated.

 Enable global procedure members to return values to procedure master
 

 Key: HBASE-11201
 URL: https://issues.apache.org/jira/browse/HBASE-11201
 Project: HBase
  Issue Type: Improvement
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.99.0

 Attachments: HBASE-11201-trunk-v1.patch


 Currently in the global procedure framework, the procedure coordinator can 
 send data (procedure argument) to the members when starting procedure.
 But we don't support getting data returned from the procedure members back to 
 the master.
 Similar to RPC and normal procedure/function calls, in many cases, this is a 
 useful capability.
 The infrastructure is in place. We just need to plug in the holes and make it 
 happen.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11208) Remove the hbase.hstor.blockingStoreFiles setting

2014-05-20 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-11208:
---

 Summary: Remove the hbase.hstor.blockingStoreFiles setting
 Key: HBASE-11208
 URL: https://issues.apache.org/jira/browse/HBASE-11208
 Project: HBase
  Issue Type: Brainstorming
  Components: Compaction, regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0


It's a little bit of a provocation, but the rationale is:
 - there are some bugs around the delayed flush. For example, if the periodic 
scheduler has asked for a delayed flush and we then need to flush, we will 
have to wait.
 - if the number of WAL files increases, we won't flush immediately once the 
blockingFile number has been reached. This impacts the MTTR.
 - we block writes to limit the compaction impact, but there are many cases where 
we would want to flush anyway, as the writes cannot wait.
 - this obviously leads to huge write latency peaks.

So I'm questioning this setting, it leads to multiple intricate cases, 
unpredictable write latency, and looks like a workaround for compaction 
performances. With all the work done on compaction, I think we can get rid of 
it.  A solution in the middle would be to deprecate it and to set it to a large 
value...

Any opinion before I shoot :-) ? 
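To make the gate being questioned concrete, here is a toy sketch of its effect (the class, method names, and example value are my assumptions; the real check lives in the RegionServer flush/compaction path):

```java
public class BlockingStoreFilesDemo {

    // A memstore flush is delayed while the store already holds
    // 'blockingStoreFiles' (hbase.hstore.blockingStoreFiles) files or more,
    // waiting for compactions to catch up -- which stalls writes.
    static boolean flushDelayed(int storeFileCount, int blockingStoreFiles) {
        return storeFileCount >= blockingStoreFiles;
    }

    public static void main(String[] args) {
        int blockingStoreFiles = 7; // example value, not necessarily the default

        System.out.println(flushDelayed(9, blockingStoreFiles)); // true: flush waits
        // The "deprecate it and set it to a large value" option effectively
        // disables the gate:
        System.out.println(flushDelayed(9, Integer.MAX_VALUE));  // false
    }
}
```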






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11209) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4

2014-05-20 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-11209:
---

 Summary: Increase the default value for 
hbase.hregion.memstore.block.multipler from 2 to 4
 Key: HBASE-11209
 URL: https://issues.apache.org/jira/browse/HBASE-11209
 Project: HBase
  Issue Type: Brainstorming
  Components: regionserver
Affects Versions: 0.98.2, 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, 0.98.3


On a YCSB test, I saw a 33% performance increase, both on the max latency and 
on the throughput. I'm convinced enough that this value is better that I think 
it makes sense to change it on 0.98 as well.

More fundamentally, but outside of the scope of this patch, I think this 
parameter should be changed to something at the region server level: today, we 
have:
- global memstore check: if we're over 40%, we flush the biggest memstore
- local: no more than 2 (proposed: 4) times the memstore flush size per region.

But if we have enough memory and a spike on a region, there is no reason for 
not taking the write.
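The two checks described above can be sketched as follows (a simplified model with assumed names and sizes, not the actual HRegion/RegionServer code):

```java
public class MemstoreBlockDemo {

    // Global check: flush the biggest memstore once total memstore usage
    // crosses the global fraction of the heap (e.g. 40%).
    static boolean globalFlushNeeded(long totalMemstore, long heap, double fraction) {
        return totalMemstore > (long) (heap * fraction);
    }

    // Local check: block writes to a region once its memstore exceeds
    // flushSize * hbase.hregion.memstore.block.multiplier.
    static boolean regionWritesBlocked(long regionMemstore, long flushSize, int multiplier) {
        return regionMemstore > flushSize * (long) multiplier;
    }

    public static void main(String[] args) {
        long flushSize = 128L * 1024 * 1024; // assumed flush size
        long spike = 3 * flushSize;          // one region absorbing a write spike

        // With the old multiplier of 2 the spike blocks writes; with 4 it fits.
        System.out.println(regionWritesBlocked(spike, flushSize, 2)); // true
        System.out.println(regionWritesBlocked(spike, flushSize, 4)); // false
    }
}
```

This illustrates the comment's point: even when the global budget has headroom, the per-region multiplier alone can reject the write.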






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10417) index is not incremented in PutSortReducer#reduce()

2014-05-20 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-10417:


Attachment: HBASE-10417.patch

 index is not incremented in PutSortReducer#reduce()
 ---

 Key: HBASE-10417
 URL: https://issues.apache.org/jira/browse/HBASE-10417
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor
 Attachments: HBASE-10417.patch


 Starting at line 76:
 {code}
   int index = 0;
   for (KeyValue kv : map) {
     context.write(row, kv);
     if (index > 0 && index % 100 == 0)
       context.setStatus("Wrote " + index);
 {code}
 index is a variable inside the loop that is never incremented.
 The condition index > 0 can therefore never be true.
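A minimal standalone model of the loop (hypothetical names, not the actual reducer) shows the observable effect of the missing increment:

```java
import java.util.Collections;
import java.util.List;

public class IndexLoopDemo {

    // Counts how many setStatus-style updates the loop would emit over
    // 'rows' items; 'fixed' toggles the missing index++ from HBASE-10417.
    static int statusUpdates(List<String> rows, boolean fixed) {
        int updates = 0;
        int index = 0;
        for (String kv : rows) {
            // context.write(row, kv) would happen here
            if (index > 0 && index % 100 == 0) {
                updates++; // stands in for context.setStatus("Wrote " + index)
            }
            if (fixed) {
                index++;
            }
        }
        return updates;
    }

    public static void main(String[] args) {
        List<String> rows = Collections.nCopies(250, "kv");
        System.out.println(statusUpdates(rows, false)); // 0: index stays 0 forever
        System.out.println(statusUpdates(rows, true));  // 2: at index 100 and 200
    }
}
```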



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-05-20 Thread Eric Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003278#comment-14003278
 ] 

Eric Charles commented on HBASE-10336:
--

Thx Ted. I can't find the review request you created... can you post the URL? 
Maybe GitHub is also useful for review: 
https://github.com/datalayer/hbase/compare/HBASE-6581...HBASE-10336

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: 10336-v10.txt, HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch, 
 HBASE-10336-6.patch, HBASE-10336-7.patch, HBASE-10336-8.patch, 
 HBASE-10336-9.patch, HBASE-10569-10.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-05-20 Thread Eric Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003279#comment-14003279
 ] 

Eric Charles commented on HBASE-10336:
--

[~te...@apache.org] The github push contains the move of GenericTestUtils and 
TimedOutTestsListener you proposed. Any other comment?

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: 10336-v10.txt, HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch, 
 HBASE-10336-6.patch, HBASE-10336-7.patch, HBASE-10336-8.patch, 
 HBASE-10336-9.patch, HBASE-10569-10.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-05-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003313#comment-14003313
 ] 

Ted Yu commented on HBASE-10336:


@Eric:
You can create review request using the file I attached.
The creator of review request is able to upload new patches onto the same 
review request. That was why I didn't post the request I created.

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: 10336-v10.txt, HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch, 
 HBASE-10336-6.patch, HBASE-10336-7.patch, HBASE-10336-8.patch, 
 HBASE-10336-9.patch, HBASE-10569-10.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: HBASE-10933-0.94-v1.patch

Patch for 0.94.20 version.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch


 If the .regioninfo file is missing from an HBase region, then running hbck 
 repair or hbck -fixHdfsOrphans
 does not resolve the problem; instead it throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because the HBaseFsck method 
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 initializes tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 there is a check that skips orphan regions, so a table consisting only of 
 orphans is never added to the SortedMap<String, TableInfo> tablesInfo, and 
 the later lookup returns null, causing the NullPointerException.
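A minimal, self-contained sketch of the kind of null-guard that avoids this NPE (illustrative only, not the actual HBaseFsck patch; the `TableInfo` placeholder and `getOrCreateTableInfo` helper are assumptions):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class AdoptOrphanSketch {
  static class TableInfo { /* placeholder for HBaseFsck.TableInfo */ }

  // Returns the TableInfo for the orphan's table; when loadHdfsRegionInfos()
  // skipped the table because all its regions were orphans, the map lookup
  // returns null, so a fresh entry is created instead of dereferencing null.
  static TableInfo getOrCreateTableInfo(SortedMap<String, TableInfo> tablesInfo,
                                        String tableName) {
    TableInfo tableInfo = tablesInfo.get(tableName);
    if (tableInfo == null) {
      tableInfo = new TableInfo();
      tablesInfo.put(tableName, tableInfo);
    }
    return tableInfo;
  }

  public static void main(String[] args) {
    SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();
    TableInfo ti = getOrCreateTableInfo(tablesInfo, "TestHdfsOrphans1");
    if (ti == null || tablesInfo.get("TestHdfsOrphans1") != ti) {
      throw new AssertionError("guard failed");
    }
    System.out.println("ok");
  }
}
```

The actual patch may repair the orphan differently; the point is only that `tablesInfo.get(tableName)` must be checked for null before use.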



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: TestResults-0.94.txt

Test result for 0.94 version

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: HBASE-10933-trunk-v1.patch

Patch also for the trunk version, solving the issue of a single region with a 
single KV generating a wrong .regioninfo file with the same start and end key. 
Also added JUnit TCs for the tests.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003343#comment-14003343
 ] 

Hadoop QA commented on HBASE-11108:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645774/HBASE-11108.patch
  against trunk revision .
  ATTACHMENT ID: 12645774

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 115 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9546//console

This message is automatically generated.

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well-defined interface.
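A hypothetical sketch of the interface-extraction pattern described above (the names `TableStateManager` and `ZKTableStateManager` are illustrative, not the actual HBase API): callers program against an interface, and the ZooKeeper-backed class becomes one implementation behind it.

```java
import java.util.HashMap;
import java.util.Map;

// Callers depend on this interface instead of a concrete ZK class.
interface TableStateManager {
  void setTableState(String table, String state);
  String getTableState(String table);
}

// The ZK-backed implementation stands behind the interface; here an
// in-memory map stands in for the ZooKeeper znode reads and writes.
class ZKTableStateManager implements TableStateManager {
  private final Map<String, String> states = new HashMap<>();
  public void setTableState(String table, String state) { states.put(table, state); }
  public String getTableState(String table) { return states.get(table); }
}

public class ZKTableSplitSketch {
  public static void main(String[] args) {
    TableStateManager mgr = new ZKTableStateManager();
    mgr.setTableState("t1", "DISABLED");
    if (!"DISABLED".equals(mgr.getTableState("t1"))) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```

With this split, a non-ZK consensus implementation can later replace `ZKTableStateManager` without touching call sites.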



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10936) Add zeroByte encoding test

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003351#comment-14003351
 ] 

Hudson commented on HBASE-10936:


FAILURE: Integrated in hbase-0.96 #399 (See 
[https://builds.apache.org/job/hbase-0.96/399/])
HBASE-10936 Add zeroByte encoding test. (larsh: rev 1594437)
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java


 Add zeroByte encoding test
 --

 Key: HBASE-10936
 URL: https://issues.apache.org/jira/browse/HBASE-10936
 Project: HBase
  Issue Type: Sub-task
  Components: test
Reporter: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.3, 0.94.20

 Attachments: 10936-0.94.txt, 10936-0.96.txt, 10936-0.98.txt






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11169) nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003347#comment-14003347
 ] 

Hudson commented on HBASE-11169:


FAILURE: Integrated in hbase-0.96 #399 (See 
[https://builds.apache.org/job/hbase-0.96/399/])
HBASE-11169 nit: fix incorrect javadoc in OrderedBytes about BlobVar and 
BlobCopy (jmhsieh: rev 1595391)
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java


 nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy
 -

 Key: HBASE-11169
 URL: https://issues.apache.org/jira/browse/HBASE-11169
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.95.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Trivial
 Fix For: 0.99.0, 0.96.3, 0.98.3

 Attachments: HBASE-11169.patch


 Trivial error in javadoc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10561) Forward port: HBASE-10212 New rpc metric: number of active handler

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003349#comment-14003349
 ] 

Hudson commented on HBASE-10561:


FAILURE: Integrated in hbase-0.96 #399 (See 
[https://builds.apache.org/job/hbase-0.96/399/])
HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active handler 
(liangxie: rev 1594131)
* 
/hbase/branches/0.96/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
/hbase/branches/0.96/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapper.java
* 
/hbase/branches/0.96/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
/hbase/branches/0.96/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperStub.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java


 Forward port: HBASE-10212 New rpc metric: number of active handler
 --

 Key: HBASE-10561
 URL: https://issues.apache.org/jira/browse/HBASE-10561
 Project: HBase
  Issue Type: Sub-task
  Components: IPC/RPC
Reporter: Lars Hofhansl
Assignee: Liang Xie
 Fix For: 0.99.0, 0.96.3, 0.98.3

 Attachments: HBASE-10561.txt


 The metrics implementation has changed a lot in 0.96.
 Forward port HBASE-10212 to 0.96 and later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Attachment: TestResults-trunk.txt

Server Test results for trunk version.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11134) Add a -list-snapshots option to SnapshotInfo

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003350#comment-14003350
 ] 

Hudson commented on HBASE-11134:


FAILURE: Integrated in hbase-0.96 #399 (See 
[https://builds.apache.org/job/hbase-0.96/399/])
HBASE-11134 Add a -list-snapshots option to SnapshotInfo (mbertozzi: rev 
1594856)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Add a -list-snapshots option to SnapshotInfo
 

 Key: HBASE-11134
 URL: https://issues.apache.org/jira/browse/HBASE-11134
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.99.0, 0.96.3, 0.94.20, 0.98.3

 Attachments: HBASE-11134-v0.patch, HBASE-11134-v1.patch


 Add a -list-snapshots option to SnapshotInfo to show all the snapshots 
 available. Also add a -remote-dir option to simplify the usage of 
 SnapshotInfo in case the snapshot dir is not that of the current HBase 
 cluster.
 {code}
 $ hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -list-snapshots
 SNAPSHOT | CREATION TIME| TABLE NAME
 foo  |  2014-05-07T22:40:13 | testtb
 bar  |  2014-05-07T22:40:16 | testtb
 $ hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -remote-dir 
 file:///backup/ -snapshot my_local_snapshot
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10212) New rpc metric: number of active handler

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003348#comment-14003348
 ] 

Hudson commented on HBASE-10212:


FAILURE: Integrated in hbase-0.96 #399 (See 
[https://builds.apache.org/job/hbase-0.96/399/])
HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active handler 
(liangxie: rev 1594131)
* 
/hbase/branches/0.96/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
/hbase/branches/0.96/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapper.java
* 
/hbase/branches/0.96/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
/hbase/branches/0.96/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperStub.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java


 New rpc metric: number of active handler
 

 Key: HBASE-10212
 URL: https://issues.apache.org/jira/browse/HBASE-10212
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.94.17

 Attachments: hbase-10212.patch


 The attached patch adds a new metric: the number of active handler threads. We 
 found this is a good metric for measuring how busy a server is. If this number 
 is too high (compared to the total number of handlers), the server risks 
 filling its call queue.
 We used to monitor # reads or # writes. However, we found this often produced 
 false alerts, because a read touching HDFS produces a much higher workload 
 than a block-cached read.
 The attached patch is based on our internal 0.94 branch, but I think it is 
 pretty easy to rebase to other branches if you think it is useful.
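The mechanics of such a metric can be sketched as follows (a simplified illustration, not the patch itself; `ActiveHandlerGauge` and `handle` are assumed names): each handler increments a shared counter while servicing a call and decrements it when done, so a gauge can report active handlers against the total handler count.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ActiveHandlerGauge {
  private final AtomicInteger active = new AtomicInteger();

  // Wraps one RPC call; in a real server this would sit in the handler loop.
  public void handle(Runnable call) {
    active.incrementAndGet();
    try {
      call.run();
    } finally {
      // Decrement even if the call throws, so the gauge never drifts.
      active.decrementAndGet();
    }
  }

  public int activeCount() { return active.get(); }

  public static void main(String[] args) {
    ActiveHandlerGauge gauge = new ActiveHandlerGauge();
    gauge.handle(() -> {
      if (gauge.activeCount() != 1) throw new AssertionError();
    });
    if (gauge.activeCount() != 0) throw new AssertionError();
    System.out.println("ok");
  }
}
```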



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6990) Pretty print TTL

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003346#comment-14003346
 ] 

Hudson commented on HBASE-6990:
---

FAILURE: Integrated in hbase-0.96 #399 (See 
[https://builds.apache.org/job/hbase-0.96/399/])
HBASE-6990 ADDENDUM (jmhsieh: rev 1595410)
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java
HBASE-6990 pretty print TTL (Esteban Gutierrez) (jmhsieh: rev 1595395)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java


 Pretty print TTL
 

 Key: HBASE-6990
 URL: https://issues.apache.org/jira/browse/HBASE-6990
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Reporter: Jean-Daniel Cryans
Assignee: Esteban Gutierrez
Priority: Minor
 Fix For: 0.99.0, 0.96.3, 0.98.3

 Attachments: HBASE-6990.v0.patch, HBASE-6990.v1.patch, 
 HBASE-6990.v2.patch, HBASE-6990.v3.patch, HBASE-6990.v4.patch


 I've seen a lot of users getting confused by the TTL configuration, and I 
 think that if we just pretty printed it, that would solve most of the issues. 
 For example, let's say a user wanted to set a TTL of 90 days. That would be 
 7776000 seconds. But let's say it was typo'd to 77760000 instead: that gives 
 you 900 days!
 So when we print the TTL we could do something like "x days, x hours, x 
 minutes, x seconds (real_ttl_value)". This would also help people who use ms 
 instead of seconds, as they would see really big values in there.
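A rough sketch of the proposed pretty-printing (the exact output format and the `humanReadableTtl` name are illustrative, not the committed implementation): breaking a TTL in seconds into days/hours/minutes/seconds makes the 900-day typo stand out next to the intended 90 days.

```java
public class TtlPrettyPrinter {
  static String humanReadableTtl(long ttlSeconds) {
    long days = ttlSeconds / 86400;        // 86400 seconds per day
    long rem = ttlSeconds % 86400;
    long hours = rem / 3600;
    rem %= 3600;
    long minutes = rem / 60;
    long seconds = rem % 60;
    return String.format("%d DAYS %d HOURS %d MINUTES %d SECONDS (%d)",
        days, hours, minutes, seconds, ttlSeconds);
  }

  public static void main(String[] args) {
    // The intended 90-day TTL vs. the extra-zero typo.
    System.out.println(humanReadableTtl(7776000L));   // 90 DAYS ... (7776000)
    System.out.println(humanReadableTtl(77760000L));  // 900 DAYS ... (77760000)
  }
}
```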



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003359#comment-14003359
 ] 

Kashif J S commented on HBASE-10933:


For the 0.94.* versions the patch fixes 4 issues:
1. The NullPointerException as reported above.
2. When a table has 1 region and zero KVs, then if .regioninfo is deleted, 
running hbck will corrupt the table and TableNotFoundException will be thrown 
for any subsequent operation on the table.
3. When a table contains 1 region and only 1 KV, then if .regioninfo is 
missing and hbck repair is run, an invalid .regioninfo with the same start and 
end key is created.
4. A table with multiple regions where all the region dirs are missing (see 
the testNoHdfsTable modification).

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 if we regioninfo file is not existing in hbase region then if we run hbck 
 repair or hbck -fixHdfsOrphans
 then it is not able to resolve this problem it throws null pointer exception
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because in the HBaseFsck class, in
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
    if (hbi.getHdfsHRI() == null) {
      // was an orphan
      continue;
    }
 {code}
 we skip any region that is an orphan, so its table is never added to 
 SortedMap<String, TableInfo> tablesInfo,
 and the later lookup returns null, causing the NullPointerException.
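A minimal, self-contained sketch of this failure mode and the obvious guard (hypothetical class name; a plain java.util map with String values stands in for hbck's tablesInfo map of TableInfo objects):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch only: loadHdfsRegionInfos() skipped orphan regions, so the orphan's
// table never made it into tablesInfo and the lookup below returns null.
public class OrphanLookupSketch {
    public static void main(String[] args) {
        SortedMap<String, String> tablesInfo = new TreeMap<>();
        String tableInfo = tablesInfo.get("TestHdfsOrphans1");
        if (tableInfo == null) {
            // Guarding here avoids the NullPointerException; a real fix could
            // instead register the orphan's table before adoption.
            System.out.println("table not in tablesInfo; handle orphan explicitly");
        }
    }
}
```

Guarding the lookup (or registering the orphan's table before adoption) is one way the NPE could be avoided; the actual patch may take a different approach.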





[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003362#comment-14003362
 ] 

Kashif J S commented on HBASE-10933:


For the TRUNK version, the patch fixes the following:
1. When a table has 1 region and zero KVs, then if the .regioninfo is deleted, running 
hbck will corrupt the table and a TableNotFoundException will be thrown for any 
subsequent operation on the table.
3. JUnit TCs have been added for various scenarios.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file is missing from an HBase region and we run hbck 
 repair or hbck -fixHdfsOrphans,
 then hbck fails to resolve the problem and instead throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because in the HBaseFsck class, in
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
    if (hbi.getHdfsHRI() == null) {
      // was an orphan
      continue;
    }
 {code}
 we skip any region that is an orphan, so its table is never added to 
 SortedMap<String, TableInfo> tablesInfo,
 and the later lookup returns null, causing the NullPointerException.





[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003365#comment-14003365
 ] 

Kashif J S commented on HBASE-10933:


For the TRUNK version, the patch fixes the following:
1. When a table contains 1 region and only 1 KV, then if the .regioninfo is missing 
and hbck repair is run, it will create an invalid .regioninfo with the same start and 
end key.
2. JUnit TCs have been added for various scenarios.


 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file is missing from an HBase region and we run hbck 
 repair or hbck -fixHdfsOrphans,
 then hbck fails to resolve the problem and instead throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because in the HBaseFsck class, in
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
    if (hbi.getHdfsHRI() == null) {
      // was an orphan
      continue;
    }
 {code}
 we skip any region that is an orphan, so its table is never added to 
 SortedMap<String, TableInfo> tablesInfo,
 and the later lookup returns null, causing the NullPointerException.





[jira] [Updated] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10933:
---

Fix Version/s: 0.94.21
   0.99.0
   Status: Patch Available  (was: In Progress)

Please review the patch. The trunk patch may also be applicable to the 0.96 
version, IMHO.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.98.2, 0.94.16
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.99.0, 0.94.21

 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file is missing from an HBase region and we run hbck 
 repair or hbck -fixHdfsOrphans,
 then hbck fails to resolve the problem and instead throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because in the HBaseFsck class, in
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
    if (hbi.getHdfsHRI() == null) {
      // was an orphan
      continue;
    }
 {code}
 we skip any region that is an orphan, so its table is never added to 
 SortedMap<String, TableInfo> tablesInfo,
 and the later lookup returns null, causing the NullPointerException.





[jira] [Work started] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-10933 started by Kashif J S.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file is missing from an HBase region and we run hbck 
 repair or hbck -fixHdfsOrphans,
 then hbck fails to resolve the problem and instead throws a NullPointerException:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because in the HBaseFsck class, in
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
    if (hbi.getHdfsHRI() == null) {
      // was an orphan
      continue;
    }
 {code}
 we skip any region that is an orphan, so its table is never added to 
 SortedMap<String, TableInfo> tablesInfo,
 and the later lookup returns null, causing the NullPointerException.





[jira] [Commented] (HBASE-9857) Blockcache prefetch option

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003338#comment-14003338
 ] 

Hadoop QA commented on HBASE-9857:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12645671/HBASE-9857-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12645671

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 26 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  public HFileReaderV3(final Path path, FixedFileTrailer trailer, final 
FSDataInputStreamWrapper fsdis,
+  
family.setPrefetchBlocksOnOpen(JBoolean.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::PREFETCH_BLOCKS_ON_OPEN)))
 if 
arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::PREFETCH_BLOCKS_ON_OPEN)

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9547//console

This message is automatically generated.

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache, as rapidly after region open as is 
 reasonable, with all the data and index blocks of (presumably also in-memory) 
 table data, without counting those block loads as cache misses. Great for fast reads and 
 keeping the cache hit ratio high. Can tune the IO impact versus time until 
 all data blocks are in cache. Works a bit like CompactSplitThread. Makes some 
 effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if as a percentage of blockcache it is large, this 
 is not a good idea, will just blow out the cache and trigger a lot of useless 
 GC activity. Might be useful as an expert tuning option though. Or not.





[jira] [Updated] (HBASE-10417) index is not incremented in PutSortReducer#reduce()

2014-05-20 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-10417:


Status: Patch Available  (was: Open)

 index is not incremented in PutSortReducer#reduce()
 ---

 Key: HBASE-10417
 URL: https://issues.apache.org/jira/browse/HBASE-10417
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor
 Attachments: HBASE-10417.patch


 Starting at line 76:
 {code}
   int index = 0;
   for (KeyValue kv : map) {
 context.write(row, kv);
  if (index > 0 && index % 100 == 0)
    context.setStatus("Wrote " + index);
  {code}
  index is never incremented inside the loop,
  so the condition index > 0 can never be true.
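A self-contained sketch of the missing increment (hypothetical class; a counter stands in for context.setStatus and an int loop stands in for iterating the KeyValue set): with index++ restored, the progress condition fires every 100 writes.

```java
public class IndexIncrementSketch {
    public static void main(String[] args) {
        int statusUpdates = 0;
        int index = 0;
        for (int kv = 0; kv < 250; kv++) {
            // context.write(row, kv) would go here
            if (index > 0 && index % 100 == 0) {
                statusUpdates++; // stands in for context.setStatus("Wrote " + index)
            }
            index++; // the increment missing from PutSortReducer#reduce()
        }
        System.out.println(statusUpdates); // fires at index 100 and 200
    }
}
```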





[jira] [Updated] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11104:
---

Fix Version/s: 0.98.3
   0.99.0
 Hadoop Flags: Reviewed

 IntegrationTestImportTsv#testRunFromOutputCommitter misses credential 
 initialization
 

 Key: HBASE-11104
 URL: https://issues.apache.org/jira/browse/HBASE-11104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11104-v1.txt, HBASE-11104_98_v3.patch, 
 HBASE-11104_trunk.patch, HBASE-11104_trunk.patch, HBASE-11104_trunk_v2.patch, 
 HBASE-11104_trunk_v3.patch


 IntegrationTestImportTsv#testRunFromOutputCommitter runs a parent job that ships 
 the HBase dependencies.
 However, the call to TableMapReduceUtil.initCredentials(job) is missing, making 
 this test fail on a secure cluster.





[jira] [Commented] (HBASE-11145) Issue with HLog sync

2014-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003417#comment-14003417
 ] 

stack commented on HBASE-11145:
---

[~anoop.hbase] Thanks for the review.  Let me try your suggestion.  
cleanupOutstandingSyncsOnException requires a certain set of local variables to 
be in scope... which would mess up my neat little layout... and my thought is 
that if we get UNEXPECTED we are just hosed but let me look again.  Thanks 
Anoop.

 Issue with HLog sync
 

 Key: HBASE-11145
 URL: https://issues.apache.org/jira/browse/HBASE-11145
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: stack
Priority: Critical
 Fix For: 0.99.0

 Attachments: 11145.txt


 Got the exception log below during a write-heavy test:
 {code}
 2014-05-07 11:29:56,417 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.IllegalStateException: Queue full
  at java.util.AbstractQueue.add(Unknown Source)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.offer(FSHLog.java:1227)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1878)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,418 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 5
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,419 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 6
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,419 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 7
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
  {code}
 In FSHLog$SyncRunner.offer we call BlockingQueue.add(), which throws an 
 exception when the queue is full. The problem is that in the catch block shown 
 below we do not do any cleanup:
 {code}
 this.syncRunners[index].offer(sequence, this.syncFutures, 
 this.syncFuturesCount);
 attainSafePoint(sequence);
 this.syncFuturesCount = 0;
   } catch (Throwable t) {
 LOG.error(UNEXPECTED!!!, t);
   }
 {code}
 syncFuturesCount is not reset to 0, so the subsequent onEvent() 
 handling throws an ArrayIndexOutOfBoundsException.
 I think we should do the following:
 1. Handle the exception and call cleanupOutstandingSyncsOnException(), as in 
 the other exception-handling cases.
 2. Use offer() instead of BlockingQueue.add() (?)
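A self-contained sketch of suggestion 2 (plain java.util.concurrent types; syncFuturesCount is a hypothetical stand-in for the FSHLog field): offer() reports a full queue via its return value instead of throwing, so the cleanup and reset can run deterministically.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueOfferSketch {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1);
        queue.offer(1);           // fills the single-slot queue
        int syncFuturesCount = 5; // pending syncs awaiting handoff
        if (!queue.offer(2)) {    // add() would throw IllegalStateException here
            // cleanupOutstandingSyncsOnException() would run here, then:
            syncFuturesCount = 0; // reset so the next onEvent() starts clean
        }
        System.out.println(syncFuturesCount);
    }
}
```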





[jira] [Commented] (HBASE-11188) Inconsistent configuration for SchemaMetrics is always shown

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003428#comment-14003428
 ] 

Hudson commented on HBASE-11188:


SUCCESS: Integrated in HBase-0.94-security #484 (See 
[https://builds.apache.org/job/HBase-0.94-security/484/])
HBASE-11188 Inconsistent configuration for SchemaMetrics is always shown 
(jdcryans: rev 1595266)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java


 Inconsistent configuration for SchemaMetrics is always shown
 --

 Key: HBASE-11188
 URL: https://issues.apache.org/jira/browse/HBASE-11188
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.94.19
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.94.20

 Attachments: HBASE-11188-0.94-v2.patch, HBASE-11188-0.94.patch


 Some users have been complaining about this message:
 {noformat}
 ERROR org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: 
 Inconsistent configuration. Previous configuration for using table name in 
 metrics: true, new configuration: false
 {noformat}
 The interesting thing is that we see it with default configurations, which 
 made me think that some code path must have been passing the wrong thing. I 
 found that if SchemaConfigured is passed a null Configuration in its 
 constructor that it will then pass null to SchemaMetrics#configureGlobally 
 which will interpret useTableName as being false:
 {code}
  public static void configureGlobally(Configuration conf) {
    if (conf != null) {
      final boolean useTableNameNew =
          conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY, false);
      setUseTableName(useTableNameNew);
    } else {
      setUseTableName(false);
    }
  }
 {code}
 It should be set to true since that's the new default, meaning we missed it 
 in HBASE-5671.
 I found one code path that passes a null configuration, StoreFile.Reader 
 extends SchemaConfigured and uses the constructor that only passes a Path, so 
 the Configuration is set to null.
 I'm planning on just passing true instead of false, fixing the problem for 
 almost everyone (those that disable this feature will get the error message). 
 IMO it's not worth more effort since it's a 0.94-only problem and it's not 
 actually doing anything bad.
 I'm closing both HBASE-10990 and HBASE-10946 as duplicates.
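A self-contained sketch of the proposed one-line fix (a Map-backed stub stands in for Hadoop's Configuration; the key string mirrors SHOW_TABLE_NAME_CONF_KEY but is an assumption here): both branches now default to true, matching the 0.94 default, which silences the spurious warning.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigureGloballySketch {
    static boolean useTableName;

    // Stub of SchemaMetrics#configureGlobally with true as the default.
    static void configureGlobally(Map<String, String> conf) {
        if (conf != null) {
            useTableName = Boolean.parseBoolean(
                conf.getOrDefault("hbase.metrics.showTableName", "true"));
        } else {
            useTableName = true; // previously false, triggering the warning
        }
    }

    public static void main(String[] args) {
        configureGlobally(null);
        System.out.println(useTableName);
        configureGlobally(new HashMap<>());
        System.out.println(useTableName);
    }
}
```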





[jira] [Commented] (HBASE-11188) Inconsistent configuration for SchemaMetrics is always shown

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003452#comment-14003452
 ] 

Hudson commented on HBASE-11188:


SUCCESS: Integrated in HBase-0.94 #1368 (See 
[https://builds.apache.org/job/HBase-0.94/1368/])
HBASE-11188 Inconsistent configuration for SchemaMetrics is always shown 
(jdcryans: rev 1595266)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java


 Inconsistent configuration for SchemaMetrics is always shown
 --

 Key: HBASE-11188
 URL: https://issues.apache.org/jira/browse/HBASE-11188
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.94.19
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.94.20

 Attachments: HBASE-11188-0.94-v2.patch, HBASE-11188-0.94.patch


 Some users have been complaining about this message:
 {noformat}
 ERROR org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: 
 Inconsistent configuration. Previous configuration for using table name in 
 metrics: true, new configuration: false
 {noformat}
 The interesting thing is that we see it with default configurations, which 
 made me think that some code path must have been passing the wrong thing. I 
 found that if SchemaConfigured is passed a null Configuration in its 
 constructor that it will then pass null to SchemaMetrics#configureGlobally 
 which will interpret useTableName as being false:
 {code}
   public static void configureGlobally(Configuration conf) {
     if (conf != null) {
       final boolean useTableNameNew =
           conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY, false);
       setUseTableName(useTableNameNew);
     } else {
       setUseTableName(false);
     }
   }
 {code}
 It should be set to true since that's the new default, meaning we missed it 
 in HBASE-5671.
 I found one code path that passes a null configuration: StoreFile.Reader 
 extends SchemaConfigured and uses the constructor that only takes a Path, so 
 the Configuration is set to null.
 I'm planning on just passing true instead of false, fixing the problem for 
 almost everyone (those who disable this feature will get the error message). 
 IMO it's not worth more effort since it's a 0.94-only problem and it's not 
 actually doing anything bad.
 I'm closing both HBASE-10990 and HBASE-10946 as duplicates.





[jira] [Commented] (HBASE-11208) Remove the hbase.hstor.blockingStoreFiles setting

2014-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003455#comment-14003455
 ] 

stack commented on HBASE-11208:
---

This config was always a hack.  I'd be game for bringing on a new context -- 
i.e. one where this config is gone, or at least has been way upped -- but would 
be interested in thinking around how we'd then prevent the condition where a 
region can get filled with hundreds of hfiles.  Client push-back?

 Remove the hbase.hstor.blockingStoreFiles setting
 -

 Key: HBASE-11208
 URL: https://issues.apache.org/jira/browse/HBASE-11208
 Project: HBase
  Issue Type: Brainstorming
  Components: Compaction, regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0


 It's a little bit of a provocation, but the rationale is:
  - there are some bugs around the delayed flush. For example, if the periodic 
 scheduler has asked for a delayed flush and we need to flush, we will have to 
 wait.
  - if the number of WAL files increases, we won't flush immediately if the 
 blockingFile number has been reached. This impacts the MTTR.
  - we don't write, to limit the compaction impact, but there are many cases 
 where we would want to flush anyway, as the writes cannot wait.
  - this obviously leads to huge write latency peaks.
 So I'm questioning this setting: it leads to multiple intricate cases and 
 unpredictable write latency, and looks like a workaround for compaction 
 performance. With all the work done on compaction, I think we can get rid of 
 it.  A middle-ground solution would be to deprecate it and set it to a 
 large value...
 Any opinion before I shoot :-) ? 
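If the middle-ground option wins out, the deprecation could amount to a raised value in hbase-site.xml. The value 100 below is illustrative, not from this issue, and note that the property is conventionally spelled hbase.hstore.blockingStoreFiles:

```xml
<!-- Illustrative only: keep the setting but raise it high enough that it
     rarely blocks writes while compactions catch up. -->
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>100</value>
</property>
```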





[jira] [Commented] (HBASE-11209) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003457#comment-14003457
 ] 

Andrew Purtell commented on HBASE-11209:


+1 for 0.98

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4
 -

 Key: HBASE-11209
 URL: https://issues.apache.org/jira/browse/HBASE-11209
 Project: HBase
  Issue Type: Brainstorming
  Components: regionserver
Affects Versions: 0.99.0, 0.98.2
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, 0.98.3


 On a YCSB test, I saw a 33% performance increase, both on max latency and 
 on throughput. I'm convinced enough that this value is better that I 
 think it makes sense to change it on 0.98 as well.
 More fundamentally, but outside the scope of this patch, I think this 
 parameter should be changed to something at the region server level. Today 
 we have:
 - global memstore check: if we're over 40%, we flush the biggest memstore
 - local: no more than 2 (proposed: 4) times the memstore size per region.
 But if we have enough memory and a spike on a region, there is no reason for 
 not taking the write.
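For reference, the proposed change corresponds to overriding this property in hbase-site.xml (the issue title's spelling aside, the actual property name is hbase.hregion.memstore.block.multiplier):

```xml
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value>
</property>
```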





[jira] [Commented] (HBASE-11209) Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4

2014-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003443#comment-14003443
 ] 

stack commented on HBASE-11209:
---

+1

 Increase the default value for hbase.hregion.memstore.block.multipler from 2 
 to 4
 -

 Key: HBASE-11209
 URL: https://issues.apache.org/jira/browse/HBASE-11209
 Project: HBase
  Issue Type: Brainstorming
  Components: regionserver
Affects Versions: 0.99.0, 0.98.2
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, 0.98.3


 On a YCSB test, I saw a 33% performance increase, both on max latency and 
 on throughput. I'm convinced enough that this value is better that I 
 think it makes sense to change it on 0.98 as well.
 More fundamentally, but outside the scope of this patch, I think this 
 parameter should be changed to something at the region server level. Today 
 we have:
 - global memstore check: if we're over 40%, we flush the biggest memstore
 - local: no more than 2 (proposed: 4) times the memstore size per region.
 But if we have enough memory and a spike on a region, there is no reason for 
 not taking the write.





[jira] [Commented] (HBASE-11208) Remove the hbase.hstor.blockingStoreFiles setting

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003484#comment-14003484
 ] 

Andrew Purtell commented on HBASE-11208:


bq. Client push-back

HBASE-5162

 Remove the hbase.hstor.blockingStoreFiles setting
 -

 Key: HBASE-11208
 URL: https://issues.apache.org/jira/browse/HBASE-11208
 Project: HBase
  Issue Type: Brainstorming
  Components: Compaction, regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0


 It's a little bit of a provocation, but the rationale is:
  - there are some bugs around the delayed flush. For example, if the periodic 
 scheduler has asked for a delayed flush and we need to flush, we will have to 
 wait.
  - if the number of WAL files increases, we won't flush immediately if the 
 blockingFile number has been reached. This impacts the MTTR.
  - we don't write, to limit the compaction impact, but there are many cases 
 where we would want to flush anyway, as the writes cannot wait.
  - this obviously leads to huge write latency peaks.
 So I'm questioning this setting: it leads to multiple intricate cases and 
 unpredictable write latency, and looks like a workaround for compaction 
 performance. With all the work done on compaction, I think we can get rid of 
 it.  A middle-ground solution would be to deprecate it and set it to a 
 large value...
 Any opinion before I shoot :-) ? 





[jira] [Updated] (HBASE-11203) Clean up javadoc and findbugs warnings in trunk

2014-05-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11203:
--

Attachment: 11203v2.txt

See if hadoopqa will run this.

 Clean up javadoc and findbugs warnings in trunk
 ---

 Key: HBASE-11203
 URL: https://issues.apache.org/jira/browse/HBASE-11203
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Attachments: 11203.txt, 11203v2.txt


 Fix outstanding WARNINGS (some of which I am responsible for recently).  Fix 
 some findbugs while at it.  Remove references to mortbay log.





[jira] [Commented] (HBASE-11206) Enable automagically tweaking memstore and blockcache sizes

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003493#comment-14003493
 ] 

Andrew Purtell commented on HBASE-11206:


bq. Last I looked at this implementation, it assumes an entirely on-heap world.

We could change that. 

[~anoop.hbase] has been working on an alternate memstore that flushes from on 
heap storage into cellblocks that could be allocated either on or off heap. So 
cells from the recent writes would be on heap in a CSLM (this could be tuned 
small), an index for the total memstore would also be kept on heap, and then 
cellblocks either on or off heap. 

 Enable automagically tweaking memstore and blockcache sizes
 ---

 Key: HBASE-11206
 URL: https://issues.apache.org/jira/browse/HBASE-11206
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Fix For: 0.99.0


 HBASE-5349
 "Automagically tweak global memstore and block cache sizes based on workload" 
 adds a nice new feature. It is off by default. Let's turn it on for 0.99. 
 Liang Xie is concerned that automatically shifting blockcache and memstore 
 sizes could wreak havoc w/ GC'ing in low-latency serving situations -- a 
 valid concern -- but let's enable this feature in 0.99 and see how it does. 
 Can always disable it before 1.0 if it's a problem.





[jira] [Commented] (HBASE-9806) Add PerfEval tool for BlockCache

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003510#comment-14003510
 ] 

Hadoop QA commented on HBASE-9806:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617766/HBASE-9806.02.patch
  against trunk revision .
  ATTACHMENT ID: 12617766

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM
  
org.apache.hadoop.hbase.regionserver.TestRSKilledWhenInitializing
  org.apache.hadoop.hbase.replication.TestReplicationSyncUpTool

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat.testScan(TestMultiTableInputFormat.java:244)
at 
org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat.testScanYZYToEmpty(TestMultiTableInputFormat.java:195)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9548//console

This message is automatically generated.

 Add PerfEval tool for BlockCache
 

 Key: HBASE-9806
 URL: https://issues.apache.org/jira/browse/HBASE-9806
 Project: HBase
  Issue Type: Test
  Components: Performance, test
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-9806.00.patch, HBASE-9806.01.patch, 
 HBASE-9806.02.patch, conf_20g.patch, conf_3g.patch, test1_run1_20g.pdf, 
 test1_run1_3g.pdf, test1_run2_20g.pdf, test1_run2_3g.pdf


 We have at least three different block caching layers with myriad 
 configuration settings. Let's add a tool for evaluating memory allocations 
 and configuration combinations with different access patterns.





[jira] [Updated] (HBASE-9857) Blockcache prefetch option

2014-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9857:
--

Attachment: HBASE-9857-trunk.patch

TestHCM passes locally looped 10 times.

Updated the trunk patch with one more Javadoc fix. I used the 0.98 tree to fix 
issues, and trunk had an additional nit: a misplaced close tag for a @link.

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache with all the data and index blocks of 
 (presumably also in-memory) table data as rapidly after region open as is 
 reasonable, without counting those block loads as cache misses. Great for 
 fast reads and keeping the cache hit ratio high. Can tune the IO impact 
 versus time until all data blocks are in cache. Works a bit like 
 CompactSplitThread. Makes some effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if it is large as a percentage of blockcache, this 
 is not a good idea: it will just blow out the cache and trigger a lot of 
 useless GC activity. Might be useful as an expert tuning option though. Or 
 not.
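Assuming the column family property lands as described, enabling prefetch from the HBase shell might look like the following. The attribute name PREFETCH_BLOCKS_ON_OPEN and the table/family names are assumptions based on the patch description, not confirmed in this thread:

```ruby
# Hypothetical shell usage once the patch is committed.
alter 'mytable', { NAME => 'cf', PREFETCH_BLOCKS_ON_OPEN => 'true' }
```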





[jira] [Commented] (HBASE-11206) Enable automagically tweaking memstore and blockcache sizes

2014-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003551#comment-14003551
 ] 

Anoop Sam John commented on HBASE-11206:


bq.I think by definition it cannot support a mixed deployment (ie, on-heap 
memstore, off-heap blockcache)
Ya the initial impl was made simple. We can think of supporting the mixed mode 
(ie, on-heap memstore, off-heap blockcache).

 Enable automagically tweaking memstore and blockcache sizes
 ---

 Key: HBASE-11206
 URL: https://issues.apache.org/jira/browse/HBASE-11206
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Fix For: 0.99.0


 HBASE-5349
 "Automagically tweak global memstore and block cache sizes based on workload" 
 adds a nice new feature. It is off by default. Let's turn it on for 0.99. 
 Liang Xie is concerned that automatically shifting blockcache and memstore 
 sizes could wreak havoc w/ GC'ing in low-latency serving situations -- a 
 valid concern -- but let's enable this feature in 0.99 and see how it does. 
 Can always disable it before 1.0 if it's a problem.





[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003552#comment-14003552
 ] 

Hadoop QA commented on HBASE-10336:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645625/10336-v10.txt
  against trunk revision .
  ATTACHMENT ID: 12645625

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 69 new 
or modified tests.

{color:red}-1 javac{color}.  The applied patch generated 45 javac compiler 
warnings (more than the trunk's current 4 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+String serverURL = "http://" + 
NetUtils.getHostPortString(myServer.getConnectorAddress(0)) + "/";
+
Mockito.verify(response).sendError(Mockito.eq(HttpServletResponse.SC_UNAUTHORIZED),
 Mockito.anyString());
+
Mockito.verify(response).sendError(Mockito.eq(HttpServletResponse.SC_UNAUTHORIZED),
 Mockito.anyString());
+  public static HttpServer createServer(String webapp, Configuration conf, 
AccessControlList adminsAcl)
+return 
localServerBuilder(webapp).setFindPort(true).setConf(conf).setPathSpec(pathSpecs).build();
+  assertTrue(e.getMessage().contains("Problem in starting http server. 
Server handlers failed"));
+.append(isAlive() ? STATE_DESCRIPTION_ALIVE : 
STATE_DESCRIPTION_NOT_LIVE).append(", listening at:");
+  private void writeAttribute(JsonGenerator jg, ObjectName oname, 
MBeanAttributeInfo attr) throws IOException {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.procedure.TestZKProcedure
  org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9549//console

This message is automatically generated.

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: 10336-v10.txt, HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch, 
 HBASE-10336-6.patch, HBASE-10336-7.patch, HBASE-10336-8.patch, 
 HBASE-10336-9.patch, HBASE-10569-10.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.





[jira] [Commented] (HBASE-9857) Blockcache prefetch option

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003557#comment-14003557
 ] 

Andrew Purtell commented on HBASE-9857:
---

Looks like the new findbugs warning is from the TTL pretty printing patch: 
Result of integer multiplication cast to long in 
org.apache.hadoop.hbase.util.PrettyPrinter.humanReadableTTL(long)
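That findbugs pattern is easy to reproduce in isolation. A minimal sketch (`TtlMath` is a hypothetical stand-in, not the actual PrettyPrinter code):

```java
// Demonstrates "result of integer multiplication cast to long": the multiply
// happens in 32-bit int arithmetic and overflows *before* widening to long.
public class TtlMath {
    static long ttlMillisBuggy(int ttlSeconds) {
        return ttlSeconds * 1000;   // int * int overflows, then widens
    }

    static long ttlMillisFixed(int ttlSeconds) {
        return ttlSeconds * 1000L;  // promoted to long before multiplying
    }

    public static void main(String[] args) {
        int thirtyDays = 30 * 24 * 60 * 60;              // 2,592,000 seconds
        System.out.println(ttlMillisBuggy(thirtyDays));  // negative: overflowed
        System.out.println(ttlMillisFixed(thirtyDays));  // 2592000000
    }
}
```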
 

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache with all the data and index blocks of 
 (presumably also in-memory) table data as rapidly after region open as is 
 reasonable, without counting those block loads as cache misses. Great for 
 fast reads and keeping the cache hit ratio high. Can tune the IO impact 
 versus time until all data blocks are in cache. Works a bit like 
 CompactSplitThread. Makes some effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if it is large as a percentage of blockcache, this 
 is not a good idea: it will just blow out the cache and trigger a lot of 
 useless GC activity. Might be useful as an expert tuning option though. Or 
 not.





[jira] [Commented] (HBASE-9857) Blockcache prefetch option

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003558#comment-14003558
 ] 

Andrew Purtell commented on HBASE-9857:
---

Going to commit this to trunk and 0.98 this evening unless objection.

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache with all the data and index blocks of 
 (presumably also in-memory) table data as rapidly after region open as is 
 reasonable, without counting those block loads as cache misses. Great for 
 fast reads and keeping the cache hit ratio high. Can tune the IO impact 
 versus time until all data blocks are in cache. Works a bit like 
 CompactSplitThread. Makes some effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if it is large as a percentage of blockcache, this 
 is not a good idea: it will just blow out the cache and trigger a lot of 
 useless GC activity. Might be useful as an expert tuning option though. Or 
 not.





[jira] [Created] (HBASE-11210) Consider restoring Filter class back to an interface

2014-05-20 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11210:
--

 Summary: Consider restoring Filter class back to an interface
 Key: HBASE-11210
 URL: https://issues.apache.org/jira/browse/HBASE-11210
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu


In 0.94, Filter class is an interface.

From Filter.java in 0.96 :
{code}
 * Interface for row and column filters directly applied within the 
regionserver.
...
 * When implementing your own filters, consider inheriting {@link FilterBase} 
to help
 * you reduce boilerplate.
{code}
We should consider restoring Filter back to an interface.
This gives users / developers a clear suggestion that custom filters should 
extend FilterBase instead of implementing Filter directly.

Thanks to Anoop who acknowledged this idea during offline discussion.
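The interface-plus-base-class pattern being suggested can be sketched in miniature; `RowFilter`, `RowFilterBase`, and `PrefixOnlyFilter` below are hypothetical names, not HBase classes:

```java
// Interface defines the contract; the base class absorbs boilerplate so
// custom filters override only what they need.
interface RowFilter {
    boolean filterRow(byte[] row);  // true = filter the row out
    void reset();
}

abstract class RowFilterBase implements RowFilter {
    @Override public void reset() { /* stateless by default */ }
}

class PrefixOnlyFilter extends RowFilterBase {
    private final byte prefix;
    PrefixOnlyFilter(byte prefix) { this.prefix = prefix; }
    @Override public boolean filterRow(byte[] row) {
        // Keep only rows whose first byte matches the prefix.
        return row.length == 0 || row[0] != prefix;
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        RowFilter f = new PrefixOnlyFilter((byte) 'a');
        System.out.println(f.filterRow("abc".getBytes()));  // false: row kept
        System.out.println(f.filterRow("zzz".getBytes()));  // true: filtered out
    }
}
```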





[jira] [Commented] (HBASE-7623) Username is not available for HConnectionManager to use in HConnectionKey

2014-05-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003610#comment-14003610
 ] 

Jimmy Xiang commented on HBASE-7623:


Which version of ZooKeeper are you using?  It should catch 
IllegalArgumentException and SecurityException.

 Username is not available for HConnectionManager to use in HConnectionKey
 -

 Key: HBASE-7623
 URL: https://issues.apache.org/jira/browse/HBASE-7623
 Project: HBase
  Issue Type: Improvement
  Components: Client, security
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: pom.xml, trunk-7623.patch, yogesh_bedekar.vcf, 
 yogesh_bedekar.vcf, yogesh_bedekar.vcf, yogesh_bedekar.vcf, yogesh_bedekar.vcf


 Sometimes, some non-IOException prevents User.getCurrent() from getting a 
 username.  It makes it impossible to create an HConnection.  We should catch 
 all exceptions here:
 {noformat}
   try {
     User currentUser = User.getCurrent();
     if (currentUser != null) {
       username = currentUser.getName();
     }
   } catch (IOException ioe) {
     LOG.warn("Error obtaining current user, skipping username in HConnectionKey",
       ioe);
   }
 {noformat}
 Not just IOException, so that the client can move forward.
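A minimal sketch of the proposed broadening, assuming nothing beyond the issue description; `getCurrentUserName` is a hypothetical stand-in for `User.getCurrent()`:

```java
// Sketch: catch all exceptions (not just IOException) when resolving the
// username, so HConnection setup can proceed without one.
public class UsernameFallback {
    static String getCurrentUserName() {
        // Model a non-IOException failure, e.g. a SecurityException thrown
        // deep inside the login machinery.
        throw new SecurityException("no login context");
    }

    static String resolveUsername() {
        String username = "";  // empty username keeps the connection key usable
        try {
            String current = getCurrentUserName();
            if (current != null) {
                username = current;
            }
        } catch (Exception e) {  // broadened from IOException
            // Original intent: LOG.warn("Error obtaining current user, ...", e);
        }
        return username;
    }

    public static void main(String[] args) {
        System.out.println(resolveUsername().isEmpty());  // true: fell back cleanly
    }
}
```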





[jira] [Commented] (HBASE-10835) DBE encode path improvements

2014-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003606#comment-14003606
 ] 

ramkrishna.s.vasudevan commented on HBASE-10835:


+1. Looks great.  Am ok for follow up issues mentioned over in RB.

 DBE encode path improvements
 

 Key: HBASE-10835
 URL: https://issues.apache.org/jira/browse/HBASE-10835
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10835.patch, HBASE-10835_V2.patch, 
 HBASE-10835_V3.patch, HBASE-10835_V4.patch


 Here we first write KVs (Cells) into a buffer which is then passed to the 
 DBE encoder. The encoder reads the KVs one by one from the buffer, encodes 
 them, and creates a new buffer.
 There is no need for this model now. Previously we had the option of no 
 encoding on disk and encoding only in cache; at that time the read buffer 
 from an HFile block was passed in and encoded.
 So encoding cell by cell can be done now. Making this change will require a 
 NoOp DBE impl which just writes a cell as-is, without any encoding.





[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer

2014-05-20 Thread Eric Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003619#comment-14003619
 ] 

Eric Charles commented on HBASE-10336:
--

[~te...@apache.org] Thx for the support. Review is now published at 
https://reviews.apache.org/r/21705/

 Remove deprecated usage of Hadoop HttpServer in InfoServer
 --

 Key: HBASE-10336
 URL: https://issues.apache.org/jira/browse/HBASE-10336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Eric Charles
Assignee: Eric Charles
 Attachments: 10336-v10.txt, HBASE-10336-1.patch, HBASE-10336-2.patch, 
 HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch, 
 HBASE-10336-6.patch, HBASE-10336-7.patch, HBASE-10336-8.patch, 
 HBASE-10336-9.patch, HBASE-10569-10.patch


 Recent changes in Hadoop HttpServer give an NPE when running on hadoop 
 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably 
 not be fixed (see HDFS-5760). We'd better move to the new proposed builder 
 pattern, which means we can no longer use inheritance to build our nice 
 InfoServer.





[jira] [Commented] (HBASE-11188) Inconsistent configuration for SchemaMetrics is always shown

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003636#comment-14003636
 ] 

Hudson commented on HBASE-11188:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #85 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/85/])
HBASE-11188 Inconsistent configuration for SchemaMetrics is always shown 
(jdcryans: rev 1595266)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java


 Inconsistent configuration for SchemaMetrics is always shown
 --

 Key: HBASE-11188
 URL: https://issues.apache.org/jira/browse/HBASE-11188
 Project: HBase
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.94.19
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
 Fix For: 0.94.20

 Attachments: HBASE-11188-0.94-v2.patch, HBASE-11188-0.94.patch


 Some users have been complaining about this message:
 {noformat}
 ERROR org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics: 
 Inconsistent configuration. Previous configuration for using table name in 
 metrics: true, new configuration: false
 {noformat}
 The interesting thing is that we see it with default configurations, which 
 made me think that some code path must have been passing the wrong thing. I 
 found that if SchemaConfigured is passed a null Configuration in its 
 constructor that it will then pass null to SchemaMetrics#configureGlobally 
 which will interpret useTableName as being false:
 {code}
   public static void configureGlobally(Configuration conf) {
     if (conf != null) {
       final boolean useTableNameNew =
           conf.getBoolean(SHOW_TABLE_NAME_CONF_KEY, false);
       setUseTableName(useTableNameNew);
     } else {
       setUseTableName(false);
     }
   }
 {code}
 It should be set to true since that's the new default, meaning we missed it 
 in HBASE-5671.
 I found one code path that passes a null configuration, StoreFile.Reader 
 extends SchemaConfigured and uses the constructor that only passes a Path, so 
 the Configuration is set to null.
 I'm planning on just passing true instead of false, fixing the problem for 
 almost everyone (those that disable this feature will get the error message). 
 IMO it's not worth more efforts since it's a 0.94-only problem and it's not 
 actually doing anything bad.
 I'm closing both HBASE-10990 and HBASE-10946 as duplicates.
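The proposed fix can be sketched as follows. This is a hypothetical, self-contained rendering, not the actual patch: a plain Map stands in for org.apache.hadoop.conf.Configuration, and the class and accessor names are illustrative.

```java
import java.util.Map;

// Sketch of the proposed fix: when no Configuration is available, default
// useTableName to true (the post-HBASE-5671 default) instead of false,
// which was the source of the spurious "Inconsistent configuration" error.
class SchemaMetricsSketch {
    static final String SHOW_TABLE_NAME_CONF_KEY =
        "hbase.metrics.showTableName";
    private static boolean useTableName;

    static void configureGlobally(Map<String, String> conf) {
        if (conf != null) {
            useTableName = Boolean.parseBoolean(
                conf.getOrDefault(SHOW_TABLE_NAME_CONF_KEY, "true"));
        } else {
            // Previously this branch set false; e.g. StoreFile.Reader's
            // Path-only constructor reaches it with a null Configuration.
            useTableName = true;
        }
    }

    static boolean isUseTableName() {
        return useTableName;
    }
}
```

With this change, the null-Configuration code path agrees with the default, so only users who explicitly disable the feature would ever see the error message.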





[jira] [Created] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-11211:
--

 Summary: LoadTestTool option for specifying number of regions per 
server
 Key: HBASE-11211
 URL: https://issues.apache.org/jira/browse/HBASE-11211
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 0.99.0, 0.98.3


Add a new LoadTestTool option for specifying number of regions per server.





[jira] [Updated] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11211:
---

Attachment: HBASE-11211-0.98.patch

Attached patches for trunk and 0.98.

Used the 0.98 patch just now to load up a regionserver with 1M regions.

LoadTestTool uses HBaseTestingUtility to create tables so this change can 
benefit any user of HTU.

 LoadTestTool option for specifying number of regions per server
 ---

 Key: HBASE-11211
 URL: https://issues.apache.org/jira/browse/HBASE-11211
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11211-0.98.patch


 Add a new LoadTestTool option for specifying number of regions per server.





[jira] [Updated] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11211:
---

Status: Patch Available  (was: Open)

 LoadTestTool option for specifying number of regions per server
 ---

 Key: HBASE-11211
 URL: https://issues.apache.org/jira/browse/HBASE-11211
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Trivial
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11211-0.98.patch, HBASE-11211-trunk.patch


 Add a new LoadTestTool option for specifying number of regions per server.





[jira] [Updated] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11211:
---

Attachment: HBASE-11211-trunk.patch






[jira] [Commented] (HBASE-11210) Consider restoring Filter class back to an interface

2014-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003653#comment-14003653
 ] 

stack commented on HBASE-11210:
---

Please fix the description or close.  As is, it makes no sense.

Why does going back to an Interface force use of FilterBase?  Why force use of 
FilterBase at all? 

"Consider inheriting" does not mean "should override".

 Consider restoring Filter class back to an interface
 

 Key: HBASE-11210
 URL: https://issues.apache.org/jira/browse/HBASE-11210
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu

 In 0.94, Filter class is an interface.
 From Filter.java in 0.96 :
 {code}
  * Interface for row and column filters directly applied within the 
 regionserver.
 ...
  * When implementing your own filters, consider inheriting {@link FilterBase} 
 to help
  * you reduce boilerplate.
 {code}
 We should consider restoring Filter class back to an interface.
 This gives users / developers clear suggestion that custom filters should 
 override FilterBase instead of implementing Filter directly.
 Thanks to Anoop who acknowledged this idea during offline discussion.





[jira] [Commented] (HBASE-11208) Remove the hbase.hstor.blockingStoreFiles setting

2014-05-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003641#comment-14003641
 ] 

Lars Hofhansl commented on HBASE-11208:
---

Whoa... This is the one mechanism HBase has to limit writes when writes come in 
faster than the hardware (the IO system) can absorb. The only other mechanism 
is OOM'ing.
How else do you propose throttling clients?

 Remove the hbase.hstor.blockingStoreFiles setting
 -

 Key: HBASE-11208
 URL: https://issues.apache.org/jira/browse/HBASE-11208
 Project: HBase
  Issue Type: Brainstorming
  Components: Compaction, regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0


 It's a little bit of a provocation, but the rationale is:
  - there are some bugs around the delayed flush. For example, if the periodic 
 scheduler has asked for a delayed flush and we then need to flush, we will 
 have to wait
  - if the number of WAL files increases, we won't flush immediately if the 
 blockingFile number has been reached. This impacts the MTTR.
  - we block writes to limit the compaction impact, but there are many cases 
 where we would want to flush anyway, as the writes cannot wait.
  - this obviously leads to huge write latency peaks.
 So I'm questioning this setting: it leads to multiple intricate cases and 
 unpredictable write latency, and looks like a workaround for compaction 
 performance. With all the work done on compaction, I think we can get rid of 
 it. A middle-ground solution would be to deprecate it and set it to a 
 large value...
 Any opinion before I shoot :-) ?
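For context, the behavior being debated can be reduced to a simple counter model. The class below is a toy illustration of the mechanism, not HBase source; all names are made up.

```java
// Toy model of hbase.hstore.blockingStoreFiles backpressure: each memstore
// flush adds a store file, each compaction merges several files into one,
// and writes block once the file count reaches the configured threshold.
class StoreBackpressure {
    private final int blockingStoreFiles;
    private int storeFileCount;

    StoreBackpressure(int blockingStoreFiles) {
        this.blockingStoreFiles = blockingStoreFiles;
    }

    void onFlush() {
        storeFileCount++;  // memstore flushed to a new HFile
    }

    void onCompaction(int filesMerged) {
        storeFileCount -= (filesMerged - 1);  // N files become 1
    }

    boolean writesBlocked() {
        return storeFileCount >= blockingStoreFiles;
    }
}
```

The debate above is whether this blocking is the right way to let compaction catch up, or whether flushes should proceed regardless and compaction improvements absorb the extra files.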





[jira] [Commented] (HBASE-11188) Inconsistent configuration for SchemaMetrics is always shown

2014-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003642#comment-14003642
 ] 

Hudson commented on HBASE-11188:


FAILURE: Integrated in HBase-0.94-JDK7 #136 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/136/])
HBASE-11188 Inconsistent configuration for SchemaMetrics is always shown 
(jdcryans: rev 1595266)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java







[jira] [Commented] (HBASE-11206) Enable automagically tweaking memstore and blockcache sizes

2014-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003666#comment-14003666
 ] 

stack commented on HBASE-11206:
---

bq. You think we should go this route? 

Yes. We just added a feature that will help some in most use cases (all 
on-heap), yet it is off by default. Turn it on so we start to get experience 
with it, then improve as new context arrives (next up would be offheap BC on 
by default; even there, meta blocks are onheap, so this feature could be 
pertinent even as is). Yeah, later it would be coolio if our ergonomics 
system shifted offheap usage around... I suspect that's a good ways out 
though (if at all).


 Enable automagically tweaking memstore and blockcache sizes
 ---

 Key: HBASE-11206
 URL: https://issues.apache.org/jira/browse/HBASE-11206
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Fix For: 0.99.0


 HBASE-5349, Automagically tweak global memstore and block cache sizes based 
 on workload, adds a nice new feature. It is off by default. Let's turn it on 
 for 0.99. Liang Xie is concerned that automatically shifting blockcache and 
 memstore sizes could wreak havoc w/ GC'ing in low-latency serving 
 situations -- a valid concern -- but let's enable this feature in 0.99 and 
 see how it does. We can always disable it before 1.0 if it's a problem.





[jira] [Commented] (HBASE-11210) Consider restoring Filter class back to an interface

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003670#comment-14003670
 ] 

Andrew Purtell commented on HBASE-11210:


bq. This gives users / developers clear suggestion that custom filters should 
override FilterBase instead of implementing Filter directly.

I agree with stack that providing a base (presumably abstract) class along with 
an interface does not send a clear message. That's what Javadoc is for.
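The base-class-plus-interface pattern under discussion looks roughly like this. It is a generic illustration with made-up type and method names, not HBase's actual Filter API.

```java
// An interface defines the full contract; an abstract base supplies no-op
// defaults so implementors override only the methods they care about.
interface RowFilter {
    boolean filterRowKey(byte[] row);  // true = skip this row
    boolean filterAllRemaining();      // true = stop the scan
    void reset();
}

abstract class RowFilterBase implements RowFilter {
    @Override public boolean filterRowKey(byte[] row) { return false; }
    @Override public boolean filterAllRemaining() { return false; }
    @Override public void reset() {}
}

// A custom filter overrides only the one method it needs.
class PrefixOnlyFilter extends RowFilterBase {
    private final byte prefix;

    PrefixOnlyFilter(byte prefix) {
        this.prefix = prefix;
    }

    @Override
    public boolean filterRowKey(byte[] row) {
        return row.length == 0 || row[0] != prefix;
    }
}
```

Whether the contract is an interface or an abstract class, the boilerplate savings come from the base class either way; the disagreement is about which form signals the intended usage more clearly.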






[jira] [Commented] (HBASE-7987) Snapshot Manifest file instead of multiple empty files

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003677#comment-14003677
 ] 

Andrew Purtell commented on HBASE-7987:
---

Related: sooner or later we are going to bump up against wanting something for 
0.94 against the desire not to commit to 0.96. Fortunately, that doesn't look 
to be the case here.

 Snapshot Manifest file instead of multiple empty files
 --

 Key: HBASE-7987
 URL: https://issues.apache.org/jira/browse/HBASE-7987
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-7987-v0.patch, HBASE-7987-v1.patch, 
 HBASE-7987-v2.patch, HBASE-7987-v2.sketch, HBASE-7987-v3.patch, 
 HBASE-7987-v4.patch, HBASE-7987-v5.patch, HBASE-7987-v6.patch, 
 HBASE-7987.sketch


 Currently taking a snapshot means creating one empty file for each file in 
 the source table directory, plus copying the .regioninfo file for each 
 region, the table descriptor file and a snapshotInfo file.
 during the restore or snapshot verification we traverse the filesystem 
 (fs.listStatus()) to find the snapshot files, and we open the .regioninfo 
 files to get the information.
 to avoid hammering the NameNode and having lots of empty files, we can use a 
 manifest file that contains the list of files and information that we need.
 To keep the RS parallelism that we have, each RS can write its own manifest.
 {code}
 message SnapshotDescriptor {
   required string name;
   optional string table;
   optional int64 creationTime;
   optional Type type;
   optional int32 version;
 }
 message SnapshotRegionManifest {
   optional int32 version;
   required RegionInfo regionInfo;
   repeated FamilyFiles familyFiles;

   message StoreFile {
     required string name;
     optional Reference reference;
   }
   message FamilyFiles {
     required bytes familyName;
     repeated StoreFile storeFiles;
   }
 }
 {code}
 {code}
 /hbase/.snapshot/snapshotName
 /hbase/.snapshot/snapshotName/snapshotInfo
 /hbase/.snapshot/snapshotName/tableName
 /hbase/.snapshot/snapshotName/tableName/tableInfo
 /hbase/.snapshot/snapshotName/tableName/regionManifest(.n)
 {code}





[jira] [Commented] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003680#comment-14003680
 ] 

Jean-Marc Spaggiari commented on HBASE-11211:
-

LGTM. +1

What's the current number of regions created with today's code? Only 5? Or 5, 
like with this patch?






[jira] [Commented] (HBASE-11203) Clean up javadoc and findbugs warnings in trunk

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003689#comment-14003689
 ] 

Hadoop QA commented on HBASE-11203:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645808/11203v2.txt
  against trunk revision .
  ATTACHMENT ID: 12645808

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//console

This message is automatically generated.

 Clean up javadoc and findbugs warnings in trunk
 ---

 Key: HBASE-11203
 URL: https://issues.apache.org/jira/browse/HBASE-11203
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
 Attachments: 11203.txt, 11203v2.txt


 Fix outstanding WARNINGS (some of which I am responsible for recently).  Fix 
 some findbugs while at it.  Remove references to mortbay log.





[jira] [Commented] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003698#comment-14003698
 ] 

Andrew Purtell commented on HBASE-11211:


Previously it was hard coded at 5 per RS
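The change can be summarized with a small helper. The class below is illustrative only; the real code path goes through HBaseTestingUtility's pre-split table creation, whose signatures may differ.

```java
// A table's total pre-split region count is derived from the number of
// region servers times the per-server setting, which was formerly the
// hard-coded constant 5 and is now a command-line option.
class RegionCountPlanner {
    static final int DEFAULT_REGIONS_PER_SERVER = 5;

    static int totalRegions(int numRegionServers, int regionsPerServer) {
        if (numRegionServers <= 0 || regionsPerServer <= 0) {
            throw new IllegalArgumentException("counts must be positive");
        }
        return numRegionServers * regionsPerServer;
    }
}
```

For example, loading one regionserver with 1M regions (as in the comment above) just means setting the per-server count high enough for the cluster size.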






[jira] [Commented] (HBASE-11211) LoadTestTool option for specifying number of regions per server

2014-05-20 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003719#comment-14003719
 ] 

Jean-Marc Spaggiari commented on HBASE-11211:
-

Oh, hey, I see it now. DEFAULT_REGIONS_PER_SERVER was already 5. Just that now 
it can be adjusted from the command line.

Thanks Andrew.






[jira] [Commented] (HBASE-11202) Cleanup on HRegion class

2014-05-20 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003724#comment-14003724
 ] 

Jeffrey Zhong commented on HBASE-11202:
---

I don't remember adding those two updatesLock & updatesUnlock methods. I also 
searched the Phoenix code base and they're not used there either. I guess we 
can remove them in the trunk branch (1.0).

 Cleanup on HRegion class
 

 Key: HBASE-11202
 URL: https://issues.apache.org/jira/browse/HBASE-11202
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11202.v1.patch


 This is mostly trivial stuff:
  - remove some unused methods
  - fix typos
  - remove some @param tags w/o any info
  - change the code that uses deprecated methods
 The only non-trivial change is how we get the store for a cell: instead of 
 using the map, we iterate on the key set. Likely it would be better to have 
 a sorted array instead of a Map, as the number of stores is fixed. Could be 
 done in a later patch.





[jira] [Commented] (HBASE-10573) Use Netty 4

2014-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003740#comment-14003740
 ] 

stack commented on HBASE-10573:
---

Did we break hbase on java6?  See 
https://builds.apache.org/job/PreCommit-HBASE-Build/9550//testReport/org.apache.hadoop.hbase.client/TestHCM/org_apache_hadoop_hbase_client_TestHCM/
 where it complains:

Caused by: java.lang.UnsupportedOperationException: Only supported on java 7+.
at 
io.netty.channel.socket.nio.NioDatagramChannel.checkJavaVersion(NioDatagramChannel.java:103)
at 
io.netty.channel.socket.nio.NioDatagramChannel.joinGroup(NioDatagramChannel.java:381)
at 
org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher.connect(ClusterStatusPublisher.java:271)
...

I built with java7.
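The failure comes from a runtime guard of roughly this shape. This is a hedged sketch of the idea; Netty's actual NioDatagramChannel.checkJavaVersion differs in its details.

```java
// Parse the JVM's specification version string and refuse to run below
// Java 7. "1.6" parses to 1.6, "1.7" to 1.7, and a modern "17" to 17.0,
// so a simple >= 1.7 comparison covers both numbering schemes.
class JavaVersionGuard {
    static boolean isJava7OrLater(String specVersion) {
        return Double.parseDouble(specVersion) >= 1.7;
    }

    static void checkJavaVersion() {
        String spec = System.getProperty("java.specification.version");
        if (!isJava7OrLater(spec)) {
            throw new UnsupportedOperationException(
                "Only supported on java 7+.");
        }
    }
}
```

Note the check runs even when the code was compiled with java7: it fires at runtime on a java6 JVM, which is why the build machine's JVM matters, not just the compiler.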

 Use Netty 4
 ---

 Key: HBASE-10573
 URL: https://issues.apache.org/jira/browse/HBASE-10573
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.99.0, hbase-10191
Reporter: Andrew Purtell
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10573.patch, 10573.patch, 10573.v3.patch


 Pull in Netty 4 and sort out the consequences.





[jira] [Commented] (HBASE-11202) Cleanup on HRegion class

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003743#comment-14003743
 ] 

Hadoop QA commented on HBASE-11202:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645589/11202.v1.patch
  against trunk revision .
  ATTACHMENT ID: 12645589

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9551//console

This message is automatically generated.






[jira] [Commented] (HBASE-10573) Use Netty 4

2014-05-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14003764#comment-14003764
 ] 

Nicolas Liochon commented on HBASE-10573:
-

Likely. Ouch. Do we still support java6 on 1.0?
Anyway, can we deactivate the test so the build can finish?

I'm on my phone so I can't do it right now, but I will in a few hours...








[jira] [Updated] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-20 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-11104:
-

Attachment: HBASE-11104_98_v4.patch

Rebased the patch for the 98 branch.

 IntegrationTestImportTsv#testRunFromOutputCommitter misses credential 
 initialization
 

 Key: HBASE-11104
 URL: https://issues.apache.org/jira/browse/HBASE-11104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11104-v1.txt, HBASE-11104_98_v3.patch, 
 HBASE-11104_98_v4.patch, HBASE-11104_trunk.patch, HBASE-11104_trunk.patch, 
 HBASE-11104_trunk_v2.patch, HBASE-11104_trunk_v3.patch


 IntegrationTestImportTsv#testRunFromOutputCommitter launches a parent job 
 that ships the HBase dependencies.
 However, the call to TableMapReduceUtil.initCredentials(job) is missing, 
 making this test fail on a secure cluster.





[jira] [Updated] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11104:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks, Vandana






[jira] [Commented] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-20 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003811#comment-14003811
 ] 

Vandana Ayyalasomayajula commented on HBASE-10831:
--

I got the test running on a secure cluster. The test has no code for 
authenticating the superuser and the users passed in as part of the user 
list, so it fails with the exception mentioned in 
https://issues.apache.org/jira/browse/HBASE-10831?focusedCommentId=13981816&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13981816.
When I added keytab files and principals for each of the users, the test 
seems to pass. I still have not figured out why it fails when run on a local 
cluster, though. I will work on a patch to make the test pass on a secure 
cluster.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.3


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
  FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec   FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:0 but 
 was:1
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?





[jira] [Commented] (HBASE-11145) Issue with HLog sync

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003842#comment-14003842
 ] 

Hadoop QA commented on HBASE-11145:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12644898/11145.txt
  against trunk revision .
  ATTACHMENT ID: 12644898

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9552//console

This message is automatically generated.

 Issue with HLog sync
 

 Key: HBASE-11145
 URL: https://issues.apache.org/jira/browse/HBASE-11145
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: stack
Priority: Critical
 Fix For: 0.99.0

 Attachments: 11145.txt


 Got the exceptions below during a write-heavy test:
 {code}
 2014-05-07 11:29:56,417 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.IllegalStateException: Queue full
  at java.util.AbstractQueue.add(Unknown Source)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.offer(FSHLog.java:1227)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1878)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 2014-05-07 11:29:56,418 ERROR [main.append-pool1-t1] 
 wal.FSHLog$RingBufferEventHandler(1882): UNEXPECTED!!!
 java.lang.ArrayIndexOutOfBoundsException: 5
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1838)
  at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1)
  at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:133)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
 {code}
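The "Queue full" failure above comes from AbstractQueue.add, which throws IllegalStateException when a bounded queue is at capacity, whereas offer simply returns false so the caller can handle backpressure. A minimal, self-contained illustration (plain Java, not the FSHLog code itself):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueFullDemo {
    // add() on a full bounded queue throws IllegalStateException("Queue full"),
    // the same exception seen in the log above.
    static boolean addThrows() {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        q.add(1); // fill the queue
        try {
            q.add(2);
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    // offer() on a full queue just returns false, letting the caller back off.
    static boolean offerAccepts() {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        q.add(1);
        return q.offer(2);
    }

    public static void main(String[] args) {
        System.out.println("add threw: " + addThrows()
                + ", offer accepted: " + offerAccepts());
    }
}
```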

[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception

2014-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003846#comment-14003846
 ] 

Hadoop QA commented on HBASE-10933:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645802/TestResults-trunk.txt
  against trunk revision .
  ATTACHMENT ID: 12645802

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9554//console

This message is automatically generated.

 hbck -fixHdfsOrphans is not working properly it throws null pointer exception
 -

 Key: HBASE-10933
 URL: https://issues.apache.org/jira/browse/HBASE-10933
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.16, 0.98.2
Reporter: Deepak Sharma
Assignee: Kashif J S
Priority: Critical
 Fix For: 0.99.0, 0.94.21

 Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-trunk-v1.patch, 
 TestResults-0.94.txt, TestResults-trunk.txt


 If the .regioninfo file does not exist for an HBase region and we run hbck 
 -repair or hbck -fixHdfsOrphans,
 the problem is not resolved; instead a NullPointerException is thrown:
 {code}
 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck 
 (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs 
 dir: 
 hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
   at 
 org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
   at 
 com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
   at junit.framework.TestCase.runBare(TestCase.java:132)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:243)
   at junit.framework.TestSuite.run(TestSuite.java:238)
   at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}
 The problem occurs because in the HBaseFsck class, in 
 {code}
  private void adoptHdfsOrphan(HbckInfo hi)
 {code}
 we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo 
 object:
 {code}
 TableInfo tableInfo = tablesInfo.get(tableName);
 {code}
 but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
 {code}
  for (HbckInfo hbi: hbckInfos) {
   if (hbi.getHdfsHRI() == null) {
 // was an orphan
 continue;
   }
 {code}
 we skip orphan regions, so a table with only orphan regions is never added to 
 the SortedMap<String, TableInfo> tablesInfo,
 and the later lookup returns null, causing the NullPointerException.
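The null-pointer path can be shown in isolation. A minimal plain-Java sketch (TableInfo here is a hypothetical stand-in for HBaseFsck.TableInfo, and the on-demand-create guard is one possible fix, not necessarily the committed one):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanLookupDemo {
    // Hypothetical stand-in for HBaseFsck.TableInfo, for illustration only.
    static class TableInfo { int regions = 0; }

    // Mirrors adoptHdfsOrphan's lookup, but guards against the missing entry:
    // a table whose regions were all orphans has no entry in tablesInfo.
    static TableInfo lookupOrCreate(SortedMap<String, TableInfo> tablesInfo,
                                    String tableName) {
        TableInfo ti = tablesInfo.get(tableName);
        if (ti == null) {             // would be the NPE in the stack trace above
            ti = new TableInfo();     // create on demand instead of crashing
            tablesInfo.put(tableName, ti);
        }
        return ti;
    }

    public static void main(String[] args) {
        // loadHdfsRegionInfos() skips orphans, so the orphan's table is absent.
        SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();
        tablesInfo.put("goodTable", new TableInfo());

        lookupOrCreate(tablesInfo, "TestHdfsOrphans1").regions++;
        System.out.println("tables tracked: " + tablesInfo.size());
    }
}
```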





[jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)

2014-05-20 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003854#comment-14003854
 ] 

Mikhail Antonov commented on HBASE-11165:
-

bq. If we are lazily allocating mslabs also then we could trigger GC activity 
as those are large objects.
Gc-ing mslabs should be fast, since they're few large (2mb) objects?

bq.It might be worth keeping a pool of mslabs around for use by memstores as 
needed, in aggregate still a lesser allocation than today, the size of the pool 
adjusted by heuristics.
So have a global mslabs-pool in RS, versus mslabs per memstore?
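A region-server-wide pool as discussed might look like the following plain-Java sketch. Everything here is an assumption for illustration: the class, the pool-size heuristic, and the 2 MB chunk size are not HBase's actual MemStoreLAB implementation.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a global mslab chunk pool shared across memstores.
public class ChunkPoolSketch {
    static final int CHUNK_SIZE = 2 * 1024 * 1024; // 2 MB, as for mslab chunks

    // One region-server-wide pool instead of per-memstore allocation.
    private final ConcurrentLinkedQueue<byte[]> pool = new ConcurrentLinkedQueue<>();
    private final int maxPooled;

    public ChunkPoolSketch(int maxPooled) { this.maxPooled = maxPooled; }

    // Allocate lazily only when no pooled chunk is available.
    public byte[] borrow() {
        byte[] chunk = pool.poll();
        return chunk != null ? chunk : new byte[CHUNK_SIZE];
    }

    // Cap the pool so a heuristic (maxPooled) bounds retained memory.
    public void giveBack(byte[] chunk) {
        if (pool.size() < maxPooled) {
            pool.offer(chunk);
        }
    }

    public static void main(String[] args) {
        ChunkPoolSketch p = new ChunkPoolSketch(4);
        byte[] c = p.borrow();
        p.giveBack(c);
        // The chunk is reused instead of triggering a fresh 2 MB allocation.
        System.out.println("reused: " + (p.borrow() == c));
    }
}
```

Returning chunks to a bounded pool keeps aggregate allocation below today's per-memstore reservation while avoiding the large-object allocations that could trigger GC, per the comment above.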

 Scaling so cluster can host 1M regions and beyond (50M regions?)
 

 Key: HBASE-11165
 URL: https://issues.apache.org/jira/browse/HBASE-11165
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack

 This discussion issue comes out of Co-locate Meta And Master HBASE-10569 
 and comments on the doc posted there.
 A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M 
 regions maybe even 50M later.  This issue is about discussing how we will do 
 that (or if not 50M on a cluster, how otherwise we can attain same end).
 More detail to follow.





[jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)

2014-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003857#comment-14003857
 ] 

Andrew Purtell commented on HBASE-11165:


{quote}
bq. If we are lazily allocating mslabs also then we could trigger GC activity 
as those are large objects.
Gc-ing mslabs should be fast, since they're few large (2mb) objects?
{quote}
No, I meant that allocating a large object could trigger GC that wouldn't 
otherwise have happened at that time. 




[jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)

2014-05-20 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003869#comment-14003869
 ] 

Mikhail Antonov commented on HBASE-11165:
-

Oh, I see.




