[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1376#comment-1376
 ] 

Hudson commented on HBASE-10448:


SUCCESS: Integrated in hbase-0.96 #276 (See 
[https://builds.apache.org/job/hbase-0.96/276/])
HBASE-10448 ZKUtil create and watch methods don't set watch in some cases 
(stack: rev 1563567)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}
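 A minimal sketch of one way the create path could also leave a watch on the znode 
 (an illustration of the idea only, not necessarily the committed patch): register 
 the watch via exists() after a successful create as well, since create() itself 
 sets no watch.
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
      ZooKeeperWatcher zkw, String znode, byte [] data)
  throws KeeperException {
    boolean created = true;
    try {
      zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
        CreateMode.PERSISTENT);
    } catch (KeeperException.NodeExistsException nee) {
      // Node was already there; we still want the watch set below.
      created = false;
    } catch (InterruptedException e) {
      zkw.interruptedException(e);
      return false;
    }
    try {
      // Set the watch whether we created the node or it already existed.
      zkw.getRecoverableZooKeeper().exists(znode, zkw);
    } catch (InterruptedException e) {
      zkw.interruptedException(e);
      return false;
    }
    return created;
  }
 {code}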



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10443) IndexOutOfBoundExceptions when processing compressed tags in HFile

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10443:
---

Status: Open  (was: Patch Available)

 IndexOutOfBoundExceptions when processing compressed tags in HFile
 --

 Key: HBASE-10443
 URL: https://issues.apache.org/jira/browse/HBASE-10443
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0

 Attachments: HBASE-10443.patch


 As HBASE-10438 got closed, we still need to fix the IndexOutOfBoundsException 
 that occurs.  If we have a proper fix we will fix it here; if the bug turns 
 out to be a false alarm we will close this. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10443) IndexOutOfBoundExceptions when processing compressed tags in HFile

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10443:
---

Attachment: HBASE-10443_1.patch

Updated patch. Going to commit this.

 IndexOutOfBoundExceptions when processing compressed tags in HFile
 --

 Key: HBASE-10443
 URL: https://issues.apache.org/jira/browse/HBASE-10443
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0

 Attachments: HBASE-10443.patch, HBASE-10443_1.patch


 As HBASE-10438 got closed, we still need to fix the IndexOutOfBoundsException 
 that occurs.  If we have a proper fix we will fix it here; if the bug turns 
 out to be a false alarm we will close this. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-10443) IndexOutOfBoundExceptions when processing compressed tags in HFile

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved HBASE-10443.


   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed

Committed to 0.98 and trunk.  While fixing HBASE-10451, this needs to be fixed 
in trunk also. Thanks for the reviews, Anoop and Andy.

 IndexOutOfBoundExceptions when processing compressed tags in HFile
 --

 Key: HBASE-10443
 URL: https://issues.apache.org/jira/browse/HBASE-10443
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10443.patch, HBASE-10443_1.patch


 As HBASE-10438 got closed, we still need to fix the IndexOutOfBoundsException 
 that occurs.  If we have a proper fix we will fix it here; if the bug turns 
 out to be a false alarm we will close this. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1395#comment-1395
 ] 

ramkrishna.s.vasudevan commented on HBASE-10447:


Both testcases pass - TestEncodedSeekers and TestAcidGuarantees.  I think of 
late this TestAcidGuarantees has been failing.  Need to investigate?

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10447_0.98.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemStoreScanner. 
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed? 
 If this happens, any update on the reader may nullify the current heap and 
 the entire scanner stack is reset, but this time with the other scanners for 
 all the files that satisfy the last top key.  So the flush that happens on 
 the memstore also holds the storefile scanners in the recreated heap, 
 although the original intention was to create a scanner on the memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10443) IndexOutOfBoundExceptions when processing compressed tags in HFile

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888903#comment-13888903
 ] 

Hudson commented on HBASE-10443:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #112 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/112/])
HBASE-10443-IndexOutOfBoundExceptions when processing compressed tags in HFile 
(Ramkrishna S Vasudevan) (ramkrishna: rev 1563583)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 IndexOutOfBoundExceptions when processing compressed tags in HFile
 --

 Key: HBASE-10443
 URL: https://issues.apache.org/jira/browse/HBASE-10443
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10443.patch, HBASE-10443_1.patch


 As HBASE-10438 got closed, we still need to fix the IndexOutOfBoundsException 
 that occurs.  If we have a proper fix we will fix it here; if the bug turns 
 out to be a false alarm we will close this. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888917#comment-13888917
 ] 

Hudson commented on HBASE-10448:


FAILURE: Integrated in hbase-0.96-hadoop2 #190 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/190/])
HBASE-10448 ZKUtil create and watch methods don't set watch in some cases 
(stack: rev 1563567)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10443) IndexOutOfBoundExceptions when processing compressed tags in HFile

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888918#comment-13888918
 ] 

Hudson commented on HBASE-10443:


SUCCESS: Integrated in HBase-TRUNK #4874 (See 
[https://builds.apache.org/job/HBase-TRUNK/4874/])
HBASE-10443-IndexOutOfBoundExceptions when processing compressed tags in 
HFile(Ramkrishna S Vasudevan) (ramkrishna: rev 1563584)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 IndexOutOfBoundExceptions when processing compressed tags in HFile
 --

 Key: HBASE-10443
 URL: https://issues.apache.org/jira/browse/HBASE-10443
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10443.patch, HBASE-10443_1.patch


 As HBASE-10438 got closed, we still need to fix the IndexOutOfBoundsException 
 that occurs.  If we have a proper fix we will fix it here; if the bug turns 
 out to be a false alarm we will close this. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888961#comment-13888961
 ] 

Lars Hofhansl commented on HBASE-10448:
---

Lemme check if this is an issue in 0.94 as well.

 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10452) Potential bugs in exception handlers

2014-02-02 Thread Ding Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Yuan updated HBASE-10452:
--

Description: 
Hi HBase developers,
We are a group of researchers working on software reliability. We recently did a 
study and found that the majority of the most severe failures in HBase are caused 
by bugs in exception handling logic -- it is hard to anticipate all the possible 
real-world error scenarios. We therefore built a simple checking tool that 
automatically detects some bug patterns that have caused very severe real-world 
failures. I am reporting some of the results here. Any feedback is much 
appreciated!

Ding

=
Case 1:
  Line: 134, File: 
org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java

{noformat}
  protected void releaseTableLock() {
if (this.tableLock != null) {
  try {
this.tableLock.release();
  } catch (IOException ex) {
LOG.warn("Could not release the table lock", ex);
//TODO: if we get here, and not abort RS, this lock will never be 
released
  }
}
{noformat}

The lock is not released if the exception occurs, causing potential deadlock or 
starvation.

Similar code pattern can be found at:
  Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
==
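For Case 1 above, a hedged sketch of what surfacing the failure might look like
(the `server` handle and the abort call are assumptions about this class, not a
verified fix):

{noformat}
  protected void releaseTableLock() {
    if (this.tableLock != null) {
      try {
        this.tableLock.release();
      } catch (IOException ex) {
        LOG.error("Could not release the table lock", ex);
        // Escalate instead of only logging, so the lock cannot leak silently.
        // Assumes the request object holds a reference to its region server.
        server.abort("Failed to release table lock for merge", ex);
      }
    }
  }
{noformat}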

=
Case 2:
  Line: 252, File: 
org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java

{noformat}
try {
  Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
  fEnd.setAccessible(true);
  end = fEnd.getLong(this.reader);
} catch(Exception e) { /* reflection fail. keep going */ }
{noformat}

The caught Exception seems to be too general.
While reflection-related errors might be harmless, the try block can throw
other exceptions including SecurityException, IllegalAccessException, etc. 
Currently
all those exceptions are ignored. Maybe
the safe way is to ignore the specific reflection-related errors while logging 
and
handling other types of unexpected exceptions.
==
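For Case 2 above, a minimal sketch of catching the expected reflection failures
narrowly while still logging anything unexpected (illustrative only):

{noformat}
try {
  Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
  fEnd.setAccessible(true);
  end = fEnd.getLong(this.reader);
} catch (NoSuchFieldException e) {
  // expected reflection failure; keep going without the end offset
} catch (IllegalAccessException e) {
  // expected reflection failure; keep going without the end offset
} catch (RuntimeException e) {
  // e.g. SecurityException from setAccessible(); do not lose it silently
  LOG.warn("Unexpected failure while reading SequenceFile end offset", e);
}
{noformat}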
=
Case 3:
  Line: 148, File: org/apache/hadoop/hbase/HBaseConfiguration.java

{noformat}
try {
  if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
isShowConf = true;
  }
} catch (Exception e) {
}
{noformat}

Similar to the previous case, the exception handling is too general. While 
ClassNotFound error might be the normal case and ignored, Class.forName can 
also throw other exceptions (e.g., LinkageError) under some unexpected and rare 
error cases. If that happens, the error will be lost. So maybe change it to 
below:

{noformat}
try {
  if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
isShowConf = true;
  }
} catch (LinkageError e) {
  LOG.warn(..);
  // handle linkage error
} catch (ExceptionInInitializerError e) {
  LOG.warn(..);
  // handle Initializer error
} catch (ClassNotFoundException e) {
 LOG.debug(..);
 // ignore
}
{noformat}
==
=
Case 4:
  Line: 163, File: org/apache/hadoop/hbase/client/Get.java

{noformat}
  public Get setTimeStamp(long timestamp) {
try {
  tr = new TimeRange(timestamp, timestamp+1);
} catch(IOException e) {
  // Will never happen
}
return this;
  }
{noformat}

Even if the IOException never happens right now, is it possible to happen in 
the future due to code change?
At least there should be a log message. The current behavior is dangerous since 
if the exception ever happens
in any unexpected scenario, it will be silently swallowed.

Similar code pattern can be found at:
  Line: 300, File: org/apache/hadoop/hbase/client/Scan.java
==
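For Case 4 above, a minimal sketch that at least records the failure (the LOG
field is an assumption; Get may not currently declare one):

{noformat}
  public Get setTimeStamp(long timestamp) {
    try {
      tr = new TimeRange(timestamp, timestamp + 1);
    } catch (IOException e) {
      // Not expected today, but do not swallow it silently if the code changes.
      LOG.error("Could not build TimeRange for timestamp " + timestamp, e);
    }
    return this;
  }
{noformat}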

=
Case 5:
  Line: 207, File: org/apache/hadoop/hbase/util/JVM.java

{noformat}
   if (input != null){
try {
  input.close();
} catch (IOException ignored) {
}
  }
{noformat}

Any exception encountered in close is completely ignored, not even logged.
In particular, the same exception scenario was handled differently in other 
methods in the same file:
Line: 154, same file
{noformat}
   if (in != null){
 try {
   in.close();
 } catch (IOException e) {
   LOG.warn("Not able to close the InputStream", e);
 }
   }
{noformat}

Line: 248, same file

{noformat}
  if (in != null){
try {
  in.close();
} catch (IOException e) {
  LOG.warn("Not able to close the InputStream", e);
}
  }
{noformat}
==
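For Case 5 above, a sketch that simply mirrors the handling already used
elsewhere in JVM.java, so the failure is at least visible in the logs:

{noformat}
   if (input != null) {
     try {
       input.close();
     } catch (IOException e) {
       LOG.warn("Not able to close the InputStream", e);
     }
   }
{noformat}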

=
Case 6: empty handler for exception: java.io.IOException
  Line: 312, File: 

[jira] [Created] (HBASE-10452) Potential bugs in exception handlers

2014-02-02 Thread Ding Yuan (JIRA)
Ding Yuan created HBASE-10452:
-

 Summary: Potential bugs in exception handlers
 Key: HBASE-10452
 URL: https://issues.apache.org/jira/browse/HBASE-10452
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver, util
Affects Versions: 0.96.1
Reporter: Ding Yuan


Hi HBase developers,
We are a group of researchers working on software reliability. We recently did a 
study and found that the majority of the most severe failures in HBase are caused 
by bugs in exception handling logic -- it is hard to anticipate all the possible 
real-world error scenarios. We therefore built a simple checking tool that 
automatically detects some bug patterns that have caused very severe real-world 
failures. I am reporting some of the results here. Any feedback is much 
appreciated!

=
Case 1:
  Line: 134, File: 
org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java

{noformat}
  protected void releaseTableLock() {
if (this.tableLock != null) {
  try {
this.tableLock.release();
  } catch (IOException ex) {
LOG.warn("Could not release the table lock", ex);
//TODO: if we get here, and not abort RS, this lock will never be 
released
  }
}
{noformat}

The lock is not released if the exception occurs, causing potential deadlock or 
starvation.

Similar code pattern can be found at:
  Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
==

=
Case 2:
  Line: 252, File: 
org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java

{noformat}
try {
  Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
  fEnd.setAccessible(true);
  end = fEnd.getLong(this.reader);
} catch(Exception e) { /* reflection fail. keep going */ }
{noformat}

The caught Exception seems to be too general.
While reflection-related errors might be harmless, the try block can throw
other exceptions including SecurityException, IllegalAccessException, etc. 
Currently
all those exceptions are ignored. Maybe
the safe way is to ignore the specific reflection-related errors while logging 
and
handling other types of unexpected exceptions.
==
=
Case 3:
  Line: 148, File: org/apache/hadoop/hbase/HBaseConfiguration.java

{noformat}
try {
  if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
isShowConf = true;
  }
} catch (Exception e) {
}
{noformat}

Similar to the previous case, the exception handling is too general. While 
ClassNotFound error might be the normal case and ignored, Class.forName can 
also throw other exceptions (e.g., LinkageError) under some unexpected and rare 
error cases. If that happens, the error will be lost. So maybe change it to 
below:

{noformat}
try {
  if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
isShowConf = true;
  }
} catch (LinkageError e) {
  LOG.warn(..);
  // handle linkage error
} catch (ExceptionInInitializerError e) {
  LOG.warn(..);
  // handle Initializer error
} catch (ClassNotFoundException e) {
 LOG.debug(..);
 // ignore
}
{noformat}
==
=
Case 4:
  Line: 163, File: org/apache/hadoop/hbase/client/Get.java

{noformat}
  public Get setTimeStamp(long timestamp) {
try {
  tr = new TimeRange(timestamp, timestamp+1);
} catch(IOException e) {
  // Will never happen
}
return this;
  }
{noformat}

Even if the IOException never happens right now, is it possible to happen in 
the future due to code change?
At least there should be a log message. The current behavior is dangerous since 
if the exception ever happens
in any unexpected scenario, it will be silently swallowed.

Similar code pattern can be found at:
  Line: 300, File: org/apache/hadoop/hbase/client/Scan.java
==

=
Case 5:
  Line: 207, File: org/apache/hadoop/hbase/util/JVM.java

{noformat}
   if (input != null){
try {
  input.close();
} catch (IOException ignored) {
}
  }
{noformat}

Any exception encountered in close is completely ignored, not even logged.
In particular, the same exception scenario was handled differently in other 
methods in the same file:
Line: 154, same file
{noformat}
   if (in != null){
 try {
   in.close();
 } catch (IOException e) {
   LOG.warn("Not able to close the InputStream", e);
 }
   }
{noformat}

Line: 248, same file

{noformat}
  if (in != null){
try {
  in.close();
} catch (IOException e) {
  LOG.warn("Not able to close the InputStream", e);
}
  }
{noformat}

[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-02 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889037#comment-13889037
 ] 

Andrew Purtell commented on HBASE-10447:


Those two tests have been failing in precommit builds recently. I don't think 
they are related to the patch here.

I have been looking into the precommit hangs but am not able to reproduce 
locally. 

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10447_0.98.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemStoreScanner. 
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed? 
 If this happens, any update on the reader may nullify the current heap and 
 the entire scanner stack is reset, but this time with the other scanners for 
 all the files that satisfy the last top key.  So the flush that happens on 
 the memstore also holds the storefile scanners in the recreated heap, 
 although the original intention was to create a scanner on the memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10354) Add an API for defining consistency per request

2014-02-02 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889090#comment-13889090
 ] 

Enis Soztutar commented on HBASE-10354:
---

bq. 3) Sending multiple calls is expensive (more calls to send, more calls to 
cancel). If the primary is under recovery, we could answer to the get requests 
with a 'stale' status
Good point. It may tie in nicely with the new distributed log replay feature. 
[~jeffreyz] FYI. 

 Add an API for defining consistency per request
 ---

 Key: HBASE-10354
 URL: https://issues.apache.org/jira/browse/HBASE-10354
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.99.0

 Attachments: hbase-10354_v1.patch


 We should add an API to be able to define the expected consistency level per 
 operation. This API should also allow checking whether the results coming 
 from a query (get or scan) are stale or not. The defaults should reflect the 
 current semantics. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10443) IndexOutOfBoundExceptions when processing compressed tags in HFile

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889141#comment-13889141
 ] 

Hudson commented on HBASE-10443:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #75 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/75/])
HBASE-10443-IndexOutOfBoundExceptions when processing compressed tags in 
HFile(Ramkrishna S Vasudevan) (ramkrishna: rev 1563584)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java


 IndexOutOfBoundExceptions when processing compressed tags in HFile
 --

 Key: HBASE-10443
 URL: https://issues.apache.org/jira/browse/HBASE-10443
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10443.patch, HBASE-10443_1.patch


 As HBASE-10438 got closed, we still need to fix the IndexOutOfBoundsException 
 that occurs.  If we have a proper fix we will fix it here; if the bug turns 
 out to be a false alarm we will close this. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889142#comment-13889142
 ] 

Hudson commented on HBASE-10448:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #75 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/75/])
HBASE-10448 ZKUtil create and watch methods don't set watch in some cases 
(Jerry He) (tedyu: rev 1563507)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10354) Add an API for defining consistency per request

2014-02-02 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10354:
--

Attachment: hbase-10354_v2.patch

v2 patch. 
 - Renamed Consistency.EVENTUAL to Consistency.TIMELINE. 
 - Added stale to ClientProtos.Result, and Result now has a ctor taking stale; 
removed the Result.setStale() method. 
 - Improved javadoc on Consistency.TIMELINE. Nicolas, what do you think about 
this version?
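
A hedged sketch of how per-request consistency might look from the client side
with this patch; the names below (setConsistency, isStale, Consistency.TIMELINE)
are taken from this comment and the QA output and may differ from the final API.
{code}
// 'table' and 'row' are assumed to exist in the caller's context.
Get get = new Get(row);
get.setConsistency(Consistency.TIMELINE);   // allow a possibly stale answer
Result result = table.get(get);
if (result.isStale()) {
  // the result may not reflect the latest writes on the primary
}
{code}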



 Add an API for defining consistency per request
 ---

 Key: HBASE-10354
 URL: https://issues.apache.org/jira/browse/HBASE-10354
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.99.0

 Attachments: hbase-10354_v1.patch, hbase-10354_v2.patch


 We should add an API to be able to define the expected consistency level per 
 operation. This API should also allow checking whether the results coming 
 from a query (get or scan) are stale or not. The defaults should reflect the 
 current semantics. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10452) Potential bugs in exception handlers

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889210#comment-13889210
 ] 

ramkrishna.s.vasudevan commented on HBASE-10452:


Thanks for checking this.  It would be better if you could create a patch with 
the above-mentioned changes.  See 
http://hbase.apache.org/book.html#submitting.patches
if you need to know how to start contributing.

 Potential bugs in exception handlers
 

 Key: HBASE-10452
 URL: https://issues.apache.org/jira/browse/HBASE-10452
 Project: HBase
  Issue Type: Bug
  Components: Client, master, regionserver, util
Affects Versions: 0.96.1
Reporter: Ding Yuan

 Hi HBase developers,
 We are a group of researchers working on software reliability. We recently did 
 a study and found that the majority of the most severe failures in HBase are 
 caused by bugs in exception handling logic -- it is hard to anticipate all the 
 possible real-world error scenarios. We therefore built a simple checking tool 
 that automatically detects some bug patterns that have caused very severe 
 real-world failures. I am reporting some of the results here. Any feedback is 
 much appreciated!
 Ding
 =
 Case 1:
   Line: 134, File: 
 org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java
 {noformat}
   protected void releaseTableLock() {
 if (this.tableLock != null) {
   try {
 this.tableLock.release();
   } catch (IOException ex) {
 LOG.warn("Could not release the table lock", ex);
 //TODO: if we get here, and not abort RS, this lock will never be 
 released
   }
 }
 {noformat}
 The lock is not released if the exception occurs, causing potential deadlock 
 or starvation.
 Similar code pattern can be found at:
   Line: 135, File: org/apache/hadoop/hbase/regionserver/SplitRequest.java
 ==
 =
 Case 2:
   Line: 252, File: 
 org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
 {noformat}
 try {
   Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
   fEnd.setAccessible(true);
   end = fEnd.getLong(this.reader);
 } catch(Exception e) { /* reflection fail. keep going */ }
 {noformat}
 The caught Exception seems to be too general.
 While reflection-related errors might be harmless, the try block can throw
 other exceptions including SecurityException, IllegalAccessException, 
 etc. Currently
 all those exceptions are ignored. Maybe
 the safe way is to ignore the specific reflection-related errors while 
 logging and
 handling other types of unexpected exceptions.
 ==
 =
 Case 3:
   Line: 148, File: org/apache/hadoop/hbase/HBaseConfiguration.java
 {noformat}
 try {
   if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
 isShowConf = true;
   }
 } catch (Exception e) {
 }
 {noformat}
 Similar to the previous case, the exception handling is too general. While 
 ClassNotFound error might be the normal case and ignored, Class.forName can 
 also throw other exceptions (e.g., LinkageError) under some unexpected and 
 rare error cases. If that happens, the error will be lost. So maybe change it 
 to below:
 {noformat}
 try {
   if (Class.forName("org.apache.hadoop.conf.ConfServlet") != null) {
 isShowConf = true;
   }
 } catch (LinkageError e) {
   LOG.warn(..);
   // handle linkage error
 } catch (ExceptionInInitializerError e) {
   LOG.warn(..);
   // handle Initializer error
 } catch (ClassNotFoundException e) {
  LOG.debug(..);
  // ignore
 }
 {noformat}
 ==
 =
 Case 4:
   Line: 163, File: org/apache/hadoop/hbase/client/Get.java
 {noformat}
   public Get setTimeStamp(long timestamp) {
 try {
   tr = new TimeRange(timestamp, timestamp+1);
 } catch(IOException e) {
   // Will never happen
 }
 return this;
   }
 {noformat}
 Even if the IOException never happens right now, is it possible to happen in 
 the future due to code change?
 At least there should be a log message. The current behavior is dangerous 
 since if the exception ever happens
 in any unexpected scenario, it will be silently swallowed.
 Similar code pattern can be found at:
   Line: 300, File: org/apache/hadoop/hbase/client/Scan.java
 ==
 =
 Case 5:
   Line: 207, File: org/apache/hadoop/hbase/util/JVM.java
 {noformat}
if (input != null){
 try {
   input.close();
 } catch (IOException ignored) {
 }
   }
 {noformat}
 Any exception encountered in close is completely ignored, not even logged.
 

[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

Status: Open  (was: Patch Available)

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.1.1, 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10447_0.98.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemStoreScanner. 
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed? 
 If this happens, any update on the reader may nullify the current heap and 
 the entire scanner stack is reset, but this time with the other scanners for 
 all the files that satisfy the last top key.  So the flush that happens on 
 the memstore also holds the storefile scanners in the recreated heap, 
 although the original intention was to create a scanner on the memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

Status: Patch Available  (was: Open)

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.1.1, 0.98.0, 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemStoreScanner. 
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed? 
 If this happens, any update on the reader may nullify the current heap and 
 the entire scanner stack is reset, but this time with the other scanners for 
 all the files that satisfy the last top key.  So the flush that happens on 
 the memstore also holds the storefile scanners in the recreated heap, 
 although the original intention was to create a scanner on the memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10447:
---

Attachment: HBASE-10447_trunk.patch

Patch for trunk. Truncated lines > 100.

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemStoreScanner. 
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed? 
 If this happens, any update on the reader may nullify the current heap and 
 the entire scanner stack is reset, but this time with the other scanners for 
 all the files that satisfy the last top key.  So the flush that happens on 
 the memstore also holds the storefile scanners in the recreated heap, 
 although the original intention was to create a scanner on the memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10448:
--

Fix Version/s: 0.94.17

 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889240#comment-13889240
 ] 

Lars Hofhansl commented on HBASE-10448:
---

Committed to 0.94 as well. (I hope we are not relying anywhere on the current 
behavior.)

 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10354) Add an API for defining consistency per request

2014-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889243#comment-13889243
 ] 

Hadoop QA commented on HBASE-10354:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626594/hbase-10354_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12626594

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  ly\030\001 \002(\014\022\021\n\tqualifier\030\002 
\003(\014\\324\002\n\003Get\022\013\n\003r +
+   \001(\010\022\013\n\003ttl\030\004 
\001(\r\022\030\n\007results\030\005 \003(\0132\007.Res +
+  new java.lang.String[] { Row, Column, Attribute, Filter, 
TimeRange, MaxVersions, CacheBlocks, StoreLimit, StoreOffset, 
ExistenceOnly, ClosestRowBefore, Consistency, });
+  new java.lang.String[] { Column, Attribute, StartRow, 
StopRow, Filter, TimeRange, MaxVersions, CacheBlocks, BatchSize, 
MaxResultSize, StoreLimit, StoreOffset, LoadColumnFamiliesOnDemand, 
Small, Reversed, Consistency, });
+  
get.setConsistency(org.apache.hadoop.hbase.client.Consistency.valueOf(consistency))
 if consistency
+
scan.setConsistency(org.apache.hadoop.hbase.client.Consistency.valueOf(consistency))
 if consistency

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8573//console

This message is automatically generated.

 Add an API for defining consistency per request
 ---

 Key: HBASE-10354
 URL: https://issues.apache.org/jira/browse/HBASE-10354
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.99.0

 Attachments: hbase-10354_v1.patch, hbase-10354_v2.patch


 We should add an API to be able to define the expected consistency level per 
 operation. This API should also allow checking whether the results coming 
 from a query (get or scan) are stale or not. The defaults should reflect the 
 current semantics. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889255#comment-13889255
 ] 

Hudson commented on HBASE-10448:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #6 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/6/])
HBASE-10448 ZKUtil create and watch methods don't set watch in some cases. 
(Jerry He) (larsh: rev 1563783)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10447) Memstore flusher scans storefiles also when the scanner heap gets reset

2014-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889264#comment-13889264
 ] 

Hadoop QA commented on HBASE-10447:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12626599/HBASE-10447_trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12626599

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.TestAcidGuarantees.testScanAtomicity(TestAcidGuarantees.java:341)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8574//console

This message is automatically generated.

 Memstore flusher scans storefiles also when the scanner heap gets reset
 ---

 Key: HBASE-10447
 URL: https://issues.apache.org/jira/browse/HBASE-10447
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: HBASE-10447_0.98.patch, HBASE-10447_trunk.patch


 See the mail thread
 http://osdir.com/ml/general/2014-01/msg61294.html
 In case of a flush we create a memstore flusher, which in turn creates a 
 StoreScanner backed by a singleton MemStoreScanner. 
 But this scanner also registers for any updates to the reader in the HStore. 
 Is this needed? 
 If this happens, any update on the reader may nullify the current heap and 
 the entire scanner stack is reset, but this time with the other scanners for 
 all the files that satisfy the last top key.  So the flush that happens on 
 the memstore also holds the storefile scanners in the recreated heap, 
 although the original intention was to create a scanner on the memstore alone.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889272#comment-13889272
 ] 

Hudson commented on HBASE-10448:


SUCCESS: Integrated in HBase-0.94-security #396 (See 
[https://builds.apache.org/job/HBase-0.94-security/396/])
HBASE-10448 ZKUtil create and watch methods don't set watch in some cases. 
(Jerry He) (larsh: rev 1563783)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10448) ZKUtil create and watch methods don't set watch in some cases

2014-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889283#comment-13889283
 ] 

Hudson commented on HBASE-10448:


SUCCESS: Integrated in HBase-0.94-JDK7 #35 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/35/])
HBASE-10448 ZKUtil create and watch methods don't set watch in some cases. 
(Jerry He) (larsh: rev 1563783)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java


 ZKUtil create and watch methods don't set watch in some cases
 -

 Key: HBASE-10448
 URL: https://issues.apache.org/jira/browse/HBASE-10448
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.96.0, 0.96.1.1
Reporter: Jerry He
Assignee: Jerry He
 Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17

 Attachments: HBASE-10448-trunk.patch


 While using the ZKUtil methods during testing, I found that watch was not set 
 when it should be set based on the methods and method comments:
 createNodeIfNotExistsAndWatch
 createEphemeralNodeAndWatch
 For example, in createNodeIfNotExistsAndWatch():
 {code}
  public static boolean createNodeIfNotExistsAndWatch(
   ZooKeeperWatcher zkw, String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.PERSISTENT);
 } catch (KeeperException.NodeExistsException nee) {
   try {
 zkw.getRecoverableZooKeeper().exists(znode, zkw);
   } catch (InterruptedException e) {
 zkw.interruptedException(e);
 return false;
   }
   return false;
 } catch (InterruptedException e) {
   zkw.interruptedException(e);
   return false;
 }
 return true;
   }
 {code}
 The watch is only set via exists() call when the node already exists.
 Similarly in createEphemeralNodeAndWatch():
 {code}
   public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
   String znode, byte [] data)
   throws KeeperException {
 try {
   zkw.getRecoverableZooKeeper().create(znode, data, createACL(zkw, znode),
   CreateMode.EPHEMERAL);
 } catch (KeeperException.NodeExistsException nee) {
   if(!watchAndCheckExists(zkw, znode)) {
 // It did exist but now it doesn't, try again
 return createEphemeralNodeAndWatch(zkw, znode, data);
   }
   return false;
 } catch (InterruptedException e) {
   LOG.info("Interrupted", e);
   Thread.currentThread().interrupt();
 }
 return true;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)