[ https://issues.apache.org/jira/browse/HDFS-10766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15423931#comment-15423931 ]
Karthik Palanisamy edited comment on HDFS-10766 at 8/17/16 6:01 AM:
--------------------------------------------------------------------

[~arpitagarwal]

1. "In this scenario, the actual exception was lost - was this message seen in the logs?"

Yes. We get the finally-block exception message in the log, but not the actual exception that was thrown by the try block:

*org.apache.hadoop.fs.InvalidRequestException: there is no shared memory segment registered with shmId 0773fa8b13b4643cb5be98893af5a873*

If the try block had completed with {{success=true}}, the above message would not appear, so I concluded that some exception is thrown in the try block which is not handled.

2. "Was the original error generated in the try block of the requestShortCircuitFds method? If so I didn't get how adding try/catch in the finally block which does failure cleanup will help us."

In Java, an exception thrown from a finally block replaces any exception thrown in the try block. Since there was no exception handling in the finally block, requestShortCircuitFds throws the exception that occurred in finally, not the one that occurred inside try. By adding a try/catch inside the finally block we can handle and log any exception that occurs there, so the exception thrown by requestShortCircuitFds will be the actual one.

*example*
{code}
import java.io.IOException;

public class App {
    private static void message() throws IOException {
        try {
            throw new NullPointerException();
        } finally {
            throw new IOException();
        }
    }

    public static void main(String[] args) throws IOException {
        message();
    }
}
{code}

As above, we never see the NullPointerException thrown by the try block; only the IOException from the finally block reaches the caller. The change here is:

{code}
finally {
    try {
        // info message
    } catch (IOException e) {
        System.out.println(e);
    }
}
{code}

> Request short circuit access failed
> -----------------------------------
>
>                 Key: HDFS-10766
>                 URL: https://issues.apache.org/jira/browse/HDFS-10766
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: logging
>         Environment: HDP-2.4
>            Reporter: Karthik Palanisamy
>            Assignee: Karthik Palanisamy
>            Priority: Minor
>              Labels: patch
>         Attachments: HDFS-10766-1.patch
>
>
> There was some error while creating requestShortCircuitFdsForRead, and an exception is thrown when logging the info message.
> {quote}
> In this scenario, the actual exception was lost
> {quote}
> To get the actual exception message, it needs to be handled properly.
>
> 2016-07-25 13:11:54,323 ERROR datanode.DataNode (DataXceiver.java:run(278)) - xyz.com:50010:DataXceiver error processing REQUEST_SHORT_CIRCUIT_FDS operation src: unix:/var/lib/hadoop-hdfs/dn_socket dst: <local>
> org.apache.hadoop.fs.InvalidRequestException: there is no shared memory segment registered with shmId 0773fa8b13b4643cb5be98893af5a873
>     at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.unregisterSlot(ShortCircuitRegistry.java:371)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitFds(DataXceiver.java:364)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitFds(Receiver.java:187)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:89)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
>     at java.lang.Thread.run(Thread.java:745)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
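The effect of the proposed change can be demonstrated with a minimal, self-contained sketch. Here {{cleanup()}} is a hypothetical stand-in for the failure-cleanup code in the finally block of requestShortCircuitFds, not the actual HDFS method; the point is only that guarding the finally block lets the original try-block exception reach the caller:

{code}
import java.io.IOException;

// Sketch of the fix: the failure inside the finally block is caught and
// logged, so the caller sees the original try-block exception instead of
// the finally-block one. cleanup() is a hypothetical stand-in, not the
// real HDFS cleanup code.
public class FinallyGuardDemo {
    static void cleanup() throws IOException {
        // simulated failure during cleanup (the exception that used to mask the real one)
        throw new IOException("cleanup failed");
    }

    public static void request() throws IOException {
        try {
            // simulated original failure (the "actual" exception)
            throw new IOException("there is no shared memory segment registered");
        } finally {
            try {
                cleanup();
            } catch (IOException e) {
                // handled and logged here, so it no longer replaces the try-block exception
                System.out.println("cleanup error (logged, not thrown): " + e.getMessage());
            }
        }
    }

    public static void main(String[] args) {
        try {
            request();
        } catch (IOException e) {
            // with the guard in place, the original exception survives
            System.out.println("caller sees: " + e.getMessage());
        }
    }
}
{code}

Without the inner try/catch, the caller would see "cleanup failed" and the original message would be lost, which is exactly the behavior observed in the DataNode log.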