MapReduce Exceptions with Hadoop 0.20.2

2010-12-09 Thread Praveen Bathala
Hi,

I am running a MapReduce job to pull some email addresses out of a huge
text file. I used Hadoop 0.19 before and had no issues; now I am using
Hadoop 0.20.2, and when I run my MapReduce job it fails, with the
following in the JobTracker log.

Can someone please help me?

2010-12-09 20:53:00,399 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/home/praveen/hadoop/temp/mapred/system
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/praveen/hadoop/temp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 0. has not reached the threshold 0.9990. Safe mode will be turned off automatically.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1700)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1680)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:517)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.delete(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy4.delete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:582)
        at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:227)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1695)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
2010-12-09 20:53:10,405 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2010-12-09 20:53:10,409 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: hdfs://localhost:9000/home/praveen/hadoop/temp/mapred/system


Thanks in advance
+ Praveen


Re: MapReduce Exceptions with Hadoop 0.20.2

2010-12-09 Thread Mahadev Konar
Hi Praveen,
  Looks like it's your namenode that's still in safemode.


http://wiki.apache.org/hadoop/FAQ

The safemode feature in the namenode waits until a certain threshold of HDFS
blocks has been reported by the datanodes before letting clients make edits
to the namespace. This usually happens when you reboot your namenode. You can
read more about safemode in the FAQ above.
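
If in doubt, the logs themselves are easy to check mechanically. A minimal
sketch (the helper name and log path are made up for illustration; the grep
pattern matches the SafeModeException text shown in the log above):

```shell
# Hypothetical helper: succeed if the given log file contains the
# "Name node is in safe mode" message from SafeModeException.
log_shows_safemode() {
  grep -q "Name node is in safe mode" "$1"
}

# Usage (the path is illustrative):
#   log_shows_safemode logs/hadoop-jobtracker.log && echo "NN still in safe mode"
```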

Thanks
mahadev


On 12/9/10 6:09 PM, "Praveen Bathala"  wrote:

[original message and stack trace quoted in full; trimmed]



Re: MapReduce Exceptions with Hadoop 0.20.2

2010-12-09 Thread Praveen Bathala
I did this:
prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin
-safemode leave
Safe mode is OFF
prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin
-safemode get
Safe mode is OFF
and then I restarted my cluster, and I still see INFO messages in the
namenode logs saying it is in safemode.

Somehow I am getting my map output fine, but job.isSuccessful() is
returning false.

Any help on that?

Thanks
+ Praveen

On Thu, Dec 9, 2010 at 9:28 PM, Mahadev Konar  wrote:

> [Mahadev's reply, including the quoted original message, trimmed]


-- 
+ Praveen


Re: MapReduce Exceptions with Hadoop 0.20.2

2010-12-09 Thread Konstantin Boudnik
On Thu, Dec 9, 2010 at 19:55, Praveen Bathala  wrote:
> I did this
> prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin
> -safemode leave
> Safe mode is OFF
> prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin
> -safemode get
> Safe mode is OFF

This is not a configuration setting: it is only a runtime on/off
switch. Once you have restarted the cluster, your NN will go into
safemode (for a number of reasons). TTs are made to quit if they can't
connect to HDFS after some timeout (60 seconds, if I remember
correctly). Once your NN is back from safemode, you can safely
start the MR daemons and everything should be just fine.

Simply put: be patient ;)
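
For what it's worth, the waiting can be scripted rather than done by hand.
A rough sketch of a poll loop (the wrapper function and retry limits are
made up for illustration; note that 0.20 also ships a blocking
`bin/hadoop dfsadmin -safemode wait` that does essentially this for you):

```shell
# Wrapper around the real status query; shown here so the loop below is
# self-contained. In real use this runs against your cluster.
check_safemode() { bin/hadoop dfsadmin -safemode get; }

# Poll until the NameNode reports "Safe mode is OFF", then it is safe to
# start the MapReduce daemons. Gives up after max_tries polls.
wait_for_safemode_off() {
  tries=0
  max_tries=${1:-30}
  while [ "$tries" -lt "$max_tries" ]; do
    case "$(check_safemode)" in
      *OFF*) return 0 ;;          # "Safe mode is OFF"
    esac
    tries=$((tries + 1))
    sleep "${SLEEP_SECS:-10}"     # seconds between polls
  done
  return 1                        # still in safe mode after max_tries polls
}

# Usage sketch: wait_for_safemode_off 60 && bin/start-mapred.sh
```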


> [rest of the quoted thread trimmed]


Re: MapReduce Exceptions with Hadoop 0.20.2

2010-12-12 Thread rahul patodi
Hi Praveen,
Whenever we restart the cluster, the namenode goes into safe mode for a
certain interval of time; during this period we cannot write to HDFS, and
afterwards the namenode automatically comes out of safe mode. What you did
was turn safe mode off and then restart the cluster, so when the cluster
came back up the namenode entered safe mode again to perform its essential
checks. After restarting the cluster, just wait a while for the namenode to
come out of safe mode.
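
How long that wait lasts is governed by a couple of HDFS settings. A sketch
of the relevant hdfs-site.xml entries (the values shown are believed to be
the 0.20 defaults; treat them as assumptions and tune with care):

```xml
<!-- hdfs-site.xml: safe mode tuning (values believed to be 0.20 defaults) -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999</value>
  <description>Fraction of blocks that datanodes must report before the
  namenode leaves safe mode (the 0.9990 threshold in the log above).</description>
</property>
<property>
  <name>dfs.safemode.extension</name>
  <value>30000</value>
  <description>Extra time in milliseconds the namenode stays in safe mode
  after the block threshold is reached.</description>
</property>
```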


-- 
-Thanks and Regards,
Rahul Patodi
Associate Software Engineer,
Impetus Infotech (India) Private Limited,
www.impetus.com
Mob:09907074413


On Fri, Dec 10, 2010 at 10:07 AM, Konstantin Boudnik  wrote:

> [earlier messages trimmed]