In my experience it helps to call Thread.sleep(100) after every N (say 1000) DFS writes, so the client can drain its queue before you push more data.
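In code, that pattern looks roughly like the sketch below. It uses a plain java.io StringWriter as a stand-in for the HDFS stream so it runs anywhere; the class name ThrottledWriter and the batchSize/pauseMillis parameters are my own invention, and the 1000-write / 100 ms figures are just the values from my experience, not tuned constants.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;
import java.util.Arrays;

public class ThrottledWriter {

    // Flush and pause for pauseMillis after every batchSize lines, so the
    // DFS client gets a chance to catch up before more data arrives.
    static void writeThrottled(BufferedWriter writer, Iterable<String> lines,
                               int batchSize, long pauseMillis)
            throws IOException, InterruptedException {
        int written = 0;
        for (String line : lines) {
            writer.write(line);
            writer.write('\n');
            if (++written % batchSize == 0) {
                writer.flush();            // push buffered bytes down the stream
                Thread.sleep(pauseMillis); // let the pipeline drain
            }
        }
        writer.flush();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the HDFS stream; in the real code this would wrap
        // the FSDataOutputStream returned by fileSys.create(...).
        StringWriter sink = new StringWriter();
        BufferedWriter writer = new BufferedWriter(sink);
        writeThrottled(writer, Arrays.asList("line1", "line2", "line3"), 2, 100);
        writer.close();
        System.out.print(sink.toString());
    }
}
```

In your case the BufferedWriter wrapping the OutputStreamWriter over fsdos would take the place of the StringWriter here.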
> -----Original Message-----
> From: Xavier Stevens [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, May 14, 2008 10:47 AM
> To: core-user@hadoop.apache.org
> Subject: FileSystem.create
>
> I'm having some problems creating a new file on HDFS. I am attempting
> to do this after my MapReduce job has finished, and I am trying to
> combine all part-00* files into a single file programmatically. It's
> throwing a LeaseExpiredException saying the file I just created doesn't
> exist. Any idea why this is happening or what I can do to fix it?
>
> -Xavier
>
> Here is the code snippet
> ===============================================================================
> FileSystem fileSys = FileSystem.get(job);
> FSDataOutputStream fsdos = fileSys.create(new Path(outputIndexPath));
> if (!fileSys.exists(new Path(outputIndexPath))) {
>     System.err.println("File still does not exist: " + outputIndexPath);
> } else {
>     System.out.println("File exists: " + outputIndexPath);
> }
> BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(fsdos, "UTF-8"));
>
> Output with stack trace
> ===============================================================================
> File exists: output/index.txt
> 08/05/14 03:20:13 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.dfs.LeaseExpiredException: No lease on
> /user/xstevens/output/index.txt File does not exist. [Lease. Holder:
> 44 46 53 43 6c 69 65 6e 74 5f 2d 31 30 31 34 35 38 35 32 32 33,
> heldlocks: 0, pendingcreates: 1]
> at org.apache.hadoop.dfs.FSNamesystem.checkLease(FSNamesystem.java:1160)
> at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1097)
> at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)
> at org.apache.hadoop.ipc.Client.call(Client.java:512)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
> at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:585)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
>
> Xavier Stevens
> Sr. Software Engineer
> FOX INTERACTIVE MEDIA
> e: [EMAIL PROTECTED]
> p: 310.633.9749