Sounds like you also hit this problem:
https://issues.apache.org/jira/browse/HADOOP-2669

Runping


> -----Original Message-----
> From: Luca [mailto:[EMAIL PROTECTED]
> Sent: Friday, April 18, 2008 1:21 AM
> To: core-user@hadoop.apache.org
> Subject: Re: Lease expired on open file
> 
> dhruba Borthakur wrote:
> > The DFSClient has a thread that renews leases periodically for all
> > files that are being written to. I suspect that this thread is not
> > getting a chance to run because the gunzip program is eating all the
> > CPU. You might want to put in a Sleep() after every few seconds of
> > unzipping.
> >
> > Thanks,
> > dhruba
> >
> 
> Thanks Dhruba,
>       with your suggestion and a small Sleep() every block (more or
less),
> it
> worked perfectly. Good hint!
> 
> Ciao,
> Luca
> 
> > -----Original Message-----
> > From: Luca Telloli [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, April 16, 2008 9:43 AM
> > To: core-user@hadoop.apache.org
> > Subject: Lease expired on open file
> >
> > Hello everyone,
> >     I wrote a small application that directly gunzip files from a
> > local
> > filesystem to an installation of HDFS, writing on a
FSDataOutputStream.
> > Nevertheless, while expanding a very big file, I got this exception:
> >
> > org.apache.hadoop.ipc.RemoteException:
> > org.apache.hadoop.dfs.LeaseExpiredException: No lease on
> > /user/luca/testfile File is not open for writing. [Lease.  Holder: 44 46
> > 53 43 6c 69 65 6e 74 5f 2d 31 39 31 34 34 39 36 31 34 30, heldlocks: 0,
> > pendingcreates: 1]
> >
> > I wonder what the cause of this exception might be, whether there's a
> > way to know the default lease for a file, and whether it's possible to
> > prolong it.
> >
> > Ciao,
> > Luca
> >
> 
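For anyone landing on this thread later, the workaround dhruba suggested can be sketched roughly as below: copy the decompressed stream block by block and pause periodically so that background threads (such as the DFSClient's lease-renewal thread) are not starved of CPU. This is a minimal, self-contained sketch using java.util.zip; in the real application the output would be an HDFS FSDataOutputStream, and the buffer size, block count, and sleep duration here are illustrative assumptions, not values from the thread.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;

public class GunzipWithSleep {

    // Illustrative tuning values, not taken from the original thread.
    static final int BLOCK_SIZE = 64 * 1024;  // copy-buffer size in bytes
    static final int BLOCKS_PER_SLEEP = 16;   // pause after this many blocks
    static final long SLEEP_MS = 10;          // length of each pause

    /**
     * Gunzips 'in' to 'out', sleeping briefly every few blocks so that
     * other threads (e.g. a lease renewer) get a chance to run.
     */
    public static void gunzip(InputStream in, OutputStream out)
            throws IOException, InterruptedException {
        GZIPInputStream gz = new GZIPInputStream(in);
        byte[] buf = new byte[BLOCK_SIZE];
        int blocks = 0;
        int n;
        while ((n = gz.read(buf)) != -1) {
            out.write(buf, 0, n);
            if (++blocks % BLOCKS_PER_SLEEP == 0) {
                Thread.sleep(SLEEP_MS);  // yield the CPU briefly
            }
        }
        gz.close();
        out.flush();
    }
}
```

The sleep is a blunt instrument, but as Luca reports above, even a small pause per block is enough to let the lease-renewal thread keep the lease alive during a long decompression.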
