Hi,

I used hdfs.ext.avro.AvroWriter to write an Avro file on HDFS, as described at
http://hdfscli.readthedocs.io/en/latest/api.html#hdfs.ext.avro.AvroWriter


with AvroWriter(client, hdfs_file, append=True, codec="snappy") as writer:
    writer.write(data)

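To make the loop concrete, here is roughly what I am running. This is only a
sketch: 'records' and the client address are placeholders for my actual setup,
and the path is the one that appears in the error below.

from hdfs import InsecureClient
from hdfs.ext.avro import AvroWriter

client = InsecureClient('http://namenode:50070', user='ram')  # placeholder address
hdfs_file = '/user/ram/level'

for data in records:  # 'records' stands in for my actual data source
    # A fresh writer is opened in append mode on every iteration.
    with AvroWriter(client, hdfs_file, append=True, codec='snappy') as writer:
        writer.write(data)
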
When I run this, I get:

java.lang.Exception: Shell Process Exception: Python HdfsError raised
Traceback (most recent call last):
  File "Hdfsfile.py", line 49, in process
    writer.write(data)
  File "/home/ram/lib/python2.7/site-packages/hdfs/ext/avro/__init__.py", line 277, in __exit__
    self._fo.__exit__(*exc_info)
  File "/home/ram/lib/python2.7/site-packages/hdfs/util.py", line 99, in __exit__
    raise self._err # pylint: disable=raising-bad-type
HdfsError: Failed to APPEND_FILE /user/ram/level for DFSClient_NONMAPREDUCE_-1757292245_79 on 172.26.83.17 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-668446345_78 on 172.26.83.17
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2979)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2726)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3033)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3002)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:739)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:429)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)


It seems the file's lease is still held by the previous append when the next
one starts. Is there a way to check whether the file exists each time append
is called?
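
For example, would something like the sketch below be reasonable? This is
only a rough idea: I am relying on client.status() with strict=False
returning None for a missing path, and the address, user, and function name
are placeholders.

from hdfs import InsecureClient
from hdfs.ext.avro import AvroWriter

client = InsecureClient('http://namenode:50070', user='ram')  # placeholder address

def write_record(hdfs_file, data, schema):
    # status() with strict=False returns None instead of raising when the
    # path does not exist, so it doubles as an existence check.
    if client.status(hdfs_file, strict=False) is None:
        # First write: create the file with an explicit schema.
        writer = AvroWriter(client, hdfs_file, schema=schema, codec='snappy')
    else:
        # File already exists: append, letting the schema come from the file.
        writer = AvroWriter(client, hdfs_file, append=True, codec='snappy')
    with writer as w:
        w.write(data)

Or is the existence check beside the point here, and the real issue that the
previous append's lease has not been released yet?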

Thanks,
Ram
