[ https://issues.apache.org/jira/browse/HDFS-17268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17791983#comment-17791983 ]

chan commented on HDFS-17268:
-----------------------------

Can you provide your Hadoop version?

> When a SocketTimeoutException happens, overwrite mode can delete old data 
> and leave the file empty
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17268
>                 URL: https://issues.apache.org/jira/browse/HDFS-17268
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: katty he
>            Priority: Major
>
> Recently, I used fs.create(path, true /* createOrOverwrite */) to write data 
> into Parquet file a. When a SocketTimeoutException occurred, such as "
> org.apache.hadoop.io.retry.RetryInvocationHandler            [] - 
> java.net.SocketTimeoutException: Call From xxx to namenodexxx:8888 failed on 
> socket timeout exception: java.net.SocketTimeoutException: 60000 millis 
> timeout while waiting for channel to be ready for read. ch : 
> java.nio.channels.SocketChannel[connected local=/node:33416 
> remote=namenode:8888]; For more details see:  
> http://wiki.apache.org/hadoop/SocketTimeout, while invoking 
> ClientNamenodeProtocolTranslatorPB.create over namenode:8888. Trying to 
> failover immediately.", I found that the size of file a was zero and reading 
> it failed with "file a is not a parquet file". The HDFS audit log showed two 
> create calls from two different routers, so I think overwrite is not safe in 
> some situations.
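The hazard described above is that create-with-overwrite truncates the existing file before the new data is durably written, so a timeout and retry can leave the destination empty. A common mitigation is to write to a temporary path and then rename it over the destination (on HDFS, FileSystem.create on a temp path followed by FileSystem.rename). Below is a minimal local-filesystem sketch of that pattern, not HDFS client code; the function name safe_overwrite is hypothetical:

```python
import os
import tempfile

def safe_overwrite(path: str, data: bytes) -> None:
    """Write data to a temporary file first, then atomically rename it
    over the destination. If the write fails midway, the old file at
    `path` is left untouched, unlike create-with-overwrite, which
    truncates the destination before any new data arrives."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make the new data durable first
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except BaseException:
        os.remove(tmp_path)  # clean up; the old file is still intact
        raise
```

With this pattern a failed or retried write never leaves a zero-length destination: readers see either the complete old file or the complete new one.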



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
