[ https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16899147#comment-16899147 ]

Erik Krogen edited comment on HDFS-14462 at 8/2/19 7:05 PM:
------------------------------------------------------------

TestLargeBlockReport is failing consistently on trunk, and 
TestUnderReplicatedBlocks is notoriously flaky, so neither failure appears 
related to this patch.

I just committed this to trunk, including backports down to branch-2. Thanks 
for the contribution [~simbadzina]!


was (Author: xkrogen):
I just committed this to trunk, including backports down to branch-2. Thanks 
for the contribution [~simbadzina]!

> WebHDFS throws "Error writing request body to server" instead of 
> DSQuotaExceededException
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-14462
>                 URL: https://issues.apache.org/jira/browse/HDFS-14462
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.7, 3.1.2
>            Reporter: Erik Krogen
>            Assignee: Simbarashe Dzinamarira
>            Priority: Major
>             Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HDFS-14462.001.patch, HDFS-14462.002.patch, 
> HDFS-14462.003.patch, HDFS-14462.004.patch
>
>
> We noticed recently in our environment that, when writing data to HDFS via 
> WebHDFS, a quota exception is returned to the client as:
> {code}
> java.io.IOException: Error writing request body to server
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536) ~[?:1.8.0_172]
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519) ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_172]
>         at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) ~[?:1.8.0_172]
>         at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[?:1.8.0_172]
> {code}
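>
> For illustration, this failure mode can be reproduced with an ordinary WebHDFS 
> write against a directory whose space quota is already exhausted. A minimal 
> sketch (hostname, port, and file path below are hypothetical):
> {code}
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class WebHdfsQuotaRepro {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // The webhdfs:// scheme routes the write through the WebHDFS REST API
>     // instead of the native RPC/DataTransfer path.
>     FileSystem fs = FileSystem.get(
>         URI.create("webhdfs://namenode.example.com:9870"), conf);
>     try (FSDataOutputStream out = fs.create(new Path("/foo/path/here/repro.bin"))) {
>       byte[] chunk = new byte[1 << 20]; // 1 MiB per write
>       for (int i = 0; i < 10 * 1024; i++) {
>         // Once the space quota is exceeded, this surfaces as the opaque
>         // "Error writing request body to server" IOException above.
>         out.write(chunk);
>       }
>     }
>   }
> }
> {code}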
> It is entirely opaque to the user that this exception occurred because they 
> exceeded their quota. Yet in the DataNode logs:
> {code}
> 2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /foo/path/here is exceeded: quota = XXXXXXXXXXXX B = X TB but diskspace consumed = XXXXXXXXXXXXXXXX B = X TB
>         at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
>         at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
> {code}
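>
> The check that produces this message is 
> {{DirectoryWithQuotaFeature.verifyStoragespaceQuota}} on the NameNode; a 
> condensed sketch of its semantics (names simplified, not the actual source):
> {code}
> import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
>
> public class SpaceQuotaCheckSketch {
>   /**
>    * Fail if writing {@code delta} more bytes would push consumption past the
>    * configured space quota; a negative quota means "unlimited".
>    */
>   static void verifyStoragespaceQuota(long quota, long consumed, long delta)
>       throws DSQuotaExceededException {
>     if (quota >= 0 && consumed + delta > quota) {
>       throw new DSQuotaExceededException(quota, consumed + delta);
>     }
>   }
> }
> {code}
> The exception is raised on the NameNode and propagated to the DFSClient's 
> DataStreamer (hence the log above), but evidently never makes it back to the 
> WebHDFS caller intact.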
> This was on a 2.7.x cluster, but I verified that the same logic exists on 
> trunk. I believe we need to fix some of the logic within the 
> {{ExceptionHandler}} to add special handling for the quota exception.
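>
> As a rough sketch only of what that special handling could look like (the 
> actual {{ExceptionHandler}} does considerably more): unwrap the cause chain 
> and map a quota violation to an HTTP 403 carrying its real message, rather 
> than letting it collapse into a generic error:
> {code}
> import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
>
> public class QuotaAwareMappingSketch {
>   /** Walk the cause chain looking for a buried quota violation. */
>   static DSQuotaExceededException findQuotaCause(Throwable t) {
>     for (Throwable cur = t; cur != null; cur = cur.getCause()) {
>       if (cur instanceof DSQuotaExceededException) {
>         return (DSQuotaExceededException) cur;
>       }
>     }
>     return null;
>   }
>
>   /** 403 for quota violations (a client-visible policy error), 500 otherwise. */
>   static int toHttpStatus(Exception e) {
>     return findQuotaCause(e) != null ? 403 : 500;
>   }
> }
> {code}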


