[ https://issues.apache.org/jira/browse/KNOX-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16996325#comment-16996325 ]

Sean Chow commented on KNOX-2139:
---------------------------------

We used v0.8.0 before and it was fine. I think this starts from v0.9.0, since 
KNOX-793 mentions it.

I tested a file of 4294967296 bytes ({{dd if=/dev/zero 
of=/data/hadoopenv/data4gb.bin bs=1024 count=4194304}}) and hit exactly the 
same issue.
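Both failing sizes are exact multiples of 4 GiB, so one plausible explanation is a 32-bit truncation of the length somewhere on the path: casting such a length down to {{int}} yields 0, which looks like an empty body rather than an unknown-length one. A minimal sketch of just that arithmetic (not Knox code):

```java
public class TruncationDemo {
    public static void main(String[] args) {
        long fourGiB  = 4294967296L; // the dd-generated file
        long eightGiB = 8589934592L; // the failing upload
        // A 32-bit int keeps only the low 32 bits, so any exact
        // multiple of 4 GiB collapses to 0 when cast down.
        System.out.println((int) fourGiB);  // prints 0
        System.out.println((int) eightGiB); // prints 0
        // -1 is the conventional "length unknown" marker, which is
        // why forcing it below makes the upload stream successfully.
        System.out.println((int) -1L);      // prints -1
    }
}
```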

After comparing the diffs between branches v0.8.0 and v0.9.0, I think {{KNOX-716 
replayBufferSize is kept in bytes}} is the root cause. Refer to 
[https://github.com/apache/knox/commit/454ea60173ba28fa218c031249348e9cd93759ac]

I tried changing the configuration {{replayBufferSize}} to {{-1}} for WEBHDFS, 
but the issue is still there.

After some debugging, I found that 
{{org.apache.knox.gateway.dispatch.DefaultDispatch:executeRequest()}} exits too 
quickly compared to a normal file put. I also found a clue that may be useful: 
*the contentLength is set wrongly*


file put normal:
{code:java}
jdb -attach 127.0.0.1:8787
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
> stop at org.apache.knox.gateway.dispatch.DefaultDispatch:240
Set breakpoint org.apache.knox.gateway.dispatch.DefaultDispatch:240

qtp1389432760-24[1] next
> 
Step completed: "thread=qtp1389432760-24", 
org.apache.knox.gateway.dispatch.DefaultDispatch.createRequestEntity(), 
line=241 bci=22

qtp1389432760-24[1] print contentLength
 contentLength = -1

{code}
file put abnormal:
{code:java}
jdb> qtp1389432760-24[1] print contentLength
 contentLength = 0
{code}
So I changed contentLength to -1 directly in 
{{org.apache.knox.gateway.dispatch.DefaultDispatch.createRequestEntity()}} to 
verify my guess, and *the file with 8 GB size uploaded successfully.*
{code:java}
if (contentType == null) {
   entity = new InputStreamEntity(contentStream, -1); // changed to verify
} else {
   entity = new InputStreamEntity(contentStream, contentLength,
                                  ContentType.parse(contentType));
}
{code}
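If the truncation theory is right, a hypothetical fix would be to carry the length as a {{long}} end to end, e.g. via the Servlet 3.1+ {{getContentLengthLong()}} rather than the int-returning {{getContentLength()}}. A self-contained sketch (the {{Entity}} class below is a stand-in for HttpClient's {{InputStreamEntity}}, just so the snippet runs on its own):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Hypothetical sketch, not Knox's actual code: Entity stands in for
// Apache HttpClient's InputStreamEntity and only records the length.
public class FixSketch {
    static final class Entity {
        final long contentLength;
        Entity(InputStream in, long contentLength) {
            this.contentLength = contentLength;
        }
    }

    // Keeping the length as a long end to end avoids collapsing
    // exact multiples of 4 GiB to 0 the way an int cast does.
    static Entity createRequestEntity(InputStream in, long contentLength) {
        return new Entity(in, contentLength);
    }

    public static void main(String[] args) {
        long reported = 8589934592L;    // e.g. getContentLengthLong()
        int truncated = (int) reported; // e.g. legacy getContentLength()
        InputStream body = new ByteArrayInputStream(new byte[0]);
        System.out.println(createRequestEntity(body, truncated).contentLength);
        System.out.println(createRequestEntity(body, reported).contentLength);
    }
}
```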
I'm not sure how replayBufferSize affects this behavior, though.


> Can not handle 8GB file when using webhdfs
> ------------------------------------------
>
>                 Key: KNOX-2139
>                 URL: https://issues.apache.org/jira/browse/KNOX-2139
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>    Affects Versions: 1.1.0, 1.2.0
>            Reporter: Sean Chow
>            Priority: Critical
>
> I have used Knox with webhdfs for a long time, and I upgraded my Knox version 
> from 0.8 to 1.2 in recent days. It's really strange that Knox can't handle a 
> file with size *8589934592 bytes* when I upload my split file to HDFS.
> It's easy to reproduce, and both Knox 1.1 and 1.2 have this issue. But it 
> works fine in Knox 0.8.
> Any error log found in gateway.log? No, all logs are clean. From the client 
> side (curl), I saw that the URL is redirected correctly and the upload failed 
> with {{curl: (55) Send failure: Connection reset by peer}} or {{curl: (55) 
> Send failure: Broken pipe}}.
> I'm sure my network is OK. Files of any other size (smaller or larger) can 
> be uploaded successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
