[ 
https://issues.apache.org/jira/browse/KNOX-157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated KNOX-157:
--------------------------------

    Status: Patch Available  (was: In Progress)

Fixing with a limited-scope patch.
The patch buffers the request body only for a Kerberos cluster.
So, a simple cluster would not hit memory limits.
For now, a Kerberos cluster would still hit memory limits when large files are submitted.
I think this is a good solution in the short term.
We will continue to look for better solutions.
Will file a separate Jira to track this specifically for the secure cluster.
                
> Knox is not able to process PUT/POST requests with large payload
> ----------------------------------------------------------------
>
>                 Key: KNOX-157
>                 URL: https://issues.apache.org/jira/browse/KNOX-157
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>    Affects Versions: 0.3.0
>            Reporter: Vladimir Tkhir
>            Assignee: Dilli Arumugam
>            Priority: Critical
>             Fix For: 0.3.0
>
>         Attachments: KNOX-157.patch, KNOX-157.patch
>
>
> Getting an OutOfMemory exception when trying to process PUT/POST requests with 
> large bodies. As an example, create a 1GB file in HDFS via Knox.
> The issue is related to replaying request bodies on dispatch.
> From Kevin:
> We need a special version of a BufferedHttpEntity that will buffer up to the 
> first n bytes and then, if getContent() is called again after that many bytes 
> have been read, it throws an exception.
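The bounded-replay entity Kevin describes could be sketched roughly as below, using only java.io rather than HttpClient's HttpEntity interface. The class and method names here are illustrative, not Knox's actual implementation: the first getContent() call streams the body while teeing bytes into an in-memory buffer capped at maxBytes; a second call replays the buffer, or throws if the body was too large to have been fully captured.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of a replay-limited request body. Buffers up to the
// first maxBytes so the body can be replayed once (e.g. after a Kerberos
// 401 challenge); a replay attempt on an oversized body throws instead of
// silently truncating or buffering the whole payload in memory.
class ReplayLimitedBody {
    private final InputStream source;
    private final int maxBytes;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private boolean consumed = false;
    private boolean overflowed = false;

    ReplayLimitedBody(InputStream source, int maxBytes) {
        this.source = source;
        this.maxBytes = maxBytes;
    }

    InputStream getContent() throws IOException {
        if (!consumed) {
            consumed = true;
            // First read: tee bytes into the bounded buffer as they stream by.
            return new FilterInputStream(source) {
                @Override public int read() throws IOException {
                    int b = super.read();
                    if (b != -1) record(new byte[]{(byte) b}, 0, 1);
                    return b;
                }
                @Override public int read(byte[] buf, int off, int len)
                        throws IOException {
                    int n = super.read(buf, off, len);
                    if (n > 0) record(buf, off, n);
                    return n;
                }
            };
        }
        if (overflowed) {
            throw new IOException(
                "Request body exceeded " + maxBytes + " bytes; cannot replay");
        }
        // Replay from the in-memory copy.
        return new ByteArrayInputStream(buffer.toByteArray());
    }

    private void record(byte[] buf, int off, int len) {
        if (overflowed) return;
        if (buffer.size() + len > maxBytes) {
            overflowed = true;
            buffer.reset(); // drop the partial copy; it can never be replayed
        } else {
            buffer.write(buf, off, len);
        }
    }
}
```

In the real patch this logic would live behind HttpClient's HttpEntity contract (isRepeatable(), getContent()), with maxBytes made configurable, but the small-body/large-body split above is the essential idea.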

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
