The short-term fix would be documentation. Say it in clear language right next 
to the download link:

    "If you publish large artifacts, you must download Ivy plus its 
    dependencies. Install the commons-httpclient, commons-codec, and 
    commons-logging jars into ant/lib next to the ivy jar."

Note that you need all three jars, not just httpclient. As far as I know, that 
detail is not documented anywhere.
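To make the "all three jars" requirement easy to verify, a quick classpath 
check could look like this (a sketch; the class names below are the usual 
entry points shipped in those three jars):

```java
// Sanity-checks that the libraries Ivy's httpclient support needs are
// on the classpath. Run with the same classpath Ant/Ivy would use.
public class IvyHttpDeps {

    // Returns true if the named class can be loaded.
    static boolean present(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] needed = {
            "org.apache.commons.httpclient.HttpClient",   // commons-httpclient
            "org.apache.commons.codec.binary.Base64",     // commons-codec
            "org.apache.commons.logging.LogFactory"       // commons-logging
        };
        for (String c : needed) {
            System.out.println(c + ": " + (present(c) ? "found" : "MISSING"));
        }
    }
}
```

If any line prints MISSING, the corresponding jar still needs to go into 
ant/lib.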

That is what can be done now. Going forward, the options are as follows:

    1. Keep everything the same and treat the documentation as the solution.
    2. Require the httpclient jars to be installed.
    3. Find a workaround for the buffering/authentication issues of 
       HttpURLConnection.
    4. Include the necessary httpclient classes inside ivy.jar.
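Regarding option 3, one candidate workaround is HttpURLConnection's streaming 
modes, which stop the JDK from buffering the whole request body in memory. The 
trade-off is that a streamed body cannot be replayed for an authentication 
challenge (the connection throws HttpRetryException instead), so credentials 
would have to be sent preemptively. A minimal sketch (the method name and URL 
here are illustrative, not Ivy code):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamingUpload {

    // Configures a PUT so the body is streamed with a known length
    // instead of being accumulated in an internal byte array.
    // Caveat: once a streaming mode is set, HttpURLConnection cannot
    // replay the body to answer a 401 challenge; it throws
    // HttpRetryException, so auth must succeed on the first request
    // (e.g. a preemptive Basic Authorization header).
    static HttpURLConnection openStreamingPut(URL url, long contentLength)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(contentLength); // long overload: Java 7+
        return conn;
    }
}
```

The caller then writes the file to conn.getOutputStream() in a loop; no 
network I/O happens until that stream is first used.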

Several options are available, each with its own merits.
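For reference, the fix Loren describes below boils down to copying through a 
fixed-size buffer instead of letting convenience methods accumulate the whole 
payload in growing in-memory buffers. A minimal sketch of such a copy loop:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {

    // Copies the input with a fixed 8 KB buffer; memory use stays
    // constant regardless of content length, unlike accumulating the
    // payload in a ByteArrayOutputStream that doubles as it fills.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }
}
```

Pointing such a loop straight at the connection's output stream is what avoids 
the memory ballooning described below.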

L.K.

-----Original Message-----
From: Maarten Coene [mailto:maarten_co...@yahoo.com.INVALID] 
Sent: Thursday, April 09, 2015 7:51 AM
To: Ant Developers List
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

I'm not a fan of this proposal; I like it that Ivy doesn't have any dependencies 
when using the standard resolvers.
Perhaps it could be added to the documentation that if you use the URL resolver 
for large uploads you'll have to add httpclient to the classpath?


Maarten




----- Original Message -----
From: Antoine Levy Lambert <anto...@gmx.de>
To: Ant Developers List <dev@ant.apache.org>
Cc: 
Sent: Thursday, April 9, 2015 3:50 AM
Subject: Re: [jira] (IVY-1197) OutOfMemoryError during ivy:publish

Also, I wonder whether we should make the use of httpclient with Ivy 
compulsory, since Loren says that the JDK's HttpURLConnection always copies the 
full file into a byte array when authentication is performed.

That would make the code simpler.

Regards,

Antoine

On Apr 7, 2015, at 9:22 PM, Antoine Levy Lambert <anto...@gmx.de> wrote:

> Hi,
> 
> I wonder whether we should not upgrade ivy to use the latest http client 
> library too ?
> 
> Regards,
> 
> Antoine
> 
> On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <j...@apache.org> wrote:
> 
>> 
>>   [ https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468 ]
>> 
>> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
>> ------------------------------------------------------------
>> 
>> I would be happy to provide you with a project that will reproduce the 
>> issue. I can and will do that. 
>> 
>> Generally speaking, from a high level: the utility classes are calling 
>> convenience methods and writing to streams that ultimately buffer the data 
>> being written. There is buffering, then more buffering, and even more 
>> buffering, until you have multiple copies of the entire content of the 
>> stream stored in oversized buffers (because they double in size when they 
>> fill up). Oddly, the twist is that the JVM hits a limit no matter how much 
>> RAM you allocate. Once the buffers total more than about ~1GB (which is 
>> what happens with a 100-200MB upload), the JVM refuses to allocate more 
>> buffer space (even if you jack up the RAM to 20GB, no cigar). Honestly, 
>> there is no benefit in buffering any of this data to begin with; it is 
>> just a side effect of using high-level copy methods. There is no memory 
>> ballooning at all when the content is written directly to the network.
>> 
>> I will provide a test project and note the breakpoints where you can debug 
>> and watch the process walk all the way down the aisle to an OOME. I will 
>> have this for you asap.
>> 
>> 
>> was (Author: qphase):
>> I would be happy to provide you with a project that will reproduce the 
>> issue. I can and will do that. 
>> 
>> Generally speaking from a high level, the utility classes are calling 
>> convenience methods and writing to streams that ultimately buffer the data 
>> being written. There is buffering, then more buffering, and even more 
>> buffering until you have multiple copies of the entire content of the stream 
>> stored in over sized buffers (because they double in size when they fill 
>> up). Oddly, the twist is that the JVM hits a limit no matter how much RAM 
>> you allocate. Once the buffers total more than about ~1GB (which is what 
>> happens with a 100-200MB upload) the JVM refuses to allocate more buffer 
>> space (even is you jack up the RAM to 20GB, no cigar). Honestly, there is no 
>> benefit in buffering any of this data to begin with, it is just a side 
>> effect of using high level copy methods. There is no memory ballooning at 
>> all when the content is written directly to the network.
>> 
>> I will provide a test project and note the break points where you can debug 
>> and watch the process walk all the way down the isle to an OOME. I will have 
>> this for you asap.
>> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscr...@ant.apache.org For additional 
> commands, e-mail: dev-h...@ant.apache.org


