What projects were you building? You may very well have triggered a problem.
On Jan 10, 2015, at 5:26 PM, Kristian Rosenvold
wrote:
> Sorry about this, there was something wrong about my environment.
> Memory usage is slightly higher, but nothing out of the ordinary.
>
> Kristian
>
>
I think it needs to be fixed so I'll take a look. Igor noticed when I made the
change:
http://mail-archives.apache.org/mod_mbox/maven-dev/201410.mbox/%3C5448E62D.7040607%40ifedorenko.com%3E
But I also feel the change is one that's necessary to get toward an immutable
set of projects before the
Nope, I did not miss it. DeferredFileOutputStream writes everything to
disk once it reaches the threshold, so it basically frees the allocated
memory buffers. Mine keeps the allocated buffers and creates a
SequenceInputStream over allocated buffers + disk file
Kristian
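Kristian's description above (keep the allocated memory buffers, offload past a threshold, then read back with a SequenceInputStream chained over the buffers plus the disk file) can be sketched roughly like this. The class name and all details here are hypothetical, for illustration only, and not the actual plexus-archiver or commons-compress code:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.SequenceInputStream;

/*
 * Hypothetical sketch: below the threshold the bytes stay in memory; past it
 * the overflow goes to a temp file. Reading back chains the still-allocated
 * memory buffer and the overflow file, instead of dumping everything to disk
 * the way DeferredFileOutputStream does.
 */
class KeepBufferOutputStream extends OutputStream {

    private final ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private final int threshold;
    private File overflow;          // created lazily once the threshold is hit
    private OutputStream fileOut;
    private long written;

    KeepBufferOutputStream(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public void write(int b) throws IOException {
        if (written < threshold) {
            memory.write(b);        // keep the allocated buffer
        } else {
            if (fileOut == null) {
                overflow = File.createTempFile("scatter", ".tmp");
                overflow.deleteOnExit();
                fileOut = new BufferedOutputStream(new FileOutputStream(overflow));
            }
            fileOut.write(b);       // offload the rest to disk
        }
        written++;
    }

    /** Memory buffer first, then the overflow file, as one logical stream. */
    InputStream toInputStream() throws IOException {
        if (fileOut != null) {
            fileOut.close();
        }
        InputStream head = new ByteArrayInputStream(memory.toByteArray());
        return overflow == null
                ? head
                : new SequenceInputStream(head, new FileInputStream(overflow));
    }
}
```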
2015-01-10 23:14 GMT+01:00
Sorry about this, there was something wrong about my environment.
Memory usage is slightly higher, but nothing out of the ordinary.
Kristian
2015-01-10 22:29 GMT+01:00 Jason van Zyl :
> If it's causing you issues I'll look sooner, but it is something I planned to
> look at. We're building 900 modules
Kristian Rosenvold wrote:
[snip]
> Inside commons-compress this target is always a
> tempfile. Inside plexus-archiver OffloadingOutputStream (a
> commons-compress ScatterOutputStream) is used. This writes to some
> pretty huge memory buffers, but when a certain threshold is reached it
> offloads t
If it's causing you issues I'll look sooner, but it is something I planned to
look at. We're building 900 modules and we're using more memory but not as
drastic as you're seeing.
On Jan 10, 2015, at 4:13 PM, Kristian Rosenvold
wrote:
> I just tracked down the commit that broke memory usage in
Some memory increase is expected, as it doesn't allow anything to be loaded
lazily. We didn't see as drastic an increase as you saw, but I think it reveals
other problems in the core.
On Jan 10, 2015, at 4:13 PM, Kristian Rosenvold
wrote:
> I just tracked down the commit that broke memory usage
I just tracked down the commit that broke memory usage in core,
https://github.com/apache/maven/tree/6cf9320942c34bc68205425ab696b1712ace9ba4
it does not revert cleanly towards trunk, I filed MNG-5751 about this issue.
Kristian
Tibor says "flush" was slow, so I assume the flush should be included when
calculating the effective MB/s. Unsure if he did that...
K
2015-01-10 22:03 GMT+01:00 Igor Fedorenko :
> Out of curiosity, what hardware did you use? 400MB/s seems too high
> even for many modern SSDs [1], let alone mechanical hard drives.
Out of curiosity, what hardware did you use? 400MB/s seems too high
even for many modern SSDs [1], let alone mechanical hard drives.
[1] http://techreport.com/review/25391/wd-red-4tb-hard-drive-reviewed/4
--
Regards,
Igor
On 2015-01-10 13:55, Tibor Digana wrote:
Hi Kristian,
Are you using NIO
On 2015-01-10 at 20:43, Robert Scholte wrote:
Yes, I would have expected another message and messages from the buildbot.
The nice thing about CMS is that you can provide the log message as usual,
plus it will cache your credentials and pass them to Subversion.
I really like the built-in preview
Yes, I would have expected another message and messages from the buildbot.
Sorry for the noise.
Robert
On Sat, 10 Jan 2015 20:33:17 +0100, Michael Osipov wrote:
On 2015-01-10 at 20:22, Robert Scholte wrote:
Hi Michael,
Are you aware that the preferred way to do this is by CMS[1][2]?
On 2015-01-10 at 20:22, Robert Scholte wrote:
Hi Michael,
Are you aware that the preferred way to do this is by CMS[1][2]?
Hi Robert,
what makes you think I didn't use the nice CMS service?
Did you expect another author on the commit?
Here is the outcome for the fix:
Congratulations michae
Hi Michael,
Are you aware that the preferred way to do this is by CMS[1][2]?
thanks,
Robert
[1] https://cms.apache.org/
[2] http://maven.apache.org/developers/website/index.html
On Sat, 10 Jan 2015 20:17:32 +0100, wrote:
Author: michaelo
Date: Sat Jan 10 19:17:32 2015
New Revision: 1650795
No NIO. The problem is the ZipArchiveOutputStream in commons compress
which is already suffering from a bit of an overload. With the changes
I made to ZipArchiveOutputStream you could probably try to make a
patch; it'd be interesting to see what kind of effect it would have.
Be sure to make your pa
Hi Kristian,
Are you using NIO for writing big chunks?
Several years ago I made NIO1 measurements. I found that using a 256KB
DirectByteBuffer on Windows together with MappedByteBuffer/RandomAccessFile on
very large files (1GB) I got terribly fast throughput, 400MB/s, on an ordinary
hard drive; however the flush/clo
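The kind of NIO measurement Tibor describes can be sketched with plain JDK classes; the class and method names here are mine, for illustration only. The force() call is the flush step whose cost, as discussed in this thread, may or may not have been included in the 400MB/s figure:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

/*
 * Sketch: write through a MappedByteBuffer obtained from a RandomAccessFile's
 * channel. Puts go to the page cache, so throughput looks very high until the
 * dirty pages are actually flushed to disk.
 */
class MappedWrite {

    static void write(File file, byte[] data) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, data.length);
            buf.put(data);    // lands in the page cache, not yet on disk
            buf.force();      // flush dirty pages; often the expensive part
        }
    }
}
```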
The Apache Maven team is pleased to announce the release of the Maven
Project Info Reports Plugin, version 2.8.
This plugin generates reports with general information about the project.
http://maven.apache.org/plugins/maven-project-info-reports-plugin/
You should specify the version in your project's
Interesting questions, Tibor - and some of them have quite complex
answers. I think this will end up as some kind of blog post when we
approach release.
First the obvious: Scatter the cpu-intensive compression algorithm
(zip Deflater) to all available threads. Initially I let each thread
write it'
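The "scatter the deflate work" idea Kristian describes can be illustrated with a small JDK-only sketch: each entry is compressed on a pool thread, and the futures are collected in submission order so a single writer could then emit the archive sequentially. This is a sketch of the technique under those assumptions, not the plexus-archiver or commons-compress implementation:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.zip.Deflater;

class ParallelDeflate {

    static List<byte[]> compressAll(List<byte[]> entries) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (byte[] entry : entries) {
                futures.add(pool.submit(() -> deflate(entry)));  // CPU work in parallel
            }
            List<byte[]> compressed = new ArrayList<>();
            for (Future<byte[]> f : futures) {
                compressed.add(f.get());                         // order is preserved
            }
            return compressed;
        } finally {
            pool.shutdown();
        }
    }

    // Deflate one entry in isolation; a pool thread runs this independently.
    private static byte[] deflate(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }
}
```

Collecting the futures in submission order is what lets a single sequential writer assemble a valid archive even though compression itself ran on all cores.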
done
Regards,
Hervé
On Saturday, 10 January 2015 15:02:40, Michael Osipov wrote:
> Hi,
>
> The vote has passed with the following result:
>
> +1 (binding): Karl-Heinz Marbaise, Hervé Boutemy, Kristian Rosenvold
>
> I will promote the artifacts to the central repo.
>
> PMCs please promote the
Great job Kristian!
Where was the hotspot where you gained the performance? Was it just the Java
code where you add ZipEntries to the stream, or the parallel writes to the
file, or is the improvement specific to the hard drive? Does it apply better
to a normal hard drive or to an SSD, maybe?
--
View this message in contex
2015-01-10 16:46 GMT+01:00 Stefan Bodewig :
>> I originally had ConcurrentJarCreator in my c-compress fork. We discussed
>> this (arguably somewhat briefly) on the commons mailing list and to my
>> understanding Stefan wants c-c to be more of a toolkit (at a slightly lower
>> level) and did not wan
Cool. I'm going to try it on my project and a couple other tests and if it
makes things better I'll push a change for review.
On Jan 10, 2015, at 11:29 AM, Kristian Rosenvold
wrote:
> When I look at this code I see that there's probably a fair bit of
> work to do to bring this up to a level th
When I look at this code I see that there's probably a fair bit of
work to do to bring this up to a level that would fit for commons.
Since this does not really add any value for maven users I'm not
immediately willing to do this; I have other higher-value targets in
sight. Maybe sometime later. If
2015-01-10 16:58 GMT+01:00 Michael Osipov :
> the writeTo method uses currentTimeMillis to calculate elapsed time. You
> should rather use nanoTime, see MNG-5626.
Damn. At my age System.currentTimeInMinutes() feels more appropriate
K
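For reference, the pattern MNG-5626 asks for, as a minimal sketch (the helper class and method names are mine): System.nanoTime() is monotonic and meant for measuring intervals, while System.currentTimeMillis() follows wall-clock time and can jump around under NTP corrections or clock changes.

```java
/*
 * Minimal elapsed-time helper: take a monotonic start mark, run the work,
 * and convert the nanosecond delta to milliseconds for reporting.
 */
class Elapsed {

    static long elapsedMillis(Runnable work) {
        long start = System.nanoTime();          // monotonic start mark
        work.run();
        return (System.nanoTime() - start) / 1_000_000L;
    }
}
```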
-
On 2015-01-10 at 16:51, Kristian Rosenvold wrote:
I'm probably mixing threads here. Better to do so in email than in code :)
The file we're talking about is
https://github.com/sonatype/plexus-archiver/blob/2.x/src/main/java/org/codehaus/plexus/archiver/zip/ConcurrentJarCreator.java
Kristian,
I'm probably mixing threads here. Better to do so in email than in code :)
The file we're talking about is
https://github.com/sonatype/plexus-archiver/blob/2.x/src/main/java/org/codehaus/plexus/archiver/zip/ConcurrentJarCreator.java
Kristian
2015-01-10 16:46 GMT+01:00 Stefan Bodewig :
> On 20
On 2015-01-10, Kristian Rosenvold wrote:
> On 10 Jan 2015 15:19, "Jason van Zyl" wrote:
>> So I took a look and my feedback:
>> You have a blurb about how to use the parallel code in commons-compress
>> but there is no test or example that actually shows how to make one. The
>> actual working exa
On 10 Jan 2015 15:19, "Jason van Zyl" wrote:
> So I took a look and my feedback:
>
> You have a blurb about how to use the parallel code in commons-compress
> but there is no test or example that actually shows how to make one. The
> actual working example is in plexus-archiver and if you don't min
In the project I'm working on there is one module that has a very large JAR,
and the final application ZIP is super massive. So I'll try with those next and
report back with some metrics as well.
On Jan 10, 2015, at 9:06 AM, Kristian Rosenvold
wrote:
> Yes. Everything should be noticeably fas
So I took a look and my feedback:
You have a blurb about how to use the parallel code in commons-compress but
there is no test or example that actually shows how to make one. The actual
working example is in plexus-archiver and if you don't mind I'd like to put a
utility in commons-compress so
Yes. Everything should be noticeably faster. I have also been working on
features that allow transfer of compressed entries from one jar file to
another without packing/unpacking, which should be great for all kinds of
exploded archives inside zips/jars. I hope to get that into a second beta
soon.
Hi,
The vote has passed with the following result:
+1 (binding): Karl-Heinz Marbaise, Hervé Boutemy, Kristian Rosenvold
I will promote the artifacts to the central repo.
PMCs please promote the source release ZIP file and add this release to the
board report.
-
Sounds like an awesome improvement! Will we see speed improvement on war
file creation with many megs of jars in the lib dir or only with
compressing of files into the archive (ignoring the web files for this
question)?
On Sat, Jan 10, 2015 at 7:43 AM, Kristian Rosenvold <
kristian.rosenv...@gma
I had 950% CPU usage on my 6 core + HT machine here the other day.
Kristian
2015-01-10 14:42 GMT+01:00 Kristian Rosenvold :
> It's faster; a lot faster - and it scales beautifully. But then
> again you probably need "war/ear/zip" heavy builds to really get max
> effect. The average "jar" plugin
It's faster; a lot faster - and it scales beautifully. But then
again you probably need "war/ear/zip" heavy builds to really get max
effect. The average "jar" plugin does not usually consume that large a
percentage of the average build.
I'll try to make some nice graphs and a blog post some time th
Do you have any metrics? I use commons-compress directly so I can certainly
try it on some large archives. Does anything special need to be done? Or do
the same code paths still work, just using the available cores with the new
version?
On Jan 10, 2015, at 7:23 AM, Kristian Rosenvold
wrote:
> I
I just released plexus-archiver version 2.10-beta-1 to maven central.
This is a "technology preview" of the multithreaded Zip feature I have
been adding to commons-compress for the last few weeks, and will
basically use all available CPU cores when compressing the archive.
To test/use this featur
On Jan 10, 2015, at 3:16 AM, Mark Struberg wrote:
> Hi Martin!
>
> The maven-compiler plugin already does this. But once a single change is
> detected then we need to recompile the _whole_ module because of the reasons
> explained by Igor.
> Pro of JDT: you can e.g. also see if there were onl
Hi Martin!
The maven-compiler plugin already does this. But once a single change is
detected then we need to recompile the _whole_ module because of the reasons
explained by Igor.
Pro of JDT: you can e.g. also see if there were only 'internal' changes and
thus the other dependencies don't need
If you decide to fix a regression on the 1.5 version in a branch, you
don't have to move any labels :)
Thanks
Kristian
works fine over here.
rat fine as well.
+1
LieGrue,
strub
> On Thursday, 8 January 2015, 21:21, Karl Heinz Marbaise
> wrote:
> > Hi,
>
> We solved 12 issues:
> http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=11150&version=20681
>
> There are still a couple of issues left in