Noel J. Bergman wrote:
Stefano Bagnara wrote:

+1 for the dist: that is where the real check has to be enforced.
In the other scenarios, forcing it won't help.

Agreed.

The long-running test issue is solved by using a decent CI environment.

We do have nightly builds.  And the nightly build pointed out the failed
test.

Right, but nightly means once per day (per night?), and:
1) it often happens many hours after the commit;
2) often multiple commits have been made, and we need time to identify the bad one;
3) the subject of the nightly build mail is always the same: to see that it failed, you have to read the body.

And those are only the first three issues I identified comparing the current nightly builds with my Continuum setup.

I have Continuum monitoring the James repositories every 5 minutes.

I doubt that anyone wants me to post automated results many times
daily, but if that's what people want, I can adjust the cron job
accordingly, although I'd only upload the builds once per day.

I like what Continuum does: it runs the check often (on every commit) and sends a notification only if the build fails or if the status changed from the previous run (fail => success).
If you can do this or something similar, it would be cool!
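
For illustration, the decision logic is tiny. A minimal sketch (my pseudologic, not Continuum's actual code):

  // Notify on every failure, and on the first success after a failure
  // (status change), but stay silent while builds keep succeeding.
  public class BuildNotifier {
      private boolean lastBuildFailed = false;

      /** Returns true if a notification mail should be sent for this build. */
      public boolean shouldNotify(boolean currentBuildFailed) {
          boolean statusChanged = (currentBuildFailed != lastBuildFailed);
          lastBuildFailed = currentBuildFailed;
          return currentBuildFailed || statusChanged;
      }
  }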

Does the ASF provide infrastructure where we could host our own Continuum?

We already have GUMP, remember?  And, yes, there is a Continuum instance
running somewhere on the infrastructure.

GUMP is not the same thing: we already discussed this.
GUMP is intended to check integration between Apache projects, not to serve as a continuous build/test tool specific to the Apache James projects.

As an example, at this very moment GUMP is not testing James because jakarta-velocity is failing, so it does not really help us with the issue we are talking about.

  ant run-unit-test -Dtest=org.apache.james.smtpserver.SMTPServerTest

I think that, generally speaking, it is not a good idea to complicate the
build.xml with unused targets.

And why is this bad, as opposed to the `mvn test -Dtest=SMTPServerTest`,
which you just suggested?  And why would it be unused?

What I proposed does not need any changes to our sources ;-)
You instead proposed changing our build.xml (if I understood you correctly).
That is different, isn't it?
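
For what it's worth, with JUnit 3 a single test class can also be run directly, with no build.xml changes at all; the classpath below is just a placeholder for wherever the compiled classes and junit.jar live:

  java -cp <compiled-classes-and-junit.jar> junit.textui.TestRunner \
      org.apache.james.smtpserver.SMTPServerTest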

it would be interesting to have an overview of the tools every James
committer uses to work on the James projects (platform, IDE, compilers,
debuggers, profilers, svn tools and so on).

  ant, emacs, java, javac, svn

win, eclipse, jprofiler/netbeans profiler, clover, maven, ant (rarely svn)

1) It was not a memory leak: Wikipedia has a simple "Is it a memory
leak?" explanation: http://en.wikipedia.org/wiki/Memory_leak

<<sigh>>  From your link:

  a memory leak is a particular kind of unintentional memory
  consumption by a computer program where the program fails
  to release memory when no longer needed

Memory consumed permanently for transient conditions meets that definition.

We have different opinions.

That cache is not lost: if an IP is looked up again later, it will hit the cache. IMHO "unbounded cache" is a perfect description of what we had, so I will never misuse "memory leak" for it. I can afford the 4 additional characters ;-)
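
To make the distinction concrete, the fix for an unbounded cache is simply to bound it. A minimal sketch using plain java.util (the bound of 1000 is an arbitrary example):

  import java.util.LinkedHashMap;
  import java.util.Map;

  // Bounded LRU cache: once MAX_ENTRIES is reached, the least recently
  // accessed entry is evicted, so memory use stays flat no matter how
  // many distinct IPs get looked up.
  public class BoundedCache extends LinkedHashMap {
      private static final int MAX_ENTRIES = 1000; // arbitrary example bound

      public BoundedCache() {
          super(16, 0.75f, true); // true = access-order, i.e. LRU
      }

      protected boolean removeEldestEntry(Map.Entry eldest) {
          return size() > MAX_ENTRIES;
      }
  }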

We simply said that you did not have enough information to say "confirmed
memory leak in James": the facts proved that there was no memory leak,
and that the problem was not in the James server code.

The FACT is that there was an error (using InetAddress for resolution, which
we already knew we cannot use) that caused the heap to be consumed
until JAMES crashed due to lack of memory.  I would call that a problem.
:-)  Fortunately, I always keep my own counsel.

I agree it was an error: just not a memory leak.
In fact it had already been fixed in trunk, and I proposed a workaround for 2.3. You can say what you want, but no one ever said that there was no error in James. We simply said that the information you provided in the bug report was not enough to "confirm" a memory leak. And I still think we were 100% right.

As I observed earlier, we could consider configuring JAMES to use a subclass
of the dnsjava resolver for the JRE that is instrumented to log a warning if
it is called.  That would call out offending code, even in third party code.

This is a good idea, but I think the "-D" workaround is enough to consider this a "minor" issue. Still, your idea is good: feel free to implement it.
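
If anyone wants to pick it up, here is a rough sketch of the idea; the HostResolver interface is only a stand-in for whatever contract the real dnsjava/JRE hook exposes:

  import java.net.InetAddress;
  import java.net.UnknownHostException;

  // Sketch of the instrumented-resolver idea: delegate the lookup, but
  // first print a stack trace so the code that fell back to JRE
  // resolution identifies itself, even in third-party libraries.
  public class WarningResolver {

      /** Placeholder for the real provider contract. */
      public interface HostResolver {
          InetAddress[] lookup(String host) throws UnknownHostException;
      }

      private final HostResolver delegate;

      public WarningResolver(HostResolver delegate) {
          this.delegate = delegate;
      }

      public InetAddress[] lookup(String host) throws UnknownHostException {
          new Exception("WARNING: JRE resolver invoked for " + host)
                  .printStackTrace();
          return delegate.lookup(host);
      }
  }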

6) If you had used a real profiler (the NetBeans Profiler, JProfiler, or
another "real" tool, not a simple heap dump) it would have been much
easier to find the problem

The problem was very easy to find once I managed to keep the JVM from
crashing, which ended up just meaning cranking the heap size to 256MB (and
also limiting the dnsjava cache).  And it was necessary to do it in a
production environment, not some toy lab testbench.

        --- Noel
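
For reference, the two mitigations Noel mentions look roughly like this in code; setMaxEntries should be checked against the dnsjava version we actually ship, and the bound is only an example:

  import org.xbill.DNS.Cache;
  import org.xbill.DNS.DClass;
  import org.xbill.DNS.Lookup;

  // Start the JVM with a larger heap (java -Xmx256m ...) and bound
  // dnsjava's default cache so it cannot grow without limit.
  public class LimitDnsCache {
      public static void main(String[] args) {
          Cache cache = Lookup.getDefaultCache(DClass.IN);
          cache.setMaxEntries(50000); // example bound, tune as needed
      }
  }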

JProfiler and the NetBeans Profiler can also be used on a real production environment, not only in "toy labs". In fact, they give you much more data in the same time it took you to instrument the production environment for heap profiling. But you are your own judge; I just suggested tools that make me work much faster. Of course you have to learn them, and that is a cost.

Stefano


