Re: [logging] logging vs slf4j

2011-08-04 Thread Antonio Petrelli
2011/8/4 Ralph Goers ralph.go...@dslextreme.com

 The flaw would be in JBoss Portal, not the portlet spec. The spec doesn't
 have anything to do with logging.


In fact, I was thinking of any library in general that is referenced both by
the portal application and by the portlets.
Anyway you're right: if JBoss Portal had shaded the logging framework,
the problem would disappear.

Antonio


Re: [JCS] Long standing update: Switched to JDK 5 and Maven 2

2011-08-04 Thread Thomas Vandahl
On 04.08.11 01:08, Rahul Akolkar wrote:
 On Wed, Jul 27, 2011 at 3:54 PM, Rahul Akolkar rahul.akol...@gmail.com 
 wrote:
 I cp -R'ed the site over, but it should be republished more gracefully.

Sorry for the delay; I still need to find the time to move the site to
Maven 2. I will re-publish when that is done.

Bye, Thomas.




Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Gilles Sadowski
Hello.

 please review a proposal for the definition of general iterative linear
 solvers, as well as the implementation of the conjugate gradient method. This
 is file MATH-581-06.zip attached to the JIRA MATH-581 ticket.
 Thanks for your comments!
 
 Actually, I *do* have a comment. For the time being,
 new AbstractIterativeLinearSolver(a, monitor)
 throws a NonSquareMatrixException when a is... not square. However, a is not a
 matrix, it is a RealLinearOperator. Should we
 1. create a new exception, called NonSquareRealLinearOperatorException?
 2. rename NonSquareMatrixException (as this exception does not really need to
 be specialized to matrices)?
 
 Also, I see that the current implementation of NonSquareMatrixException does
 not allow one to recover the offending matrix/linear operator. This might be
 handy.

Then please create a
  NonSquareRealLinearOperatorException
similar to what has been done for the other ...RealLinearOperatorException
objects.

I'll subsequently change NonSquareMatrixException to inherit from that
one.


Thanks,
Gilles

P.S. Please submit a separate patch for the new exception.




Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Sebastien Brisard
Hi Gilles,
I'm OK to define a new exception. From your PS, I understand that I should
wait until what I've already submitted (MATH-581-06.zip) has been committed
before I submit a patch containing the new exception. Is that right?
I don't think it's necessary to open a new JIRA ticket for this, do you?

Finally, a design issue. I would like the
NonSquareLinearOperatorException to have an accessor to the actual
LinearOperator which was the cause of this exception. It seems to me that
NonSquareMatrixException does not offer this opportunity, so having the latter
inherit from the former might be difficult. Should I give up this accessor,
then? I think it would be a pity. As an example: the constructor of a
preconditioned iterative solver requires TWO LinearOperators, both of which
must be square. If an exception is raised, it would be nice (although not
absolutely essential, since I can always test a posteriori the size of each
operator) to know which operator was faulty.

Thanks for your advice!
Sebastien




[GUMP@vmgump]: Project commons-proxy-test (in module apache-commons) failed

2011-08-04 Thread Gump
To whom it may engage...

This is an automated request, but not an unsolicited one. For 
more information please visit http://gump.apache.org/nagged.html, 
and/or contact the folk at gene...@gump.apache.org.

Project commons-proxy-test has an issue affecting its community integration.
This issue affects 1 projects,
 and has been outstanding for 16 runs.
The current state of this project is 'Failed', with reason 'Build Failed'.
For reference only, the following projects are affected by this:
- commons-proxy-test :  Apache Commons


Full details are available at:

http://vmgump.apache.org/gump/public/apache-commons/commons-proxy-test/index.html

That said, some information snippets are provided here.

The following annotations (debug/informational/warning/error messages) were 
provided:
 -WARNING- Overriding Maven settings: 
[/srv/gump/public/workspace/apache-commons/proxy/gump_mvn_settings.xml]
 -DEBUG- (Apache Gump generated) Apache Maven Settings in: 
/srv/gump/public/workspace/apache-commons/proxy/gump_mvn_settings.xml
 -INFO- Failed with reason build failed
 -DEBUG- Maven POM in: /srv/gump/public/workspace/apache-commons/proxy/pom.xml
 -INFO- Project Reports in: 
/srv/gump/public/workspace/apache-commons/proxy/target/surefire-reports



The following work was performed:
http://vmgump.apache.org/gump/public/apache-commons/commons-proxy-test/gump_work/build_apache-commons_commons-proxy-test.html
Work Name: build_apache-commons_commons-proxy-test (Type: Build)
Work ended in a state of : Failed
Elapsed: 14 secs
Command Line: /opt/maven2/bin/mvn --batch-mode --settings 
/srv/gump/public/workspace/apache-commons/proxy/gump_mvn_settings.xml test 
[Working Directory: /srv/gump/public/workspace/apache-commons/proxy]
M2_HOME: /opt/maven2
-
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.factory.util.TestMethodSignature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.provider.TestConstantProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.interceptor.TestFilteredInterceptor
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec
Running org.apache.commons.proxy.interceptor.filter.TestPatternFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec
Running org.apache.commons.proxy.interceptor.TestSerializingInterceptor
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec
Running org.apache.commons.proxy.interceptor.TestInterceptorChain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec
Running org.apache.commons.proxy.invoker.TestNullInvoker
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.013 sec
Running org.apache.commons.proxy.provider.remoting.TestBurlapProvider
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 sec
Running org.apache.commons.proxy.exception.TestDelegateProviderException
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Running org.apache.commons.proxy.invoker.TestChainInvoker
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 sec
Running org.apache.commons.proxy.factory.javassist.TestJavassistProxyFactory
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.156 sec
Running org.apache.commons.proxy.exception.TestProxyFactoryException
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Running org.apache.commons.proxy.interceptor.filter.TestReturnTypeFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Running org.apache.commons.proxy.provider.TestBeanProvider
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec

Results :

Tests in error: 
  testInvalidHandlerName(org.apache.commons.proxy.invoker.TestXmlRpcInvoker)

Tests run: 179, Failures: 0, Errors: 1, Skipped: 0

[INFO] 
[ERROR] BUILD FAILURE
[INFO] 
[INFO] There are test failures.

Please refer to 
/srv/gump/public/workspace/apache-commons/proxy/target/surefire-reports for the 
individual test results.
[INFO] 
[INFO] For more information, run Maven with the -e switch
[INFO] 
[INFO] Total time: 13 seconds
[INFO] Finished at: Thu Aug 04 11:25:18 UTC 2011
[INFO] Final Memory: 24M/58M
[INFO] 
-

To subscribe to this information via syndicated feeds:
- RSS: 
http://vmgump.apache.org/gump/public/apache-commons/commons-proxy-test/rss.xml
- Atom: 

Re: [compress] XZ support and inconsistencies in the existing compressors

2011-08-04 Thread Lasse Collin
On 2011-08-04 Stefan Bodewig wrote:
 On 2011-08-03, Lasse Collin wrote:
  I looked at the APIs and code in Commons Compress to see how XZ
  support could be added. I was especially looking for details where
  one would need to be careful to make different compressors behave
  consistently compared to each other.
 
 This is in a big part due to the history of Commons Compress which
 combined several different codebases with separate APIs and provided a
 first attempt to layer a unifying API on top of it.  We are aware of
 quite a few problems and want to address them in Commons Compress 2.x
 and it would be really great if you would participate in the design of
 the new APIs once that discussion kicks off.

I'm not sure how much I can help, but I can try (depending on how much
time I have).

  (2) BZip2CompressorOutputStream.flush() calls out.flush() but it
  doesn't flush data buffered by BZip2CompressorOutputStream.
  Thus not all data written to the Bzip2 stream will be available
  in the underlying output stream after flushing. This kind of
  flush() implementation doesn't seem very useful.
 
 Agreed, do you want to open a JIRA issue for this?

There is already this:

https://issues.apache.org/jira/browse/COMPRESS-42

I tried to understand how flushing could be done properly. I'm not
really familiar with bzip2 so the following might have errors.

I checked libbzip2 and how its BZ_FLUSH works. It finishes the block,
but it doesn't flush the last bits, and thus the complete block isn't
available in the output stream. The blocks in the .bz2 format aren't
aligned to full bytes, and there is no padding between blocks.

The lack of alignment makes flushing tricky. One may need to write out
up to seven bits of data from the future. The bright side is that those
future bits can only come from the block header magic or from the end
of stream magic. Both are constants so there are only two possibilities
what those seven bits can be.

Using bits from the end of stream magic doesn't make sense, because then
one would be forced to finish the stream. Using the bits from the
block header magic means that one must add at least one more block.
This is fine if the application will want to encode at least one more
byte. If the application calls close() right after flushing, then
there's a problem unless .bz2 format allows empty blocks. I get a
feeling from the code that .bz2 would support empty blocks, but I'm not
sure at all.

Since bzip2 works on blocks that are compressed independently of each
other, the compression ratio doesn't take a big penalty if the stream is
finished and then a new stream is started. This would make it much
simpler to implement flushing. The downside is that implementations that
don't support decoding concatenated .bz2 files will stop after the
first stream.

  (4) The decompressor streams don't support concatenated .gz and .bz2
  files. This can be OK when compressed data is used inside
  another file format or protocol, but with regular
  (standalone) .gz and .bz2 files it is bad to stop after the
  first compressed stream and silently ignore the remaining
  compressed data.
 
  Fixing this in BZip2CompressorInputStream should be relatively
  easy because it stops right after the last byte of the
  compressed stream.
 
 Is this https://issues.apache.org/jira/browse/COMPRESS-146?

Yes. I didn't check the suggested fix though.

  Fixing GzipCompressorInputStream is harder because the problem
  is inherited from java.util.zip.GZIPInputStream which reads
  input past the end of the first stream. One might need to
  reimplement .gz container support on top of
  java.util.zip.InflaterInputStream or java.util.zip.Inflater.
 
 Sounds doable but would need somebody to code it, I guess ;-)

There is a somewhat hackish solution in the comments of the following
bug report, but it lacks license information:

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4691425

 In the past we have incorporated external codebases (ar and cpio) that
 used to be under compatible licenses to make things simpler for our
 users, but if you prefer to develop your code base outside of Commons
 Compress then I can fully understand that.

I will develop it in my own tree, but it's possible to include a copy
in Commons Compress with modified package and import lines in the
source files. Changes in my tree would need to be copied to Commons
Compress now and then. I don't know if this is better than having an
external dependency.

org.tukaani.xz will include features that aren't necessarily interesting
in Commons Compress, for example, advanced compression options and
random access reading. Most developers probably won't care about these.

(The above answers to Simone Tripodi's message too.)

 From the dependency management POV I know many
 developers prefer dependencies that are available from a Maven
 repository, is this the case for the 

Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Gilles Sadowski
Hello.

 I'm OK to define a new exception. From your PS, I understand that I should
 wait until what I've already submitted (MATH-581-06.zip) has been committed
 until I submit a patch containing the new exception. Is that right?

I would have thought the other way around: first commit the exception, then
commit the code that uses it (i.e. a new patch). That will avoid changing
that code afterwards just to use the new exception.

 I don't think it's necessary to open a new JIRA ticket for this, do you?

No.

 Finally, a design issue. I would like the
 NonSquareLinearOperatorException to have an accessor to the actual
 LinearOperator which was the cause of this exception. It seems to me that
 NonSquareMatrixException does not offer this opportunity, so having the latter
 inherit from the former might be difficult. Should I give up this accessor,
 then? I think it would be a pity. As an example: the constructor of a
 preconditioned iterative solver requires TWO LinearOperators, both of which
 must be square. If an exception is raised, it would be nice (although not
 absolutely essential, since I can always test a posteriori the size of each
 operator) to know which operator was faulty.

I'm still wondering whether it is useful to provide access to a high-level
object (such as RealLinearOperator or RealMatrix) from an exception.
What are you supposed to do with it?  Printing cannot be a default option
since, from what I understand, it could be huge.

Also, if keeping those data in the exception is useful in some circumstances,
we could use the ExceptionContext feature that was recently implemented.
Code would look like
---CUT---
/**
 * [...]
 *
 * @throws NonSquareLinearOperatorException when [...].
 * The offending operator can be accessed from the exception context (cf.
 * {@link org.apache.commons.math.exception.util.ExceptionContext}), using
 * the "operator" key name.
 */
public void doSomething(RealLinearOperator op) {
    // ...

    if ( /* test */ ) {
        final NonSquareLinearOperatorException nslo =
            new NonSquareLinearOperatorException();
        nslo.getContext().setValue("operator", op);
        throw nslo;
    }

    // ...
}
---CUT---

Yes, that's more code to write (but you only do it once), but it makes a
clear separation between rarely used data and the default information that
will usually end up printed on the console.

I would thus also propose to remove the getOffending... from the existing
exceptions (and replace them likewise).


Regards,
Gilles




Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Sebastien Brisard
Hi,
thanks for your answer.

2011/8/4 Gilles Sadowski gil...@harfang.homelinux.org:
 Hello.

 I'm OK to define a new exception. From your PS, I understand that I should
 wait until what I've already submitted (MATH-581-06.zip) has been
committed
 until I submit a patch containing the new exception. Is that right?

 I would have thought the other way around: First commit the exception than
 commit the code that use it (i.e. a new patch). That will avoid changing
 that code afterwards just to use the new exception.


Sounds great! I'll do that, and post a new batch of files. I'm just starting
to worry about the large list of files attached to the JIRA ticket, since
almost none of them is really useful anymore: they were all subsequently
modified. I understand there is no functionality to remove files, so we just
have to live with all those near-duplicates. Maybe I should add a comment at
the top of the ticket, summarizing which files are useful, and which are not.

 I don't think it's necessary to open a new JIRA ticket for this, do you?

 No.

 Finally, a design issue. I would like the
 NonSquareLinearOperatorException to have an accessor to the actual
 LinearOperator which was the cause of this exception. It seems to me that
 NonSquareMatrixException does not offer this opportunity, so having the
latter
 inherit from the former might be difficult. Should I give up this
accessor,
 then? I think it would be a pity. As an example: the constructor of a
 preconditioned iterative solver requires TWO LinearOperators, both of
which
 must be square. If an exception is raised, it would be nice (although not
 absolutely essential, since I can always test a posteriori the size of
each
 operator) to know which operator was faulty.

 I'm still wondering whether it is useful to provide access to a high-level
 object (such as RealLinearOperator or RealMatrix) from an exception.
 What are you supposed to do with it?  Printing cannot be a default option
 since, from what I understand, it could be huge.

 Also, if keeping those data in the exception is useful in some
circumtances,
 we could use the ExceptionContext feature that was recently implemented.
 Code would look like
 ---CUT---
 /**
  * [...]
  *
  * @throws NonSquareLinearOperatorException when [...].
  * The offending operator can be accessed from the exception context (cf.
  * {@link org.apache.commons.math.exception.util.ExceptionContext}, using
  * the operator key name.
  */
 public void doSomething(RealLinearOperator op) {
   // ...

   if ( /* test */ ) {
 final NonSquareLinearOperatorException nslo = new
NonSquareLinearOperatorException();
 nslo.getContext().setValue(operator, op);
 throw nslo;
   }

   // ...
 }
 ---CUT---

 Yes, that's more code to write (but you only do it once), but it makes a
 clear separation between not often used data, and default information that
 will usually end up printed on the console.


I like that, and will change the code accordingly. Is there an agreed policy
for the naming of the keys (upper/lower case, and so on)? Also, how do I
document the keys which are actually set by a given method? I quickly searched
the current code and could not find classes which make extensive use of this
apparently recent functionality. I'll try and browse the mailing list
archive to see what the conclusions of the community were.

 I would thus also propose to remove the getOffending... from the existing
 exceptions (and replace them likewise).


 Regards,
 Gilles




That's what I was going to suggest...
Best regards,
Sebastien




Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Gilles Sadowski
 
  I'm OK to define a new exception. From your PS, I understand that I should
  wait until what I've already submitted (MATH-581-06.zip) has been
 committed
  until I submit a patch containing the new exception. Is that right?
 
  I would have thought the other way around: First commit the exception than
  commit the code that use it (i.e. a new patch). That will avoid changing
  that code afterwards just to use the new exception.
 
 
 Sounds great! I'll do that, and post a new bunch of files. I'm just starting
 to worry about the large list of files attached to the JIRA ticket, while
 almost none of them is really useful, since they were subsequently modified...
 I understand there is no functionality to remove files, so we just have to
 live with all those near-duplicates. Maybe I should add a comment at the top
 of ticket, summarizing which files are useful, and which files are not.

On the JIRA page, do you see a triangle icon (at the right of the "+" used to
add attachments)? If so, when you click on it, do you see a menu with a
"Manage Attachments" item? If so, when you click on that item and go to the
new page, do you see a bin icon at the right end of each file entry?

  I don't think it's necessary to open a new JIRA ticket for this, do you?
 
  No.
 
  Finally, a design issue. I would like the
  NonSquareLinearOperatorException to have an accessor to the actual
  LinearOperator which was the cause of this exception. It seems to me that
  NonSquareMatrixException does not offer this opportunity, so having the
 latter
  inherit from the former might be difficult. Should I give up this
 accessor,
  then? I think it would be a pity. As an example: the constructor of a
  preconditioned iterative solver requires TWO LinearOperators, both of
 which
  must be square. If an exception is raised, it would be nice (although not
  absolutely essential, since I can always test a posteriori the size of
 each
  operator) to know which operator was faulty.
 
  I'm still wondering whether it is useful to provide access to a high-level
  object (such as RealLinearOperator or RealMatrix) from an exception.
  What are you supposed to do with it?  Printing cannot be a default option
  since, from what I understand, it could be huge.
 
  Also, if keeping those data in the exception is useful in some
 circumtances,
  we could use the ExceptionContext feature that was recently implemented.
  Code would look like
  ---CUT---
  /**
   * [...]
   *
   * @throws NonSquareLinearOperatorException when [...].
   * The offending operator can be accessed from the exception context (cf.
   * {@link org.apache.commons.math.exception.util.ExceptionContext}, using
   * the operator key name.
   */
  public void doSomething(RealLinearOperator op) {
// ...
 
if ( /* test */ ) {
  final NonSquareLinearOperatorException nslo = new
 NonSquareLinearOperatorException();
  nslo.getContext().setValue(operator, op);
  throw nslo;
}
 
// ...
  }
  ---CUT---
 
  Yes, that's more code to write (but you only do it once), but it makes a
  clear separation between not often used data, and default information that
  will usually end up printed on the console.
 
 
 I like that, and will change the code accordingly. Is there an agreed policy
 for the naming of the keys (upper/lower case, and so on)?

No.
I'd suggest following the same convention as for naming classes. We can
always modify it later if someone has a better idea.

 Also, how do I
 document the keys which are actually set by a given method?

Cf. my sample code excerpt above. [Except that, with the convention just
suggested, the key would be "Operator" or "RealLinearOperator".]

 I quickly searched
 the current code, and could not find classes which currently use extensively
 this apparently recent functionality.

There is none; you'd be the first one to use the functionality.

 I'll try and browse the mailing list
 archive to see what the conclusions of the community were.

There was no discussion about how or when to use that functionality.
Originally it was supposed to be a feature offered to users rather than one
directly useful inside CM. I think that your code proved that wrong.
:-)

  I would thus also propose to remove the getOffending... from the existing
  exceptions (and replace them likewise).
 
 

Regards,
Gilles




Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Ted Dunning
Actually, if you just reattach revised files using the original names, JIRA
will do a great job of tracking which is the latest. That lets curious folk
go back in history if they want to see what has changed.

On Thursday, August 4, 2011, Gilles Sadowski gil...@harfang.homelinux.org
wrote:
 
  I'm OK to define a new exception. From your PS, I understand that I
should
  wait until what I've already submitted (MATH-581-06.zip) has been
 committed
  until I submit a patch containing the new exception. Is that right?
 
  I would have thought the other way around: First commit the exception
than
  commit the code that use it (i.e. a new patch). That will avoid
changing
  that code afterwards just to use the new exception.
 

 Sounds great! I'll do that, and post a new bunch of files. I'm just
starting
 to worry about the large list of files attached to the JIRA ticket, while
 almost none of them is really useful, since they were subsequently
modified...
 I understand there is no functionality to remove files, so we just have
to
 live with all those near-duplicates. Maybe I should add a comment at
the top
 of ticket, summarizing which files are useful, and which files are not.

 On the JIRA page, do you see a triangle icon (at the right of the + to
 add attachments). If so, when you click on it, do you see a menu with a
 Manage Attachments item?, If so, when you click on that item and go to
the
 new page, do you see a bin icon at the right end of each file entry?

  I don't think it's necessary to open a new JIRA ticket for this, do
you?
 
  No.
 
  Finally, a design issue. I would like the
  NonSquareLinearOperatorException to have an accessor to the actual
  LinearOperator which was the cause of this exception. It seems to me
that
  NonSquareMatrixException does not offer this opportunity, so having
the
 latter
  inherit from the former might be difficult. Should I give up this
 accessor,
  then? I think it would be a pity. As an example: the constructor of a
  preconditioned iterative solver requires TWO LinearOperators, both of
 which
  must be square. If an exception is raised, it would be nice (although
not
  absolutely essential, since I can always test a posteriori the size of
 each
  operator) to know which operator was faulty.
 
  I'm still wondering whether it is useful to provide access to a
high-level
  object (such as RealLinearOperator or RealMatrix) from an
exception.
  What are you supposed to do with it?  Printing cannot be a default
option
  since, from what I understand, it could be huge.
 
  Also, if keeping those data in the exception is useful in some
 circumtances,
  we could use the ExceptionContext feature that was recently
implemented.
  Code would look like
  ---CUT---
  /**
   * [...]
   *
   * @throws NonSquareLinearOperatorException when [...].
   * The offending operator can be accessed from the exception context
(cf.
   * {@link org.apache.commons.math.exception.util.ExceptionContext},
using
   * the operator key name.
   */
  public void doSomething(RealLinearOperator op) {
// ...
 
if ( /* test */ ) {
  final NonSquareLinearOperatorException nslo = new
 NonSquareLinearOperatorException();
  nslo.getContext().setValue(operator, op);
  throw nslo;
}
 
// ...
  }
  ---CUT---
 
  Yes, that's more code to write (but you only do it once), but it makes
a
  clear separation between not often used data, and default information
that
  will usually end up printed on the console.
 

 I like that, and will change the code accordingly. Is there an agreed
policy
 for the naming of the keys (upper/lower case, and so on)?

 No.
 I'd suggest to follow the same convention as for naming classes. We can
 always modify it later if someone has a better idea.

 Also, how do I
 document the keys which are actually set by a given method?

 Cf. my above sample code excerpt. [Except that, with the convention just
 suggested, the key would be Operator or RealLinearOperator.]

 I quickly searched
 the current code, and could not find classes which currently use
extensively
 this apparently recent functionality.

 There is none; you'd be the first one to use the functionality.

 I'll try and browse the mailing list
 archive to see what the conclusions of the community were.

 There was no dicussion about how or when to use that functionality.
 Originally it was supposed to be a feature offered to users but which
would
 not be directly useful inside CM. I think that your code proved that
wrong.
 :-)

  I would thus also propose to remove the getOffending... from the
existing
  exceptions (and replace them likewise).
 
 

 Regards,
 Gilles





Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Sebastien Brisard


2011/8/4 Gilles Sadowski gil...@harfang.homelinux.org:
 
  I'm OK to define a new exception. From your PS, I understand that I
should
  wait until what I've already submitted (MATH-581-06.zip) has been
 committed
  until I submit a patch containing the new exception. Is that right?
 
  I would have thought the other way around: First commit the exception
than
  commit the code that use it (i.e. a new patch). That will avoid changing
  that code afterwards just to use the new exception.
 

 Sounds great! I'll do that, and post a new bunch of files. I'm just
starting
 to worry about the large list of files attached to the JIRA ticket, while
 almost none of them is really useful, since they were subsequently
modified...
 I understand there is no functionality to remove files, so we just have to
 live with all those near-duplicates. Maybe I should add a comment at the
top
 of ticket, summarizing which files are useful, and which files are not.

 On the JIRA page, do you see a triangle icon (at the right of the + to
 add attachments). If so, when you click on it, do you see a menu with a
 Manage Attachments item?, If so, when you click on that item and go to
the
 new page, do you see a bin icon at the right end of each file entry?


I'm ashamed to say I've been looking for this functionality for quite a
while... Thanks A LOT, I'll do a little bit of cleaning up tonight.

  I don't think it's necessary to open a new JIRA ticket for this, do
you?
 
  No.
 
  Finally, a design issue. I would like the
  NonSquareLinearOperatorException to have an accessor to the actual
  LinearOperator which was the cause of this exception. It seems to me
that
  NonSquareMatrixException does not offer this opportunity, so having the
 latter
  inherit from the former might be difficult. Should I give up this
 accessor,
  then? I think it would be a pity. As an example: the constructor of a
  preconditioned iterative solver requires TWO LinearOperators, both of
 which
  must be square. If an exception is raised, it would be nice (although
not
  absolutely essential, since I can always test a posteriori the size of
 each
  operator) to know which operator was faulty.
 
  I'm still wondering whether it is useful to provide access to a
high-level
  object (such as RealLinearOperator or RealMatrix) from an exception.
  What are you supposed to do with it?  Printing cannot be a default
option
  since, from what I understand, it could be huge.
 
  Also, if keeping those data in the exception is useful in some
 circumtances,
  we could use the ExceptionContext feature that was recently
implemented.
  Code would look like
  ---CUT---
  /**
   * [...]
   *
   * @throws NonSquareLinearOperatorException when [...].
   * The offending operator can be accessed from the exception context
(cf.
   * {@link org.apache.commons.math.exception.util.ExceptionContext},
using
   * the operator key name.
   */
  public void doSomething(RealLinearOperator op) {
// ...
 
if ( /* test */ ) {
  final NonSquareLinearOperatorException nslo = new
 NonSquareLinearOperatorException();
  nslo.getContext().setValue(operator, op);
  throw nslo;
}
 
// ...
  }
  ---CUT---
 
  Yes, that's more code to write (but you only do it once), but it makes a
  clear separation between not often used data, and default information
that
  will usually end up printed on the console.
 

 I like that, and will change the code accordingly. Is there an agreed
policy
 for the naming of the keys (upper/lower case, and so on)?

 No.
 I'd suggest to follow the same convention as for naming classes. We can
 always modify it later if someone has a better idea.


The management of the exception messages through String constants and enums is
in my view a very clean thing. Should we do the same for exception context
keys? Have a big enum holding keys? Or should we define these keys as
constant, public fields inside those classes which throw exceptions? Or, are
we happy with inlining the string, just like in your piece of code?

I seem to remember that JAI has the same kind of management through String
keys, but I have never had a look at the code to see how these keys were
managed internally.

 Also, how do I
 document the keys which are actually set by a given method?

 Cf. my above sample code excerpt. [Except that, with the convention just
 suggested, the key would be Operator or RealLinearOperator.]

 I quickly searched
 the current code, and could not find classes which currently use
extensively
 this apparently recent functionality.

 There is none; you'd be the first one to use the functionality.

 I'll try and browse the mailing list
 archive to see what the conclusions of the community were.

 There was no dicussion about how or when to use that functionality.
 Originally it was supposed to be a feature offered to users but which would
 not be directly useful inside CM. I think that your code proved that wrong.
 :-)

  I would thus also propose to remove the 

[compress] ZIP64: API imposed limits vs limits of the format

2011-08-04 Thread Stefan Bodewig
Hi all,

ZIP64 support in trunk is almost complete except for a case that is
pretty easy to implement but where I'll ask for API feedback once I've
managed to read up on how the InfoZIP people handle it.

There are a few places where our implementation doesn't allow for the
full range the ZIP format would support.  Some are easy to fix, some
hard and I'm asking for feedback whether you consider it worth the
effort to fix them at all.

OK, here we go.

Total Size of the archive
=========================

There is no formal limit inside the format, in particular since ZIP
archives can be split into multiple pieces.  For each individual piece
the last local file header cannot have an offset of more than 2^64-1
bytes from the start of the file.

We don't support split archives at all, so an archive is limited to the
size of one file.

ZipArchiveInputStream should work on arbitrary sizes.

ZipFile relies on RandomAccessFile so any archive can't be bigger than
the maximum size supported by RandomAccessFile.  In particular the seek
method expects a long as argument so the hard limit would be an archive
size of 2^63-1 bytes.  In practice I expect RandomAccessFile to not
support files that big on many platforms.

This is a hard case IMHO, I don't see how we could implement ZipFile
without RandomAccessFile in any efficient way.

ZipArchiveOutputStream has two modes.  If it writes to a file it will
use RandomAccessFile internally otherwise it writes to a stream.  In
file mode the same limits apply that apply to ZipFile.

For the streaming mode offsets are currently stored as longs but that
could be changed to BigIntegers easily so we could reach 2^64-1 at the
expense of memory consumption and maybe even some performance issues
(the offsets are not really used in calculations so I don't expect any
major impact).

Size of an individual entry (compressed or not)
===============================================

The format supports an unsigned 64 bit integer as size, while
ArchiveEntry's get/setSize methods use long - this means there is a
factor of two gap.

We could easily add an additional setter/getter for size that uses
BigInteger, the infrastructure to support it would be there.  OTOH it is
questionable whether we'd support anything > Long.MAX_VALUE in practice
because of the previous point anyway.

Number of file entries in the archive
======================================

This used to be an unsigned 16 bit integer and has grown to an
unsigned 64 bit integer with ZIP64.

ZipArchiveInputStream should work with arbitrary many entries.

ZipArchiveOutputStream uses a LinkedList to store all entries as it has
to keep track of the metadata in order to write the central directory.
It also uses an additional HashMap that could be removed easily by
storing the data together with the entries themselves.  LinkedList won't
allow more than Integer.MAX_VALUE entries which leaves us quite a bit
away from the theoretical limit of the format.

I'm confident that even I would manage to write an efficient singly
linked list that is only ever appended to and that is iterated over
exactly once from head to tail.  I'd even manage to keep track of the
size inside a long or BigInteger (if deemed necessary) in an O(1)
operation ;-)

So ZipArchiveOutputStream could easily be fixed if we wanted to.
Whether it is worth the effort is a different question when the size of
the file is still limited to a single disk archive.

ZipFile is a totally different beast.  It contains several maps
internally and I don't really see how to implement things like

   ZipArchiveEntry getEntry(String name)

efficiently without a map.  I don't see myself writing an efficient map
with a capacity of Long.MAX_VALUE or bigger, either.

And even if we had one, there'd still be the archive size limit.

We could stick with documenting the limits of ZipFile properly.  In
practice I doubt many people will have to deal with archives of 2^63
bytes or more.  And even archives with 2^32 entries or more should be
rare - in which case people could fall back to ZipArchiveInputStream.

Stefan




Re: [math] Implementation of Conjugate Gradient (MATH-581)

2011-08-04 Thread Gilles Sadowski
 [...]
 
 The management of the exception messages through String constants and enums is
 in my view a very clean thing. Should we do the same for exception context
 keys? Have a big enum holding keys?

I'd rather not, but I'm afraid that others will think otherwise :-}.

 Or should we define these keys as
 constant, public fields inside those classes which throw exceptions? Or, are
 we happy with inlining the string, just like in your piece of code?

If the string appears only once, I'd be happy with inlining. If more than
once, a (private) constant is better (to quiet CheckStyle).

 [...]


Best,
Gilles




Re: [compress] ZIP64: API imposed limits vs limits of the format

2011-08-04 Thread Torsten Curdt
 ZipFile relies on RandomAccessFile so any archive can't be bigger than
 the maximum size supported by RandomAccessFile.  In particular the seek
 method expects a long as argument so the hard limit would be an archive
 size of 2^63-1 bytes.  In practice I expect RandomAccessFile to not
 support files that big on many platforms.

Yeah ... let's cross that bridge when people complain ;)

 For the streaming mode offsets are currently stored as longs but that
 could be changed to BigIntegers easily so we could reach 2^64-1 at the
 expense of memory consumption and maybe even some performance issues
 (the offsets are not really used in calculations so I don't expect any
 major impact).

No insights into the implementation, but that might be worth changing so
it's in line with the ZipFile impl.

 Size of an individual entry (compressed or not)
 ===

 The format supports an unsigned 64 bit integer as size, ArchiveEntry's
 get/setSize methods use long - this means there is a factor of 2.

 We could easily add an additional setter/getter for size that uses
 BigInteger, the infrastructure to support it would be there.  OTOH it is
 questionable whether we'd support anything > Long.MAX_VALUE in practice
 because of the previous point anyway.

Especially as this is also just for one individual entry. Again - I think
I would not bother at this stage.
Nothing that cannot be added later.

 Number of files entries the archive
 ===

 This used to be an unsingned 16 bit integer and has grown to an
 unsigned 64 bit integer with ZIP64.

 ZipArchiveInputStream should work with arbitrary many entries.

 ZipArchiveOutputStream uses a LinkedList to store all entries as it has
 to keep track of the metadata in order to write the central directory.
 It also uses an additional HashMap that could be removed easily by
 storing the data together with the entries themselves.  LinkedList won't
 allow more than Integer.MAX_VALUE entries which leaves us quite a bit
 away from the theoretical limit of the format.

Hmmm.

 I'm confident that even I would manage to write an efficient singly
 linked list that is only ever appended to and that is iterated over
 exactly once from head to tail.

+1 for that then :)

 I don't see myself writing an efficient map
 with a capacity of Long.MAX_VALUE or bigger, either.

There must be something like that out there already.
Otherwise it could be another nice addition to Collections ;)

 We could stick with documenting the limits of ZipFile properly.  In
 practice I doubt many people will have to deal with archives of 2^63
 bytes or more.  And even archives with 2^32 entries or more should be
 rare - in which case people could fall back to ZipArchiveInputStream.

Hm. Yeah ...maybe just get it out before we start implementing new
collection classes.

Cool stuff!!

cheers,
Torsten




Re: [compress] XZ support and inconsistencies in the existing compressors

2011-08-04 Thread Stefan Bodewig
On 2011-08-04, Lasse Collin wrote:

 On 2011-08-04 Stefan Bodewig wrote:

 This is in a big part due to the history of Commons Compress which
 combined several different codebases with separate APIs and provided a
 first attempt to layer a unifying API on top of it.  We are aware of
 quite a few problems and want to address them in Commons Compress 2.x
 and it would be really great if you would participate in the design of
 the new APIs once that discussion kicks off.

 I'm not sure how much I can help, but I can try (depending on how much
 I have time).

Thanks.

 On 2011-08-03, Lasse Collin wrote:

 (2) BZip2CompressorOutputStream.flush() calls out.flush() but it
 doesn't flush data buffered by BZip2CompressorOutputStream.
 Thus not all data written to the Bzip2 stream will be available
 in the underlying output stream after flushing. This kind of
 flush() implementation doesn't seem very useful.

 Agreed, do you want to open a JIRA issue for this?

 There is already this:

 https://issues.apache.org/jira/browse/COMPRESS-42

Ahh, I knew I once fiddled with flush there but a quick grep through the
changes file didn't show anything - because it was before the 1.0
release.

 I tried to understand how flushing could be done properly. I'm not
 really familiar with bzip2 so the following might have errors.

As I already said, neither of us is terribly familiar with the format
right now.  I for one didn't even know you could have multiple streams
in a single file so it took your mail for me to make sense out of
COMPRESS-146.

 I checked libbzip2 and how its BZ_FLUSH works. It finishes the block,
 but it doesn't flush the last bits, and thus the complete block isn't
 available in the output stream. The blocks in the .bz2 format aren't
 aligned to full bytes, and there is no padding between blocks.

 The lack of alignment makes flushing tricky. One may need to write out
 up to seven bits of data from the future. The bright side is that those
 future bits can only come from the block header magic or from the end
 of stream magic. Both are constants so there are only two possibilities
 what those seven bits can be.

 Using bits from the end of stream magic doesn't make sense, because then
 one would be forced to finish the stream. Using the bits from the
 block header magic means that one must add at least one more block.
 This is fine if the application will want to encode at least one more
 byte. If the application calls close() right after flushing, then
 there's a problem unless .bz2 format allows empty blocks. I get a
 feeling from the code that .bz2 would support empty blocks, but I'm not
 sure at all.

It should be possible to write some unit tests to see what works and to
create some test archives for interop testing with native tools.

 (4) The decompressor streams don't support concatenated .gz and .bz2
 files. This can be OK when compressed data is used inside
 another file format or protocol, but with regular
 (standalone) .gz and .bz2 files it is bad to stop after the
 first compressed stream and silently ignore the remaining
 compressed data.

 Fixing this in BZip2CompressorInputStream should be relatively
 easy because it stops right after the last byte of the
 compressed stream.

 Is this https://issues.apache.org/jira/browse/COMPRESS-146?

 Yes. I didn't check the suggested fix though.

Would be nice if you'd find the time to do so.

 Fixing GzipCompressorInputStream is harder because the problem
 is inherited from java.util.zip.GZIPInputStream which reads
 input past the end of the first stream. One might need to
 reimplement .gz container support on top of
 java.util.zip.InflaterInputStream or java.util.zip.Inflater.

 Sounds doable but would need somebody to code it, I guess ;-)

 There is a little bit hackish solution in the comments of the following
 bug report, but it lacks license information:

 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4691425

Yes.  I agree it is hacky.

 In the past we have incorporated external codebases (ar and cpio) that
 used to be under compatible licenses to make things simpler for our
 users, but if you prefer to develop your code base outside of Commons
 Compress then I can fully understand that.

 I will develop it in my own tree, but it's possible to include a copy
 in Commons Compress with modified package and import lines in the
 source files. Changes in my tree would need to be copied to Commons
 Compress now and then. I don't know if this is better than having an
 external dependency.

Don't know either.  It depends on who'd do the work of syncing, I guess.

 org.tukaani.xz will include features that aren't necessarily interesting
 in Commons Compress, for example, advanced compression options and
 random access reading. Most developers probably won't care about these.

We'll need standalone compressors for other formats as well (and we do
need LZMA 8-).  Some of the 

Re: [compress] ZIP64: API imposed limits vs limits of the format

2011-08-04 Thread Lasse Collin
On 2011-08-04 Stefan Bodewig wrote:
 There are a few places where our implementation doesn't allow for the
 full range the ZIP format would support.  Some are easy to fix, some
 hard and I'm asking for feedback whether you consider it worth the
 effort to fix them at all.

I guess that these are enough for the foreseeable future:

Max archive size: Long.MAX_VALUE
Max size of individual entry: Long.MAX_VALUE
Max number of file entries:   Integer.MAX_VALUE

Java APIs don't support bigger files and I guess that such big files
won't be common even if file system sizes allowed them. If you write
ten terabytes per second, it will still take well over a week to
create an archive of 2^63-1 bytes.

I don't know how much memory one file entry needs, but let's assume
it takes only 50 bytes, including the overhead of the linked list
etc. Keeping a list of 2^31-1 files will then need 100 GiB of RAM.
While it might be OK in some situations, I hope such archives won't
become common. ;-) Even if the number of files is limited to
Integer.MAX_VALUE, it can be good to think about the memory usage
of the data structures used for the file entries.

-- 
Lasse Collin  |  IRC: Larhzu @ IRCnet  Freenode




[continuum] BUILD FAILURE: Apache Commons - Commons Codec - Default Maven 2 Build Definition (Java 1.5)

2011-08-04 Thread Continuum@vmbuild
Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=10905&projectId=70

Build statistics:
  State: Failed
  Previous State: Ok
  Started at: Fri 5 Aug 2011 03:37:19 +
  Finished at: Fri 5 Aug 2011 03:37:58 +
  Total time: 38s
  Build Trigger: Forced
  Build Number: 108
  Exit code: 1
  Building machine hostname: vmbuild
  Operating system : Linux(unknown)
  Java Home version : 
  java version 1.6.0_24
  Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
  Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

  Builder version :
  Apache Maven 2.2.1 (r801777; 2009-08-06 19:16:01+)
  Java version: 1.6.0_24
  Java home: /usr/lib/jvm/java-6-sun-1.6.0.24/jre
  Default locale: en_AU, platform encoding: UTF-8
  OS name: linux version: 2.6.32-31-server arch: amd64 Family: 
unix


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean deploy   
Arguments: --batch-mode -Pjava-1.5
Build Fresh: false
Always Build: false
Default Build Definition: true
Schedule: COMMONS_SCHEDULE
Profile Name: Maven 2.2.1
Description: Default Maven 2 Build Definition (Java 1.5)


Test Summary:

Tests: 409
Failures: 0
Errors: 1
Success Rate: 99
Total time: 24.942001








[continuum] BUILD FAILURE: Apache Commons - Commons JCI - Continue test if possible; use Java 1.5

2011-08-04 Thread Continuum@vmbuild
Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=10929&projectId=108

Build statistics:
  State: Failed
  Previous State: Failed
  Started at: Fri 5 Aug 2011 03:58:35 +
  Finished at: Fri 5 Aug 2011 04:00:16 +
  Total time: 1m 41s
  Build Trigger: Forced
  Build Number: 4
  Exit code: 1
  Building machine hostname: vmbuild
  Operating system : Linux(unknown)
  Java Home version : 
  java version 1.6.0_24
  Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
  Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

  Builder version :
  Apache Maven 2.2.1 (r801777; 2009-08-06 19:16:01+)
  Java version: 1.6.0_24
  Java home: /usr/lib/jvm/java-6-sun-1.6.0.24/jre
  Default locale: en_AU, platform encoding: UTF-8
  OS name: linux version: 2.6.32-31-server arch: amd64 Family: 
unix


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean deploy   
Arguments: --batch-mode -fae -Pjava-1.5
Build Fresh: false
Always Build: false
Default Build Definition: true
Schedule: COMMONS_SCHEDULE
Profile Name: Maven 2.2.1
Description: Continue test if possible; use Java 1.5


Test Summary:

Tests: 81
Failures: 1
Errors: 0
Success Rate: 98
Total time: 167.75099


Test Failures:


EclipseJavaCompilerTestCase
testAdditionalTopLevelClassCompile :
  junit.framework.AssertionFailedError
  junit.framework.AssertionFailedError: The type AdditionalTopLevel collides 
with a package,  expected:0 but was:1
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:280)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:198)
at 
org.apache.commons.jci.compilers.AbstractCompilerTestCase.testAdditionalTopLevelClassCompile(AbstractCompilerTestCase.java:336)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at junit.framework.TestCase.runTest(TestCase.java:164)
at junit.framework.TestCase.runBare(TestCase.java:130)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:120)
at junit.framework.TestSuite.runTest(TestSuite.java:230)
at junit.framework.TestSuite.run(TestSuite.java:225)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at 
org.apache.maven.surefire.junit.JUnitTestSet.execute(JUnitTestSet.java:213)
at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:115)
at 
org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:102)
at org.apache.maven.surefire.Surefire.run(Surefire.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:592)
at 
org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:350)
at 
org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1021)


  




[continuum] BUILD FAILURE: Apache Commons - Commons Javaflow (Sandbox) - Default Maven 2 Build Definition (Java 1.5)

2011-08-04 Thread Continuum@vmbuild
Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=10931&projectId=116

Build statistics:
  State: Failed
  Previous State: Failed
  Started at: Fri 5 Aug 2011 04:00:33 +
  Finished at: Fri 5 Aug 2011 04:00:49 +
  Total time: 15s
  Build Trigger: Forced
  Build Number: 8
  Exit code: 1
  Building machine hostname: vmbuild
  Operating system : Linux(unknown)
  Java Home version : 
  java version 1.6.0_24
  Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
  Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

  Builder version :
  Apache Maven 2.2.1 (r801777; 2009-08-06 19:16:01+)
  Java version: 1.6.0_24
  Java home: /usr/lib/jvm/java-6-sun-1.6.0.24/jre
  Default locale: en_AU, platform encoding: UTF-8
  OS name: linux version: 2.6.32-31-server arch: amd64 Family: 
unix


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean deploy   
Arguments: --batch-mode -Pjava-1.5
Build Fresh: false
Always Build: false
Default Build Definition: true
Schedule: COMMONS_SCHEDULE
Profile Name: Maven 2.2.1
Description: Default Maven 2 Build Definition (Java 1.5)


Test Summary:

Tests: 18
Failures: 0
Errors: 0
Success Rate: 100
Total time: 0.38








[continuum] BUILD FAILURE: Apache Commons - Commons Math - Default Maven 2 Build Definition (Java 1.5)

2011-08-04 Thread Continuum@vmbuild
Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=10941&projectId=206

Build statistics:
  State: Failed
  Previous State: Failed
  Started at: Fri 5 Aug 2011 04:07:01 +
  Finished at: Fri 5 Aug 2011 04:08:31 +
  Total time: 1m 29s
  Build Trigger: Forced
  Build Number: 75
  Exit code: 1
  Building machine hostname: vmbuild
  Operating system : Linux(unknown)
  Java Home version : 
  java version 1.6.0_24
  Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
  Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

  Builder version :
  Apache Maven 2.2.1 (r801777; 2009-08-06 19:16:01+)
  Java version: 1.6.0_24
  Java home: /usr/lib/jvm/java-6-sun-1.6.0.24/jre
  Default locale: en_AU, platform encoding: UTF-8
  OS name: linux version: 2.6.32-31-server arch: amd64 Family: 
unix


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean deploy   
Arguments: --batch-mode -Pjava-1.5
Build Fresh: false
Always Build: false
Default Build Definition: true
Schedule: COMMONS_SCHEDULE
Profile Name: Maven 2.2.1
Description: Default Maven 2 Build Definition (Java 1.5)


Test Summary:

Tests: 2451
Failures: 0
Errors: 0
Success Rate: 100
Total time: 53.541965








Deploying snapshot builds

2011-08-04 Thread Phil Steitz
Looks like we have been experimenting with this.  Before actually
turning anything on, we should agree that we actually want to do this
and probably VOTE, as I assume the generated artifacts are going to be
publicly available.  My personal opinion is that we should not do this.

Phil




Re: Deploying snapshot builds

2011-08-04 Thread Ralph Goers
We can hold a vote. I would be persuaded to vote no if someone could point to 
an ASF or incubator policy that says we should not do this.

Ralph

On Aug 4, 2011, at 10:08 PM, Phil Steitz wrote:

 Looks like we have been experimenting with this.  Before actually
 turning anything on, we should agree that we actually want to do this
 and probably VOTE, as I assume the generated artifacts are going to be
 publicly available.  My personal opinion is that we should not do this.
 
 Phil
 
 

