[jira] [Comment Edited] (MATH-738) Incomplete beta function I(x, a, b) is inaccurate for large values of a and/or b

2012-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/MATH-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483917#comment-13483917
 ] 

Sébastien Brisard edited comment on MATH-738 at 10/25/12 6:10 AM:
--

Hi Thomas,
thanks for offering your help. Please wait a little, as I have already 
implemented some more methods that are not yet committed.
For the time being, I'm working on a Java app to automatically assess the 
accuracy of implementations of special functions. I need to write a small 
README explaining how it works, then I'll commit the whole lot. I have also 
reimplemented logBeta. I will provide all of this soon, but I am swamped with 
work at the moment.

I'll try to commit next weekend. Thanks for your patience!

BTW: be careful with the TOMS code, which is copyrighted. I based my 
implementations on the NSWC library, which is very (very) similar (it's by the 
same author) but is not copyrighted.

  was (Author: celestin):
Hi Thomas,
thanks for proposing your help. Please wait a little, as I have already 
implemented some more methods, not yet committed.
For the time being, I'm working on a Java app to automatically assess the 
accuracy of implementation of special functions. I need to write a small README 
to explain how it works, then I'll commit the whole lot. I also have 
reimplemented logBeta. I will provide all these soon, but I am drowned in work 
for the time being.
Thanks for your patience!
  
> Incomplete beta function I(x, a, b) is inaccurate for large values of a 
> and/or b
> 
>
> Key: MATH-738
> URL: https://issues.apache.org/jira/browse/MATH-738
> Project: Commons Math
>  Issue Type: Bug
>Affects Versions: 3.0
>Reporter: Sébastien Brisard
>Assignee: Sébastien Brisard
>  Labels: special-functions
> Fix For: 3.1, 4.0
>
>
> This was first reported in MATH-718. The result of the current implementation 
> of the incomplete beta function I(x, a, b) is inaccurate when a and/or b are 
> large-ish. 
> I've skimmed through [slatec|http://www.netlib.org/slatec/fnlib/betai.f], 
> GSL, 
> [Boost|http://www.boost.org/doc/libs/1_38_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/sf_beta/ibeta_function.html]
>  as well as NR. At first sight, none of them uses the same method to compute 
> this function. I think [TOMS-708|http://www.netlib.org/toms/708] is probably 
> the best option.
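For context, the standard way to evaluate the regularized incomplete beta function is the continued fraction of the NR/NSWC lineage mentioned above. The sketch below is illustrative only, not the Commons Math implementation or the TOMS-708 code; all names are invented for the example, and its accuracy for very large a and b is exactly what this issue questions.

```java
// Sketch of I(x, a, b) via the standard continued fraction, evaluated with
// the modified Lentz method. Illustrative only; not the Commons Math code.
public class RegularizedBetaSketch {

    // Lanczos coefficients (g = 7, n = 9) for logGamma.
    private static final double[] LANCZOS = {
        0.99999999999980993, 676.5203681218851, -1259.1392167224028,
        771.32342877765313, -176.61502916214059, 12.507343278686905,
        -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7
    };

    static double logGamma(double z) {
        double zm1 = z - 1.0;
        double x = LANCZOS[0];
        for (int i = 1; i < LANCZOS.length; i++) {
            x += LANCZOS[i] / (zm1 + i);
        }
        double t = zm1 + 7.5;
        return 0.5 * Math.log(2.0 * Math.PI)
                + (zm1 + 0.5) * Math.log(t) - t + Math.log(x);
    }

    // Continued fraction for I(x, a, b) (modified Lentz's method).
    private static double betacf(double x, double a, double b) {
        final double EPS = 1e-15;
        final double FPMIN = 1e-300;
        double qab = a + b, qap = a + 1.0, qam = a - 1.0;
        double c = 1.0;
        double d = 1.0 - qab * x / qap;
        if (Math.abs(d) < FPMIN) d = FPMIN;
        d = 1.0 / d;
        double h = d;
        for (int m = 1; m <= 500; m++) {
            int m2 = 2 * m;
            double aa = m * (b - m) * x / ((qam + m2) * (a + m2));
            d = 1.0 + aa * d;
            if (Math.abs(d) < FPMIN) d = FPMIN;
            c = 1.0 + aa / c;
            if (Math.abs(c) < FPMIN) c = FPMIN;
            d = 1.0 / d;
            h *= d * c;
            aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2));
            d = 1.0 + aa * d;
            if (Math.abs(d) < FPMIN) d = FPMIN;
            c = 1.0 + aa / c;
            if (Math.abs(c) < FPMIN) c = FPMIN;
            d = 1.0 / d;
            double del = d * c;
            h *= del;
            if (Math.abs(del - 1.0) < EPS) break;
        }
        return h;
    }

    public static double value(double x, double a, double b) {
        if (x <= 0.0) return 0.0;
        if (x >= 1.0) return 1.0;
        double logPrefactor = logGamma(a + b) - logGamma(a) - logGamma(b)
                + a * Math.log(x) + b * Math.log1p(-x);
        // Use the continued fraction where it converges fastest, otherwise
        // apply the symmetry I(x, a, b) = 1 - I(1 - x, b, a).
        if (x < (a + 1.0) / (a + b + 2.0)) {
            return Math.exp(logPrefactor) * betacf(x, a, b) / a;
        }
        return 1.0 - Math.exp(logPrefactor) * betacf(1.0 - x, b, a) / b;
    }

    public static void main(String[] args) {
        // Sanity checks from exact identities:
        // I(x, 1, 1) = x, and I(0.5, a, a) = 0.5 by symmetry.
        System.out.println(value(0.3, 1.0, 1.0));
        System.out.println(value(0.5, 5.0, 5.0));
    }
}
```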

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (LANG-846) StringUtils.equals() / CharSequenceUtils.regionMatches() assumes that CharSequence.toString() implementation is effective

2012-10-24 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/LANG-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483776#comment-13483776
 ] 

Gary Gregory commented on LANG-846:
---

Patches welcome! :)

> StringUtils.equals() / CharSequenceUtils.regionMatches() assumes that 
> CharSequence.toString() implementation is effective
> -
>
> Key: LANG-846
> URL: https://issues.apache.org/jira/browse/LANG-846
> Project: Commons Lang
>  Issue Type: Improvement
>  Components: lang.*
>Affects Versions: 3.1
>Reporter: Dmitry Katsubo
>Priority: Minor
>
> In my case I have a {{CharSequence}} that implements a "lazy" string stored 
> on disk; although its {{toString()}} implementation is valid, it is very 
> expensive and can potentially cause an OOM.
> Thus {{CharSequenceUtils.regionMatches()}} should really do a char-by-char 
> comparison, leaving any optimization to the underlying {{CharSequence}} 
> implementation.
> Maybe {{CharSequenceUtils.regionMatches()}} could check whether the passed 
> {{CharSequence}} is a standard implementation (like {{StringBuilder}} or 
> {{StringBuffer}}) with an "effective" {{toString()}}, but that still ends up 
> creating a new {{String}} object and thus duplicating the character buffer. 
> So we have a classical speed/memory trade-off.
> P.S. [Line 192 of 
> CharSequenceUtils()|http://svn.apache.org/viewvc/commons/proper/lang/trunk/src/main/java/org/apache/commons/lang3/CharSequenceUtils.java?revision=1199894&view=markup#l192]
>  reads
> {{TODO: Implement rather than convert to String}}
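The char-by-char comparison the reporter asks for can be sketched as follows. This is an illustrative stand-alone version, loosely modeled on the shape of CharSequenceUtils.regionMatches; it is not the actual Commons Lang code, and the class name is invented for the example.

```java
// Sketch: regionMatches over CharSequence without ever calling toString(),
// so a "lazy" CharSequence is never materialized as a String.
public class RegionMatchesSketch {

    public static boolean regionMatches(CharSequence cs, boolean ignoreCase,
                                        int thisStart, CharSequence substring,
                                        int start, int length) {
        if (thisStart < 0 || start < 0 || length < 0) {
            return false;
        }
        if (thisStart + length > cs.length()
                || start + length > substring.length()) {
            return false;
        }
        for (int i = 0; i < length; i++) {
            char c1 = cs.charAt(thisStart + i);
            char c2 = substring.charAt(start + i);
            if (c1 == c2) {
                continue;
            }
            if (!ignoreCase) {
                return false;
            }
            // Mirror String.regionMatches: compare upper-case forms, falling
            // back to lower-case for alphabets where that is not enough.
            if (Character.toUpperCase(c1) != Character.toUpperCase(c2)
                    && Character.toLowerCase(c1) != Character.toLowerCase(c2)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("Hello, world");
        System.out.println(regionMatches(sb, false, 7, "world", 0, 5)); // true
        System.out.println(regionMatches(sb, true, 0, "HELLO", 0, 5));  // true
    }
}
```

Note that this trades the potential fast path of `String.equals` for predictable memory behavior, which is exactly the speed/memory trade-off the report describes.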



[jira] [Updated] (IMAGING-94) Add ability to load partial TIFF images

2012-10-24 Thread Gary Lucas (JIRA)

 [ 
https://issues.apache.org/jira/browse/IMAGING-94?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Lucas updated IMAGING-94:
--

Attachment: LucasTrackerItem94_Oct24.patch

This 24 October patch uses the System.arraycopy method as recommended by 
Damjan.  I have tested it several ways and believe that it is ready for 
incorporation into the code tree.
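The kind of change discussed here, replacing an explicit pixel loop with System.arraycopy, can be illustrated with a minimal row-wise copy. The array names and layout (a row-major int raster) are assumptions made for this example; this is not the code in the attached patch.

```java
// Illustration: copy a rectangular region out of a row-major raster using
// one System.arraycopy call per scanline instead of a per-pixel loop.
public class SubImageCopySketch {

    public static int[] copyRegion(int[] src, int srcWidth,
                                   int x, int y, int width, int height) {
        int[] dst = new int[width * height];
        for (int row = 0; row < height; row++) {
            // One call copies a whole scanline of the region.
            System.arraycopy(src, (y + row) * srcWidth + x,
                             dst, row * width, width);
        }
        return dst;
    }

    public static void main(String[] args) {
        // 4x3 raster with pixel value = index; extract the 2x2 block at (1, 1).
        int[] src = new int[12];
        for (int i = 0; i < 12; i++) src[i] = i;
        int[] region = copyRegion(src, 4, 1, 1, 2, 2);
        System.out.println(java.util.Arrays.toString(region)); // [5, 6, 9, 10]
    }
}
```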

> Add ability to load partial TIFF images
> ---
>
> Key: IMAGING-94
> URL: https://issues.apache.org/jira/browse/IMAGING-94
> Project: Commons Imaging
>  Issue Type: New Feature
>  Components: Format: TIFF
>Reporter: Gary Lucas
> Attachments: LucasTrackerItem94_Oct14.patch, 
> LucasTrackerItem94_Oct24.patch
>
>
> For most Apache Commons Imaging applications, the easiest way to obtain a 
> sub-image from a file is simply to use the Imaging class to load it as a 
> BufferedImage and then use BufferedImage's getSubimage() method to extract 
> the portion of the image you wish to use.  The TIFF format presents a special 
> problem because it is very common to have huge images (hundreds or even 
> thousands of megapixels).  Examples include Landsat satellite images, 
> global-scale GeoTIFF images, etc.  In such cases, loading the entire image 
> into memory is not practical because it would require too much memory.  For 
> example, I am currently working with a 21600 by 10800 image that requires 
> more than 890 megabytes to store as a BufferedImage.  That value is pushing 
> the limit of what I can configure Java to handle on my particular OS.
> I propose to implement features for TIFF files that would permit Commons 
> Imaging to load a partial image of a TIFF file using only the amount of 
> memory actually needed to hold the sub-image.
> These changes would not interfere with normal operations on TIFF files and 
> would not affect other image formats.  If there were a need for similar 
> features for other image formats, they could be phased in through future 
> changes.
> The specification for a sub-image would be through the use of the params 
> argument in the getBufferedImage call as follows:
> HashMap params = new HashMap();
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_X, new Integer(x));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_Y, new Integer(y));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_WIDTH, new Integer(width));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_HEIGHT, new Integer(height));



[jira] [Commented] (IO-349) Add API with array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483702#comment-13483702
 ] 

Gary Gregory commented on IO-349:
-

YW. Keep 'em coming. 

> Add API with array offset and length argument to 
> FileUtils.writeByteArrayToFile
> ---
>
> Key: IO-349
> URL: https://issues.apache.org/jira/browse/IO-349
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: David Bild
>  Labels: patch
> Fix For: 2.5
>
> Attachments: 
> add_OffsetAndLengthArguments_To_FileUtils_writeByteArrayToFile.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The FileUtils.writeByteArrayToFile method does not allow a subset of an array 
> to be written to a file.  Instead, the subset must be copied to a separate 
> array, increasing the lines of code and (for all JVMs I know about) runtime.
> Sister methods that take an offset and length should be added, in line with 
> the byte array-oriented methods in the Java standard library. 
> Attached is a patch that implements FileUtils.writeByteArrayToFile(File file, 
> byte[] data, int offset, int length) and FileUtils.writeByteArrayToFile(File 
> file, byte[] data, int offset, int length, boolean append) and associated 
> testcases in FileUtilsTestCase.java.
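The requested offset/length variant can be sketched in a few lines, because OutputStream.write(byte[], int, int) already accepts an offset and a length. This stand-alone version just shows the intended behavior; the real change is in the attached patch, and the class name here is invented for the example.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of FileUtils.writeByteArrayToFile with offset/length arguments:
// no intermediate copy of the sub-array is needed.
public class WriteByteArraySketch {

    public static void writeByteArrayToFile(File file, byte[] data,
                                            int off, int len, boolean append)
            throws IOException {
        try (OutputStream out = new FileOutputStream(file, append)) {
            out.write(data, off, len);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("io349", ".bin");
        f.deleteOnExit();
        byte[] data = {1, 2, 3, 4, 5};
        writeByteArrayToFile(f, data, 1, 3, false); // writes bytes 2, 3, 4
        System.out.println(f.length()); // 3
    }
}
```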



[jira] [Resolved] (MATH-816) Multivariate Normal Mixture Models

2012-10-24 Thread Gilles (JIRA)

 [ 
https://issues.apache.org/jira/browse/MATH-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilles resolved MATH-816.
-

   Resolution: Fixed
Fix Version/s: (was: 3.2)
   3.1

> Multivariate Normal Mixture Models
> --
>
> Key: MATH-816
> URL: https://issues.apache.org/jira/browse/MATH-816
> Project: Commons Math
>  Issue Type: New Feature
>Reporter: Jared Becksfort
>Priority: Minor
> Fix For: 3.1
>
> Attachments: MixtureMultivariateRealDistribution.java, 
> MixtureMultivariateRealDistribution.java, 
> MixtureMultivariateRealDistribution.java.patch, 
> MixtureMultivariateRealDistributionTest.java, 
> MultivariateNormalMixtureModelDistribution.java, 
> MultivariateNormalMixtureModelDistributionTest.java
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> I will submit a class for Multivariate Normal Mixture Models.  Not sure it 
> will allow sampling initially.
> > Hello,
> >
> > I have implemented some classes for multivariate Normal distributions, 
> > multivariate normal mixture models, and an expectation maximization fitting 
> > class for the mixture model.  I would like to submit it to Apache Commons 
> > Math.  I still have some touching up to do so that they fit the style 
> > guidelines and implement the correct interfaces.  Before I do so, I thought 
> > I would at least ask if the developers of the project are interested in me 
> > submitting them.
> >
> > Thanks,
> > Jared Becksfort
> Dear Jared,
> Yes, it would be very nice to have such an addition! Remember to also 
> include unit tests (refer to the current ones for examples). It would be 
> best to split the submission into multiple smaller ones, each covering a 
> natural unit (e.g. the multivariate Normal distribution in one submission), 
> and to create an issue as described at 
> http://commons.apache.org/math/issue-tracking.html .
> If you run into any problems, please do not hesitate to ask on this mailing 
> list.
> Cheers, Mikkel.
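For readers unfamiliar with the feature being added: a finite mixture density is a convex combination of component densities, f(x) = sum_i w_i f_i(x) with the weights summing to one. A minimal univariate sketch follows; the Commons Math classes attached to this issue are multivariate and far more general, and the names below are invented for the example.

```java
// Minimal finite-mixture density with univariate normal components,
// purely to illustrate f(x) = sum_i w_i * f_i(x).
public class MixtureDensitySketch {

    static double normalPdf(double x, double mean, double sd) {
        double z = (x - mean) / sd;
        return Math.exp(-0.5 * z * z) / (sd * Math.sqrt(2.0 * Math.PI));
    }

    public static double mixturePdf(double x, double[] weights,
                                    double[] means, double[] sds) {
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * normalPdf(x, means[i], sds[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] w = {0.5, 0.5};
        double[] m = {0.0, 2.0};
        double[] s = {1.0, 1.0};
        // At x = 1 both components contribute equally, so the mixture density
        // equals the standard normal density at one sigma.
        System.out.println(mixturePdf(1.0, w, m, s));
    }
}
```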



[jira] [Commented] (MATH-816) Multivariate Normal Mixture Models

2012-10-24 Thread Gilles (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483671#comment-13483671
 ] 

Gilles commented on MATH-816:
-

bq. for some reason I was thinking the names of the classes and the tests had 
to be the same

That is usually the case indeed. But the test's class name should actually 
refer to what is being tested. In this particular case, the mixture 
functionality is tested using normal distributions as components; thus it 
seemed more natural to stress that the test is about a "normal mixture 
model".
(If we ever want to create unit tests for other mixtures, they can go in 
their own dedicated classes, which will be more manageable than a single big 
class covering every possible mixture...)

Thanks for the contribution.





[jira] [Commented] (IO-349) Add API with array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread David Bild (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483658#comment-13483658
 ] 

David Bild commented on IO-349:
---

Ah, yes indeed.  Thanks for the lesson.




[jira] [Commented] (MATH-816) Multivariate Normal Mixture Models

2012-10-24 Thread Jared Becksfort (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483655#comment-13483655
 ] 

Jared Becksfort commented on MATH-816:
--

Apparently I misunderstood you.  I didn't realize you had changed the test, 
and for some reason I was thinking the names of the classes and the tests had 
to be the same.  Anyway, I like your test class better, since it has separate 
test cases and an explicit expected-exception annotation.

I checked the current status of the repository, and it looks like you have 
committed the new sample values in the latest revision (1401894). I have 
nothing else to add to the distribution or test class at this time.




[jira] [Commented] (MATH-874) New API for optimizers

2012-10-24 Thread Gilles (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483647#comment-13483647
 ] 

Gilles commented on MATH-874:
-

Thanks a lot!


> New API for optimizers
> --
>
> Key: MATH-874
> URL: https://issues.apache.org/jira/browse/MATH-874
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.0
>Reporter: Gilles
>Assignee: Gilles
>Priority: Minor
>  Labels: api-change
> Fix For: 3.1, 4.0
>
> Attachments: optimizers.patch
>
>
> I suggest to change the signatures of the "optimize" methods in
> * {{UnivariateOptimizer}}
> * {{MultivariateOptimizer}}
> * {{MultivariateDifferentiableOptimizer}}
> * {{MultivariateDifferentiableVectorOptimizer}}
> * {{BaseMultivariateSimpleBoundsOptimizer}}
> Currently, the arguments are
> * the allowed number of evaluations of the objective function
> * the objective function
> * the type of optimization (minimize or maximize)
> * the initial guess
> * optionally, the lower and upper bounds
> A marker interface:
> {code}
> public interface OptimizationData {}
> {code}
> would in effect be implemented by all input data so that the signature would 
> become (for {{MultivariateOptimizer}}):
> {code}
> public PointValuePair optimize(MultivariateFunction f,
>OptimizationData... optData);
> {code}
> A [thread|http://markmail.org/message/fbmqrbf2t5pb5br5] was started on the 
> "dev" ML.
> Initially, this proposal aimed at avoiding calls to optimizer-specific 
> methods. An example is the "setSimplex" method in 
> "o.a.c.m.optimization.direct.SimplexOptimizer": it must be called before the 
> call to "optimize". Not only does this depart from the common API, but the 
> definition of the simplex also fixes the dimension of the problem; hence it 
> would be more natural to pass it together with the other parameters (i.e. in 
> "optimize") that are also dimension-dependent (initial guess, bounds).
> Eventually, the API will be simpler: users will
> # construct an optimizer (passing dimension-independent parameters at 
> construction),
> # call "optimize" (passing any dimension-dependent parameters).
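The varargs pattern proposed above can be sketched as follows: each piece of input data implements the marker interface, and the optimizer scans the varargs for the data it understands, ignoring the rest. The class names below are illustrative stand-ins, not the eventual Commons Math types.

```java
// Sketch of the OptimizationData varargs API. Unrecognized data is simply
// ignored, which lets each optimizer accept only the inputs it needs.
public class OptimizationDataSketch {

    interface OptimizationData {}

    static final class MaxEval implements OptimizationData {
        final int max;
        MaxEval(int max) { this.max = max; }
    }

    static final class InitialGuess implements OptimizationData {
        final double[] point;
        InitialGuess(double[] point) { this.point = point; }
    }

    static final class Optimizer {
        private int maxEval = 100;
        private double[] start;

        double[] optimize(OptimizationData... optData) {
            // Scan the varargs once and pick out what this optimizer knows.
            for (OptimizationData data : optData) {
                if (data instanceof MaxEval) {
                    maxEval = ((MaxEval) data).max;
                } else if (data instanceof InitialGuess) {
                    start = ((InitialGuess) data).point;
                }
            }
            // A real optimizer would iterate from 'start' for up to 'maxEval'
            // evaluations; here we just return the parsed starting point.
            return start;
        }
    }

    public static void main(String[] args) {
        double[] result = new Optimizer().optimize(
                new MaxEval(1000), new InitialGuess(new double[] {1.0, 2.0}));
        System.out.println(java.util.Arrays.toString(result)); // [1.0, 2.0]
    }
}
```

The payoff is that dimension-dependent inputs such as the simplex definition travel through the same "optimize" call as the initial guess and bounds, instead of requiring a setter called beforehand.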



[jira] [Commented] (IO-349) Add API with array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483632#comment-13483632
 ] 

Gary Gregory commented on IO-349:
-

The issue with the tests I changed is that they used hard-coded offsets into 
the byte array returned by String.getBytes(). There is no guarantee that those 
offsets will address the expected bytes when the tests run on platforms whose 
default encodings produce byte arrays of different sizes, for example UTF-16 
or UTF-32 style encodings. This is unlikely, but removing the mystery answer 
from getBytes() is simple.
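The fragility being described is easy to demonstrate: the no-argument String.getBytes() uses the platform default charset, so the byte count (and therefore any fixed offset into the array) can vary between platforms, while an explicit charset makes it deterministic. A small illustration:

```java
import java.nio.charset.StandardCharsets;

// Why hard-coded offsets into String.getBytes() are fragile: the byte count
// depends on the charset, and the no-argument form uses the platform default.
public class CharsetOffsetsSketch {
    public static void main(String[] args) {
        String s = "h\u00e9llo"; // "héllo", 5 chars
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);   // always 6 bytes
        byte[] utf16 = s.getBytes(StandardCharsets.UTF_16); // BOM + 10 = 12 bytes
        byte[] dflt = s.getBytes(); // size depends on the default charset
        System.out.println(utf8.length + " " + utf16.length + " " + dflt.length);
    }
}
```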






[jira] [Commented] (MATH-816) Multivariate Normal Mixture Models

2012-10-24 Thread Gilles (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483627#comment-13483627
 ] 

Gilles commented on MATH-816:
-

Change to "reseedRandomGenerator" method committed in revision 1401894.

I didn't understand why you created this 
"MixtureMultivariateRealDistributionTest.java" file, which basically discarded 
all the corrections I had made to 
"MultivariateNormalMixtureModelDistributionTest" (including the removal of 
obsolete tests like the weights summing up to one). All that was needed was 
the new list of samples (also committed in r1401894)...

If you want to add another unit test, please create a patch against the 
latest version of that file. Note that it is better not to have multiple 
assertions in the same test method; when testing several preconditions, 
separate test methods are cleaner, e.g. "testPrecondition1()", 
"testPrecondition2()", etc. (if no better names come to mind), using the 
JUnit 4 annotation mechanism to specify which exception must be thrown (cf. 
"MultivariateNormalMixtureModelDistributionTest").





[jira] [Commented] (IO-349) Add API with array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread David Bild (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483570#comment-13483570
 ] 

David Bild commented on IO-349:
---

Thanks much for the code review and commit.

FYI, the tests for the existing #writeByteArrayToFile methods use the platform 
default charset for String encoding.  Is there a reason to prefer UTF-8 (those 
were the only modifications I saw in the unit tests)?  If so, I can submit a 
patch to update the existing tests as well.


> Add API with array offset and length argument to 
> FileUtils.writeByteArrayToFile
> ---
>
> Key: IO-349
> URL: https://issues.apache.org/jira/browse/IO-349
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: David Bild
>  Labels: patch
> Fix For: 2.5
>
> Attachments: 
> add_OffsetAndLengthArguments_To_FileUtils_writeByteArrayToFile.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The FileUtils.writeByteArrayToFile method does not allow a subset of an array 
> to be written to a file.  Instead, the subset must be copied to a separate 
> array, increasing the lines of code and (for all JVMs I know about) runtime.
> Sister methods that take an offset and length should be added, inline with 
> the byte array-oriented methods in the Java standard library. 
> Attached is a patch that implements FileUtils.writeByteArrayToFile(File file, 
> byte[] data, int offset, int length) and FileUtils.writeByteArrayToFile(File 
> file, byte[] data, int offset, int length, boolean append) and associated 
> testcases in FileUtilsTestCase.java.



[jira] [Comment Edited] (IMAGING-94) Add ability to load partial TIFF images

2012-10-24 Thread Gary Lucas (JIRA)

[ 
https://issues.apache.org/jira/browse/IMAGING-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483513#comment-13483513
 ] 

Gary Lucas edited comment on IMAGING-94 at 10/24/12 7:44 PM:
-

Good idea.  The only reason I didn't use arraycopy was that I didn't think of 
it.

I just ran a quick test using both approaches and it doesn't actually seem to 
make much difference. System.arraycopy is actually about 0.1 percent slower. 
But on my computer there is so much other stuff running that it makes for a 
noisy testing environment, and that value is probably not statistically 
significant.

Given that the run times are so close, I am inclined to replace the code with 
the System.arraycopy call just for the sake of simplicity...   No sense adding 
complicated loops that you have to explain to people when a simple System 
method call works just as well and brings clarity to the code.

  was (Author: gwlucas):
Good idea.  The only reason I didn't use arraycopy was that I didn't think 
of it.

I just ran a quick test using both approaches and it does actually seem to make 
much difference. System.arraycopy is actually about 0.1 percent slower. But on 
my computer there is so much other stuff running that it makes for a noisy 
testing environment, and that value is probably not statistically significant.

Given that the run times are so close, I am inclined to replace the code with 
the System.arraycopy call just for the sake of simplicity...   No sense adding 
complicated loops that you have to explain to people when a simple System 
method call works just as well and brings clarity to the code.
  
> Add ability to load partial TIFF images
> ---
>
> Key: IMAGING-94
> URL: https://issues.apache.org/jira/browse/IMAGING-94
> Project: Commons Imaging
>  Issue Type: New Feature
>  Components: Format: TIFF
>Reporter: Gary Lucas
> Attachments: LucasTrackerItem94_Oct14.patch
>
>
> For most Apache Commons Imaging applications, the easiest way to obtain a sub 
> image from a file is to simply use the Imaging class to load it as a 
> BufferedImage and then use BufferedImage’s getSubimage() method to extract 
> the portion of the image you wish to use.  The TIFF format presents a special 
> problem because it is very common to have huge images (hundreds or even 
> thousands of megapixels).  Examples include Landsat satellite images, global-scale 
> GeoTIFF images, etc.  In such cases, loading the entire image into memory is 
> not practical because it would require too much memory.  For example, I am 
> currently working with a 21600 by 10800 image that requires more than 890 
> megabytes to store as a BufferedImage.  That value is pushing the limit of 
> what I can configure Java to handle on my particular OS.
> I propose to implement features for TIFF files that would permit Commons 
> Imaging to load a partial image of a TIFF file using only the amount of memory 
> actually needed to hold the sub-image.
> These changes would not interfere with normal operations of TIFF files and 
> would not affect other image formats.  If there were a need for similar 
> features for other image formats, they could be phased in through future 
> changes.
> The specification for a sub-image would be through the use of the params 
> argument in the getBufferedImage call as follows:
> HashMap params = new HashMap();
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_X, new Integer( x ));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_Y, new Integer( y ));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_WIDTH, new 
> Integer(width));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_HEIGHT, new 
> Integer(height));

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (IMAGING-94) Add ability to load partial TIFF images

2012-10-24 Thread Gary Lucas (JIRA)

[ 
https://issues.apache.org/jira/browse/IMAGING-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483513#comment-13483513
 ] 

Gary Lucas commented on IMAGING-94:
---

Good idea.  The only reason I didn't use arraycopy was that I didn't think of 
it.

I just ran a quick test using both approaches and it doesn't actually seem to 
make much difference. System.arraycopy is actually about 0.1 percent slower. But 
on my computer there is so much other stuff running that it makes for a noisy 
testing environment, and that value is probably not statistically significant.

Given that the run times are so close, I am inclined to replace the code with 
the System.arraycopy call just for the sake of simplicity...   No sense adding 
complicated loops that you have to explain to people when a simple System 
method call works just as well and brings clarity to the code.
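
For reference, the replacement described above could look roughly like the following sketch (the `transcribe` helper and its parameters mirror the quoted patch but are illustrative, not the committed code):

```java
// Sketch of the row-by-row transcription using System.arraycopy instead of
// the unrolled loop. Parameter names (data, width, x, y, w, h) follow the
// quoted patch; this is an illustrative rewrite, not the actual patch.
public final class SubImageCopy {
    static int[] transcribe(int[] data, int width, int x, int y, int w, int h) {
        int[] argb = new int[w * h];
        for (int iRow = 0; iRow < h; iRow++) {
            // Copy one full row of the sub-image in a single call.
            System.arraycopy(data, (iRow + y) * width + x, argb, iRow * w, w);
        }
        return argb;
    }
}
```

One call per row replaces the inner loop and the odd-column special case entirely.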

> Add ability to load partial TIFF images
> ---
>
> Key: IMAGING-94
> URL: https://issues.apache.org/jira/browse/IMAGING-94
> Project: Commons Imaging
>  Issue Type: New Feature
>  Components: Format: TIFF
>Reporter: Gary Lucas
> Attachments: LucasTrackerItem94_Oct14.patch
>
>
> For most Apache Commons Imaging applications, the easiest way to obtain a sub 
> image from a file is to simply use the Imaging class to load it as a 
> BufferedImage and then use BufferedImage’s getSubimage() method to extract 
> the portion of the image you wish to use.  The TIFF format presents a special 
> problem because it is very common to have huge images (hundreds or even 
> thousands of megapixels).  Examples include Landsat satellite images, global-scale 
> GeoTIFF images, etc.  In such cases, loading the entire image into memory is 
> not practical because it would require too much memory.  For example, I am 
> currently working with a 21600 by 10800 image that requires more than 890 
> megabytes to store as a BufferedImage.  That value is pushing the limit of 
> what I can configure Java to handle on my particular OS.
> I propose to implement features for TIFF files that would permit Commons 
> Imaging to load a partial image of a TIFF file using only the amount of memory 
> actually needed to hold the sub-image.
> These changes would not interfere with normal operations of TIFF files and 
> would not affect other image formats.  If there were a need for similar 
> features for other image formats, they could be phased in through future 
> changes.
> The specification for a sub-image would be through the use of the params 
> argument in the getBufferedImage call as follows:
> HashMap params = new HashMap();
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_X, new Integer( x ));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_Y, new Integer( y ));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_WIDTH, new 
> Integer(width));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_HEIGHT, new 
> Integer(height));

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-874) New API for optimizers

2012-10-24 Thread Luc Maisonobe (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483511#comment-13483511
 ] 

Luc Maisonobe commented on MATH-874:


I have implemented the converters in the Subversion repository as of r1401837 
and used them in the optimizers, as you suggested, as of r1401838.

> New API for optimizers
> --
>
> Key: MATH-874
> URL: https://issues.apache.org/jira/browse/MATH-874
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.0
>Reporter: Gilles
>Assignee: Gilles
>Priority: Minor
>  Labels: api-change
> Fix For: 3.1, 4.0
>
> Attachments: optimizers.patch
>
>
> I suggest to change the signatures of the "optimize" methods in
> * {{UnivariateOptimizer}}
> * {{MultivariateOptimizer}}
> * {{MultivariateDifferentiableOptimizer}}
> * {{MultivariateDifferentiableVectorOptimizer}}
> * {{BaseMultivariateSimpleBoundsOptimizer}}
> Currently, the arguments are
> * the allowed number of evaluations of the objective function
> * the objective function
> * the type of optimization (minimize or maximize)
> * the initial guess
> * optionally, the lower and upper bounds
> A marker interface:
> {code}
> public interface OptimizationData {}
> {code}
> would in effect be implemented by all input data so that the signature would 
> become (for {{MultivariateOptimizer}}):
> {code}
> public PointValuePair optimize(MultivariateFunction f,
>OptimizationData... optData);
> {code}
> A [thread|http://markmail.org/message/fbmqrbf2t5pb5br5] was started on the 
> "dev" ML.
> Initially, this proposal aimed at avoiding calls to some optimizer-specific 
> methods. An example is the "setSimplex" method in 
> "o.a.c.m.optimization.direct.SimplexOptimizer": it must be called before the 
> call to "optimize". Not only does this depart from the common API, but the 
> definition of the simplex also fixes the dimension of the problem; hence it 
> would be more natural to pass it together with the other parameters (i.e. in 
> "optimize") that are also dimension-dependent (initial guess, bounds).
> Eventually, the API will be simpler: users will
> # construct an optimizer (passing dimension-independent parameters at 
> construction),
> # call "optimize" (passing any dimension-dependent parameters).
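
The marker-interface pattern proposed above can be sketched as follows (the class names are illustrative only, not the actual Commons Math implementation):

```java
// Minimal sketch of the OptimizationData varargs pattern: each piece of
// input data implements a marker interface, and the optimizer picks out
// the pieces it understands. All names here are illustrative.
interface OptimizationData {}

final class InitialGuess implements OptimizationData {
    final double[] point;
    InitialGuess(double[] point) { this.point = point.clone(); }
}

final class MaxEval implements OptimizationData {
    final int max;
    MaxEval(int max) { this.max = max; }
}

class SketchOptimizer {
    double[] start;
    int maxEval = Integer.MAX_VALUE;

    // Scan the varargs once, keeping the last value of each recognized type.
    void parseOptimizationData(OptimizationData... optData) {
        for (OptimizationData data : optData) {
            if (data instanceof InitialGuess) {
                start = ((InitialGuess) data).point;
            } else if (data instanceof MaxEval) {
                maxEval = ((MaxEval) data).max;
            }
        }
    }
}
```

The caller passes only the data items relevant to the chosen optimizer, which is what makes the single `optimize(f, optData...)` signature work across all the optimizer interfaces listed above.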

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (IMAGING-94) Add ability to load partial TIFF images

2012-10-24 Thread Damjan Jovanovic (JIRA)

[ 
https://issues.apache.org/jira/browse/IMAGING-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483490#comment-13483490
 ] 

Damjan Jovanovic commented on IMAGING-94:
-

Hi Gary

Wow what a patch :).

In this section, is there some reason you don't use System.arraycopy()? That 
should be even faster than looping and copying 2 values at a time.

{code}
+// Transcribe the data.  In this code block, there is a small
+// amount of loop unrolling.  In testing, unrolling saves about 25 
+// percent load time for large subimages.  2/3 of the cost of reading
+// the subimage is in the transcription of data.
+int[] argb = new int[w * h];
+int k = 0;
+int w2 = w / 2;
+for (int iRow = 0; iRow < h; iRow++) {
+int dIndex = (iRow + y) * width + x;
+for (int j = 0; j < w2; j++) {
+argb[k++] = this.data[dIndex++];
+argb[k++] = this.data[dIndex++];
+}
+if ((w & 1) != 0) {
+// there's an odd number of columns
+argb[k++] = this.data[dIndex++];
+}
+}
{code}

> Add ability to load partial TIFF images
> ---
>
> Key: IMAGING-94
> URL: https://issues.apache.org/jira/browse/IMAGING-94
> Project: Commons Imaging
>  Issue Type: New Feature
>  Components: Format: TIFF
>Reporter: Gary Lucas
> Attachments: LucasTrackerItem94_Oct14.patch
>
>
> For most Apache Commons Imaging applications, the easiest way to obtain a sub 
> image from a file is to simply use the Imaging class to load it as a 
> BufferedImage and then use BufferedImage’s getSubimage() method to extract 
> the portion of the image you wish to use.  The TIFF format presents a special 
> problem because it is very common to have huge images (hundreds or even 
> thousands of megapixels).  Examples include Landsat satellite images, global-scale 
> GeoTIFF images, etc.  In such cases, loading the entire image into memory is 
> not practical because it would require too much memory.  For example, I am 
> currently working with a 21600 by 10800 image that requires more than 890 
> megabytes to store as a BufferedImage.  That value is pushing the limit of 
> what I can configure Java to handle on my particular OS.
> I propose to implement features for TIFF files that would permit Commons 
> Imaging to load a partial image of a TIFF file using only the amount of memory 
> actually needed to hold the sub-image.
> These changes would not interfere with normal operations of TIFF files and 
> would not affect other image formats.  If there were a need for similar 
> features for other image formats, they could be phased in through future 
> changes.
> The specification for a sub-image would be through the use of the params 
> argument in the getBufferedImage call as follows:
> HashMap params = new HashMap();
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_X, new Integer( x ));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_Y, new Integer( y ));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_WIDTH, new 
> Integer(width));
> params.put(TiffConstants.PARAM_KEY_SUBIMAGE_HEIGHT, new 
> Integer(height));

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (IMAGING-93) Tiled TIFF images do not correctly load partial tiles

2012-10-24 Thread Damjan Jovanovic (JIRA)

 [ 
https://issues.apache.org/jira/browse/IMAGING-93?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damjan Jovanovic resolved IMAGING-93.
-

   Resolution: Fixed
Fix Version/s: 1.0

Applied; resolving as fixed. Thank you for your patch!

> Tiled TIFF images do not correctly load partial tiles
> -
>
> Key: IMAGING-93
> URL: https://issues.apache.org/jira/browse/IMAGING-93
> Project: Commons Imaging
>  Issue Type: Bug
>  Components: Format: TIFF
>Affects Versions: 1.0
>Reporter: Gary Lucas
> Fix For: 1.0
>
> Attachments: LucasTrackerItem93_Oct14.patch
>
>
> For a tiled TIFF file in which the tile size does not evenly divide the width 
> or height of the image, the partial tiles (last row or column) are not being 
> populated.  This occurs only for 24-bit RGB format images.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MATH-816) Multivariate Normal Mixture Models

2012-10-24 Thread Jared Becksfort (JIRA)

 [ 
https://issues.apache.org/jira/browse/MATH-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Becksfort updated MATH-816:
-

Attachment: MixtureMultivariateRealDistributionTest.java
MixtureMultivariateRealDistribution.java.patch

Gilles,

I am attaching a patch for the MixtureMultivariateRealDistribution class that 
removes the temporary code and comments.

I am including the plain java file for the test because it was renamed. Running 
a rename and patch on my end basically just generated a patch containing the 
entire file. The new test file uses the generic code and the modified component 
seed method.  I added one precondition test for dimension checking and removed 
the one that checked that the weights sum to one, because your previous 
changes made it unnecessary.

Please let me know if you need anything else from me for this issue to be 
resolved.

Jared

> Multivariate Normal Mixture Models
> --
>
> Key: MATH-816
> URL: https://issues.apache.org/jira/browse/MATH-816
> Project: Commons Math
>  Issue Type: New Feature
>Reporter: Jared Becksfort
>Priority: Minor
> Fix For: 3.2
>
> Attachments: MixtureMultivariateRealDistribution.java, 
> MixtureMultivariateRealDistribution.java, 
> MixtureMultivariateRealDistribution.java.patch, 
> MixtureMultivariateRealDistributionTest.java, 
> MultivariateNormalMixtureModelDistribution.java, 
> MultivariateNormalMixtureModelDistributionTest.java
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> I will submit a class for Multivariate Normal Mixture Models.  Not sure it 
> will allow sampling initially.
> > Hello,
> >
> > I have implemented some classes for multivariate Normal distributions, 
> > multivariate normal mixture models, and an expectation maximization fitting 
> > class for the mixture model.  I would like to submit it to Apache Commons 
> > Math.  I still have some touching up to do so that they fit the style 
> > guidelines and implement the correct interfaces.  Before I do so, I thought 
> > I would at least ask if the developers of the project are interested in me 
> > submitting them.
> >
> > Thanks,
> > Jared Becksfort
> Dear Jared,
> Yes, that would be very nice to have such an addition! Remember to also 
> include unit tests (refer to the current ones for examples). The best would 
> be to split a submission up into multiple minor ones, each covering a natural 
> submission (e.g. multivariate Normal distribution in one submission), and 
> create an issue as described at 
> http://commons.apache.org/math/issue-tracking.html .
> If you run into any problems, please do not hesitate to ask on this mailing 
> list.
> Cheers, Mikkel.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-738) Incomplete beta function I(x, a, b) is inaccurate for large values of a and/or b

2012-10-24 Thread Thomas Neidhart (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483455#comment-13483455
 ] 

Thomas Neidhart commented on MATH-738:
--

Could you provide the scripts/programs you used for testing as a patch?
I may be able to help with the implementation of the TOMS-708 algorithm.

> Incomplete beta function I(x, a, b) is inaccurate for large values of a 
> and/or b
> 
>
> Key: MATH-738
> URL: https://issues.apache.org/jira/browse/MATH-738
> Project: Commons Math
>  Issue Type: Bug
>Affects Versions: 3.0
>Reporter: Sébastien Brisard
>Assignee: Sébastien Brisard
>  Labels: special-functions
> Fix For: 3.1, 4.0
>
>
> This was first reported in MATH-718. The result of the current implementation 
> of the incomplete beta function I(x, a, b) is inaccurate when a and/or b are 
> large-ish. 
> I've skimmed through [slatec|http://www.netlib.org/slatec/fnlib/betai.f], 
> GSL, 
> [Boost|http://www.boost.org/doc/libs/1_38_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/sf_beta/ibeta_function.html]
>  as well as NR. At first sight, no two of them use the same method to compute this 
> function. I think [TOMS-708|http://www.netlib.org/toms/708] is probably the 
> best option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MATH-757) ResizableDoubleArray is not thread-safe yet has some synch. methods

2012-10-24 Thread Gilles (JIRA)

 [ 
https://issues.apache.org/jira/browse/MATH-757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilles updated MATH-757:


Fix Version/s: (was: 3.1)
   4.0

> ResizableDoubleArray is not thread-safe yet has some synch. methods
> ---
>
> Key: MATH-757
> URL: https://issues.apache.org/jira/browse/MATH-757
> Project: Commons Math
>  Issue Type: Bug
>Reporter: Sebb
> Fix For: 4.0
>
>
> ResizableDoubleArray has several synchronised methods, but is not 
> thread-safe, because class variables are not always accessed using the lock.
> Is the class supposed to be thread-safe?
> If so, all accesses (read and write) need to be synch.
> If not, the synch. qualifiers could be dropped.
> In any case, the protected fields need to be made private.
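
As a toy illustration of the problem described above (not the actual ResizableDoubleArray code): synchronizing only some methods leaves unsynchronized reads free to race with a guarded write.

```java
// Toy example: the writer is synchronized, but the reader is not, so a
// concurrent reader may observe a stale array reference or a half-updated
// state during a resize. Single-threaded use still works, which is why
// such bugs hide.
class PartiallySynced {
    private double[] data = new double[4];
    private int size;

    synchronized void add(double v) {          // guarded write
        if (size == data.length) {
            double[] bigger = new double[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;                     // resize under the lock
        }
        data[size++] = v;
    }

    double get(int i) {                        // UNGUARDED read: may race
        return data[i];                        // with the resize above
    }
}
```

The fix is exactly the choice posed in the report: either synchronize all accesses, or drop the qualifiers and document the class as not thread-safe.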

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-757) ResizableDoubleArray is not thread-safe yet has some synch. methods

2012-10-24 Thread Phil Steitz (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483359#comment-13483359
 ] 

Phil Steitz commented on MATH-757:
--

Good point on breaking existing usage if we drop the syncs.  I think it is best 
to push this to 4.0, when we can also refactor the API.  The only reason this 
class exists is for the "rolling" behavior, which is all that needs to be 
retained.

> ResizableDoubleArray is not thread-safe yet has some synch. methods
> ---
>
> Key: MATH-757
> URL: https://issues.apache.org/jira/browse/MATH-757
> Project: Commons Math
>  Issue Type: Bug
>Reporter: Sebb
> Fix For: 3.1
>
>
> ResizableDoubleArray has several synchronised methods, but is not 
> thread-safe, because class variables are not always accessed using the lock.
> Is the class supposed to be thread-safe?
> If so, all accesses (read and write) need to be synch.
> If not, the synch. qualifiers could be dropped.
> In any case, the protected fields need to be made private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-874) New API for optimizers

2012-10-24 Thread Luc Maisonobe (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483314#comment-13483314
 ] 

Luc Maisonobe commented on MATH-874:


Oh, sorry, I did not understand it this way.

As long as we accept that a converted function is limited to what was already 
in the initial function (i.e. only first-order derivatives) and will trigger 
an exception if used outside these limitations, then yes, it would be possible.

The optimizers we use now are guaranteed not to ask for higher-order 
derivatives, so this is probably acceptable as a temporary converter. Of 
course, if we later introduce algorithms that need second-order derivatives (I 
think some of them require the Hessian), then this converter would not be 
sufficient, and users would need to implement the DerivativeStructure method 
themselves to provide these derivatives.

I'll have a look at setting up the converter.
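
The limitation described above — a converted function that can only answer for derivative orders 0 and 1 and must fail beyond that — can be sketched like this (hypothetical types, not the actual FunctionUtils converter):

```java
// Hypothetical sketch: the old-style function only knows its value and
// first derivative, so an adapter can serve orders 0 and 1 and must
// throw for anything higher (e.g. a Hessian-based algorithm's request).
interface OldStyleDifferentiable {
    double value(double x);
    double derivative(double x);
}

final class FirstOrderOnlyAdapter {
    private final OldStyleDifferentiable f;
    FirstOrderOnlyAdapter(OldStyleDifferentiable f) { this.f = f; }

    double partial(double x, int order) {
        switch (order) {
            case 0: return f.value(x);
            case 1: return f.derivative(x);
            default:
                throw new UnsupportedOperationException(
                    "only first-order derivatives are available");
        }
    }
}
```
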

> New API for optimizers
> --
>
> Key: MATH-874
> URL: https://issues.apache.org/jira/browse/MATH-874
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.0
>Reporter: Gilles
>Assignee: Gilles
>Priority: Minor
>  Labels: api-change
> Fix For: 3.1, 4.0
>
> Attachments: optimizers.patch
>
>
> I suggest to change the signatures of the "optimize" methods in
> * {{UnivariateOptimizer}}
> * {{MultivariateOptimizer}}
> * {{MultivariateDifferentiableOptimizer}}
> * {{MultivariateDifferentiableVectorOptimizer}}
> * {{BaseMultivariateSimpleBoundsOptimizer}}
> Currently, the arguments are
> * the allowed number of evaluations of the objective function
> * the objective function
> * the type of optimization (minimize or maximize)
> * the initial guess
> * optionally, the lower and upper bounds
> A marker interface:
> {code}
> public interface OptimizationData {}
> {code}
> would in effect be implemented by all input data so that the signature would 
> become (for {{MultivariateOptimizer}}):
> {code}
> public PointValuePair optimize(MultivariateFunction f,
>OptimizationData... optData);
> {code}
> A [thread|http://markmail.org/message/fbmqrbf2t5pb5br5] was started on the 
> "dev" ML.
> Initially, this proposal aimed at avoiding calls to some optimizer-specific 
> methods. An example is the "setSimplex" method in 
> "o.a.c.m.optimization.direct.SimplexOptimizer": it must be called before the 
> call to "optimize". Not only does this depart from the common API, but the 
> definition of the simplex also fixes the dimension of the problem; hence it 
> would be more natural to pass it together with the other parameters (i.e. in 
> "optimize") that are also dimension-dependent (initial guess, bounds).
> Eventually, the API will be simpler: users will
> # construct an optimizer (passing dimension-independent parameters at 
> construction),
> # call "optimize" (passing any dimension-dependent parameters).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (JCS-100) JCS never going out of the dispose method

2012-10-24 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created JCS-100:
---

 Summary: JCS never going out of the dispose method
 Key: JCS-100
 URL: https://issues.apache.org/jira/browse/JCS-100
 Project: Commons JCS
  Issue Type: Bug
  Components: Composite Cache
Affects Versions: jcs-1.3
 Environment: Windows
Reporter: Jean-Marc Spaggiari


I have an application using many threads, all calling JCS. When I close the 
application, the cache usually shuts down fine, but sometimes it stays stuck 
in the dispose method.

The cacheEventQueue never becomes empty, so dispose keeps looping on while ( 
keepGoing ) and never ends. I have to kill the application to exit.

It's difficult to reproduce; there is no fixed pattern so far.
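
One common way to avoid this class of shutdown hang (a sketch, not the actual JCS code; the `drain` helper is hypothetical) is to bound the drain with a deadline instead of looping until the queue is empty:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: drain pending events but give up after a deadline, so a dispose
// loop cannot spin forever if producers keep the queue non-empty.
final class BoundedDrain {
    static int drain(BlockingQueue<Runnable> queue, long maxMillis) {
        long deadline = System.currentTimeMillis() + maxMillis;
        int processed = 0;
        while (System.currentTimeMillis() < deadline) {
            Runnable event = queue.poll();    // non-blocking take
            if (event == null) {
                break;                        // queue empty: done draining
            }
            event.run();
            processed++;
        }
        return processed;
    }
}
```

Remaining events after the deadline would be dropped or logged, which is usually preferable to a shutdown that never returns.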





--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (LANG-846) StringUtils.equals() / CharSequenceUtils.regionMatches() assumes that CharSequence.toString() implementation is effective

2012-10-24 Thread Dmitry Katsubo (JIRA)
Dmitry Katsubo created LANG-846:
---

 Summary: StringUtils.equals() / CharSequenceUtils.regionMatches() 
assumes that CharSequence.toString() implementation is effective
 Key: LANG-846
 URL: https://issues.apache.org/jira/browse/LANG-846
 Project: Commons Lang
  Issue Type: Improvement
  Components: lang.*
Affects Versions: 3.1
Reporter: Dmitry Katsubo
Priority: Minor


In my case I have a {{CharSequence}} that implements a "lazy" string stored 
on disk; although its {{toString()}} implementation is valid, it is very 
expensive and can potentially cause an OOM.

Thus {{CharSequenceUtils.regionMatches()}} should really do a char-by-char 
comparison, leaving any optimization to the underlying {{CharSequence}} 
implementation.

Maybe {{CharSequenceUtils.regionMatches()}} could check whether the passed 
{{CharSequence}} is a standard implementation (like {{StringBuilder}} or 
{{StringBuffer}}) with an "effective" {{toString()}}, but that implementation 
still ends up creating a new {{String}} object and thus duplicating the 
character buffer. So we have a classical speed/memory trade-off.
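
The char-by-char comparison being proposed could look roughly like this (a sketch modeled on the String.regionMatches contract; bounds checking is assumed to be done by the caller):

```java
// Sketch: compare two CharSequence regions character by character, never
// calling toString(), so a "lazy" disk-backed sequence is only read at
// the indices actually compared.
final class CharSeqCompare {
    static boolean regionMatches(CharSequence cs, boolean ignoreCase, int thisStart,
                                 CharSequence other, int start, int length) {
        for (int i = 0; i < length; i++) {
            char c1 = cs.charAt(thisStart + i);
            char c2 = other.charAt(start + i);
            if (c1 == c2) {
                continue;
            }
            if (!ignoreCase) {
                return false;
            }
            // Same case-insensitive check String.regionMatches uses.
            if (Character.toUpperCase(c1) != Character.toUpperCase(c2)
                    && Character.toLowerCase(c1) != Character.toLowerCase(c2)) {
                return false;
            }
        }
        return true;
    }
}
```
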

P.S. [Line 192 of 
CharSequenceUtils()|http://svn.apache.org/viewvc/commons/proper/lang/trunk/src/main/java/org/apache/commons/lang3/CharSequenceUtils.java?revision=1199894&view=markup#l192]
 reads

{{TODO: Implement rather than convert to String}}



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-874) New API for optimizers

2012-10-24 Thread Gilles (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483278#comment-13483278
 ] 

Gilles commented on MATH-874:
-

My question was related to what happens before any specifics of the optimizer 
code (base class included).

In "FunctionUtils", you created a converter:
{code}
public static UnivariateDifferentiableFunction toUnivariateDifferential(final 
DifferentiableUnivariateFunction f) {
  // ...
}
{code}

(With the nice consequence that old user code can be transparently transformed 
(in CM) and passed to the new API.)
I was asking whether the same kind of converter could be written for the vector 
function (from the type used in the old, now deprecated, optimizer API to the 
new type):
{code}
public static MultivariateDifferentiableVectorFunction 
toMultivariateDifferentiableVectorFunction(DifferentiableMultivariateVectorFunction
 f) {
  // ???
}
{code}

With this converter, the deprecated method in "AbstractLeastSquaresOptimizer" 
(lines 302-322):

{code}
@Deprecated
public PointVectorValuePair optimize(int maxEval,
 final 
DifferentiableMultivariateVectorFunction f,
 final double[] target, final double[] 
weights,
 final double[] startPoint) {
// Reset counter.
jacobianEvaluations = 0;

// Store least squares problem characteristics.
jF = f.jacobian();

// Arrays shared with the other private methods.
point = startPoint.clone();
rows = target.length;
cols = point.length;

weightedResidualJacobian = new double[rows][cols];
this.weightedResiduals = new double[rows];

cost = Double.POSITIVE_INFINITY;

return optimizeInternal(maxEval, f, target, weights, startPoint);
}
{code}

can be transformed into

{code}
@Deprecated
public PointVectorValuePair optimize(int maxEval,
 final 
DifferentiableMultivariateVectorFunction f,
 final double[] target, final double[] 
weights,
 final double[] startPoint) {
  return optimize(maxEval,
  FunctionUtils.toMultivariateDifferentiableVectorFunction(f),
  target,
  weights,
  startPoint);
}
{code}

By which I mean that old user code will automatically use the new 
{{DerivativeStructure}} API.



> New API for optimizers
> --
>
> Key: MATH-874
> URL: https://issues.apache.org/jira/browse/MATH-874
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.0
>Reporter: Gilles
>Assignee: Gilles
>Priority: Minor
>  Labels: api-change
> Fix For: 3.1, 4.0
>
> Attachments: optimizers.patch
>
>
> I suggest to change the signatures of the "optimize" methods in
> * {{UnivariateOptimizer}}
> * {{MultivariateOptimizer}}
> * {{MultivariateDifferentiableOptimizer}}
> * {{MultivariateDifferentiableVectorOptimizer}}
> * {{BaseMultivariateSimpleBoundsOptimizer}}
> Currently, the arguments are
> * the allowed number of evaluations of the objective function
> * the objective function
> * the type of optimization (minimize or maximize)
> * the initial guess
> * optionally, the lower and upper bounds
> A marker interface:
> {code}
> public interface OptimizationData {}
> {code}
> would in effect be implemented by all input data so that the signature would 
> become (for {{MultivariateOptimizer}}):
> {code}
> public PointValuePair optimize(MultivariateFunction f,
>OptimizationData... optData);
> {code}
> A [thread|http://markmail.org/message/fbmqrbf2t5pb5br5] was started on the 
> "dev" ML.
> Initially, this proposal aimed at avoiding calls to some optimizer-specific 
> methods. An example is the "setSimplex" method in 
> "o.a.c.m.optimization.direct.SimplexOptimizer": it must be called before the 
> call to "optimize". Not only does this depart from the common API, but the 
> definition of the simplex also fixes the dimension of the problem; hence it 
> would be more natural to pass it together with the other parameters (i.e. in 
> "optimize") that are also dimension-dependent (initial guess, bounds).
> Eventually, the API will be simpler: users will
> # construct an optimizer (passing dimension-independent parameters at 
> construction),
> # call "optimize" (passing any dimension-dependent parameters).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-874) New API for optimizers

2012-10-24 Thread Luc Maisonobe (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483251#comment-13483251
 ] 

Luc Maisonobe commented on MATH-874:


Reading again my previous comment, I think it doesn't clearly answer your 
question.

So, to summarize, I would say no, it is not possible to merge the two methods 
further. They are already merged as much as possible (and in the end call the 
same optimizeInternal method).

> New API for optimizers
> --
>
> Key: MATH-874
> URL: https://issues.apache.org/jira/browse/MATH-874
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.0
>Reporter: Gilles
>Assignee: Gilles
>Priority: Minor
>  Labels: api-change
> Fix For: 3.1, 4.0
>
> Attachments: optimizers.patch
>
>
> I suggest to change the signatures of the "optimize" methods in
> * {{UnivariateOptimizer}}
> * {{MultivariateOptimizer}}
> * {{MultivariateDifferentiableOptimizer}}
> * {{MultivariateDifferentiableVectorOptimizer}}
> * {{BaseMultivariateSimpleBoundsOptimizer}}
> Currently, the arguments are
> * the allowed number of evaluations of the objective function
> * the objective function
> * the type of optimization (minimize or maximize)
> * the initial guess
> * optionally, the lower and upper bounds
> A marker interface:
> {code}
> public interface OptimizationData {}
> {code}
> would in effect be implemented by all input data so that the signature would 
> become (for {{MultivariateOptimizer}}):
> {code}
> public PointValuePair optimize(MultivariateFunction f,
>OptimizationData... optData);
> {code}
> A [thread|http://markmail.org/message/fbmqrbf2t5pb5br5] was started on the 
> "dev" ML.
> Initially, this proposal aimed at avoiding calls to some optimizer-specific 
> methods. An example is the "setSimplex" method in 
> "o.a.c.m.optimization.direct.SimplexOptimizer": it must be called before the 
> call to "optimize". Not only does this depart from the common API, but the 
> definition of the simplex also fixes the dimension of the problem; hence it 
> would be more natural to pass it together with the other parameters (i.e. in 
> "optimize") that are also dimension-dependent (initial guess, bounds).
> Eventually, the API will be simpler: users will
> # construct an optimizer (passing dimension-independent parameters at 
> construction),
> # call "optimize" (passing any dimension-dependent parameters).
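A minimal sketch of how the proposed varargs signature could work in practice. InitialGuess and MaxEval are hypothetical stand-ins here (not necessarily the actual Commons Math classes); the point is that the optimizer simply scans the OptimizationData arguments and picks out what it recognizes:

```java
public final class OptimizerSketch {

    /** Marker interface: every optional input datum implements it. */
    public interface OptimizationData {}

    /** Dimension-dependent datum: the starting point (hypothetical name). */
    public static final class InitialGuess implements OptimizationData {
        final double[] point;
        public InitialGuess(double[] point) { this.point = point.clone(); }
    }

    /** Dimension-independent datum: evaluation budget (hypothetical name). */
    public static final class MaxEval implements OptimizationData {
        final int max;
        public MaxEval(int max) { this.max = max; }
    }

    /** The optimizer scans the varargs and keeps what it understands. */
    public static double[] optimize(OptimizationData... optData) {
        double[] start = null;
        int maxEval = Integer.MAX_VALUE;
        for (OptimizationData data : optData) {
            if (data instanceof InitialGuess) {
                start = ((InitialGuess) data).point;
            } else if (data instanceof MaxEval) {
                maxEval = ((MaxEval) data).max;
            } // unrecognized data is simply ignored
        }
        if (start == null) throw new IllegalStateException("no initial guess");
        if (maxEval <= 0) throw new IllegalArgumentException("maxEval");
        return start; // a real optimizer would iterate up to maxEval times
    }
}
```

Because every argument shares one marker type, new optimizer-specific data (such as a simplex definition) can be added without changing the "optimize" signature.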

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (DIGESTER-174) Inner List Annotation has wrong @Target for most of the predefined annotation rules

2012-10-24 Thread Andreas Sahlbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DIGESTER-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Sahlbach updated DIGESTER-174:
--

Description: 
See documentation "Applying multiple annotation rule of the same type".
The inner annotation should be usable as a wrapper for the outer annotation, to 
apply several rules to one element. Therefore the inner annotation has to 
declare the very same @Target as the outer annotation. This is not the case 
for most of the rules, making them practically useless. 
Example:
@BeanPropertySetter - its @Target is ElementType.FIELD
but its inner List 
@BeanPropertySetter.List - its @Target is ElementType.TYPE
The only place where this is correct is the Create Rules, which are the 
annotations shown in the documentation.
May I suggest creating a test case or example for a multi-annotation rule? The 
provided example for annotated rules was very helpful for me, but is very 
simple and doesn't cover the multi-annotated rules.

  was:
See documentation "Applying multiple annotation rule of the same type"
The inner annotation should be usable as a wrapper for the outer annotation, to 
apply several rules to one element. Therefore the inner annotation has to have 
the very same defined @Target annotations as the outer annotation. This is not 
the case for most of the rules, making them practically useless. 
Example:
@BeanPropertySetter - its @Target is ElementType.Field
but its inner List 
@BeanPropertySetter.List - its @Target is ElementType.Type
The only place where this is correct are the Create Rules, which are the 
annotations from the documentation.
May I suggest to create a test case or example for a multi annotation rule? The 
provided example was very helpful for me, but is very simple.


> Inner List Annotation has wrong @Target for most of the predefined annotation 
> rules
> ---
>
> Key: DIGESTER-174
> URL: https://issues.apache.org/jira/browse/DIGESTER-174
> Project: Commons Digester
>  Issue Type: Bug
>Affects Versions: 3.2
>Reporter: Andreas Sahlbach
>
> See documentation "Applying multiple annotation rule of the same type"
> The inner annotation should be usable as a wrapper for the outer annotation, 
> to apply several rules to one element. Therefore the inner annotation has to 
> have the very same defined @Target annotations as the outer annotation. This 
> is not the case for most of the rules, making them practically useless. 
> Example:
> @BeanPropertySetter - its @Target is ElementType.Field
> but its inner List 
> @BeanPropertySetter.List - its @Target is ElementType.Type
> The only place where this is correct are the Create Rules, which are the 
> annotations from the documentation.
> May I suggest to create a test case or example for a multi annotation rule? 
> The provided example for annotated rules was very helpful for me, but is very 
> simple and doesn't cover the multi annotated rules.



[jira] [Updated] (DIGESTER-174) Inner List Annotation has wrong @Target for most of the predefined annotation rules

2012-10-24 Thread Andreas Sahlbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DIGESTER-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andreas Sahlbach updated DIGESTER-174:
--

Description: 
See documentation "Applying multiple annotation rule of the same type".
The inner annotation should be usable as a wrapper for the outer annotation, to 
apply several rules to one element. Therefore the inner annotation has to 
declare the very same @Target as the outer annotation. This is not the case 
for most of the rules, making them practically useless. 
Example:
@BeanPropertySetter - its @Target is ElementType.FIELD
but its inner List 
@BeanPropertySetter.List - its @Target is ElementType.TYPE
The only place where this is correct is the Create Rules, which are the 
annotations shown in the documentation.
May I suggest creating a test case or example for a multi-annotation rule? The 
provided example for annotated rules was very helpful for me, but is very 
simple and doesn't cover multi-annotated rules.

  was:
See documentation "Applying multiple annotation rule of the same type"
The inner annotation should be usable as a wrapper for the outer annotation, to 
apply several rules to one element. Therefore the inner annotation has to have 
the very same defined @Target annotations as the outer annotation. This is not 
the case for most of the rules, making them practically useless. 
Example:
@BeanPropertySetter - its @Target is ElementType.Field
but its inner List 
@BeanPropertySetter.List - its @Target is ElementType.Type
The only place where this is correct are the Create Rules, which are the 
annotations from the documentation.
May I suggest to create a test case or example for a multi annotation rule? The 
provided example for annotated rules was very helpful for me, but is very 
simple and doesn't cover the multi annotated rules.


> Inner List Annotation has wrong @Target for most of the predefined annotation 
> rules
> ---
>
> Key: DIGESTER-174
> URL: https://issues.apache.org/jira/browse/DIGESTER-174
> Project: Commons Digester
>  Issue Type: Bug
>Affects Versions: 3.2
>Reporter: Andreas Sahlbach
>
> See documentation "Applying multiple annotation rule of the same type"
> The inner annotation should be usable as a wrapper for the outer annotation, 
> to apply several rules to one element. Therefore the inner annotation has to 
> have the very same defined @Target annotations as the outer annotation. This 
> is not the case for most of the rules, making them practically useless. 
> Example:
> @BeanPropertySetter - its @Target is ElementType.Field
> but its inner List 
> @BeanPropertySetter.List - its @Target is ElementType.Type
> The only place where this is correct are the Create Rules, which are the 
> annotations from the documentation.
> May I suggest to create a test case or example for a multi annotation rule? 
> The provided example for annotated rules was very helpful for me, but is very 
> simple and doesn't cover multi annotated rules.



[jira] [Created] (DIGESTER-174) Inner List Annotation has wrong @Target for most of the predefined annotation rules

2012-10-24 Thread Andreas Sahlbach (JIRA)
Andreas Sahlbach created DIGESTER-174:
-

 Summary: Inner List Annotation has wrong @Target for most of the 
predefined annotation rules
 Key: DIGESTER-174
 URL: https://issues.apache.org/jira/browse/DIGESTER-174
 Project: Commons Digester
  Issue Type: Bug
Affects Versions: 3.2
Reporter: Andreas Sahlbach


See documentation "Applying multiple annotation rule of the same type".
The inner annotation should be usable as a wrapper for the outer annotation, to 
apply several rules to one element. Therefore the inner annotation has to 
declare the very same @Target as the outer annotation. This is not the case 
for most of the rules, making them practically useless. 
Example:
@BeanPropertySetter - its @Target is ElementType.FIELD
but its inner List 
@BeanPropertySetter.List - its @Target is ElementType.TYPE
The only place where this is correct is the Create Rules, which are the 
annotations shown in the documentation.
May I suggest creating a test case or example for a multi-annotation rule? The 
provided example was very helpful for me, but is very simple.
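The required pattern can be sketched with a hypothetical rule annotation (SetProperty is illustrative, not a real Digester type): the inner List container declares the same @Target as the annotation it wraps, so the wrapper is usable on exactly the elements the rule itself targets:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public final class TargetDemo {

    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface SetProperty {
        String pattern();

        // The container must repeat the outer @Target (FIELD here);
        // declaring TYPE instead would make the wrapper unusable on fields.
        @Target(ElementType.FIELD)
        @Retention(RetentionPolicy.RUNTIME)
        @interface List {
            SetProperty[] value();
        }
    }

    /** Reflectively checks that outer and inner @Target values agree. */
    public static boolean targetsMatch() {
        ElementType[] outer = SetProperty.class.getAnnotation(Target.class).value();
        ElementType[] inner = SetProperty.List.class.getAnnotation(Target.class).value();
        return Arrays.equals(outer, inner);
    }
}
```

A test case along these lines, run against each predefined rule and its inner List, would catch the mismatch described in this report.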



[jira] [Commented] (COMPRESS-200) Round trip conversion with more than 66 US-ASCII characters fails when using TarArchiveOutputStream.LONGFILE_GNU

2012-10-24 Thread Christian Schlichtherle (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483230#comment-13483230
 ] 

Christian Schlichtherle commented on COMPRESS-200:
--

Thanks, but I'm busy taking care of my own mess.

> Round trip conversion with more than 66 US-ASCII characters fails when using 
> TarArchiveOutputStream.LONGFILE_GNU
> 
>
> Key: COMPRESS-200
> URL: https://issues.apache.org/jira/browse/COMPRESS-200
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers
>Affects Versions: 1.4.1
> Environment: Any
>Reporter: Christian Schlichtherle
> Fix For: 1.5
>
>
> When using TarArchiveOutputStream.LONGFILE_GNU with an entry name of more 
> than 66 US-ASCII characters, a round trip conversion (write the entry, then 
> read it back) fails because of several bugs in TarArchiveOutputStream and 
> TarArchiveInputStream.
> This has been reported as an issue to TrueZIP, which is why you can find a 
> more detailed analysis here: http://java.net/jira/browse/TRUEZIP-286 .



[jira] [Commented] (MATH-874) New API for optimizers

2012-10-24 Thread Luc Maisonobe (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483231#comment-13483231
 ] 

Luc Maisonobe commented on MATH-874:


Well, in fact there is not really any new CM code here, only a small amount of 
glue code. The code that really changes is user code...

What changes is how users provide the Jacobian. With the former API, the user 
had to provide two interlinked implementations: an implementation of the 
DifferentiableMultivariateVectorFunction interface, which itself was a means to 
retrieve an implementation of the MultivariateMatrixFunction interface. These 
two implementations had to be in different classes, as they both defined a 
method named "value" taking a single double[] parameter, one returning a 
double[] and the other a double[][]. A common way to do this was to use a 
top-level class for one interface and an inner class for the second.

With the newer API, users provide a single class implementing two methods. The 
first method is the same as in the former API and computes the value only. The 
second method merges the value and the Jacobian, and could in fact also provide 
higher-order derivatives, or derivatives with respect to other variables if 
this function were composed with other functions.

The optimizers handle both cases in the same way after initialization. With 
the former API, the optimizer stores references to both user objects (the one 
returning double[] and the one returning double[][]). With the newer API, the 
optimizer stores a reference to the user object and a reference to a wrapper 
around it that extracts the Jacobian from the second method. The underlying 
optimization engine is exactly the same.


What it means for users is the following:

* the part of user code dedicated to set up and call the optimizer is not 
changed at all
* the part of user code dedicated to compute the function value is not changed 
at all
* the part of user code dedicated to compute the function Jacobian is changed

For closed-form functions, the change to Jacobian computation is in fact a 
simplification. Users are no longer required to apply the chain rule by 
themselves; they simply change double variables into DerivativeStructure 
variables and change the +, -, *, ... operators accordingly into calls to add, 
subtract, multiply, ...

Here is an example, reworked from the unit tests:

{code:title=FormerAPI}
public class Brown implements DifferentiableMultivariateVectorFunction {

  public double[] value(double[] variables) {
double[] f = new double[m];
double sum  = -(n + 1);
double prod = 1;
for (int j = 0; j < n; ++j) {
  sum  += variables[j];
  prod *= variables[j];
}
for (int i = 0; i < n; ++i) {
  f[i] = variables[i] + sum;
}
f[n - 1] = prod - 1;
return f;
  }

  public MultivariateMatrixFunction jacobian() {
  return new Internal();
  }

  private class Internal implements MultivariateMatrixFunction {
public double[][] value(double[] variables) {
  double[][] jacobian = new double[m][];
  for (int i = 0; i < m; ++i) {
jacobian[i] = new double[n];
  }

  double prod = 1;
  for (int j = 0; j < n; ++j) {
prod *= variables[j];
for (int i = 0; i < n; ++i) {
  jacobian[i][j] = 1;
}
jacobian[j][j] = 2;
  }

  for (int j = 0; j < n; ++j) {
double temp = variables[j];
if (temp == 0) {
  temp = 1;
  prod = 1;
  for (int k = 0; k < n; ++k) {
if (k != j) {
  prod *= variables[k];
}
  }
}
jacobian[n - 1][j] = prod / temp;
  }

  return jacobian;

}

  }

}
{code}

{code:title=NewerAPI}
public class Brown implements MultivariateDifferentiableVectorFunction {

  public double[] value(double[] variables) {
double[] f = new double[m];
double sum  = -(n + 1);
double prod = 1;
for (int j = 0; j < n; ++j) {
  sum  += variables[j];
  prod *= variables[j];
}
for (int i = 0; i < n; ++i) {
  f[i] = variables[i] + sum;
}
f[n - 1] = prod - 1;
return f;
  }

  public DerivativeStructure[] value(DerivativeStructure[] variables) {
DerivativeStructure[] f = new DerivativeStructure[m];
    DerivativeStructure sum  = variables[0].getField().getZero().subtract(n + 1);
DerivativeStructure prod = variables[0].getField().getOne();
for (int j = 0; j < n; ++j) {
  sum  = sum.add(variables[j]);
  prod = prod.multiply(variables[j]);
}
for (int i = 0; i < n; ++i) {
  f[i] = variables[i].add(sum);
}
f[n - 1] = prod.subtract(1);
return f;
  }

} 
{code}
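The mechanism that makes this rewrite work can be illustrated with a minimal forward-mode "dual number", a toy stand-in for DerivativeStructure (not the actual Commons Math API): each value carries its derivative, and every arithmetic method applies the chain rule internally, so replacing operators with method calls is all the user has to do:

```java
public final class Dual {
    public final double value;      // f(x)
    public final double derivative; // f'(x)

    Dual(double value, double derivative) {
        this.value = value;
        this.derivative = derivative;
    }

    public static Dual variable(double x) { return new Dual(x, 1); } // d/dx x = 1

    public Dual add(Dual o) {
        return new Dual(value + o.value, derivative + o.derivative);
    }

    public Dual subtract(double c) { // constants have zero derivative
        return new Dual(value - c, derivative);
    }

    public Dual multiply(Dual o) { // product rule
        return new Dual(value * o.value, derivative * o.value + value * o.derivative);
    }

    /** f(x) = x*x + x - 1, written with method calls instead of operators. */
    public static Dual f(Dual x) {
        return x.multiply(x).add(x).subtract(1);
    }
}
```

Evaluating f at x = 3 yields both f(3) = 11 and f'(3) = 7 in a single pass, without the user ever writing the derivative formula.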

You can note that with the newer API, creating the second method (with 
DerivativeStructure) from the first method (with double) is straightforward. 
It is mainly co

[jira] [Commented] (COMPRESS-206) TarArchiveOutputStream sometimes writes garbage beyond the end of the archive

2012-10-24 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483223#comment-13483223
 ] 

Gary Gregory commented on COMPRESS-206:
---

If you arm yourself with patience and package the test and fix into an SVN 
diff patch file, the likelihood of this issue being addressed quickly will 
increase tremendously!

> TarArchiveOutputStream sometimes writes garbage beyond the end of the archive
> -
>
> Key: COMPRESS-206
> URL: https://issues.apache.org/jira/browse/COMPRESS-206
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.0, 1.4.1
> Environment: Linux x86
>Reporter: Peter De Maeyer
> Attachments: GarbageBeyondEndTest.java
>
>
> For some combinations of file lengths, the archive created by 
> TarArchiveOutputStream writes garbage beyond the end of the TAR stream. 
> TarArchiveInputStream can still read the stream without problems, but it does 
> not read beyond the garbage. This is problematic for my use case because I 
> write a checksum _after_ the TAR content. If I then try to read the checksum 
> back, I read garbage instead.
> Functional impact:
> * TarArchiveInputStream is asymmetrical with respect to 
> TarArchiveOutputStream, in the sense that TarArchiveInputStream does not read 
> everything that was written by TarArchiveOutputStream.
> * The content is unnecessarily large: the garbage adds roughly 10K of 
> overhead compared to Linux command-line tar.
> This symptom is remarkably similar to #COMPRESS-81, which is supposedly fixed 
> since 1.1. Except for the fact that this issue still exists... I've tested 
> this with 1.0 and 1.4.1.



[jira] [Commented] (COMPRESS-200) Round trip conversion with more than 66 US-ASCII characters fails when using TarArchiveOutputStream.LONGFILE_GNU

2012-10-24 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483222#comment-13483222
 ] 

Gary Gregory commented on COMPRESS-200:
---

As a volunteer here, I can say that if you are not happy with the current state 
of an issue, the best way to move it forward is to put some elbow grease into 
it and provide a patch :)

> Round trip conversion with more than 66 US-ASCII characters fails when using 
> TarArchiveOutputStream.LONGFILE_GNU
> 
>
> Key: COMPRESS-200
> URL: https://issues.apache.org/jira/browse/COMPRESS-200
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers
>Affects Versions: 1.4.1
> Environment: Any
>Reporter: Christian Schlichtherle
> Fix For: 1.5
>
>
> When using TarArchiveOutputStream.LONGFILE_GNU with an entry name of more 
> than 66 US-ASCII characters, a round trip conversion (write the entry, then 
> read it back) fails because of several bugs in TarArchiveOutputStream and 
> TarArchiveInputStream.
> This has been reported as an issue to TrueZIP, which is why you can find a 
> more detailed analysis here: http://java.net/jira/browse/TRUEZIP-286 .



[jira] [Commented] (COMPRESS-200) Round trip conversion with more than 66 US-ASCII characters fails when using TarArchiveOutputStream.LONGFILE_GNU

2012-10-24 Thread Christian Schlichtherle (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483217#comment-13483217
 ] 

Christian Schlichtherle commented on COMPRESS-200:
--

Two months later still nobody cares?

> Round trip conversion with more than 66 US-ASCII characters fails when using 
> TarArchiveOutputStream.LONGFILE_GNU
> 
>
> Key: COMPRESS-200
> URL: https://issues.apache.org/jira/browse/COMPRESS-200
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Archivers
>Affects Versions: 1.4.1
> Environment: Any
>Reporter: Christian Schlichtherle
> Fix For: 1.5
>
>
> When using TarArchiveOutputStream.LONGFILE_GNU with an entry name of more 
> than 66 US-ASCII characters, a round trip conversion (write the entry, then 
> read it back) fails because of several bugs in TarArchiveOutputStream and 
> TarArchiveInputStream.
> This has been reported as an issue to TrueZIP, which is why you can find a 
> more detailed analysis here: http://java.net/jira/browse/TRUEZIP-286 .



[jira] [Commented] (IO-351) The Tailer keeps closing and re-opening file, leads to logs lost while file rotation.

2012-10-24 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483215#comment-13483215
 ] 

Gary Gregory commented on IO-351:
-

Would you care to provide a patch?

> The Tailer keeps closing and re-opening file, leads to logs lost while file 
> rotation.
> -
>
> Key: IO-351
> URL: https://issues.apache.org/jira/browse/IO-351
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.4
> Environment: Linux/Win
>Reporter: Wally Qiao
>
If reOpen is true, some of the log text being read is lost during file rotation.



[jira] [Comment Edited] (COMPRESS-206) TarArchiveOutputStream sometimes writes garbage beyond the end of the archive

2012-10-24 Thread Peter De Maeyer (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483185#comment-13483185
 ] 

Peter De Maeyer edited comment on COMPRESS-206 at 10/24/12 12:55 PM:
-

I think the root cause is that TarArchiveInputStream stops reading the input 
stream once it hits the first EOF record. Any lingering records after that are 
left unconsumed. It would be best to consume these as well.

In {{TarArchiveInputStream.getRecord()}}, I suggest replacing

{code}
} else if (buffer.isEOFRecord(headerBuf)) {
  hasHitEOF = true;
}
{code}

with

{code}
} else if (buffer.isEOFRecord(headerBuf)) {
  // Consume any lingering records after the EOF record
  while (buffer.readRecord() != null) {
    // discard
  }
  hasHitEOF = true;
}
{code}

This fixes the test. It doesn't seem to break any other tests either, although 
I did not run all of them because they take a long time and I didn't have the 
patience.

  was (Author: peterdm):
I think the root cause is that TarArchiveInputStream stops reading the 
input stream once it hits the first EOF record. Any lingering records after 
that are left unconsumed. It would be best to consume these as well.

In {{TarArchiveInputStream.getRecord()}}, I suggest to replace

{code}
  } else if (buffer.isEOFRecord(headerBuf)) {
hasHitEOF = true;
  }
{code}

with

{code}
  } else if (buffer.isEOFRecord(headerBuf)) {
while (buffer.readRecord() != null) { // Consume any lingering records
  ;
}
hasHitEOF = true;
  }
{code}

This fixes the test. It doesn't seem to break any other tests either, although 
I did not run all of them because they take a long time and I didn't have the 
patience.
  
> TarArchiveOutputStream sometimes writes garbage beyond the end of the archive
> -
>
> Key: COMPRESS-206
> URL: https://issues.apache.org/jira/browse/COMPRESS-206
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.0, 1.4.1
> Environment: Linux x86
>Reporter: Peter De Maeyer
> Attachments: GarbageBeyondEndTest.java
>
>
> For some combinations of file lengths, the archive created by 
> TarArchiveOutputStream writes garbage beyond the end of the TAR stream. 
> TarArchiveInputStream can still read the stream without problems, but it does 
> not read beyond the garbage. This is problematic for my use case because I 
> write a checksum _after_ the TAR content. If I then try to read the checksum 
> back, I read garbage instead.
> Functional impact:
> * TarArchiveInputStream is asymmetrical with respect to 
> TarArchiveOutputStream, in the sense that TarArchiveInputStream does not read 
> everything that was written by TarArchiveOutputStream.
> * The content is unnecessarily large: the garbage adds roughly 10K of 
> overhead compared to Linux command-line tar.
> This symptom is remarkably similar to #COMPRESS-81, which is supposedly fixed 
> since 1.1. Except for the fact that this issue still exists... I've tested 
> this with 1.0 and 1.4.1.



[jira] [Commented] (COMPRESS-206) TarArchiveOutputStream sometimes writes garbage beyond the end of the archive

2012-10-24 Thread Peter De Maeyer (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483185#comment-13483185
 ] 

Peter De Maeyer commented on COMPRESS-206:
--

I think the root cause is that TarArchiveInputStream stops reading the input 
stream once it hits the first EOF record. Any lingering records after that are 
left unconsumed. It would be best to consume these as well.

In {{TarArchiveInputStream.getRecord()}}, I suggest replacing

{code}
  ...
  } else if (buffer.isEOFRecord(headerBuf)) {
hasHitEOF = true;
  }
  ...
{code}

with

{code}
  } else if (buffer.isEOFRecord(headerBuf)) {
    // Consume any lingering records after the EOF record
    while (buffer.readRecord() != null) {
      // discard
    }
    hasHitEOF = true;
  }
{code}

This fixes the test. It doesn't seem to break any other tests either, although 
I did not run all of them because they take a long time and I didn't have the 
patience.
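The drain-on-EOF idea can be sketched independently of Commons Compress. Here RecordReader is a hypothetical stand-in for the tar record buffer: it reads fixed 512-byte records and returns null once the underlying stream is exhausted, and drain() consumes whatever lingers after the EOF record:

```java
import java.io.IOException;
import java.io.InputStream;

public final class RecordReader {
    static final int RECORD_SIZE = 512;
    private final InputStream in;

    public RecordReader(InputStream in) { this.in = in; }

    /** Returns the next (possibly partial) record, or null at end of stream. */
    public byte[] readRecord() throws IOException {
        byte[] record = new byte[RECORD_SIZE];
        int n = in.read(record);
        return n < 0 ? null : record;
    }

    /**
     * The suggested fix: once the EOF record has been seen, keep reading and
     * discarding records until the stream is fully consumed, so whatever was
     * written after the archive (e.g. a checksum) is next on the stream.
     */
    public int drain() throws IOException {
        int drained = 0;
        while (readRecord() != null) {
            drained++; // record content is discarded on purpose
        }
        return drained;
    }
}
```

After drain() returns, a caller layering extra data behind the archive would read its own bytes instead of leftover padding.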

> TarArchiveOutputStream sometimes writes garbage beyond the end of the archive
> -
>
> Key: COMPRESS-206
> URL: https://issues.apache.org/jira/browse/COMPRESS-206
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.0, 1.4.1
> Environment: Linux x86
>Reporter: Peter De Maeyer
> Attachments: GarbageBeyondEndTest.java
>
>
> For some combinations of file lengths, the archive created by 
> TarArchiveOutputStream writes garbage beyond the end of the TAR stream. 
> TarArchiveInputStream can still read the stream without problems, but it does 
> not read beyond the garbage. This is problematic for my use case because I 
> write a checksum _after_ the TAR content. If I then try to read the checksum 
> back, I read garbage instead.
> Functional impact:
> * TarArchiveInputStream is asymmetrical with respect to 
> TarArchiveOutputStream, in the sense that TarArchiveInputStream does not read 
> everything that was written by TarArchiveOutputStream.
> * The content is unnecessarily large: the garbage adds roughly 10K of 
> overhead compared to Linux command-line tar.
> This symptom is remarkably similar to #COMPRESS-81, which is supposedly fixed 
> since 1.1. Except for the fact that this issue still exists... I've tested 
> this with 1.0 and 1.4.1.



[jira] [Comment Edited] (COMPRESS-206) TarArchiveOutputStream sometimes writes garbage beyond the end of the archive

2012-10-24 Thread Peter De Maeyer (JIRA)

[ 
https://issues.apache.org/jira/browse/COMPRESS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483185#comment-13483185
 ] 

Peter De Maeyer edited comment on COMPRESS-206 at 10/24/12 12:51 PM:
-

I think the root cause is that TarArchiveInputStream stops reading the input 
stream once it hits the first EOF record. Any lingering records after that are 
left unconsumed. It would be best to consume these as well.

In {{TarArchiveInputStream.getRecord()}}, I suggest replacing

{code}
  } else if (buffer.isEOFRecord(headerBuf)) {
hasHitEOF = true;
  }
{code}

with

{code}
  } else if (buffer.isEOFRecord(headerBuf)) {
    // Consume any lingering records after the EOF record
    while (buffer.readRecord() != null) {
      // discard
    }
    hasHitEOF = true;
  }
{code}

This fixes the test. It doesn't seem to break any other tests either, although 
I did not run all of them because they take a long time and I didn't have the 
patience.

  was (Author: peterdm):
I think the root cause is that TarArchiveInputStream stops reading the 
input stream once it hits the first EOF record. Any lingering records after 
that are left unconsumed. It would be best to consume these as well.

In {{TarArchiveInputStream.getRecord()}}, I suggest to replace

{code}
  ...
  } else if (buffer.isEOFRecord(headerBuf)) {
hasHitEOF = true;
  }
  ...
{code}

with

{code}
  } else if (buffer.isEOFRecord(headerBuf)) {
while (buffer.readRecord() != null) { // Consume any lingering records
  ;
}
hasHitEOF = true;
  }
{code}

This fixes the test. It doesn't seem to break any other tests either, although 
I did not run all of them because they take a long time and I didn't have the 
patience.
  
> TarArchiveOutputStream sometimes writes garbage beyond the end of the archive
> -
>
> Key: COMPRESS-206
> URL: https://issues.apache.org/jira/browse/COMPRESS-206
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.0, 1.4.1
> Environment: Linux x86
>Reporter: Peter De Maeyer
> Attachments: GarbageBeyondEndTest.java
>
>
> For some combinations of file lengths, the archive created by 
> TarArchiveOutputStream writes garbage beyond the end of the TAR stream. 
> TarArchiveInputStream can still read the stream without problems, but it does 
> not read beyond the garbage. This is problematic for my use case because I 
> write a checksum _after_ the TAR content. If I then try to read the checksum 
> back, I read garbage instead.
> Functional impact:
> * TarArchiveInputStream is asymmetrical with respect to 
> TarArchiveOutputStream, in the sense that TarArchiveInputStream does not read 
> everything that was written by TarArchiveOutputStream.
> * The content is unnecessarily large: the garbage adds roughly 10K of 
> overhead compared to Linux command-line tar.
> This symptom is remarkably similar to #COMPRESS-81, which is supposedly fixed 
> since 1.1. Except for the fact that this issue still exists... I've tested 
> this with 1.0 and 1.4.1.



[jira] [Resolved] (OGNL-226) Race condition with OgnlRuntime.getMethod

2012-10-24 Thread Lukasz Lenart (JIRA)

 [ 
https://issues.apache.org/jira/browse/OGNL-226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukasz Lenart resolved OGNL-226.


Resolution: Not A Problem

> Race condition with OgnlRuntime.getMethod
> -
>
> Key: OGNL-226
> URL: https://issues.apache.org/jira/browse/OGNL-226
> Project: Commons OGNL
>  Issue Type: Bug
>  Components: Core Runtime
>Affects Versions: 3.0
>Reporter: Johno Crawford
>Priority: Minor
> Attachments: OgnlRuntimeTest.java
>
>
> If there are two consecutive calls to OgnlRuntime.getMethod before the result 
> is cached it may erroneously return null.
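A common way to close such a window between lookup and caching, assuming the cache can be a concurrent map (this is only an illustrative sketch, not OGNL's actual caching code), is an atomic compute-if-absent:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class MethodCache {
    private final Map<String, Method> cache = new ConcurrentHashMap<>();

    /** Name-only lookup for illustration; a real cache keys on signatures too. */
    public Method getMethod(Class<?> type, String name) {
        // computeIfAbsent runs the lookup atomically: a concurrent caller
        // either waits or sees the fully computed entry, never a half-written
        // one, so null is returned only when the method really is absent.
        return cache.computeIfAbsent(type.getName() + "#" + name, key -> {
            for (Method m : type.getMethods()) {
                if (m.getName().equals(name)) {
                    return m;
                }
            }
            return null; // not cached: ConcurrentHashMap stores no null values
        });
    }
}
```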



[jira] [Resolved] (IO-349) Add API with array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread Gary Gregory (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory resolved IO-349.
-

Resolution: Fixed

It's simple unless the unit tests have subtle bugs ;)

commit -m "[IO-349] Add API with array offset and length argument to 
FileUtils.writeByteArrayToFile. Applied patch with changes: (1) Fixed bugs in 
unit tests (2) Added @since tags (3) Fixed formatting." 
C:/svn/org/apache/commons/trunks-proper/io/src/test/java/org/apache/commons/io/FileUtilsTestCase.java
 
C:/svn/org/apache/commons/trunks-proper/io/src/main/java/org/apache/commons/io/FileUtils.java
 C:/svn/org/apache/commons/trunks-proper/io/src/changes/changes.xml 
C:/svn/org/apache/commons/trunks-proper/io/pom.xml
Sending C:/svn/org/apache/commons/trunks-proper/io/pom.xml
Sending C:/svn/org/apache/commons/trunks-proper/io/src/changes/changes.xml
Sending C:/svn/org/apache/commons/trunks-proper/io/src/main/java/org/apache/commons/io/FileUtils.java
Sending C:/svn/org/apache/commons/trunks-proper/io/src/test/java/org/apache/commons/io/FileUtilsTestCase.java
Transmitting file data ...
Committed revision 1401648.

> Add API with array offset and length argument to 
> FileUtils.writeByteArrayToFile
> ---
>
> Key: IO-349
> URL: https://issues.apache.org/jira/browse/IO-349
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: David Bild
>  Labels: patch
> Fix For: 2.5
>
> Attachments: 
> add_OffsetAndLengthArguments_To_FileUtils_writeByteArrayToFile.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The FileUtils.writeByteArrayToFile method does not allow a subset of an array 
> to be written to a file.  Instead, the subset must be copied to a separate 
> array, increasing the lines of code and (for all JVMs I know about) runtime.
> Sister methods that take an offset and length should be added, in line with 
> the byte-array-oriented methods in the Java standard library. 
> Attached is a patch that implements FileUtils.writeByteArrayToFile(File file, 
> byte[] data, int offset, int length) and FileUtils.writeByteArrayToFile(File 
> file, byte[] data, int offset, int length, boolean append) and associated 
> testcases in FileUtilsTestCase.java.
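The requested variants map directly onto OutputStream.write(byte[], int, int), which avoids the intermediate copy. A minimal sketch of what such a method could look like (a hypothetical stand-in, not the actual Commons IO implementation):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class WriteSlice {
    // Hypothetical sketch of the IO-349 API: write data[off .. off+len) to a
    // file, without copying the slice into a separate array first.
    static void writeByteArrayToFile(File file, byte[] data, int off, int len,
                                     boolean append) throws IOException {
        try (OutputStream out = new FileOutputStream(file, append)) {
            out.write(data, off, len); // the JDK handles offset/length directly
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("io349", ".bin");
        f.deleteOnExit();
        byte[] data = "xxHELLOxx".getBytes("US-ASCII");
        writeByteArrayToFile(f, data, 2, 5, false); // writes only "HELLO"
        System.out.println(f.length());
    }
}
```

The append flag mirrors the existing two-argument/three-argument pairing in FileUtils; the real patch's signatures are as quoted in the description above.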

--


[jira] [Updated] (IO-349) Add API with array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread Gary Gregory (JIRA)

 [ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated IO-349:


Summary: Add API with array offset and length argument to 
FileUtils.writeByteArrayToFile  (was: Add array offset and length argument to 
FileUtils.writeByteArrayToFile)

> Add API with array offset and length argument to 
> FileUtils.writeByteArrayToFile
> ---
>
> Key: IO-349
> URL: https://issues.apache.org/jira/browse/IO-349
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: David Bild
>  Labels: patch
> Fix For: 2.5
>
> Attachments: 
> add_OffsetAndLengthArguments_To_FileUtils_writeByteArrayToFile.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The FileUtils.writeByteArrayToFile method does not allow a subset of an array 
> to be written to a file.  Instead, the subset must be copied to a separate 
> array, increasing the lines of code and (for all JVMs I know about) runtime.
> Sister methods that take an offset and length should be added, in line with 
> the byte-array-oriented methods in the Java standard library. 
> Attached is a patch that implements FileUtils.writeByteArrayToFile(File file, 
> byte[] data, int offset, int length) and FileUtils.writeByteArrayToFile(File 
> file, byte[] data, int offset, int length, boolean append) and associated 
> testcases in FileUtilsTestCase.java.

--


[jira] [Commented] (MATH-757) ResizableDoubleArray is not thread-safe yet has some synch. methods

2012-10-24 Thread Gilles (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483164#comment-13483164
 ] 

Gilles commented on MATH-757:
-

I'm also wary of
* the package-scoped "getInternalLength" method,
* the "expansion mode" being represented as an "int" (and being mutable).

To KISS, I think that we should mimic the standard "Collections" API (as was 
done in "Commons Primitives") and hide/encapsulate everything else (i.e. set 
the behaviour at construction time).

For better separation of concerns, I'd also suggest moving all "rolling" 
features to a new class that would inherit from a trimmed-down 
"ResizableDoubleArray" (i.e. one concerned only with resizable-array features, 
à la "Commons Primitives").


> ResizableDoubleArray is not thread-safe yet has some synch. methods
> ---
>
> Key: MATH-757
> URL: https://issues.apache.org/jira/browse/MATH-757
> Project: Commons Math
>  Issue Type: Bug
>Reporter: Sebb
> Fix For: 3.1
>
>
> ResizableDoubleArray has several synchronised methods, but is not 
> thread-safe, because class variables are not always accessed using the lock.
> Is the class supposed to be thread-safe?
> If so, all accesses (read and write) need to be synch.
> If not, the synch. qualifiers could be dropped.
> In any case, the protected fields need to be made private.
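A toy illustration of the point in the report (hypothetical code, deliberately much simpler than ResizableDoubleArray): when only the writer is synchronized, a reader has no happens-before edge with it and may observe stale or inconsistent state.

```java
public class PartialSync {
    private double[] data = new double[4];
    private int count = 0;

    public synchronized void add(double v) {   // writes happen under the lock
        if (count == data.length) {
            double[] bigger = new double[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, count);
            data = bigger;
        }
        data[count++] = v;
    }

    public int size() {    // ...but this read does not take the lock, so a
        return count;      // concurrent caller may see a stale count
    }

    public static void main(String[] args) {
        PartialSync a = new PartialSync();
        for (int i = 0; i < 10; i++) a.add(i);
        System.out.println(a.size());
    }
}
```

Making every access synchronized, or dropping synchronization entirely and documenting the class as not thread-safe, would each be consistent; mixing the two, as the report notes, is neither.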

--


[jira] [Commented] (OGNL-226) Race condition with OgnlRuntime.getMethod

2012-10-24 Thread Johno Crawford (JIRA)

[ 
https://issues.apache.org/jira/browse/OGNL-226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483160#comment-13483160
 ] 

Johno Crawford commented on OGNL-226:
-

Nope, the bug does not exist in Apache OGNL. Also, ignore my statement about 
ConcurrentHashMap: it looks like it is used inside a wrapper (which falls back 
to a plain HashMap plus synchronization below JDK 1.5).
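For readers following along, the symptom described in the report is the classic check-then-act pattern on a shared cache. A sketch (hypothetical names, not the actual OGNL code) of the race-free lookup that a ConcurrentHashMap-backed wrapper makes possible:

```java
import java.util.concurrent.ConcurrentHashMap;

public class CacheLookup {
    // putIfAbsent makes the check-then-act atomic per key, so a second caller
    // can never observe a half-published entry. (The broken variant would
    // get() from a plain unsynchronized HashMap, find null, and return it
    // before the first caller finished its put().)
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    static String lookup(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = "resolved:" + key;             // stand-in for the reflective lookup
            String prev = cache.putIfAbsent(key, v);
            if (prev != null) v = prev;        // another thread won the race
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(lookup("getMethod"));
    }
}
```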

> Race condition with OgnlRuntime.getMethod
> -
>
> Key: OGNL-226
> URL: https://issues.apache.org/jira/browse/OGNL-226
> Project: Commons OGNL
>  Issue Type: Bug
>  Components: Core Runtime
>Affects Versions: 3.0
>Reporter: Johno Crawford
>Priority: Minor
> Attachments: OgnlRuntimeTest.java
>
>
> If there are two consecutive calls to OgnlRuntime.getMethod before the result 
> is cached, it may erroneously return null.

--


[jira] [Commented] (IO-349) Add array offset and length argument to FileUtils.writeByteArrayToFile

2012-10-24 Thread David Bild (JIRA)

[ 
https://issues.apache.org/jira/browse/IO-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483139#comment-13483139
 ] 

David Bild commented on IO-349:
---

No comments?  Is this functionality already available and I just missed it?  
Seems like a pretty simple patch to accept if not.

> Add array offset and length argument to FileUtils.writeByteArrayToFile
> --
>
> Key: IO-349
> URL: https://issues.apache.org/jira/browse/IO-349
> Project: Commons IO
>  Issue Type: Improvement
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: David Bild
>  Labels: patch
> Fix For: 2.5
>
> Attachments: 
> add_OffsetAndLengthArguments_To_FileUtils_writeByteArrayToFile.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The FileUtils.writeByteArrayToFile method does not allow a subset of an array 
> to be written to a file.  Instead, the subset must be copied to a separate 
> array, increasing the lines of code and (for all JVMs I know about) runtime.
> Sister methods that take an offset and length should be added, in line with 
> the byte-array-oriented methods in the Java standard library. 
> Attached is a patch that implements FileUtils.writeByteArrayToFile(File file, 
> byte[] data, int offset, int length) and FileUtils.writeByteArrayToFile(File 
> file, byte[] data, int offset, int length, boolean append) and associated 
> testcases in FileUtilsTestCase.java.

--


[jira] [Updated] (COMPRESS-206) TarArchiveOutputStream sometimes writes garbage beyond the end of the archive

2012-10-24 Thread Peter De Maeyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/COMPRESS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter De Maeyer updated COMPRESS-206:
-

Attachment: GarbageBeyondEndTest.java

Attached is a stand-alone JUnit test illustrating the problem.

> TarArchiveOutputStream sometimes writes garbage beyond the end of the archive
> -
>
> Key: COMPRESS-206
> URL: https://issues.apache.org/jira/browse/COMPRESS-206
> Project: Commons Compress
>  Issue Type: Bug
>  Components: Compressors
>Affects Versions: 1.0, 1.4.1
> Environment: Linux x86
>Reporter: Peter De Maeyer
> Attachments: GarbageBeyondEndTest.java
>
>
> For some combinations of file lengths, TarArchiveOutputStream writes garbage 
> beyond the end of the TAR stream in the archive it creates. 
> TarArchiveInputStream can still read the stream without problems, but it does 
> not read beyond the garbage. This is problematic for my use case because I 
> write a checksum _after_ the TAR content. If I then try to read the checksum 
> back, I read garbage instead.
> Functional impact:
> * TarArchiveInputStream is asymmetrical with respect to 
> TarArchiveOutputStream, in the sense that TarArchiveInputStream does not read 
> everything that was written by TarArchiveOutputStream.
> * The archive is unnecessarily large: the garbage adds ~10K of overhead 
> compared to Linux command-line tar.
> This symptom is remarkably similar to COMPRESS-81, which has supposedly been 
> fixed since 1.1, except that this issue still exists. I've tested this with 
> 1.0 and 1.4.1.

--


[jira] [Created] (COMPRESS-206) TarArchiveOutputStream sometimes writes garbage beyond the end of the archive

2012-10-24 Thread Peter De Maeyer (JIRA)
Peter De Maeyer created COMPRESS-206:


 Summary: TarArchiveOutputStream sometimes writes garbage beyond 
the end of the archive
 Key: COMPRESS-206
 URL: https://issues.apache.org/jira/browse/COMPRESS-206
 Project: Commons Compress
  Issue Type: Bug
  Components: Compressors
Affects Versions: 1.4.1, 1.0
 Environment: Linux x86
Reporter: Peter De Maeyer
 Attachments: GarbageBeyondEndTest.java

For some combinations of file lengths, TarArchiveOutputStream writes garbage 
beyond the end of the TAR stream in the archive it creates. 
TarArchiveInputStream can still read the stream without problems, but it does 
not read beyond the garbage. This is problematic for my use case because I 
write a checksum _after_ the TAR content. If I then try to read the checksum 
back, I read garbage instead.

Functional impact:
* TarArchiveInputStream is asymmetrical with respect to TarArchiveOutputStream, 
in the sense that TarArchiveInputStream does not read everything that was 
written by TarArchiveOutputStream.
* The archive is unnecessarily large: the garbage adds ~10K of overhead 
compared to Linux command-line tar.

This symptom is remarkably similar to COMPRESS-81, which has supposedly been 
fixed since 1.1, except that this issue still exists. I've tested this with 
1.0 and 1.4.1.
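For context on the ~10K figure: a POSIX tar stream ends with two 512-byte zero records, and tar implementations typically pad the output to a multiple of the blocking factor (10240 bytes by default for GNU tar). Assuming Commons Compress pads similarly, the layout arithmetic for a single small entry looks like this (a sketch of the format, not of the library's code):

```java
public class TarPadding {
    // Round n up to the next multiple of 'multiple'.
    static long roundUp(long n, long multiple) {
        return ((n + multiple - 1) / multiple) * multiple;
    }

    public static void main(String[] args) {
        long content = 1_000;                        // one small file
        long entry = 512 + roundUp(content, 512);    // header + padded content
        long eof = 2 * 512;                          // two zero end-of-archive records
        long archive = roundUp(entry + eof, 10_240); // pad to the blocking factor
        System.out.println(archive);                 // total bytes on disk
    }
}
```

A payload of well under 3K thus ends up as a 10240-byte archive, which matches the order of magnitude of the reported overhead.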

--


[jira] [Commented] (OGNL-224) Performance - Locking and performance problem with OgnlRuntime.findParameterTypes

2012-10-24 Thread Lukasz Lenart (JIRA)

[ 
https://issues.apache.org/jira/browse/OGNL-224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483068#comment-13483068
 ] 

Lukasz Lenart commented on OGNL-224:


The plan is ;-) I started working on that a few days ago. Basically, what's 
needed is to satisfy the RM's requirements, which means all the reports have to 
be green :-)

http://commons.apache.org/ognl/project-reports.html

> Performance - Locking and performance problem with 
> OgnlRuntime.findParameterTypes
> -
>
> Key: OGNL-224
> URL: https://issues.apache.org/jira/browse/OGNL-224
> Project: Commons OGNL
>  Issue Type: Improvement
>  Components: Core Runtime
>Affects Versions: 3.0
>Reporter: Pelladi Gabor
>Assignee: Lukasz Lenart
>Priority: Minor
>  Labels: patch, perfomance
> Fix For: 3.0
>
> Attachments: OGNL-224.patch, OgnlRuntimeTest.java
>
>
> I am using struts2 and under heavy load (around 100 threads) many threads are 
> in BLOCKED state because of OgnlRuntime.findParameterTypes(). The actions we 
> use have a generic superclass like:
> public class PersonalCaptureAction extends DataCaptureAction
> OGNL handles this very badly: it enters
> synchronized (_genericMethodParameterTypesCache)
> all the time, at every property access of the Action. A possible workaround 
> is to introduce another layer of superclass that is not generic.
> I know that in current OGNL trunk (4.0-SNAPSHOT) caching has been rewritten, 
> but Struts2 is using 3.0.5, and maybe it could be fixed as 3.0.6 in the git 
> tree I found:
> https://github.com/jkuhnert/ognl
> I will attach a patch and a testcase.
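The "another layer of superclass that is not generic" workaround can be sketched as follows (hypothetical class names; the generic base is a stand-in for the real action hierarchy). The idea is that the intermediate layer fixes the type parameter to a concrete type, so the classes below it no longer have a generic direct superclass driving OGNL into the synchronized generic-parameter-type resolution:

```java
import java.lang.reflect.Method;

// Generic base, as in the report (hypothetical stand-in for the real action).
abstract class DataCaptureAction<T> {
    public void setPayload(T payload) { }
}

// The workaround: an intermediate, non-generic layer that pins T to a
// concrete type, one level above the concrete actions.
abstract class CaptureBase extends DataCaptureAction<String> { }

public class PersonalCaptureAction extends CaptureBase {
    public static void main(String[] args) throws Exception {
        // Due to erasure, the compiled method takes Object; its *generic*
        // parameter type is the type variable T, which is what runtimes like
        // OGNL must resolve against the subclass hierarchy.
        Method m = DataCaptureAction.class.getMethod("setPayload", Object.class);
        System.out.println(m.getGenericParameterTypes()[0]);
    }
}
```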

--


[jira] [Commented] (OGNL-224) Performance - Locking and performance problem with OgnlRuntime.findParameterTypes

2012-10-24 Thread Pelladi Gabor (JIRA)

[ 
https://issues.apache.org/jira/browse/OGNL-224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483037#comment-13483037
 ] 

Pelladi Gabor commented on OGNL-224:


Thank you. Monday and Tuesday were national holidays in Hungary, which is why I 
did not read your answer until today.
By the way, is there any plan to release OGNL 4.0, and for Struts2 to depend on 
OGNL 4.0? I saw some activity in 2011, but not so much this year.

> Performance - Locking and performance problem with 
> OgnlRuntime.findParameterTypes
> -
>
> Key: OGNL-224
> URL: https://issues.apache.org/jira/browse/OGNL-224
> Project: Commons OGNL
>  Issue Type: Improvement
>  Components: Core Runtime
>Affects Versions: 3.0
>Reporter: Pelladi Gabor
>Assignee: Lukasz Lenart
>Priority: Minor
>  Labels: patch, perfomance
> Fix For: 3.0
>
> Attachments: OGNL-224.patch, OgnlRuntimeTest.java
>
>
> I am using struts2 and under heavy load (around 100 threads) many threads are 
> in BLOCKED state because of OgnlRuntime.findParameterTypes(). The actions we 
> use have a generic superclass like:
> public class PersonalCaptureAction extends DataCaptureAction
> OGNL handles this very badly: it enters
> synchronized (_genericMethodParameterTypesCache)
> all the time, at every property access of the Action. A possible workaround 
> is to introduce another layer of superclass that is not generic.
> I know that in current OGNL trunk (4.0-SNAPSHOT) caching has been rewritten, 
> but Struts2 is using 3.0.5, and maybe it could be fixed as 3.0.6 in the git 
> tree I found:
> https://github.com/jkuhnert/ognl
> I will attach a patch and a testcase.

--