[jira] [Resolved] (MATH-931) Speed up UnitSphereRandomVectorGenerator for high dimensions

2013-01-30 Thread Gilles (JIRA)

 [ 
https://issues.apache.org/jira/browse/MATH-931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilles resolved MATH-931.
-

   Resolution: Fixed
Fix Version/s: 3.2

> Speed up UnitSphereRandomVectorGenerator for high dimensions
> 
>
> Key: MATH-931
> URL: https://issues.apache.org/jira/browse/MATH-931
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Sean Owen
>Priority: Minor
> Fix For: 3.2
>
> Attachments: MATH-931.patch
>
>
> I have a small proposal to improve the speed of 
> UnitSphereRandomVectorGenerator. This class picks a random point on the unit 
> n-sphere -- a unit vector, chosen uniformly from all possible directions.
> It does so using a rejection process -- generates a random point in the unit 
> n-cube (well, with side lengths 2) and rejects any points outside the unit 
> n-sphere, then normalizes the length. This is correct and works well at low 
> dimension. However the volume of the unit n-sphere compared to the unit 
> n-cube drops exponentially. This method eventually takes an extraordinary 
> amount of time when dimensions get past about 20, since virtually no samples 
> are usable.
> For example, here is the time in milliseconds taken to pick 10 points as a 
> function of dimension up to 20:
> 1 : 11
> 2 : 1
> 3 : 0
> 4 : 1
> 5 : 0
> 6 : 1
> 7 : 1
> 8 : 17
> 9 : 4
> 10 : 3
> 11 : 13
> 12 : 32
> 13 : 15
> 14 : 41
> 15 : 220
> 16 : 897
> 17 : 1770
> 18 : 7426
> 19 : 48457
> 20 : 122647
> ...
> It's equally correct, and much faster, to generate these points by picking n 
> standard Gaussians and normalizing. This method takes negligible time even 
> into thousands of dimensions.
> Patch coming.
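
For context, the rejection method's acceptance probability is the volume ratio
vol(unit n-ball) / vol([-1,1]^n) = pi^(n/2) / (Gamma(n/2 + 1) * 2^n), roughly
2.5e-8 at n = 20, which matches the blow-up in the timings above. A minimal
sketch of the Gaussian approach described in the report (illustrative only,
not the committed Commons Math code; class and method names here are ours):

{code}
import java.util.Random;

public class UnitSphereSketch {
    /** Returns a vector drawn uniformly from the unit sphere in R^n. */
    public static double[] nextUnitVector(int n, Random rng) {
        double[] v = new double[n];
        double normSq = 0.0;
        for (int i = 0; i < n; i++) {
            v[i] = rng.nextGaussian(); // one standard Gaussian per coordinate
            normSq += v[i] * v[i];
        }
        double norm = Math.sqrt(normSq);
        for (int i = 0; i < n; i++) {
            v[i] /= norm; // normalize to unit length
        }
        return v;
    }
}
{code}

This works because the joint density of n independent standard Gaussians
depends only on the radius, so the normalized vector is uniform over all
directions, with no rejection step at any dimension.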

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-931) Speed up UnitSphereRandomVectorGenerator for high dimensions

2013-01-30 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567097#comment-13567097
 ] 

Sean Owen commented on MATH-931:


You can omit that test, of the two. The original motivation here was that the 
method could take minutes or hours to run when dimensions got into the high 
tens. The test just asserted that it ran within 10 seconds -- it should take 
more like a nanosecond now.

> Speed up UnitSphereRandomVectorGenerator for high dimensions
> 
>
> Key: MATH-931
> URL: https://issues.apache.org/jira/browse/MATH-931
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Sean Owen
>Priority: Minor
> Attachments: MATH-931.patch
>
>
> I have a small proposal to improve the speed of 
> UnitSphereRandomVectorGenerator. This class picks a random point on the unit 
> n-sphere -- a unit vector, chosen uniformly from all possible directions.
> It does so using a rejection process -- generates a random point in the unit 
> n-cube (well, with side lengths 2) and rejects any points outside the unit 
> n-sphere, then normalizes the length. This is correct and works well at low 
> dimension. However the volume of the unit n-sphere compared to the unit 
> n-cube drops exponentially. This method eventually takes an extraordinary 
> amount of time when dimensions get past about 20, since virtually no samples 
> are usable.
> For example, here is the time in milliseconds taken to pick 10 points as a 
> function of dimension up to 20:
> 1 : 11
> 2 : 1
> 3 : 0
> 4 : 1
> 5 : 0
> 6 : 1
> 7 : 1
> 8 : 17
> 9 : 4
> 10 : 3
> 11 : 13
> 12 : 32
> 13 : 15
> 14 : 41
> 15 : 220
> 16 : 897
> 17 : 1770
> 18 : 7426
> 19 : 48457
> 20 : 122647
> ...
> It's equally correct, and much faster, to generate these points by picking n 
> standard Gaussians and normalizing. This method takes negligible time even 
> into thousands of dimensions.
> Patch coming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CODEC-166) Base64 could be faster

2013-01-30 Thread Julius Davies (JIRA)

 [ 
https://issues.apache.org/jira/browse/CODEC-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julius Davies updated CODEC-166:


Attachment: CODEC-166.draft.patch

Here's one way of doing it: retrofit MiGBase64.java so that it becomes our 
back-end for all the byte[]- and String-based methods.

All the unit tests still pass!  :-)

Of course this patch still needs some work to clean up documentation and code 
style, but I thought I'd put it out there for comment.

Here's the benchmark run now:

{noformat}
  LARGE DATA new byte[12345]

iHarder...
encode 471.0 MB/s    decode 158.0 MB/s
encode 495.0 MB/s    decode 155.0 MB/s

MiGBase64...
encode 497.0 MB/s    decode 215.0 MB/s
encode 510.0 MB/s    decode 211.0 MB/s

Apache Commons Codec...
encode 556.0 MB/s    decode 224.0 MB/s
encode 553.0 MB/s    decode 226.0 MB/s
{noformat}

Encode speed-up about 350% and decode speed-up about 50%.

> Base64 could be faster
> --
>
> Key: CODEC-166
> URL: https://issues.apache.org/jira/browse/CODEC-166
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Julius Davies
>Assignee: Julius Davies
> Attachments: base64bench.zip, CODEC-166.draft.patch
>
>
> Our Base64 consistently performs about 3 times slower than MiGBase64 and 
> iHarder in the byte[] and String encode() methods.
> We are pretty good on decode(), though a little slower (approx. 33% slower) 
> than MiGBase64.
> We always win in the Streaming methods (MiGBase64 doesn't do streaming).  
> Yay!  :-) :-) :-)
> I put together a benchmark.  Here's a typical run:
> {noformat}
>   LARGE DATA new byte[12345]
> iHarder...
> encode 486.0 MB/s    decode 158.0 MB/s
> encode 491.0 MB/s    decode 148.0 MB/s
> MiGBase64...
> encode 499.0 MB/s    decode 222.0 MB/s
> encode 493.0 MB/s    decode 226.0 MB/s
> Apache Commons Codec...
> encode 142.0 MB/s    decode 146.0 MB/s
> encode 138.0 MB/s    decode 150.0 MB/s
> {noformat}
> I believe the main way to improve performance is to avoid array copies at 
> all costs.  MiGBase64 even counts the number of valid Base64 characters 
> ahead of time in decode() to precalculate the result's size and avoid any 
> array copying!
> I suspect this will mean writing out separate execution paths for the String 
> and byte[] methods, and keeping them out of the streaming logic, since the 
> streaming logic is built on array copying.
> Unfortunately this means less internal reuse of the streaming implementation, 
> but I think it's the only way to improve performance.
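
To illustrate the size-precalculation trick described above (a sketch in the
spirit of MiGBase64, using our own names, not code from the attached patch):
counting the alphabet characters up front gives the exact decoded length, so
the output array can be allocated once and never copied.

{code}
public final class Base64LengthSketch {
    /** Exact decoded byte count for standard-alphabet Base64 input. */
    public static int decodedLength(byte[] in) {
        int valid = 0;
        for (byte b : in) {
            // Count only alphabet characters; skip '=', CR/LF, whitespace.
            if ((b >= 'A' && b <= 'Z') || (b >= 'a' && b <= 'z')
                    || (b >= '0' && b <= '9') || b == '+' || b == '/') {
                valid++;
            }
        }
        return (valid * 6) / 8; // each Base64 character carries 6 bits
    }
}
{code}

With the exact length known up front, decode() can write directly into the
final byte[] instead of growing a buffer via System.arraycopy.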

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MATH-931) Speed up UnitSphereRandomVectorGenerator for high dimensions

2013-01-30 Thread Gilles (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567087#comment-13567087
 ] 

Gilles commented on MATH-931:
-

Efficiency improvement committed in revision 1440734.

However, I did not yet include the speed test, which I don't quite understand.


> Speed up UnitSphereRandomVectorGenerator for high dimensions
> 
>
> Key: MATH-931
> URL: https://issues.apache.org/jira/browse/MATH-931
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Sean Owen
>Priority: Minor
> Attachments: MATH-931.patch
>
>
> I have a small proposal to improve the speed of 
> UnitSphereRandomVectorGenerator. This class picks a random point on the unit 
> n-sphere -- a unit vector, chosen uniformly from all possible directions.
> It does so using a rejection process -- generates a random point in the unit 
> n-cube (well, with side lengths 2) and rejects any points outside the unit 
> n-sphere, then normalizes the length. This is correct and works well at low 
> dimension. However the volume of the unit n-sphere compared to the unit 
> n-cube drops exponentially. This method eventually takes an extraordinary 
> amount of time when dimensions get past about 20, since virtually no samples 
> are usable.
> For example, here is the time in milliseconds taken to pick 10 points as a 
> function of dimension up to 20:
> 1 : 11
> 2 : 1
> 3 : 0
> 4 : 1
> 5 : 0
> 6 : 1
> 7 : 1
> 8 : 17
> 9 : 4
> 10 : 3
> 11 : 13
> 12 : 32
> 13 : 15
> 14 : 41
> 15 : 220
> 16 : 897
> 17 : 1770
> 18 : 7426
> 19 : 48457
> 20 : 122647
> ...
> It's equally correct, and much faster, to generate these points by picking n 
> standard Gaussians and normalizing. This method takes negligible time even 
> into thousands of dimensions.
> Patch coming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CODEC-166) Base64 could be faster

2013-01-30 Thread Sebb (JIRA)

 [ 
https://issues.apache.org/jira/browse/CODEC-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebb updated CODEC-166:
---

Fix Version/s: (was: 1.8)

> Base64 could be faster
> --
>
> Key: CODEC-166
> URL: https://issues.apache.org/jira/browse/CODEC-166
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Julius Davies
>Assignee: Julius Davies
> Attachments: base64bench.zip
>
>
> Our Base64 consistently performs about 3 times slower than MiGBase64 and 
> iHarder in the byte[] and String encode() methods.
> We are pretty good on decode(), though a little slower (approx. 33% slower) 
> than MiGBase64.
> We always win in the Streaming methods (MiGBase64 doesn't do streaming).  
> Yay!  :-) :-) :-)
> I put together a benchmark.  Here's a typical run:
> {noformat}
>   LARGE DATA new byte[12345]
> iHarder...
> encode 486.0 MB/s    decode 158.0 MB/s
> encode 491.0 MB/s    decode 148.0 MB/s
> MiGBase64...
> encode 499.0 MB/s    decode 222.0 MB/s
> encode 493.0 MB/s    decode 226.0 MB/s
> Apache Commons Codec...
> encode 142.0 MB/s    decode 146.0 MB/s
> encode 138.0 MB/s    decode 150.0 MB/s
> {noformat}
> I believe the main way to improve performance is to avoid array copies at 
> all costs.  MiGBase64 even counts the number of valid Base64 characters 
> ahead of time in decode() to precalculate the result's size and avoid any 
> array copying!
> I suspect this will mean writing out separate execution paths for the String 
> and byte[] methods, and keeping them out of the streaming logic, since the 
> streaming logic is built on array copying.
> Unfortunately this means less internal reuse of the streaming implementation, 
> but I think it's the only way to improve performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CODEC-166) Base64 could be faster

2013-01-30 Thread Julius Davies (JIRA)

[ 
https://issues.apache.org/jira/browse/CODEC-166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567068#comment-13567068
 ] 

Julius Davies edited comment on CODEC-166 at 1/30/13 11:29 PM:
---

This could also help conform to user expectations.  For example, if we write 
pure-static Base64 implementations for the static String and byte[] methods, 
that's another way to solve CODEC-165.

(CODEC-165 is what got me thinking about this).

  was (Author: juliusdavies):
This could also help conform to user expectations.  If we write pure-static 
implementations just for these cases, that would also solve CODEC-165.
  
> Base64 could be faster
> --
>
> Key: CODEC-166
> URL: https://issues.apache.org/jira/browse/CODEC-166
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Julius Davies
>Assignee: Julius Davies
> Fix For: 1.8
>
> Attachments: base64bench.zip
>
>
> Our Base64 consistently performs about 3 times slower than MiGBase64 and 
> iHarder in the byte[] and String encode() methods.
> We are pretty good on decode(), though a little slower (approx. 33% slower) 
> than MiGBase64.
> We always win in the Streaming methods (MiGBase64 doesn't do streaming).  
> Yay!  :-) :-) :-)
> I put together a benchmark.  Here's a typical run:
> {noformat}
>   LARGE DATA new byte[12345]
> iHarder...
> encode 486.0 MB/s    decode 158.0 MB/s
> encode 491.0 MB/s    decode 148.0 MB/s
> MiGBase64...
> encode 499.0 MB/s    decode 222.0 MB/s
> encode 493.0 MB/s    decode 226.0 MB/s
> Apache Commons Codec...
> encode 142.0 MB/s    decode 146.0 MB/s
> encode 138.0 MB/s    decode 150.0 MB/s
> {noformat}
> I believe the main way to improve performance is to avoid array copies at 
> all costs.  MiGBase64 even counts the number of valid Base64 characters 
> ahead of time in decode() to precalculate the result's size and avoid any 
> array copying!
> I suspect this will mean writing out separate execution paths for the String 
> and byte[] methods, and keeping them out of the streaming logic, since the 
> streaming logic is built on array copying.
> Unfortunately this means less internal reuse of the streaming implementation, 
> but I think it's the only way to improve performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CODEC-166) Base64 could be faster

2013-01-30 Thread Julius Davies (JIRA)

 [ 
https://issues.apache.org/jira/browse/CODEC-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julius Davies updated CODEC-166:


Attachment: base64bench.zip

Here's the benchmark I wrote (base64bench.zip).

Instructions to run it are in the README.txt.

> Base64 could be faster
> --
>
> Key: CODEC-166
> URL: https://issues.apache.org/jira/browse/CODEC-166
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Julius Davies
>Assignee: Julius Davies
> Fix For: 1.8
>
> Attachments: base64bench.zip
>
>
> Our Base64 consistently performs about 3 times slower than MiGBase64 and 
> iHarder in the byte[] and String encode() methods.
> We are pretty good on decode(), though a little slower (approx. 33% slower) 
> than MiGBase64.
> We always win in the Streaming methods (MiGBase64 doesn't do streaming).  
> Yay!  :-) :-) :-)
> I put together a benchmark.  Here's a typical run:
> {noformat}
>   LARGE DATA new byte[12345]
> iHarder...
> encode 486.0 MB/s    decode 158.0 MB/s
> encode 491.0 MB/s    decode 148.0 MB/s
> MiGBase64...
> encode 499.0 MB/s    decode 222.0 MB/s
> encode 493.0 MB/s    decode 226.0 MB/s
> Apache Commons Codec...
> encode 142.0 MB/s    decode 146.0 MB/s
> encode 138.0 MB/s    decode 150.0 MB/s
> {noformat}
> I believe the main way to improve performance is to avoid array copies at 
> all costs.  MiGBase64 even counts the number of valid Base64 characters 
> ahead of time in decode() to precalculate the result's size and avoid any 
> array copying!
> I suspect this will mean writing out separate execution paths for the String 
> and byte[] methods, and keeping them out of the streaming logic, since the 
> streaming logic is built on array copying.
> Unfortunately this means less internal reuse of the streaming implementation, 
> but I think it's the only way to improve performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CODEC-166) Base64 could be faster

2013-01-30 Thread Julius Davies (JIRA)

[ 
https://issues.apache.org/jira/browse/CODEC-166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13567068#comment-13567068
 ] 

Julius Davies commented on CODEC-166:
-

This could also help conform to user expectations.  If we write pure-static 
implementations just for these cases, that would also solve CODEC-165.

> Base64 could be faster
> --
>
> Key: CODEC-166
> URL: https://issues.apache.org/jira/browse/CODEC-166
> Project: Commons Codec
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Julius Davies
>Assignee: Julius Davies
> Fix For: 1.8
>
>
> Our Base64 consistently performs about 3 times slower than MiGBase64 and 
> iHarder in the byte[] and String encode() methods.
> We are pretty good on decode(), though a little slower (approx. 33% slower) 
> than MiGBase64.
> We always win in the Streaming methods (MiGBase64 doesn't do streaming).  
> Yay!  :-) :-) :-)
> I put together a benchmark.  Here's a typical run:
> {noformat}
>   LARGE DATA new byte[12345]
> iHarder...
> encode 486.0 MB/s    decode 158.0 MB/s
> encode 491.0 MB/s    decode 148.0 MB/s
> MiGBase64...
> encode 499.0 MB/s    decode 222.0 MB/s
> encode 493.0 MB/s    decode 226.0 MB/s
> Apache Commons Codec...
> encode 142.0 MB/s    decode 146.0 MB/s
> encode 138.0 MB/s    decode 150.0 MB/s
> {noformat}
> I believe the main way to improve performance is to avoid array copies at 
> all costs.  MiGBase64 even counts the number of valid Base64 characters 
> ahead of time in decode() to precalculate the result's size and avoid any 
> array copying!
> I suspect this will mean writing out separate execution paths for the String 
> and byte[] methods, and keeping them out of the streaming logic, since the 
> streaming logic is built on array copying.
> Unfortunately this means less internal reuse of the streaming implementation, 
> but I think it's the only way to improve performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CODEC-166) Base64 could be faster

2013-01-30 Thread Julius Davies (JIRA)
Julius Davies created CODEC-166:
---

 Summary: Base64 could be faster
 Key: CODEC-166
 URL: https://issues.apache.org/jira/browse/CODEC-166
 Project: Commons Codec
  Issue Type: Bug
Affects Versions: 1.7
Reporter: Julius Davies
Assignee: Julius Davies
 Fix For: 1.8


Our Base64 consistently performs about 3 times slower than MiGBase64 and 
iHarder in the byte[] and String encode() methods.

We are pretty good on decode(), though a little slower (approx. 33% slower) 
than MiGBase64.

We always win in the Streaming methods (MiGBase64 doesn't do streaming).  Yay!  
:-) :-) :-)

I put together a benchmark.  Here's a typical run:

{noformat}
  LARGE DATA new byte[12345]

iHarder...
encode 486.0 MB/s    decode 158.0 MB/s
encode 491.0 MB/s    decode 148.0 MB/s

MiGBase64...
encode 499.0 MB/s    decode 222.0 MB/s
encode 493.0 MB/s    decode 226.0 MB/s

Apache Commons Codec...
encode 142.0 MB/s    decode 146.0 MB/s
encode 138.0 MB/s    decode 150.0 MB/s
{noformat}

I believe the main way to improve performance is to avoid array copies at all 
costs.  MiGBase64 even counts the number of valid Base64 characters ahead of 
time in decode() to precalculate the result's size and avoid any array copying!

I suspect this will mean writing out separate execution paths for the String 
and byte[] methods, and keeping them out of the streaming logic, since the 
streaming logic is built on array copying.

Unfortunately this means less internal reuse of the streaming implementation, 
but I think it's the only way to improve performance.




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CODEC-161) Add Match Rating Approach (MRA) phonetic algorithm encoder

2013-01-30 Thread Julius Davies (JIRA)

[ 
https://issues.apache.org/jira/browse/CODEC-161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566966#comment-13566966
 ] 

Julius Davies commented on CODEC-161:
-

You can do this to run just Cobertura:

{noformat}
mvn cobertura:cobertura
{noformat}

(I usually do {noformat}mvn clean{noformat} first.)

> Add Match Rating Approach (MRA) phonetic algorithm encoder
> --
>
> Key: CODEC-161
> URL: https://issues.apache.org/jira/browse/CODEC-161
> Project: Commons Codec
>  Issue Type: New Feature
>Affects Versions: 1.6
>Reporter: Colm Rice
>Priority: Minor
>  Labels: newbie
> Fix For: 1.8
>
> Attachments: CODEC-161-18Jan2013.patch, CODEC-161-23Jan2013.patch, 
> CODEC-161-24Jan2013.patch, CODEC-161-MatchRatingApproach.patch, 
> CODEC-161.patch, CODEC-161.patch, CODEC-161.patch, CODEC-161.patch, 
> CODEC-161.patch, Code_Coverage_EclEmma_MRA_TargetAlgo_03Dec2012.jpg, 
> CODED-161.patch, MRA_Cobertura_CodeCoverage_18Jan2013.jpg, 
> MRA_Cobertura_Code_Coverage_DeMorganElseIfWorkaround.jpg, 
> MRA_Cobertura_ScreenShot_01Jan2013.jpg, MRA_eCobertura_Output.jpg
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I want to add the MatchRatingApproach algorithm to the Lucene project via 
> Commons Codec.
> What I have at the moment is a class called 
> org.apache.lucene.analysis.phoenetic.MatchRatingApproach implementing 
> StringEncoder
> I have a pretty comprehensive test file located at: 
> org.apache.lucene.analysis.phonetic.MatchRatingApproachTests
> It's not exactly the existing pattern, so I'm going to need a bit of advice 
> here. Thanks! Feel free to email.
> FYI: It's my first contribution so be gentle :-)  C# is my native language.
> I had incorrectly added this to Lucene solution as LUCENE-4494 but received 
> some good advice to move it to here. I'm doing that now.
> Reference: http://en.wikipedia.org/wiki/Match_rating_approach

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (NET-497) ToNetASCIIInputStream skips LF at the end of the stream

2013-01-30 Thread Mirko Raner (JIRA)

[ 
https://issues.apache.org/jira/browse/NET-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566929#comment-13566929
 ] 

Mirko Raner commented on NET-497:
-

O.K., I see what you're saying. This behavior seems to trigger a failure in our 
application, though. I'll double-check to see if ToNetASCIIInputStream is 
indeed the culprit and if I extracted the test case correctly.

> ToNetASCIIInputStream skips LF at the end of the stream
> ---
>
> Key: NET-497
> URL: https://issues.apache.org/jira/browse/NET-497
> Project: Commons Net
>  Issue Type: Bug
>  Components: Telnet, TFTP
>Affects Versions: 3.1
>Reporter: Mirko Raner
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> I have the following failing test case for ToNetASCIIInputStream:
> {noformat}
> public void testToNetASCIIInputStream() throws Exception
> {
> final Charset ASCII = Charset.forName("ASCII");
> byte[] data = "Hello\nWorld\n".getBytes(ASCII);
> InputStream source = new ByteArrayInputStream(data);
> ToNetASCIIInputStream toNetASCII = new ToNetASCIIInputStream(source);
> byte[] output = new byte[512];
> int length = toNetASCII.read(output);
> byte[] result = new byte[length];
> System.arraycopy(output, 0, result, 0, length);
> assertEquals('\r', result[length-2]);
> assertEquals('\n', result[length-1]);
> }
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (IO-364) Allow DirectoryWalker provide relative paths in handle*()

2013-01-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/IO-364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566784#comment-13566784
 ] 

Ondra Žižka commented on IO-364:


Workaround:

{code}
new DirectoryWalker( null, new SuffixFileFilter(".texy"), -1 ) {
    File dirToScan;

    @Override
    protected void handleFile( File file, int depth, Collection results )
            throws IOException {
        String rel = dirToScan.toURI().relativize( file.toURI() ).getPath();
        File relativePath = new File( rel );
        addDocToIndexIfNotExists( relativePath );
    }

    public void scan( File dirToScan ) throws IOException {
        List results = new ArrayList();
        this.dirToScan = dirToScan;
        walk( dirToScan, results );
    }
}.scan( dirToScan );
{code}

> Allow DirectoryWalker provide relative paths in handle*()
> -
>
> Key: IO-364
> URL: https://issues.apache.org/jira/browse/IO-364
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: Ondra Žižka
>
> {code}
> handleFile( File file, int depth, Collection results )
> {code}
> and other methods provide a File object with the full path.
> As it's much easier to concatenate a base path and a relative path than to 
> "subtract" one path from another, I suggest:
> The `File` object provided by `handleFile()` and other `handle*` methods 
> should (optionally) contain a path relative to the one passed to `walk()`.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (IO-364) Allow DirectoryWalker provide relative paths in handle*()

2013-01-30 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/IO-364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ondra Žižka updated IO-364:
---

  Component/s: Utilities
  Description: 
{code}
handleFile( File file, int depth, Collection results )
{code}

and other methods provide a File object with the full path.

As it's much easier to concatenate a base path and a relative path than to 
"subtract" one path from another, I suggest:

The `File` object provided by `handleFile()` and other `handle*` methods should 
(optionally) contain a path relative to the one passed to `walk()`.

  was:
{code}
handleFile( File file, int depth, Collection results )
{code}

and other methods provide a File object with the full path.

As it's much easier to concatenate a base path and a relative path than to 
"subtract" one path from another, I suggest:

The `File` object provided by `handleFile()` and other `handle*` methods should 
contain a path relative to the one passed to `walk()`.

Affects Version/s: 2.4

> Allow DirectoryWalker provide relative paths in handle*()
> -
>
> Key: IO-364
> URL: https://issues.apache.org/jira/browse/IO-364
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.4
>Reporter: Ondra Žižka
>
> {code}
> handleFile( File file, int depth, Collection results )
> {code}
> and other methods provide a File object with the full path.
> As it's much easier to concatenate a base path and a relative path than to 
> "subtract" one path from another, I suggest:
> The `File` object provided by `handleFile()` and other `handle*` methods 
> should (optionally) contain a path relative to the one passed to `walk()`.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (IO-364) Allow DirectoryWalker provide relative paths in handle*()

2013-01-30 Thread JIRA
Ondra Žižka created IO-364:
--

 Summary: Allow DirectoryWalker provide relative paths in handle*()
 Key: IO-364
 URL: https://issues.apache.org/jira/browse/IO-364
 Project: Commons IO
  Issue Type: Bug
Reporter: Ondra Žižka


{code}
handleFile( File file, int depth, Collection results )
{code}

and other methods provide a File object with the full path.

As it's much easier to concatenate a base path and a relative path than to 
"subtract" one path from another, I suggest:

The `File` object provided by `handleFile()` and other `handle*` methods should 
contain a path relative to the one passed to `walk()`.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (COLLECTIONS-310) Modifications of a SetUniqueList.subList() invalidate the parent list

2013-01-30 Thread Thomas Vahrst (JIRA)

[ 
https://issues.apache.org/jira/browse/COLLECTIONS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566703#comment-13566703
 ] 

Thomas Vahrst commented on COLLECTIONS-310:
---

Mmmh, I don't understand my own comment any more... Must have been tired. So 
you are right: the sorting example is nonsense, please ignore it. 

I agree to keep the class - I'll try to write some additions to the javadoc 
comment for the subList() method to clarify the behavior... 

> Modifications of a SetUniqueList.subList() invalidate the parent list
> -
>
> Key: COLLECTIONS-310
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-310
> Project: Commons Collections
>  Issue Type: Bug
>  Components: List
>Affects Versions: 3.2, Nightly Builds
>Reporter: Christian Semrau
>Priority: Minor
> Fix For: 4.0
>
>
> The List returned by SetUniqueList.subList() is again a SetUniqueList. The 
> contract for List.subList() says that the returned list supports all the 
> operations of the parent list, and it is backed by the parent list.
> We have a SetUniqueList uniqueList equal to {"Hello", "World"}. We get a 
> subList containing the last element. Now we add the element "Hello", 
> contained in the uniqueList but not in the subList, to the subList.
> What should happen?
> Should the subList behave like a SetUniqueList and add the element - meaning 
> that it changes position in the uniqueList because at the old place it gets 
> removed, so now uniqueList equals {"World", "Hello"} (which fails)?
> Or should the element not be added, because it is already contained in the 
> parent list, thereby violating the SetUniqueList-ness of the subList (which 
> fails)?
> I prefer the former behaviour, because modifications should only be made 
> through the subList and not through the parent list (as explained in 
> List.subList()).
> What should happen if we replace (using set) the subList element "World" with 
> "Hello" instead of adding an element?
> The subList should contain only "Hello", and for the parent list, the old 
> element 0 (now a duplicate of the just set element 1) should be removed 
> (which fails).
> And of course the parent list should know what happens to it (specifically, 
> its uniqueness Set) (which fails in the current snapshot).
> public void testSubListAddNew() {
>     List uniqueList = SetUniqueList.decorate(new ArrayList());
>     uniqueList.add("Hello");
>     uniqueList.add("World");
>     List subList = uniqueList.subList(1, 2);
>     subList.add("Goodbye");
>     List expectedSubList = Arrays.asList(new Object[] { "World", "Goodbye" });
>     List expectedParentList = Arrays.asList(new Object[] { "Hello", "World", "Goodbye" });
>     assertEquals(expectedSubList, subList);
>     assertEquals(expectedParentList, uniqueList);
>     assertTrue(uniqueList.contains("Goodbye")); // fails
> }
> public void testSubListAddDuplicate() {
>     List uniqueList = SetUniqueList.decorate(new ArrayList());
>     uniqueList.add("Hello");
>     uniqueList.add("World");
>     List subList = uniqueList.subList(1, 2);
>     subList.add("Hello");
>     List expectedSubList = Arrays.asList(new Object[] { "World", "Hello" });
>     List expectedParentList = Arrays.asList(new Object[] { "World", "Hello" });
>     assertEquals(expectedSubList, subList);
>     assertEquals(expectedParentList, uniqueList); // fails
> }
> public void testSubListSetDuplicate() {
>     List uniqueList = SetUniqueList.decorate(new ArrayList());
>     uniqueList.add("Hello");
>     uniqueList.add("World");
>     List subList = uniqueList.subList(1, 2);
>     subList.set(0, "Hello");
>     List expectedSubList = Arrays.asList(new Object[] { "Hello" });
>     List expectedParentList = Arrays.asList(new Object[] { "Hello" });
>     assertEquals(expectedSubList, subList);
>     assertEquals(expectedParentList, uniqueList); // fails
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MATH-931) Speed up UnitSphereRandomVectorGenerator for high dimensions

2013-01-30 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MATH-931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated MATH-931:
---

Attachment: MATH-931.patch

> Speed up UnitSphereRandomVectorGenerator for high dimensions
> 
>
> Key: MATH-931
> URL: https://issues.apache.org/jira/browse/MATH-931
> Project: Commons Math
>  Issue Type: Improvement
>Affects Versions: 3.1.1
>Reporter: Sean Owen
>Priority: Minor
> Attachments: MATH-931.patch
>
>
> I have a small proposal to improve the speed of 
> UnitSphereRandomVectorGenerator. This class picks a random point on the unit 
> n-sphere -- a unit vector, chosen uniformly from all possible directions.
> It does so using a rejection process -- generates a random point in the unit 
> n-cube (well, with side lengths 2) and rejects any points outside the unit 
> n-sphere, then normalizes the length. This is correct and works well at low 
> dimension. However the volume of the unit n-sphere compared to the unit 
> n-cube drops exponentially. This method eventually takes an extraordinary 
> amount of time when dimensions get past about 20, since virtually no samples 
> are usable.
> For example, here is the time in milliseconds taken to pick 10 points as a 
> function of dimension up to 20:
> 1 : 11
> 2 : 1
> 3 : 0
> 4 : 1
> 5 : 0
> 6 : 1
> 7 : 1
> 8 : 17
> 9 : 4
> 10 : 3
> 11 : 13
> 12 : 32
> 13 : 15
> 14 : 41
> 15 : 220
> 16 : 897
> 17 : 1770
> 18 : 7426
> 19 : 48457
> 20 : 122647
> ...
> It's equally correct, and much faster, to generate these points by picking n 
> standard Gaussians and normalizing. This method takes negligible time even 
> into thousands of dimensions.
> Patch coming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MATH-931) Speed up UnitSphereRandomVectorGenerator for high dimensions

2013-01-30 Thread Sean Owen (JIRA)
Sean Owen created MATH-931:
--

 Summary: Speed up UnitSphereRandomVectorGenerator for high 
dimensions
 Key: MATH-931
 URL: https://issues.apache.org/jira/browse/MATH-931
 Project: Commons Math
  Issue Type: Improvement
Affects Versions: 3.1.1
Reporter: Sean Owen
Priority: Minor
 Attachments: MATH-931.patch

I have a small proposal to improve the speed of 
UnitSphereRandomVectorGenerator. This class picks a random point on the unit 
n-sphere -- a unit vector, chosen uniformly from all possible directions.

It does so using a rejection process -- generates a random point in the unit 
n-cube (well, with side lengths 2) and rejects any points outside the unit 
n-sphere, then normalizes the length. This is correct and works well at low 
dimension. However the volume of the unit n-sphere compared to the unit n-cube 
drops exponentially. This method eventually takes an extraordinary amount of 
time when dimensions get past about 20, since virtually no samples are usable.

For example, here is the time in milliseconds taken to pick 10 points as a 
function of dimension up to 20:

1 : 11
2 : 1
3 : 0
4 : 1
5 : 0
6 : 1
7 : 1
8 : 17
9 : 4
10 : 3
11 : 13
12 : 32
13 : 15
14 : 41
15 : 220
16 : 897
17 : 1770
18 : 7426
19 : 48457
20 : 122647
...

It's equally correct, and much faster, to generate these points by picking n 
standard Gaussians and normalizing. This method takes negligible time even into 
thousands of dimensions.

Patch coming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (COLLECTIONS-310) Modifications of a SetUniqueList.subList() invalidate the parent list

2013-01-30 Thread Thomas Neidhart (JIRA)

[ 
https://issues.apache.org/jira/browse/COLLECTIONS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566631#comment-13566631
 ] 

Thomas Neidhart commented on COLLECTIONS-310:
-

Hi Thomas,

I agree in general with your observation, but I do not understand your 
statement 'because the specified element is not only added, an other element is 
possibly removed during the invocation'.

Looking at the add method, I fail to see how this may happen. The use-case you 
describe does explicitly call remove, so I wonder how this is related to the 
previous statement.

This class in general should be used with a lot of care, and only if you know 
exactly what you are doing, which is admittedly not very convincing either. I 
would prefer to keep the class for now, but improve the javadoc wrt the current 
limitations, which may never be fully resolved.

> Modifications of a SetUniqueList.subList() invalidate the parent list
> -
>
> Key: COLLECTIONS-310
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-310
> Project: Commons Collections
>  Issue Type: Bug
>  Components: List
>Affects Versions: 3.2, Nightly Builds
>Reporter: Christian Semrau
>Priority: Minor
> Fix For: 4.0
>
>
> The List returned by SetUniqueList.subList() is again a SetUniqueList. The 
> contract for List.subList() says that the returned list supports all the 
> operations of the parent list, and it is backed by the parent list.
> We have a SetUniqueList uniqueList equal to {"Hello", "World"}. We get a 
> subList containing the last element. Now we add the element "Hello", 
> contained in the uniqueList but not in the subList, to the subList.
> What should happen?
> Should the subList behave like a SetUniqueList and add the element - meaning 
> that it changes position in the uniqueList because at the old place it gets 
> removed, so now uniqueList equals {"World", "Hello"} (which fails)?
> Or should the element not be added, because it is already contained in the 
> parent list, thereby violating the SetUniqueList-ness of the subList (which 
> fails)?
> I prefer the former behaviour, because modifications should only be made 
> through the subList and not through the parent list (as explained in 
> List.subList()).
> What should happen if we replace (using set) the subList element "World" with 
> "Hello" instead of adding an element?
> The subList should contain only "Hello", and for the parent list, the old 
> element 0 (now a duplicate of the just set element 1) should be removed 
> (which fails).
> And of course the parent list should know what happens to it (specifically, 
> its uniqueness Set) (which fails in the current snapshot).
> public void testSubListAddNew() {
>     List uniqueList = SetUniqueList.decorate(new ArrayList());
>     uniqueList.add("Hello");
>     uniqueList.add("World");
>     List subList = uniqueList.subList(1, 2);
>     subList.add("Goodbye");
>     List expectedSubList = Arrays.asList(new Object[] { "World", "Goodbye" });
>     List expectedParentList = Arrays.asList(new Object[] { "Hello", "World", "Goodbye" });
>     assertEquals(expectedSubList, subList);
>     assertEquals(expectedParentList, uniqueList);
>     assertTrue(uniqueList.contains("Goodbye")); // fails
> }
> public void testSubListAddDuplicate() {
>     List uniqueList = SetUniqueList.decorate(new ArrayList());
>     uniqueList.add("Hello");
>     uniqueList.add("World");
>     List subList = uniqueList.subList(1, 2);
>     subList.add("Hello");
>     List expectedSubList = Arrays.asList(new Object[] { "World", "Hello" });
>     List expectedParentList = Arrays.asList(new Object[] { "World", "Hello" });
>     assertEquals(expectedSubList, subList);
>     assertEquals(expectedParentList, uniqueList); // fails
> }
> public void testSubListSetDuplicate() {
>     List uniqueList = SetUniqueList.decorate(new ArrayList());
>     uniqueList.add("Hello");
>     uniqueList.add("World");
>     List subList = uniqueList.subList(1, 2);
>     subList.set(0, "Hello");
>     List expectedSubList = Arrays.asList(new Object[] { "Hello" });
>     List expectedParentList = Arrays.asList(new Object[] { "Hello" });
>     assertEquals(expectedSubList, subList);
>     assertEquals(expectedParentList, uniqueList); // fails
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (VFS-369) Compilation error with newer versions of Jackrabbit

2013-01-30 Thread Gary Gregory (JIRA)

 [ 
https://issues.apache.org/jira/browse/VFS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated VFS-369:
-

Attachment: vfs-369-JR-2.5.3.diff

A patch that updates VFS trunk to Jackrabbit 2.5.3 but causes tests to fail:

{noformat}
Running org.apache.commons.vfs2.provider.webdav.test.WebdavProviderTestCase
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 17.631 sec <<< 
FAILURE!
junit.framework.TestSuite@5e1c0e6e(org.apache.commons.vfs2.provider.webdav.test.WebdavProviderTestCase$1)
  Time elapsed: 17631 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Folder does not exist: 
webdav://admin@localhost:55406/repository/default/read-tests
at junit.framework.Assert.fail(Assert.java:57)
at junit.framework.Assert.assertTrue(Assert.java:22)
at 
org.apache.commons.vfs2.test.AbstractTestSuite.setUp(AbstractTestSuite.java:171)
at 
org.apache.commons.vfs2.provider.webdav.test.WebdavProviderTestCase$1.setUp(WebdavProviderTestCase.java:279)
at junit.extensions.TestSetup$1.protect(TestSetup.java:22)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.extensions.TestSetup.run(TestSetup.java:27)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noformat}


> Compilation error with newer versions of Jackrabbit
> ---
>
> Key: VFS-369
> URL: https://issues.apache.org/jira/browse/VFS-369
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Cedric Nanni
> Attachments: vfs-369-JR-2.5.3.diff, VFS-369.patch
>
>
> When I try to compile VFS on my computer I get compilation errors, maybe 
> because I use a more recent version of Jackrabbit. I patched the code and 
> now it compiles.
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java
> --- original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java  
> 2011-10-24 20:35:41.0 +0200
> @@ -50,7 +50,7 @@ public final class ExceptionConverter
>  {
>  try
>  {
> -Element error = 
> davExc.toXml(DomUtil.BUILDER_FACTORY.newDocumentBuilder().newDocument());
> +Element error = davExc.toXml(DomUtil.createDocument());
>  if (DomUtil.matches(error, DavException.XML_ERROR, 
> DavConstants.NAMESPACE))
>  {
>  if (DomUtil.hasChildElement(error, "exception", null))
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java 
> patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> --- original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java   
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> 2011-10-24 20:35:41.0 +0200
> @@ -292,19 +292,17 @@ public class WebdavFileObject extends Ht
>  URLFileName fileName = (URLFileName) getName();
>  DavPropertySet properties = getProperties(fileName, 
> PropFindMethod.PROPFIND_ALL_PROP,
>  new DavPropertyNameSet(), false);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter = properties.iterator();
> +Iterator iter = properties.iterator();
>  while (iter.hasNext())
>  {
> -DavProperty property = iter.next();
> +  

[jira] [Commented] (COLLECTIONS-429) Performance problem in MultiValueMap

2013-01-30 Thread Thomas Neidhart (JIRA)

[ 
https://issues.apache.org/jira/browse/COLLECTIONS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566530#comment-13566530
 ] 

Thomas Neidhart commented on COLLECTIONS-429:
-

There was a problem when the same value was present multiple times in the 
second collection: the standard containsAll method does not take cardinality 
into account. Thus the final method looks like this:
{noformat}
public static boolean containsAll(final Collection coll1, final Collection coll2) {
    if (coll2.isEmpty()) {
        return true;
    } else {
        final SetOperationCardinalityHelper helper =
                new SetOperationCardinalityHelper(coll1, coll2);
        for (final Object obj : helper) {
            helper.setCardinality(obj, helper.min(obj));
        }
        return helper.list().size() == helper.sizeB();
    }
}
{noformat}

Here helper.sizeB() returns the size of the unique set of elements from 
coll2.

> Performance problem in MultiValueMap
> 
>
> Key: COLLECTIONS-429
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-429
> Project: Commons Collections
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: java 1.6.0_24
> Ubuntu 11.10
>Reporter: Adrian Nistor
> Attachments: patchFull_AbstractHashedMap.diff, patchFull.diff, 
> patchFull_StaticBucketMap.diff, patchSmall_AbstractHashedMap.diff, 
> patchSmall.diff, patchSmall_StaticBucketMap.diff, 
> Test_AbstractHashedMap.java, TestDifferentParameter.java, Test.java, 
> Test_StaticBucketMap.java
>
>
> Hi,
> I am encountering a performance problem in MultiValueMap.  It appears
> in version 3.2.1 and also in revision 1366088.  I attached a test that
> exposes this problem and a patch that fixes it.  On my machine, for
> this test, the patch provides a 1158X speedup.
> To run the test, just do:
> $ java Test
> The output for the un-patched version is:
> Time is 44040
> The output for the patched version is:
> Time is 38
> The attached test shows that, for a "MultiValueMap multi" object, the
> following operation is very slow:
> multi.values().containsAll(toContain);
> "MultiValueMap.values()" returns a "MultiValueMap.Values" object,
> which inherits containsAll() from "java.util.AbstractCollection",
> which has quadratic complexity.
> I attached two patches.  Both patches override containsAll() and
> implement a linear time algorithm.  patchSmall.diff populates a
> HashSet eagerly, and patchFull.diff populates a HashSet lazily.
> patchFull.diff is faster than patchSmall.diff when containsAll()
> returns false after inspecting only a few elements, though in most
> cases they are equally fast.  I am including patchSmall.diff just in
> case you prefer smaller code.
> Note that this problem is different from COLLECTIONS-416.  As
> established in the COLLECTIONS-416 discussion, there the user was
> responsible for using the proper data structures as argument for the
> method.
> For "MultiValueMap.values()", the problem is not related to the
> collection passed as parameter.  The problem will always manifest for
> this method, irrespective of the parameter type.  I attached a test
> (TestDifferentParameter.java) that shows that, even with a HashSet
> parameter, the current problem still manifests (which did not happen
> for COLLECTIONS-416).
> This problem also exists for the two other "Values" classes
> (AbstractHashedMap.Values, StaticBucketMap.Values).  I attached tests
> and patches for these classes as well.  If you want me to file
> separate reports, just let me know.
> Is this truly a performance problem, or am I misunderstanding the
> intended behavior?  If so, can you please confirm that the patches are
> correct?
> Thanks,
> Adrian
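
A sketch of the linear-time idea behind the attached patches (our own method
name, not the patch's code): copying one collection into a HashSet makes each
membership test O(1) expected, so containsAll becomes O(n) overall instead of
quadratic.

{code}
import java.util.Collection;
import java.util.HashSet;

public final class ContainsAllSketch {
    /** Linear-time containsAll: does coll1 contain every element of coll2? */
    public static boolean containsAll(Collection<?> coll1, Collection<?> coll2) {
        HashSet<Object> set = new HashSet<Object>(coll1); // O(|coll1|) copy
        for (Object o : coll2) {
            if (!set.contains(o)) { // O(1) expected per lookup
                return false;
            }
        }
        return true;
    }
}
{code}

This is the eager variant (patchSmall.diff's approach); the lazy variant only
fills the set as far as needed before returning a negative answer.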

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (VFS-369) Compilation error with newer versions of Jackrabbit

2013-01-30 Thread Gary Gregory (JIRA)

[ 
https://issues.apache.org/jira/browse/VFS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566518#comment-13566518
 ] 

Gary Gregory commented on VFS-369:
--

It seems that in order to update VFS to a current version of Jackrabbit we need:

- Updated VFS POMs to point to the latest Jackrabbit jars. This will likely 
mean expanding the dependencies since there are no all-in-one jars for newer 
versions of Jackrabbit.
- Updated Jackrabbit call sites to deal with Jackrabbit API changes.
- Updated tests to deal with any changes in testing with an embedded Jackrabbit.

Patches welcome!

Gary

> Compilation error with newer versions of Jackrabbit
> ---
>
> Key: VFS-369
> URL: https://issues.apache.org/jira/browse/VFS-369
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Cedric Nanni
> Attachments: VFS-369.patch
>
>
> When I try to compile VFS on my computer I get compilation errors, maybe 
> because I use a more recent version of Jackrabbit. I patched the code and 
> now it compiles.
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java
> --- original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java  
> 2011-10-24 20:35:41.0 +0200
> @@ -50,7 +50,7 @@ public final class ExceptionConverter
>  {
>  try
>  {
> -Element error = 
> davExc.toXml(DomUtil.BUILDER_FACTORY.newDocumentBuilder().newDocument());
> +Element error = davExc.toXml(DomUtil.createDocument());
>  if (DomUtil.matches(error, DavException.XML_ERROR, 
> DavConstants.NAMESPACE))
>  {
>  if (DomUtil.hasChildElement(error, "exception", null))
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java 
> patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> --- original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java   
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> 2011-10-24 20:35:41.0 +0200
> @@ -292,19 +292,17 @@ public class WebdavFileObject extends Ht
>  URLFileName fileName = (URLFileName) getName();
>  DavPropertySet properties = getProperties(fileName, 
> PropFindMethod.PROPFIND_ALL_PROP,
>  new DavPropertyNameSet(), false);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter = properties.iterator();
> +Iterator iter = properties.iterator();
>  while (iter.hasNext())
>  {
> -DavProperty property = iter.next();
> +DavProperty property = (DavProperty)iter.next();
>  attributes.put(property.getName().toString(), 
> property.getValue());
>  }
>  properties = getPropertyNames(fileName);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter2 = properties.iterator();
> +Iterator iter2 = properties.iterator();
>  while (iter2.hasNext())
>  {
> -DavProperty property = iter2.next();
> +DavProperty property = (DavProperty)iter2.next();
>  if (!attributes.containsKey(property.getName().getName()))
>  {
>  property = getProperty(fileName, property.getName());

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (VFS-369) Compilation error with newer versions of Jackrabbit

2013-01-30 Thread Gary Gregory (JIRA)

 [ 
https://issues.apache.org/jira/browse/VFS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated VFS-369:
-

Summary: Compilation error with newer versions of Jackrabbit  (was: 
Compilation Issue)

> Compilation error with newer versions of Jackrabbit
> ---
>
> Key: VFS-369
> URL: https://issues.apache.org/jira/browse/VFS-369
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Cedric Nanni
> Attachments: VFS-369.patch
>
>
> When I try to compile VFS on my computer I get compilation errors, maybe 
> because I use a more recent version of Jackrabbit. I patched the code and 
> now it compiles.
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java
> --- original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java  
> 2011-10-24 20:35:41.0 +0200
> @@ -50,7 +50,7 @@ public final class ExceptionConverter
>  {
>  try
>  {
> -Element error = 
> davExc.toXml(DomUtil.BUILDER_FACTORY.newDocumentBuilder().newDocument());
> +Element error = davExc.toXml(DomUtil.createDocument());
>  if (DomUtil.matches(error, DavException.XML_ERROR, 
> DavConstants.NAMESPACE))
>  {
>  if (DomUtil.hasChildElement(error, "exception", null))
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java 
> patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> --- original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java   
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> 2011-10-24 20:35:41.0 +0200
> @@ -292,19 +292,17 @@ public class WebdavFileObject extends Ht
>  URLFileName fileName = (URLFileName) getName();
>  DavPropertySet properties = getProperties(fileName, 
> PropFindMethod.PROPFIND_ALL_PROP,
>  new DavPropertyNameSet(), false);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter = properties.iterator();
> +Iterator iter = properties.iterator();
>  while (iter.hasNext())
>  {
> -DavProperty property = iter.next();
> +DavProperty property = (DavProperty)iter.next();
>  attributes.put(property.getName().toString(), 
> property.getValue());
>  }
>  properties = getPropertyNames(fileName);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter2 = properties.iterator();
> +Iterator iter2 = properties.iterator();
>  while (iter2.hasNext())
>  {
> -DavProperty property = iter2.next();
> +DavProperty property = (DavProperty)iter2.next();
>  if (!attributes.containsKey(property.getName().getName()))
>  {
>  property = getProperty(fileName, property.getName());

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (COLLECTIONS-429) Performance problem in MultiValueMap

2013-01-30 Thread Thomas Neidhart (JIRA)

[ 
https://issues.apache.org/jira/browse/COLLECTIONS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566508#comment-13566508
 ] 

Thomas Neidhart edited comment on COLLECTIONS-429 at 1/30/13 2:28 PM:
--

In r1440418 I added a CollectionUtils.containsAll(Collection, Collection) 
method which provides guaranteed O(n) runtime complexity, regardless of the 
type of Collection.

Now, like the method in the patch, it duplicates the original collection in a 
HashSet, which may not be desirable for really large collections, so I would 
rather not implement the performance improvement for the referenced Map 
implementations.

Anyone who wants a fast containsAll can call CollectionUtils.containsAll() 
instead, at the cost of higher memory usage.

Comments, opinions?

  was (Author: tn):
In r1440418 I added a CollectionUtils.containsAll(Collection, Collection) 
method which provides guaranteed O(n) runtime complexity, regardless of the 
type of Collection.

Now, like the method in the patch, it duplicates the original collection in a 
HashSet, which may not be desirable for really large collections, so I would 
rather not implement the performance improvement for the referenced Map 
implementations.

Anyone who wants a fast containsAll can call CollectionUtils.containsAll() 
instead, at the cost of higher memory usage.

Comments, opinions?
  
> Performance problem in MultiValueMap
> 
>
> Key: COLLECTIONS-429
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-429
> Project: Commons Collections
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: java 1.6.0_24
> Ubuntu 11.10
>Reporter: Adrian Nistor
> Attachments: patchFull_AbstractHashedMap.diff, patchFull.diff, 
> patchFull_StaticBucketMap.diff, patchSmall_AbstractHashedMap.diff, 
> patchSmall.diff, patchSmall_StaticBucketMap.diff, 
> Test_AbstractHashedMap.java, TestDifferentParameter.java, Test.java, 
> Test_StaticBucketMap.java
>
>
> Hi,
> I am encountering a performance problem in MultiValueMap.  It appears
> in version 3.2.1 and also in revision 1366088.  I attached a test that
> exposes this problem and a patch that fixes it.  On my machine, for
> this test, the patch provides a 1158X speedup.
> To run the test, just do:
> $ java Test
> The output for the un-patched version is:
> Time is 44040
> The output for the patched version is:
> Time is 38
> The attached test shows that, for a "MultiValueMap multi" object, the
> following operation is very slow:
> multi.values().containsAll(toContain);
> "MultiValueMap.values()" returns a "MultiValueMap.Values" object,
> which inherits containsAll() from "java.util.AbstractCollection",
> which has quadratic complexity.
> I attached two patches.  Both patches override containsAll() and
> implement a linear time algorithm.  patchSmall.diff populates a
> HashSet eagerly, and patchFull.diff populates a HashSet lazily.
> patchFull.diff is faster than patchSmall.diff when containsAll()
> returns false after inspecting only a few elements, though in most
> cases they are equally fast.  I am including patchSmall.diff just in
> case you prefer smaller code.
> Note that this problem is different from COLLECTIONS-416.  As
> established in the COLLECTIONS-416 discussion, there the user was
> responsible for using the proper data structures as argument for the
> method.
> For "MultiValueMap.values()", the problem is not related to the
> collection passed as parameter.  The problem will always manifest for
> this method, irrespective of the parameter type.  I attached a test
> (TestDifferentParameter.java) that shows that, even with a HashSet
> parameter, the current problem still manifests (which did not happen
> for COLLECTIONS-416).
> This problem also exists for the two other "Values" classes
> (AbstractHashedMap.Values, StaticBucketMap.Values).  I attached tests
> and patches for these classes as well.  If you want me to file
> separate reports, just let me know.
> Is this truly a performance problem, or am I misunderstanding the
> intended behavior?  If it is a real problem, can you please confirm that
> the patches are correct?
> Thanks,
> Adrian
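
For illustration, here is a minimal sketch of the lazy strategy described 
above (hypothetical code with a hypothetical class name, not the attached 
patchFull.diff itself): the HashSet is populated from the values iterator 
only as far as needed, so a containsAll() that fails early never copies the 
whole collection.

{code}
// Lazy linear-time containsAll: pull elements of "values" into a HashSet
// one at a time, stopping as soon as the current required element is found.
// Overall cost is O(|values| + |toContain|) with average O(1) set probes.
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

final class LazyContainsAllSketch
{
    static boolean containsAll(Collection<?> values, Collection<?> toContain)
    {
        Set<Object> seen = new HashSet<Object>();
        Iterator<?> it = values.iterator();
        for (Object required : toContain)
        {
            boolean found = seen.contains(required);
            while (!found && it.hasNext())
            {
                seen.add(it.next());          // pull one more value into the set
                found = seen.contains(required);
            }
            if (!found)
            {
                return false;                 // fails before copying everything
            }
        }
        return true;
    }
}
{code}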

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (COLLECTIONS-429) Performance problem in MultiValueMap

2013-01-30 Thread Thomas Neidhart (JIRA)

[ 
https://issues.apache.org/jira/browse/COLLECTIONS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566508#comment-13566508
 ] 

Thomas Neidhart commented on COLLECTIONS-429:
-

In r1440418 I added a CollectionUtils.containsAll(Collection, Collection) 
method which provides guaranteed O(n) runtime complexity, regardless of the 
type of Collection.

Now, like the method in the patch, it duplicates the original collection in a 
HashSet, which may not be desirable for really large collections, so I would 
rather not implement the performance improvement for the referenced Map 
implementations.

Anyone who wants a fast containsAll can call CollectionUtils.containsAll() 
instead, at the cost of higher memory usage.

Comments, opinions?
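
For context, the HashSet-based approach can be sketched as follows 
(illustrative only; the committed r1440418 implementation may differ in its 
details). The sketch returns true iff every element of coll2 is contained in 
coll1, in O(|coll1| + |coll2|) time:

{code}
// Illustrative O(n) containsAll: build a HashSet from coll1 once, then
// probe it for each element of coll2, instead of scanning coll1 per element
// the way AbstractCollection.containsAll() does.
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

final class ContainsAllSketch
{
    static boolean containsAll(Collection<?> coll1, Collection<?> coll2)
    {
        Set<Object> elements = new HashSet<Object>(coll1);  // one O(n) copy
        for (Object item : coll2)
        {
            if (!elements.contains(item))                   // O(1) average probe
            {
                return false;
            }
        }
        return true;
    }
}
{code}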

> Performance problem in MultiValueMap
> 
>
> Key: COLLECTIONS-429
> URL: https://issues.apache.org/jira/browse/COLLECTIONS-429
> Project: Commons Collections
>  Issue Type: Bug
>Affects Versions: 3.2.1
> Environment: java 1.6.0_24
> Ubuntu 11.10
>Reporter: Adrian Nistor
> Attachments: patchFull_AbstractHashedMap.diff, patchFull.diff, 
> patchFull_StaticBucketMap.diff, patchSmall_AbstractHashedMap.diff, 
> patchSmall.diff, patchSmall_StaticBucketMap.diff, 
> Test_AbstractHashedMap.java, TestDifferentParameter.java, Test.java, 
> Test_StaticBucketMap.java
>
>
> Hi,
> I am encountering a performance problem in MultiValueMap.  It appears
> in version 3.2.1 and also in revision 1366088.  I attached a test that
> exposes this problem and a patch that fixes it.  On my machine, for
> this test, the patch provides a 1158X speedup.
> To run the test, just do:
> $ java Test
> The output for the un-patched version is:
> Time is 44040
> The output for the patched version is:
> Time is 38
> The attached test shows that, for a "MultiValueMap multi" object, the
> following operation is very slow:
> multi.values().containsAll(toContain);
> "MultiValueMap.values()" returns a "MultiValueMap.Values" object,
> which inherits containsAll() from "java.util.AbstractCollection",
> which has quadratic complexity.
> I attached two patches.  Both patches override containsAll() and
> implement a linear time algorithm.  patchSmall.diff populates a
> HashSet eagerly, and patchFull.diff populates a HashSet lazily.
> patchFull.diff is faster than patchSmall.diff when containsAll()
> returns false after inspecting only a few elements, though in most
> cases they are equally fast.  I am including patchSmall.diff just in
> case you prefer smaller code.
> Note that this problem is different from COLLECTIONS-416.  As
> established in the COLLECTIONS-416 discussion, there the user was
> responsible for using the proper data structures as argument for the
> method.
> For "MultiValueMap.values()", the problem is not related to the
> collection passed as parameter.  The problem will always manifest for
> this method, irrespective of the parameter type.  I attached a test
> (TestDifferentParameter.java) that shows that, even with a HashSet
> parameter, the current problem still manifests (which did not happen
> for COLLECTIONS-416).
> This problem also exists for the two other "Values" classes
> (AbstractHashedMap.Values, StaticBucketMap.Values).  I attached tests
> and patches for these classes as well.  If you want me to file
> separate reports, just let me know.
> Is this truly a performance problem, or am I misunderstanding the
> intended behavior?  If it is a real problem, can you please confirm that
> the patches are correct?
> Thanks,
> Adrian

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (BEANUTILS-421) NullPointerException in BeanUtilsBean.setProperty

2013-01-30 Thread Maxim Kramarenko (JIRA)

[ 
https://issues.apache.org/jira/browse/BEANUTILS-421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566465#comment-13566465
 ] 

Maxim Kramarenko commented on BEANUTILS-421:


To fix it, please replace
{code}
type = ((IndexedPropertyDescriptor) descriptor).getIndexedPropertyType();
{code}
with
{code}
type = descriptor.getPropertyType();
{code}
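
For context, a defensive variant of this lookup can be sketched as follows 
(illustrative only, not the actual BeanUtilsBean source, and the class name 
is hypothetical): prefer the indexed property type when the descriptor 
provides one, and fall back to getPropertyType() rather than risking a null 
type.

{code}
// Hypothetical sketch: resolve a property's type without assuming the
// indexed variant is available, so callers never dereference a null type.
import java.beans.IndexedPropertyDescriptor;
import java.beans.PropertyDescriptor;

final class PropertyTypeSketch
{
    static Class<?> resolveType(PropertyDescriptor descriptor)
    {
        if (descriptor instanceof IndexedPropertyDescriptor)
        {
            Class<?> indexedType =
                    ((IndexedPropertyDescriptor) descriptor).getIndexedPropertyType();
            if (indexedType != null)
            {
                return indexedType;     // indexed accessors resolved normally
            }
        }
        return descriptor.getPropertyType();  // fallback per the fix above
    }
}
{code}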

> NullPointerException in BeanUtilsBean.setProperty
> -
>
> Key: BEANUTILS-421
> URL: https://issues.apache.org/jira/browse/BEANUTILS-421
> Project: Commons BeanUtils
>  Issue Type: Bug
>Affects Versions: 1.8.3
>Reporter: Maxim Kramarenko
>Priority: Blocker
>
> I got the following exception on some servers:
>javax.servlet.ServletException: BeanUtils.populate
>   at org.apache.struts.util.RequestUtils.populate(RequestUtils.java:475)
>   at 
> org.apache.struts.chain.commands.servlet.PopulateActionForm.populate(PopulateActionForm.java:50)
>   at 
> org.apache.struts.chain.commands.AbstractPopulateActionForm.execute(AbstractPopulateActionForm.java:60)
>   at 
> org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51)
>   at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
>   at 
> org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305)
>   at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
>   at 
> org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283)
>   at 
> org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
>   at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.commons.beanutils.BeanUtilsBean.setProperty(BeanUtilsBean.java:982)
>   at 
> org.apache.commons.beanutils.BeanUtilsBean.populate(BeanUtilsBean.java:830)
>   at org.apache.commons.beanutils.BeanUtils.populate(BeanUtils.java:433)
>   at org.apache.struts.util.RequestUtils.populate(RequestUtils.java:473)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (VFS-369) Compilation Issue

2013-01-30 Thread Oliver Matz (JIRA)

[ 
https://issues.apache.org/jira/browse/VFS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566412#comment-13566412
 ] 

Oliver Matz edited comment on VFS-369 at 1/30/13 12:26 PM:
---

I have run into basically the same problem, but at runtime: I am using 
jackrabbit-webdav 2.4.3 together with commons-vfs2 2.0 and get the following:
{code}
java.lang.IllegalAccessError: tried to access field 
org.apache.jackrabbit.webdav.xml.DomUtil.BUILDER_FACTORY from class 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:53)
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:42)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.execute(WebdavFileObject.java:435)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:510)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:485)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:478)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:470)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.isDirectory(WebdavFileObject.java:450)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.doListChildrenResolved(WebdavFileObject.java:148)
at 
org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:654)
{code}

The nasty thing is that this seems to happen only as a consequence of other, 
preceding exceptions.


  was (Author: olivermatz):
I have run into basically the same problem: I am using jackrabbit-webdav 
2.4.3 together with commons-vfs2 2.0 and got the following at run time:
{code}
java.lang.IllegalAccessError: tried to access field 
org.apache.jackrabbit.webdav.xml.DomUtil.BUILDER_FACTORY from class 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:53)
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:42)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.execute(WebdavFileObject.java:435)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:510)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:485)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:478)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:470)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.isDirectory(WebdavFileObject.java:450)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.doListChildrenResolved(WebdavFileObject.java:148)
at 
org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:654)
{code}

  
> Compilation Issue
> -
>
> Key: VFS-369
> URL: https://issues.apache.org/jira/browse/VFS-369
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Cedric Nanni
> Attachments: VFS-369.patch
>
>
> When I try to compile VFS on my computer, I get compilation errors, possibly 
> because I use a more recent version of Jackrabbit. I patched the code and now 
> it compiles.
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java
> --- original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java  
> 2011-10-24 20:35:41.0 +0200
> @@ -50,7 +50,7 @@ public final class ExceptionConverter
>  {
>  try
>  {
> -Element error = 
> davExc.toXml(DomUtil.BUILDER_FACTORY.newDocumentBuilder().newDocument());
> +Element error = davExc.toXml(DomUtil.createDocument());
>  if (DomUtil.matches(error, DavException.XML_ERROR, 
> DavConstants.NAMESPACE))
>  {
>  if (DomUtil.hasChildElement(error, "exception", null))
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java 
> patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> --- original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java   
> 2011-08-18 06:57:10.0 +0200
> +++ pat

[jira] [Commented] (VFS-369) Compilation Issue

2013-01-30 Thread Oliver Matz (JIRA)

[ 
https://issues.apache.org/jira/browse/VFS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566412#comment-13566412
 ] 

Oliver Matz commented on VFS-369:
-

I have run into basically the same problem: I am using jackrabbit-webdav 2.4.3 
together with commons-vfs2 2.0 and got the following:
{code}
java.lang.IllegalAccessError: tried to access field 
org.apache.jackrabbit.webdav.xml.DomUtil.BUILDER_FACTORY from class 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:53)
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:42)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.execute(WebdavFileObject.java:435)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:510)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:485)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:478)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:470)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.isDirectory(WebdavFileObject.java:450)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.doListChildrenResolved(WebdavFileObject.java:148)
at 
org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:654)
{code}
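
For reference, the patched conversion from the attached VFS-369.patch boils 
down to the following (a self-contained sketch; the class name here is 
hypothetical): DomUtil.createDocument() builds the Document through 
Jackrabbit's own factory, so client code never touches the BUILDER_FACTORY 
field that triggers the IllegalAccessError above.

{code}
// Sketch of the patched call path: obtain the error Document via
// DomUtil.createDocument() instead of dereferencing DomUtil.BUILDER_FACTORY,
// which is not accessible from foreign classes in newer Jackrabbit versions.
import org.apache.jackrabbit.webdav.DavException;
import org.apache.jackrabbit.webdav.xml.DomUtil;
import org.w3c.dom.Element;

final class DavErrorSketch
{
    static Element toErrorElement(DavException davExc) throws Exception
    {
        return davExc.toXml(DomUtil.createDocument());
    }
}
{code}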


> Compilation Issue
> -
>
> Key: VFS-369
> URL: https://issues.apache.org/jira/browse/VFS-369
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Cedric Nanni
> Attachments: VFS-369.patch
>
>
> When I try to compile VFS on my computer, I get compilation errors, possibly 
> because I use a more recent version of Jackrabbit. I patched the code and now 
> it compiles.
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java
> --- original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java  
> 2011-10-24 20:35:41.0 +0200
> @@ -50,7 +50,7 @@ public final class ExceptionConverter
>  {
>  try
>  {
> -Element error = 
> davExc.toXml(DomUtil.BUILDER_FACTORY.newDocumentBuilder().newDocument());
> +Element error = davExc.toXml(DomUtil.createDocument());
>  if (DomUtil.matches(error, DavException.XML_ERROR, 
> DavConstants.NAMESPACE))
>  {
>  if (DomUtil.hasChildElement(error, "exception", null))
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java 
> patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> --- original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java   
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> 2011-10-24 20:35:41.0 +0200
> @@ -292,19 +292,17 @@ public class WebdavFileObject extends Ht
>  URLFileName fileName = (URLFileName) getName();
>  DavPropertySet properties = getProperties(fileName, 
> PropFindMethod.PROPFIND_ALL_PROP,
>  new DavPropertyNameSet(), false);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter = properties.iterator();
> +Iterator iter = properties.iterator();
>  while (iter.hasNext())
>  {
> -DavProperty property = iter.next();
> +DavProperty property = (DavProperty)iter.next();
>  attributes.put(property.getName().toString(), 
> property.getValue());
>  }
>  properties = getPropertyNames(fileName);
> -@SuppressWarnings("unchecked") // iterator() is documented to 
> return DavProperty instances
> -Iterator iter2 = properties.iterator();
> +Iterator iter2 = properties.iterator();
>  while (iter2.hasNext())
>  {
> -DavProperty property = iter2.next();
> +DavProperty property = (DavProperty)iter2.next();
>  if (!attributes.containsKey(property.getName().getName()))
>  {
>  property = getProperty(fileName, property.getName());

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Comment Edited] (VFS-369) Compilation Issue

2013-01-30 Thread Oliver Matz (JIRA)

[ 
https://issues.apache.org/jira/browse/VFS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566412#comment-13566412
 ] 

Oliver Matz edited comment on VFS-369 at 1/30/13 12:21 PM:
---

I have run into basically the same problem: I am using jackrabbit-webdav 2.4.3 
together with commons-vfs2 2.0 and got the following at run time:
{code}
java.lang.IllegalAccessError: tried to access field 
org.apache.jackrabbit.webdav.xml.DomUtil.BUILDER_FACTORY from class 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:53)
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:42)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.execute(WebdavFileObject.java:435)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:510)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:485)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:478)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:470)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.isDirectory(WebdavFileObject.java:450)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.doListChildrenResolved(WebdavFileObject.java:148)
at 
org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:654)
{code}


  was (Author: olivermatz):
I have run into basically the same problem: I am using jackrabbit-webdav 
2.4.3 together with commons-vfs2 2.0 and got the following:
{code}
java.lang.IllegalAccessError: tried to access field 
org.apache.jackrabbit.webdav.xml.DomUtil.BUILDER_FACTORY from class 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:53)
at 
org.apache.commons.vfs2.provider.webdav.ExceptionConverter.generate(ExceptionConverter.java:42)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.execute(WebdavFileObject.java:435)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:510)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperties(WebdavFileObject.java:485)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:478)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.getProperty(WebdavFileObject.java:470)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.isDirectory(WebdavFileObject.java:450)
at 
org.apache.commons.vfs2.provider.webdav.WebdavFileObject.doListChildrenResolved(WebdavFileObject.java:148)
at 
org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:654)
{code}

  
> Compilation Issue
> -
>
> Key: VFS-369
> URL: https://issues.apache.org/jira/browse/VFS-369
> Project: Commons VFS
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Cedric Nanni
> Attachments: VFS-369.patch
>
>
> When I try to compile VFS on my computer, I get compilation errors, possibly 
> because I use a more recent version of Jackrabbit. I patched the code and now 
> it compiles.
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java
> --- original//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java 
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/ExceptionConverter.java  
> 2011-10-24 20:35:41.0 +0200
> @@ -50,7 +50,7 @@ public final class ExceptionConverter
>  {
>  try
>  {
> -Element error = 
> davExc.toXml(DomUtil.BUILDER_FACTORY.newDocumentBuilder().newDocument());
> +Element error = davExc.toXml(DomUtil.createDocument());
>  if (DomUtil.matches(error, DavException.XML_ERROR, 
> DavConstants.NAMESPACE))
>  {
>  if (DomUtil.hasChildElement(error, "exception", null))
> diff -rupN 
> original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java 
> patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> --- original//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java   
> 2011-08-18 06:57:10.0 +0200
> +++ patched//org/apache/commons/vfs2/provider/webdav/WebdavFileObject.java
> 2011-10-24 20:35:41.0 +0200
> @@ -

[jira] [Commented] (CODEC-158) Add Codec, StringCodec, and BinaryCodec interfaces that extend both encoder and decoder

2013-01-30 Thread Mirko Raner (JIRA)

[ 
https://issues.apache.org/jira/browse/CODEC-158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13566287#comment-13566287
 ] 

Mirko Raner commented on CODEC-158:
---

Introducing a common interface for encoders/decoders and converting the 
existing hierarchy to generics are, in my view, orthogonal issues, so I 
propose to solve them separately rather than intermixing them.

A common superinterface for encoders and decoders is useful with and without 
generics, and, as far as I can see, it does not impede the introduction of 
generics at a later point.

CODEC-158 is strictly about the common superinterface for encoders and 
decoders. I agree that, going forward, genericized encoders/decoders would be 
extremely useful, but we should discuss this as a separate issue.
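
For concreteness, the proposed superinterfaces could be as small as the 
following sketch (the interface names come from this issue's title; the 
final signatures may differ, and each interface would live in its own file):

{code}
// Sketch of the proposed combined interfaces: they add no methods of their
// own and merely unify the existing encoder/decoder pairs under one type.
import org.apache.commons.codec.BinaryDecoder;
import org.apache.commons.codec.BinaryEncoder;
import org.apache.commons.codec.Decoder;
import org.apache.commons.codec.Encoder;
import org.apache.commons.codec.StringDecoder;
import org.apache.commons.codec.StringEncoder;

interface Codec extends Encoder, Decoder
{
}

interface StringCodec extends StringEncoder, StringDecoder
{
}

interface BinaryCodec extends BinaryEncoder, BinaryDecoder
{
}
{code}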


> Add Codec, StringCodec, and BinaryCodec interfaces that extend both encoder 
> and decoder
> ---
>
> Key: CODEC-158
> URL: https://issues.apache.org/jira/browse/CODEC-158
> Project: Commons Codec
>  Issue Type: Improvement
>Affects Versions: 1.7
>Reporter: Mirko Raner
>Priority: Minor
> Attachments: CODEC-158.patch, CODEC-158.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently, there are no common interfaces that extend both the encoder and 
> the decoder interfaces. This makes it hard to deal with a codec as a single 
> entity and requires separate treatment of encoder and decoder parts.
> For example, let's say you want to develop a storage abstraction that uses an 
> encoding. Right now, you would need to write
> class Storage
> {
> @Inject Encoder encoder;
> @Inject Decoder decoder;
> //...
> }
> In practice, encoder and decoder need to match, and most likely both encoder 
> and decoder would be bound to the same implementation, like Base64 or 
> URLCodec. Because of the lack of a common superinterface they need to be 
> specified separately. There are some classes like BaseNCodec that can be used 
> to unify some of the encoders and decoders, but they are too specific and 
> restrictive.
> Ideally, I would like to write:
> class Storage
> {
> @Inject Codec codec;
> //...
> }
> Assuming that combined encoder/decoder classes like Base64 would implement 
> that Codec interface, this could be directly bound to a combined 
> encoder/decoder implementation.
> It would be nice if these interfaces were added and the existing codec 
> classes (BaseNCodec, Hex, QCodec, QuotedPrintableCodec, URLCodec) could be 
> modified to implement these new interfaces.
> I'm happy to contribute a patch if there is interest in this feature.
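
With a combined interface like the Codec sketched in the comment above, the 
storage example collapses to a single dependency (hypothetical usage; the 
injection wiring and the Codec type are illustrative, not existing 
commons-codec API):

{code}
// Hypothetical usage of the proposed Codec interface: one dependency
// replaces the matched Encoder/Decoder pair from the example above.
import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.EncoderException;

class Storage
{
    private final Codec codec;   // e.g. bound to Base64 once it implements Codec

    Storage(Codec codec)
    {
        this.codec = codec;
    }

    Object store(Object data) throws EncoderException
    {
        return codec.encode(data);      // inherited from Encoder
    }

    Object load(Object stored) throws DecoderException
    {
        return codec.decode(stored);    // inherited from Decoder
    }
}
{code}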

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira