[math] RealMatrixPreservingVisitor and RealMatrixChangingVisitor the same?

2015-12-29 Thread Ole Ersoy

Hi,

The RealMatrixPreservingVisitor and RealMatrixChangingVisitor files look nearly 
identical, except for a single @see Default... annotation (which I think is 
redundant anyway, since it duplicates the "All known implementing classes" 
listing?).  Would it make sense to remove the annotation and have 
RealMatrixChangingVisitor extend RealMatrixPreservingVisitor?
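
For illustration, here is a minimal sketch (mine, assuming the current 3.x
interfaces, where visit() returns the replacement value for the changing
visitor and void for the preserving one) of how the two are typically used
with walkInOptimizedOrder:

    import org.apache.commons.math3.linear.Array2DRowRealMatrix;
    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.RealMatrixChangingVisitor;
    import org.apache.commons.math3.linear.RealMatrixPreservingVisitor;

    public class VisitorSketch {
        public static void main(String[] args) {
            RealMatrix m = new Array2DRowRealMatrix(new double[][] {{1, 2}, {3, 4}});

            // Changing visitor: visit(...) returns the new entry value.
            m.walkInOptimizedOrder(new RealMatrixChangingVisitor() {
                public void start(int rows, int cols, int startRow, int endRow,
                                  int startColumn, int endColumn) {}
                public double visit(int row, int column, double value) {
                    return 2 * value; // entry is overwritten
                }
                public double end() { return 0; }
            });

            // Preserving visitor: visit(...) returns void; the matrix is read-only.
            final double[] sum = {0};
            double total = m.walkInOptimizedOrder(new RealMatrixPreservingVisitor() {
                public void start(int rows, int cols, int startRow, int endRow,
                                  int startColumn, int endColumn) {}
                public void visit(int row, int column, double value) {
                    sum[0] += value;
                }
                public double end() { return sum[0]; }
            });
            System.out.println(total); // 20.0 after doubling
        }
    }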

Cheers,
Ole

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [math] releasing 3.6

2015-12-29 Thread Thomas Neidhart
On 12/29/2015 07:39 PM, Luc Maisonobe wrote:
> Hi all,
> 
> A few weeks ago, I proposed to release 3.6. There were two
> points I wanted to address before that, both related to
> ODE. These points are now completed: the Adams methods
> stability issues have been fixed, and a bunch of field-based
> integrators are available.
> 
> There are 3 issues in JIRA that are tagged for 3.6:
> 
>  - MATH-1281 "Median" should not extend "Percentile"
>  - MATH-1285 Description API ZipfDistribution
>  - MATH-1308 Deprecate and remove "AbstractRandomGenerator"
> 
> MATH-1281 could probably not be solved in 3.6 (or in any 3.X),
> so I suggest to simply retag it for 4.0 only.
> 
> MATH-1285 seems complete to me, so I suggest to resolve it.

+1

> MATH-1308 is more 4.0-oriented and subject to experimentation.
> As we can do what we want in 4.0, it seems possible to *not*
> deprecate the class in 3.6, even if it completely disappears
> in 4.0 (furthermore as users probably are more concerned with
> the upper RandomGenerator interface or the lower specific
> implementations than with the intermediate abstract class).

Any objection to also applying the patch for MATH-1196?

> If you agree with this, I could cut an RC as soon as tomorrow,
> targeting a release for 2016-01-01 (so I would also change the
> copyright years throughout the library).
> 
> What do you think?

I fixed some compilation problems with Java 1.5. The site reports all look
fine.

Thomas

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [math] releasing 3.6

2015-12-29 Thread Luc Maisonobe
Le 29/12/2015 20:48, Phil Steitz a écrit :
> 
> 
>> On Dec 29, 2015, at 11:39 AM, Luc Maisonobe 
>> wrote:
>> 
>> Hi all,
>> 
>> A few weeks ago, I proposed to release 3.6. There were two points I
>> wanted to address before that, both related to ODE. These points
>> are now completed: the Adams methods stability issues have been
>> fixed, and a bunch of field-based integrators are available.
>> 
>> There are 3 issues in JIRA that are tagged for 3.6:
>> 
>> - MATH-1281 "Median" should not extend "Percentile"
>> - MATH-1285 Description API ZipfDistribution
>> - MATH-1308 Deprecate and remove "AbstractRandomGenerator"
>> 
>> MATH-1281 could probably not be solved in 3.6 (or in any 3.X), so I
>> suggest to simply retag it for 4.0 only.
>> 
>> MATH-1285 seems complete to me, so I suggest to resolve it.
>> 
>> MATH-1308 is more 4.0-oriented and subject to experimentation. As
>> we can do what we want in 4.0, it seems possible to *not* deprecate
>> the class in 3.6, even if it completely disappears in 4.0
>> (furthermore as users probably are more concerned with the upper
>> RandomGenerator interface or the lower specific implementations
>> than with the intermediate abstract class).
>> 
>> If you agree with this, I could cut an RC as soon as tomorrow, 
>> targeting a release for 2016-01-01 (so I would also change the 
>> copyright years throughout the library).
>> 
>> What do you think?
> 
> +1 to issue comments.
> 
> I am working on 2 other things that I would like to get into 3.6 if
> possible.
> 
> 1. Fix javadoc to allow Java 8 build.
> 
> 2. Implement an efficient small-sample exact 2-sample KS test.  I have
> found a reference and started work on this.  The current default impl
> uses Monte Carlo for relatively small samples and the convergence is
> poor, leading to dubious results.  I would really like to fix this.
> 
> How about giving me until Friday before RCs start?  If I don't finish,
> neither is a blocker for 3.6.  Thanks for pushing this along.

Sure. I'll wait until Friday.

best regards,
Luc

> 
> Phil
>> 
>> best regards, Luc
>> 


-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [math] releasing 3.6

2015-12-29 Thread Phil Steitz


> On Dec 29, 2015, at 11:39 AM, Luc Maisonobe  wrote:
> 
> Hi all,
> 
> A few weeks ago, I proposed to release 3.6. There were two
> points I wanted to address before that, both related to
> ODE. These points are now completed: the Adams methods
> stability issues have been fixed, and a bunch of field-based
> integrators are available.
> 
> There are 3 issues in JIRA that are tagged for 3.6:
> 
> - MATH-1281 "Median" should not extend "Percentile"
> - MATH-1285 Description API ZipfDistribution
> - MATH-1308 Deprecate and remove "AbstractRandomGenerator"
> 
> MATH-1281 could probably not be solved in 3.6 (or in any 3.X),
> so I suggest to simply retag it for 4.0 only.
> 
> MATH-1285 seems complete to me, so I suggest to resolve it.
> 
> MATH-1308 is more 4.0-oriented and subject to experimentation.
> As we can do what we want in 4.0, it seems possible to *not*
> deprecate the class in 3.6, even if it completely disappears
> in 4.0 (furthermore as users probably are more concerned with
> the upper RandomGenerator interface or the lower specific
> implementations than with the intermediate abstract class).
> 
> If you agree with this, I could cut an RC as soon as tomorrow,
> targeting a release for 2016-01-01 (so I would also change the
> copyright years throughout the library).
> 
> What do you think?

+1 to issue comments.

I am working on 2 other things that I would like to get into 3.6 if possible.  

1. Fix javadoc to allow Java 8 build.

2. Implement an efficient small-sample exact 2-sample KS test.  I have found a 
reference and started work on this.  The current default impl uses Monte Carlo 
for relatively small samples and the convergence is poor, leading to dubious 
results.  I would really like to fix this. 

How about giving me until Friday before RCs start?  If I don't finish, neither 
is a blocker for 3.6.  Thanks for pushing this along.

Phil
> 
> best regards,
> Luc
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
> For additional commands, e-mail: dev-h...@commons.apache.org
> 

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [math] releasing 3.6

2015-12-29 Thread Gary Gregory
I'm all for RERO.

Gary
On Dec 29, 2015 10:39 AM, "Luc Maisonobe"  wrote:

> Hi all,
>
> A few weeks ago, I proposed to release 3.6. There were two
> points I wanted to address before that, both related to
> ODE. These points are now completed: the Adams methods
> stability issues have been fixed, and a bunch of field-based
> integrators are available.
>
> There are 3 issues in JIRA that are tagged for 3.6:
>
>  - MATH-1281 "Median" should not extend "Percentile"
>  - MATH-1285 Description API ZipfDistribution
>  - MATH-1308 Deprecate and remove "AbstractRandomGenerator"
>
> MATH-1281 could probably not be solved in 3.6 (or in any 3.X),
> so I suggest to simply retag it for 4.0 only.
>
> MATH-1285 seems complete to me, so I suggest to resolve it.
>
> MATH-1308 is more 4.0-oriented and subject to experimentation.
> As we can do what we want in 4.0, it seems possible to *not*
> deprecate the class in 3.6, even if it completely disappears
> in 4.0 (furthermore as users probably are more concerned with
> the upper RandomGenerator interface or the lower specific
> implementations than with the intermediate abstract class).
>
> If you agree with this, I could cut an RC as soon as tomorrow,
> targeting a release for 2016-01-01 (so I would also change the
> copyright years throughout the library).
>
> What do you think?
>
> best regards,
> Luc
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
> For additional commands, e-mail: dev-h...@commons.apache.org
>
>


[math] releasing 3.6

2015-12-29 Thread Luc Maisonobe
Hi all,

A few weeks ago, I proposed to release 3.6. There were two
points I wanted to address before that, both related to
ODE. These points are now completed: the Adams methods
stability issues have been fixed, and a bunch of field-based
integrators are available.

There are 3 issues in JIRA that are tagged for 3.6:

 - MATH-1281 "Median" should not extend "Percentile"
 - MATH-1285 Description API ZipfDistribution
 - MATH-1308 Deprecate and remove "AbstractRandomGenerator"

MATH-1281 could probably not be solved in 3.6 (or in any 3.X),
so I suggest to simply retag it for 4.0 only.

MATH-1285 seems complete to me, so I suggest to resolve it.

MATH-1308 is more 4.0-oriented and subject to experimentation.
As we can do what we want in 4.0, it seems possible to *not*
deprecate the class in 3.6, even if it completely disappears
in 4.0 (furthermore as users probably are more concerned with
the upper RandomGenerator interface or the lower specific
implementations than with the intermediate abstract class).

If you agree with this, I could cut an RC as soon as tomorrow,
targeting a release for 2016-01-01 (so I would also change the
copyright years throughout the library).

What do you think?

best regards,
Luc

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [Math] About the refactoring of RNGs

2015-12-29 Thread Luc Maisonobe
hi all,

Le 29/12/2015 18:32, Phil Steitz a écrit :
> 
> 
>> On Dec 29, 2015, at 8:41 AM, Gilles 
>> wrote:
>> 
>>> On Mon, 28 Dec 2015 20:33:24 -0700, Phil Steitz wrote:
 On 12/28/15 8:08 PM, Gilles wrote:
> On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote: The
> significant refactoring to eliminate the (standard)
> next(int) included in these changes has the possibility of
> introducing subtle bugs or performance issues.  Please run
> some tests to verify that the same sequences are generated by
> the 3_X code
 
 IIUC our unit tests of the RNGs, this is covered.
>>> 
>>> No.  Not sufficient.  What you have done is changed the internal 
>>> implementation of all of the Bitstream generators.  I am not 
>>> convinced that you have not broken anything.  I will have to do
>>> the testing myself.  I see no point in fiddling with the
>>> internals of this code that has had a lot of eyeballs and testing
>>> on it.  I was not personally looking forward to researching the
>>> algorithms to make sure any invariants may be broken by these
>>> changes; but I am now going to have to do this.  I have to ask
>>> why.  Please at some point read [1], especially the sections on
>>> "Avoid Flexibility Syndrom" and "Value Laziness as a Virtue."
>>> Gratuitous refactoring drains community energy.
>> 
>> It seems that again you don't read what I write.[1] Hence the above
>> paragraph is spreading
>>   F -> "I am not convinced that you have not broken anything."
>>   U -> "I will have to do the testing myself."
>>   D -> "I see no point in fiddling [...]"
>> .
>> 
>> Or maybe I have to rant about email communication. Please reread
>> the thread to fully appreciate that you could have shared your
>> doubts among the opinions which you gave about some of my
>> questions. When the reply answers to only some of the direct
>> questions, the OP is legitimately tempted to assume that the
>> non-comment is akin to "I don't care" (as in "left to the judgment
>> of the one who does the job").
>> 
>> As is often the case, you can dislike my ideas of improvements
>> (ideas that come on the basis of the information provided by what
>> the code does) but I don't appreciate the use of the word
>> "gratuitous", given that I clearly stated the purpose of making
>> explicit what the code actually does (that is, *generating* 32
>> random bits and not the number of bits passed as a parameter to
>> "next(int)"). So, the code is now self-documenting; it is a small
>> change, to be sure, but hardly gratuitous.
> 
> I did not respond to the original question about eliminating
> next(int) because I was not (still am not) expert in the internals of
> how the generators work and how the generated ints-of-bits are
> transformed to make doubles, gaussians, etc.  I was hoping someone
> who was expert would chime in and say either "don't do that" or
> "that's ok".

I intend to look at these changes, but concentrated on the last
things I wanted to be included in 3.6 (another mail will come
about this later this evening).

I am not an expert either on PRNG, but did work on some of the
implementations we have, so maybe I will be able to give an
opinion.

One thing we could do is set up a branch for such an experimentation.
Branches in Git are simple. The work could then be merged back
to master once it has stabilized. It would also be easy for any
developer to switch back and forth between this feature branch
and master to get convinced there are no hidden problems.

> That did not happen before you committed the changes.
> That obligates us to review them.  The tests cover only nextInt so
> we need to convince ourselves that the transformations (which have
> all changed) are still correct.  That is in my mind just extra work
> generated for the community.  I would rather see our cycles go toward
> getting 3.6 out.

So let the dust settle down for a few days, I would really like
to finalize 3.6 too.

best regards,
Luc

> 
> I stand by my assertion that the rebasing to eliminate next(bits)
> adds nothing. If you can demonstrate via benchmarks that it is
> faster, I will retract that statement.
> 
> I know we disagree fundamentally on the priorities of [math].  To me,
> stability, correctness and ease of use (including ease of upgrades)
> are paramount.  That means you don't make implementation changes to
> hardened code without a good reason and you limit incompatible change
> to what is necessary to deliver practical benefit to users in new
> features or bug fixes.  I was +1 to dropping AbstractRandomGenerator
> and just leaving the bitstream generators in place.  That is a
> simplifying change.
>> 
>> I actually did go some way towards "Avoiding [a] Flexibility
>> Syndrom" that *was* present in a seeming flexibility of
>> "next(int)". This method was indeed letting users assume that it is
>> able to generate _less than 32 bits_ whereas the randomness
>> generators implemented in CM *always* generate exactly 32
>> (hopefully random) bits.

Re: [Math] About the refactoring of RNGs

2015-12-29 Thread Phil Steitz


> On Dec 29, 2015, at 8:41 AM, Gilles  wrote:
> 
>> On Mon, 28 Dec 2015 20:33:24 -0700, Phil Steitz wrote:
>>> On 12/28/15 8:08 PM, Gilles wrote:
 On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:
 The significant refactoring to eliminate the (standard) next(int)
 included in these changes has the possibility of introducing subtle
 bugs or performance issues.  Please run some tests to verify that
 the same sequences are generated by the 3_X code
>>> 
>>> IIUC our unit tests of the RNGs, this is covered.
>> 
>> No.  Not sufficient.  What you have done is changed the internal
>> implementation of all of the Bitstream generators.  I am not
>> convinced that you have not broken anything.  I will have to do the
>> testing myself.  I see no point in fiddling with the internals of
>> this code that has had a lot of eyeballs and testing on it.  I was
>> not personally looking forward to researching the algorithms to make
>> sure any invariants may be broken by these changes; but I am now
>> going to have to do this.  I have to ask why.  Please at some point
>> read [1], especially the sections on "Avoid Flexibility Syndrom" and
>> "Value Laziness as a Virtue."  Gratuitous refactoring drains
>> community energy.
> 
> It seems that again you don't read what I write.[1]
> Hence the above paragraph is spreading
> F -> "I am not convinced that you have not broken anything."
> U -> "I will have to do the testing myself."
> D -> "I see no point in fiddling [...]"
> .
> 
> Or maybe I have to rant about email communication.
> Please reread the thread to fully appreciate that you could have shared
> your doubts among the opinions which you gave about some of my questions.
> When the reply answers to only some of the direct questions, the OP is
> legitimately tempted to assume that the non-comment is akin to "I don't
> care" (as in "left to the judgment of the one who does the job").
> 
> As is often the case, you can dislike my ideas of improvements (ideas
> that come on the basis of the information provided by what the code
> does) but I don't appreciate the use of the word "gratuitous", given
> that I clearly stated the purpose of making explicit what the code
> actually does (that is, *generating* 32 random bits and not the number
> of bits passed as a parameter to "next(int)").
> So, the code is now self-documenting; it is a small change, to be sure,
> but hardly gratuitous.

I did not respond to the original question about eliminating next(int) because 
I was not (still am not) expert in the internals of how the generators work and 
how the generated ints-of-bits are transformed to make doubles, gaussians, etc. 
 I was hoping someone who was expert would chime in and say either "don't do 
that" or "that's ok".  That did not happen before you committed the changes.  
That obligates us to review them.  The tests cover only nextInt, so we need to 
convince ourselves that the transformations (which have all changed) are still 
correct.  That is in my mind just extra work generated for the community.  I 
would rather see our cycles go toward getting 3.6 out.
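
A hedged sketch of the kind of sequence check being discussed (the test class
and the recorded values are placeholders, not actual 3_X output): pin the
sequence from a fixed seed and compare it against values captured with the
unmodified 3_X code.

    import org.apache.commons.math3.random.Well1024a;
    import org.junit.Assert;
    import org.junit.Test;

    public class SequenceRegressionSketchTest {
        @Test
        public void sameSequenceAsRecordedFrom3X() {
            // Values to be filled in by running the unmodified 3_X generator
            // with the same seed; left empty here on purpose.
            int[] recordedFrom3X = {};
            Well1024a rng = new Well1024a(20151229L);
            for (int expected : recordedFrom3X) {
                Assert.assertEquals(expected, rng.nextInt());
            }
        }
    }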

I stand by my assertion that the rebasing to eliminate next(bits) adds nothing. 
If you can demonstrate via benchmarks that it is faster, I will retract that 
statement.

I know we disagree fundamentally on the priorities of [math].  To me, 
stability, correctness and ease of use (including ease of upgrades) are 
paramount.  That means you don't make implementation changes to hardened code 
without a good reason and you limit incompatible change to what is necessary to 
deliver practical benefit to users in new features or bug fixes.  I was +1 to 
dropping AbstractRandomGenerator and just leaving the bitstream generators in 
place.  That is a simplifying change.
> 
> I actually did go some way towards "Avoiding [a] Flexibility Syndrom"
> that *was* present in a seeming flexibility of "next(int)".
> This method was indeed letting users assume that it is able to generate
> _less than 32 bits_ whereas the randomness generators implemented in CM
> *always* generate exactly 32 (hopefully random) bits.
> 
> I stand guilty on the last count as I do indeed not always "value
> laziness as a virtue": I indeed attribute more value to design (and code)
> aesthetics than to laziness.
> IMO, ugly code is often an early hint that the design is broken, even
> if the functionality may not be (yet?).
> [But note again that this change was not "just because of aesthetic
> reasons"; in the wording of your reference, I think that "just because"
> is important.]
> 
> In a real community,[2] you'd value that some people are willing to
> tackle different tasks than you do.
> Rather than stifling any change on the sole ground that it is a change,
> it would be less "draining" on the community if reviewers would only
> voice concrete concerns about the resulting code, and not just assume
> that the coder's motivation is pointless.[3]
> 
> I understand that something c

Re: [Math] About the refactoring of RNGs

2015-12-29 Thread Thomas Neidhart
On 12/29/2015 05:10 PM, Gilles wrote:
> On Tue, 29 Dec 2015 10:33:15 +0100, Luc Maisonobe wrote:
>> Hi all,
>>
>> Le 29/12/2015 09:21, Thomas Neidhart a écrit :
>>> On 12/29/2015 04:33 AM, Phil Steitz wrote:
 On 12/28/15 8:08 PM, Gilles wrote:
> On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:
>> The significant refactoring to eliminate the (standard) next(int)
>> included in these changes has the possibility of introducing subtle
>> bugs or performance issues.  Please run some tests to verify that
>> the same sequences are generated by the 3_X code
>
> IIUC our unit tests of the RNGs, this is covered.

 No.  Not sufficient.  What you have done is changed the internal
 implementation of all of the Bitstream generators.  I am not
 convinced that you have not broken anything.  I will have to do the
 testing myself.  I see no point in fiddling with the internals of
 this code that has had a lot of eyeballs and testing on it.  I was
 not personally looking forward to researching the algorithms to make
 sure any invariants may be broken by these changes; but I am now
 going to have to do this.  I have to ask why.  Please at some point
 read [1], especially the sections on "Avoid Flexibility Syndrom" and
 "Value Laziness as a Virtue."  Gratuitous refactoring drains
 community energy.
>>>
>>> +1, on top of that I think we should aim to refactor the parts that
>>> really need refactoring
> 
> Though I would have liked to say as much about the parts of the library
> where my changes were much criticized because they failed to produce
> a perfect design (which the previous one wasn't either), I would have
> refrained from telling volunteers what they should or should not do.

When I saw the commit I was already thinking of writing a comment
about it, but refrained until Phil did so. Afaict, your refactoring
moved the bit-shifting logic from one method (next(int)) to others
(nextDouble(), ...) in a slightly different way. On top of that, some
classes have been removed and added, and I do not clearly see how this
has improved anything, but maybe it was a change in the right direction,
as Luc pointed out.

Nobody is blaming anybody, I believe; instead I have the impression
that people are focused on good technical solutions, and this sometimes
means that you get criticism (I got a lot of it as well, which can be
quite frustrating).

I just got the impression lately that some people want to completely
refactor CM for the 4.0 release, and I wonder whether this will do the
project any good, especially in areas where there is really no need to
refactor anything further.

>>> and try to keep the number of incompatibilities
>>> to the 3_X branch as minimal as possible.
> 
> I clearly and not surprisingly do not subscribe to that goal.
> And the recent discussions about RERO and "experimental" releases
> certainly were getting to a completely different consensus.

I am not sure what this has to do with my comment. RERO just means that
we should release more often, but you can only do this if you are not
introducing incompatibilities.

The discussion about "experimental" releases was just brain-storming
imho. There was no decision or real consensus on how to achieve this,
just some ideas that are at best very non-standard and in some cases
impractical.

I fear that Apache Commons is not the right place for doing such things.

Btw, I proposed to use the scheme from Guava and explicitly mark new
features with a Beta annotation to allow possibly incompatible changes
later on, which I find much more practical, especially considering the
limited man-power and release cycles.
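
For reference, a minimal sketch of such a marker annotation (modeled on
Guava's @Beta; the name, retention and targets here are my assumptions, not
an existing CM type):

    import java.lang.annotation.Documented;
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    /** Marks new API that may still change incompatibly in a later release. */
    @Documented
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.TYPE, ElementType.METHOD,
             ElementType.CONSTRUCTOR, ElementType.FIELD})
    public @interface Beta {
    }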

Thomas

>> and the refactored
>> code and benchmarks to show there is no loss in performance.
>
> Given that there are exactly two operations _less_ (a subtraction
> and a shift), it would be surprising.
>
>> It
>> would also be good to have some additional review of this code by
>> PRNG experts.
>
> The "nextInt()" code is exactly the same as the "next(int)" modulo
> the little change above (in the last line of the "nextInt/next"
> code).
>
> That change in "nextInt/next" implied similarly tiny recodings in
> the generic methods "nextDouble()", "nextBoolean()", ... which, apart
> from that, were copied from "BitsStreamGenerator".
>
> [However tiny a change, I had made a mistake... and dozens of tests
> started to fail. Found the typo and all was quiet again...]
>
> About "next(int)" being standard, it would be interesting to know
> what that means.
>>
>> In all the papers I have read concerning pseudo random number
>> generation, the basic model was based on small chunks of bits,
>> much of the time the size of an int because this is what computers
>> manage directly (they have no provision to manage chunks of 5 or
>> 11 bits for example).
>>
>> Deriving other primitive types from t

Re: [Math] About the refactoring of RNGs (Was: [01/18] [math] MATH-1307)

2015-12-29 Thread Phil Steitz
On 12/29/15 2:33 AM, Luc Maisonobe wrote:
> Hi all,
>
> Le 29/12/2015 09:21, Thomas Neidhart a écrit :
>> On 12/29/2015 04:33 AM, Phil Steitz wrote:
>>> On 12/28/15 8:08 PM, Gilles wrote:
 On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:
> The significant refactoring to eliminate the (standard) next(int)
> included in these changes has the possibility of introducing subtle
> bugs or performance issues.  Please run some tests to verify that
> the same sequences are generated by the 3_X code
 IIUC our unit tests of the RNGs, this is covered.
>>> No.  Not sufficient.  What you have done is changed the internal
>>> implementation of all of the Bitstream generators.  I am not
>>> convinced that you have not broken anything.  I will have to do the
>>> testing myself.  I see no point in fiddling with the internals of
>>> this code that has had a lot of eyeballs and testing on it.  I was
>>> not personally looking forward to researching the algorithms to make
>>> sure any invariants may be broken by these changes; but I am now
>>> going to have to do this.  I have to ask why.  Please at some point
>>> read [1], especially the sections on "Avoid Flexibility Syndrom" and
>>> "Value Laziness as a Virtue."  Gratuitous refactoring drains
>>> community energy. 
>> +1, on top of that I think we should aim to refactor the parts that
>> really need refactoring and try to keep the number of incompatibilities
>> to the 3_X branch as minimal as possible.
>>
>> Thomas
>>
> and the refactored
> code and benchmarks to show there is no loss in performance.
 Given that there are exactly two operations _less_ (a subtraction
 and a shift), it would be surprising.

> It
> would also be good to have some additional review of this code by
> PRNG experts.
 The "nextInt()" code is exactly the same as the "next(int)" modulo
 the little change above (in the last line of the "nextInt/next"
 code).

 That change in "nextInt/next" implied similarly tiny recodings in
 the generic methods "nextDouble()", "nextBoolean()", ... which, apart
 from that, were copied from "BitsStreamGenerator".

 [However tiny a change, I had made a mistake... and dozens of tests
 started to fail. Found the typo and all was quiet again...]

 About "next(int)" being standard, it would be interesting to know
 what that means.
> In all the papers I have read concerning pseudo random number
> generation, the basic model was based on small chunks of bits,
> much of the time the size of an int because this is what computers
> manage directly (they have no provision to manage chunks of 5 or
> 11 bits for example).

That's what I thought.  Internally, is it correct to assume that the
generation is always in int-sized blocks so Gilles is correct that
the bits parameter is only used for output truncation?
>
> Deriving other primitive types from this (boolean, long, double) is
> really an add-on. I even asked an expert (Pierre L'Ecuyer, if I
> remember well) for some explanations about converting to double
> (which is simply done by multiplying by a constant representing the
> weight of the least significant bit, in order to constrain the range
> to [0; 1]). His answer was that this ensures that the theoretical
> mathematical proofs that apply to the uniform distribution still
> apply, as only this case (uniformity over a multi-dimensional
> integral grid) has been studied. It seems nothing has been studied
> about using the exponential features of the floating-point
> representation in relation to generating doubles directly.

Makes sense.  The - excellent - tests included with the Well,
Mersenne and Isaac generators verify (as Gilles states) that
nextInt() itself is likely unaffected by the changes above.  What I
am worried about is the conversions.  Do those changes look correct
to you?  That is what needs to be carefully reviewed and tested. 
>
> Hence everybody starts from int, and the mathematicians proved to us
> that this method works and that some properties are preserved
> (multi-dimensional independence, long period, ...) that are typically
> essential for Monte-Carlo analyses.
>
> I know nothing about random number generation for secure applications
> like cryptography, except that it requires completely different
> properties, often completely opposite to what is needed for
> Monte-Carlo analysis. As an example, it should be impossible to
> reproduce a secure sequence (it cannot be deterministic), whereas in
> Monte-Carlo we *want* it to be reproducible if we reuse the same seed.
>
>>> Have a look at the source code for the JDK generators, for example.
 As I indicated quite clearly in one of my first posts about this
 refactoring
 1. all the CM implementations generate random bits in batches
of 32 bits, and
 2. before returning, the "next(int bits)" method was truncating
the generated "int":
  return x >>> (32 - bits);

Re: [Math] About the refactoring of RNGs

2015-12-29 Thread Gilles

On Tue, 29 Dec 2015 10:33:15 +0100, Luc Maisonobe wrote:

Hi all,

Le 29/12/2015 09:21, Thomas Neidhart a écrit :

On 12/29/2015 04:33 AM, Phil Steitz wrote:

On 12/28/15 8:08 PM, Gilles wrote:

On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:

The significant refactoring to eliminate the (standard) next(int)
included in these changes has the possibility of introducing subtle
bugs or performance issues.  Please run some tests to verify that
the same sequences are generated by the 3_X code


IIUC our unit tests of the RNGs, this is covered.


No.  Not sufficient.  What you have done is changed the internal
implementation of all of the Bitstream generators.  I am not
convinced that you have not broken anything.  I will have to do the
testing myself.  I see no point in fiddling with the internals of
this code that has had a lot of eyeballs and testing on it.  I was
not personally looking forward to researching the algorithms to make
sure any invariants may be broken by these changes; but I am now
going to have to do this.  I have to ask why.  Please at some point
read [1], especially the sections on "Avoid Flexibility Syndrom" and
"Value Laziness as a Virtue."  Gratuitous refactoring drains
community energy.


+1, on top of that I think we should aim to refactor the parts that
really need refactoring


Though I would have liked to say as much about the parts of the library
where my changes were much criticized because they failed to produce
a perfect design (which the previous one wasn't either), I would have
refrained from telling volunteers what they should or should not do.


and try to keep the number of incompatibilities
to the 3_X branch as minimal as possible.


I clearly and not surprisingly do not subscribe to that goal.
And the recent discussions about RERO and "experimental" releases
certainly were getting to a completely different consensus.



Thomas





and the refactored
code and benchmarks to show there is no loss in performance.


Given that there are exactly two operations _less_ (a subtraction
and a shift), it would be surprising.


It
would also be good to have some additional review of this code by
PRNG experts.


The "nextInt()" code is exactly the same as the "next(int)" modulo
the little change above (in the last line of the "nextInt/next"
code).

That change in "nextInt/next" implied similarly tiny recodings in
the generic methods "nextDouble()", "nextBoolean()", ... which, 
apart

from that, were copied from "BitsStreamGenerator".

[However tiny a change, I had made a mistake... and dozens of tests
started to fail. Found the typo and all was quiet again...]

About "next(int)" being standard, it would be interesting to know
what that means.


In all the papers I have read concerning pseudo random number
generation, the basic model was based on small chunks of bits,
much of the time the size of an int because this is what computers
manage directly (they have no provision to manage chunks of 5 or
11 bits for example).

Deriving other primitive types from this (boolean, long, double) is
really an add-on. I even asked an expert (Pierre L'Ecuyer, if I
remember well) for some explanations about converting to double
(which is simply done by multiplying by a constant representing the
weight of the least significant bit, in order to constrain the range
to [0; 1]). His answer was that this ensures that the theoretical
mathematical proofs that apply to the uniform distribution still
apply, as only this case (uniformity over a multi-dimensional
integral grid) has been studied. It seems nothing has been studied
about using the exponential features of the floating-point
representation in relation to generating doubles directly.

Hence everybody starts from int,


[Or a "long", as I could observe in some other source codes.]

Hence, do you agree that my move to "nextInt()" was a sensible one?


and the mathematicians proved to us that
this method works and that some properties are preserved
(multi-dimensional independence, long period, ...) that are typically
essential for Monte-Carlo analyses.

I know nothing about random number generation for secure applications
like cryptography, except that it requires completely different
properties, often completely opposite to what is needed for
Monte-Carlo analysis. As an example, it should be impossible to
reproduce a secure sequence (it cannot be deterministic), whereas in
Monte-Carlo we *want* it to be reproducible if we reuse the same seed.




Have a look at the source code for the JDK generators, for example.

As I indicated quite clearly in one of my first posts about this
refactoring
1. all the CM implementations generate random bits in batches
   of 32 bits, and
2. before returning, the "next(int bits)" method was truncating
   the generated "int":
 return x >>> (32 - bits);

In all implementations, that was the only place where the "bits"
parameter was used, from which I concluded that the randomness
provider does not care if the request was to create less than 32
random bits.

Re: [Math] About the refactoring of RNGs

2015-12-29 Thread Gilles

On Mon, 28 Dec 2015 20:33:24 -0700, Phil Steitz wrote:

On 12/28/15 8:08 PM, Gilles wrote:

On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:

The significant refactoring to eliminate the (standard) next(int)
included in these changes has the possibility of introducing subtle
bugs or performance issues.  Please run some tests to verify that
the same sequences are generated by the 3_X code


IIUC our unit tests of the RNGs, this is covered.


No.  Not sufficient.  What you have done is changed the internal
implementation of all of the Bitstream generators.  I am not
convinced that you have not broken anything.  I will have to do the
testing myself.  I see no point in fiddling with the internals of
this code that has had a lot of eyeballs and testing on it.  I was
not personally looking forward to researching the algorithms to make
sure any invariants may be broken by these changes; but I am now
going to have to do this.  I have to ask why.  Please at some point
read [1], especially the sections on "Avoid Flexibility Syndrom" and
"Value Laziness as a Virtue."  Gratuitous refactoring drains
community energy.


It seems that again you don't read what I write.[1]
Hence the above paragraph is spreading
 F -> "I am not convinced that you have not broken anything."
 U -> "I will have to do the testing myself."
 D -> "I see no point in fiddling [...]"
.

Or maybe I have to rant about email communication.
Please reread the thread to fully appreciate that you could have shared
your doubts among the opinions which you gave about some of my 
questions.

When the reply answers to only some of the direct questions, the OP is
legitimately tempted to assume that the non-comment is akin to "I don't
care" (as in "left to the judgment of the one who does the job").

As is often the case, you can dislike my ideas of improvements (ideas
that come on the basis of the information provided by what the code
does) but I don't appreciate the use of the word "gratuitous", given
that I clearly stated the purpose of making explicit what the code
actually does (that is, *generating* 32 random bits and not the number
of bits passed as a parameter to "next(int)").
So, the code is now self-documenting; it is a small change, to be sure,
but hardly gratuitous.

I actually did go some way towards "Avoiding [a] Flexibility Syndrom"
that *was* present in a seeming flexibility of "next(int)".
This method was indeed letting users assume that it is able to generate
_less than 32 bits_ whereas the randomness generators implemented in CM
*always* generate exactly 32 (hopefully random) bits.
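
To make that concrete, a minimal sketch (a simplified, hypothetical base
class, not the actual CM sources) of the contrast: the old next(int bits)
only truncated an already-generated 32-bit word, so the generic methods can
be written directly against nextInt().

    public abstract class BitsProviderSketch {

        /** Subclasses supply exactly 32 (hopefully random) bits per call. */
        protected abstract int nextInt();

        /** Old style: the "bits" argument only truncated the 32-bit word. */
        protected int next(int bits) {
            return nextInt() >>> (32 - bits);
        }

        /** Generic method written against the 32-bit primitive directly. */
        public boolean nextBoolean() {
            return (nextInt() >>> 31) != 0; // old form: next(1) != 0
        }
    }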

I stand guilty on the last count as I do indeed not always "value
laziness as a virtue": I indeed attribute more value to design (and 
code)

aesthetics than to laziness.
IMO, ugly code is often an early hint that the design is broken, even
if the functionality may not be (yet?).
[But note again that this change was not "just because of aesthetic
reasons"; in the wording of your reference, I think that "just because"
is important.]

In a real community,[2] you'd value that some people are willing to
tackle different tasks than you do.
Rather than stifling any change on the sole ground that it is a change,
it would be less "draining" on the community if reviewers would only
voice concrete concerns about the resulting code, and not just assume
that the coder's motivation is pointless.[3]

I understand that something can more likely become broken when it is
being touched rather than when left alone.  Of course, I do!
But with the help of an extensive and sensitive test suite, I felt I
could give this small refactoring a try, being fairly confident that
mistakes would not go unnoticed.
Your doubting that the test suite could let this happen should question
our assuming that it could assess the correct behaviour of the previous
code. Alternately, all of the numerous tests passing should mean that the
new code is not buggier; and visual inspection can assess that it cannot
be slower.[4]

My last thought about how standard the method "next(int)" is, I let it
be conveyed by what is not mentioned in the following unquestionably
standard source:
  
http://docs.oracle.com/javase/8/docs/api/java/util/SplittableRandom.html


[I could have made additional comments on how various suggestions in
  http://www.apachecon.com/eu2007/materials/ac2006.2.pdf
are either not applied in the CM project, or not taken "with a grain of
salt" whenever it suits you.  But this post has already drained me far
too much.]

Gilles

[1] I can make mistakes (and I did, as told previously) in "fiddling" with
the code, but that can be spotted relatively easily by inspecting three
one-line change commits and their few-line consequences on the generic
methods where "nextInt()" replaced "next(int)", mutatis mutandis, in a
single and quite small class.
That is a far cry from "researching the algorithms".
[2] To be contrasted with the "common good" for which your stand 
ag

Early Access builds b99 for JDK 9 & build b96 for JDK 9 with Project Jigsaw are available on java.net

2015-12-29 Thread Rory O'Donnell


Hi Benedikt,

Early Access build b99 for JDK 9 is available on java.net; a summary of
changes is listed here.

Early Access build b96 for JDK 9 with Project Jigsaw is available on
java.net; a summary of changes is listed here.

We have reached a milestone of 100 bugs logged by Open Source projects;
thank you for your continued support in testing Early Access builds based
on various OpenJDK Projects.

Best wishes for the New Year; hope to catch up with you at FOSDEM in
January.


Rgds, Rory

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



Re: [Math] About the refactoring of RNGs (Was: [01/18] [math] MATH-1307)

2015-12-29 Thread Luc Maisonobe
Hi all,

Le 29/12/2015 09:21, Thomas Neidhart a écrit :
> On 12/29/2015 04:33 AM, Phil Steitz wrote:
>> On 12/28/15 8:08 PM, Gilles wrote:
>>> On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:
 The significant refactoring to eliminate the (standard) next(int)
 included in these changes has the possibility of introducing subtle
 bugs or performance issues.  Please run some tests to verify that
 the same sequences are generated by the 3_X code
>>>
>>> IIUC our unit tests of the RNGs, this is covered.
>>
>> No.  Not sufficient.  What you have done is changed the internal
>> implementation of all of the Bitstream generators.  I am not
>> convinced that you have not broken anything.  I will have to do the
>> testing myself.  I see no point in fiddling with the internals of
>> this code that has had a lot of eyeballs and testing on it.  I was
>> not personally looking forward to researching the algorithms to make
>> sure any invariants may be broken by these changes; but I am now
>> going to have to do this.  I have to ask why.  Please at some point
>> read [1], especially the sections on "Avoid Flexibility Syndrom" and
>> "Value Laziness as a Virtue."  Gratuitous refactoring drains
>> community energy. 
> 
> +1, on top of that I think we should aim to refactor the parts that
> really need refactoring and try to keep the number of incompatibilities
> to the 3_X branch as minimal as possible.
> 
> Thomas
> 
 and the refactored
 code and benchmarks to show there is no loss in performance.
>>>
>>> Given that there are exactly two operations _less_ (a subtraction
>>> and a shift), it would be surprising.
>>>
 It
 would also be good to have some additional review of this code by
 PRNG experts.
>>>
>>> The "nextInt()" code is exactly the same as the "next(int)" modulo
>>> the little change above (in the last line of the "nextInt/next"
>>> code).
>>>
>>> That change in "nextInt/next" implied similarly tiny recodings in
>>> the generic methods "nextDouble()", "nextBoolean()", ... which, apart
>>> from that, were copied from "BitsStreamGenerator".
>>>
>>> [However tiny a change, I had made a mistake... and dozens of tests
>>> started to fail. Found the typo and all was quiet again...]
>>>
>>> About "next(int)" being standard, it would be interesting to know
>>> what that means.

In all the papers I have read concerning pseudo random number
generation, the basic model was based on small chunks of bits,
much of the time the size of an int because this is what computers
manage directly (they have no provision to manage chunks of 5 or
11 bits for example).

Deriving other primitive types from this (boolean, long, double) is
really an add-on. I even asked an expert (Pierre L'Ecuyer, if I
remember well) for some explanations about converting to double
(which is simply done by multiplying by a constant representing the
weight of the least significant bit, in order to constrain the range
to [0; 1]). His answer was that this ensures that the theoretical
mathematical proofs that apply to the uniform distribution still
apply, as only this case (uniformity over a multi-dimensional
integral grid) has been studied. It seems nothing has been studied
about using the exponential features of the floating-point
representation in relation to generating doubles directly.
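
A minimal sketch (mine, not the CM implementation) of the conversion
described above: keep 52 random bits and scale by the weight of the least
significant bit so the result falls in [0, 1).

    public final class UniformDoubleSketch {
        private UniformDoubleSketch() {}

        /** Builds a double in [0, 1) from two 32-bit random words. */
        public static double toDouble(int highBits, int lowBits) {
            // Keep 26 bits from each word -> 52 significant bits, then
            // multiply by 2^-52, the weight of the least significant bit.
            long high = ((long) (highBits >>> 6)) << 26;
            long low  = lowBits >>> 6;
            return (high | low) * 0x1.0p-52;
        }

        public static void main(String[] args) {
            java.util.Random r = new java.util.Random(42);
            System.out.println(toDouble(r.nextInt(), r.nextInt()));
        }
    }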

Hence everybody starts from int, and the mathematicians proved to us that
this method works and that some properties are preserved (multi-dimensional
independence, long period, ...) that are typically essential for
Monte-Carlo analyses.

I know nothing about random number generation for secure applications
like cryptography, except that it requires completely different
properties, often completely opposite to what is needed for
Monte-Carlo analysis. As an example, it should be impossible to
reproduce a secure sequence (it cannot be deterministic), whereas in
Monte-Carlo we *want* it to be reproducible if we reuse the same seed.
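
As an illustration of that reproducibility requirement, a small sketch using
one of the existing CM generators (Well19937c; the seed value is arbitrary):

    import org.apache.commons.math3.random.Well19937c;

    public class ReproducibleSketch {
        public static void main(String[] args) {
            Well19937c g1 = new Well19937c(12345L);
            Well19937c g2 = new Well19937c(12345L);
            // Same seed -> identical sequence, as needed for Monte-Carlo runs.
            System.out.println(g1.nextDouble() == g2.nextDouble()); // true
        }
    }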

>>
>> Have a look at the source code for the JDK generators, for example.
>>> As I indicated quite clearly in one of my first posts about this
>>> refactoring
>>> 1. all the CM implementations generate random bits in batches
>>>of 32 bits, and
>>> 2. before returning, the "next(int bits)" method was truncating
>>>the generated "int":
>>>  return x >>> (32 - bits);
>>>
>>> In all implementations, that was the only place where the "bits"
>>> parameter was used, from which I concluded that the randomness
>>> provider does not care if the request was to create less than 32
>>> random bits.
>>> Taking "nextBoolean()" for example, it looks like a waste of 31
>>> bits (or am I missing something?).
>>
>> Quite possibly, yes, you are missing something.

I would guess it is linked to performance considerations. Pseudo-random
number generation is sometimes put under very heavy stress, with billions
of numbers generated. It should run extremely fast, and the algorithms
have been designed to have tremendously l

Re: [Math] About the refactoring of RNGs (Was: [01/18] [math] MATH-1307)

2015-12-29 Thread Thomas Neidhart
On 12/29/2015 04:33 AM, Phil Steitz wrote:
> On 12/28/15 8:08 PM, Gilles wrote:
>> On Mon, 28 Dec 2015 11:08:56 -0700, Phil Steitz wrote:
>>> The significant refactoring to eliminate the (standard) next(int)
>>> included in these changes has the possibility of introducing subtle
>>> bugs or performance issues.  Please run some tests to verify that
>>> the same sequences are generated by the 3_X code
>>
>> IIUC our unit tests of the RNGs, this is covered.
> 
> No.  Not sufficient.  What you have done is changed the internal
> implementation of all of the Bitstream generators.  I am not
> convinced that you have not broken anything.  I will have to do the
> testing myself.  I see no point in fiddling with the internals of
> this code that has had a lot of eyeballs and testing on it.  I was
> not personally looking forward to researching the algorithms to make
> sure any invariants may be broken by these changes; but I am now
> going to have to do this.  I have to ask why.  Please at some point
> read [1], especially the sections on "Avoid Flexibility Syndrom" and
> "Value Laziness as a Virtue."  Gratuitous refactoring drains
> community energy. 

+1, on top of that I think we should aim to refactor the parts that
really need refactoring and try to keep the number of incompatibilities
to the 3_X branch as minimal as possible.

Thomas

>>> and the refactored
>>> code and benchmarks to show there is no loss in performance.
>>
>> Given that there are exactly two operations _less_ (a subtraction
>> and a shift), it would be surprising.
>>
>>> It
>>> would also be good to have some additional review of this code by
>>> PRNG experts.
>>
>> The "nextInt()" code is exactly the same as the "next(int)" modulo
>> the little change above (in the last line of the "nextInt/next"
>> code).
>>
>> That change in "nextInt/next" implied similarly tiny recodings in
>> the generic methods "nextDouble()", "nextBoolean()", ... which, apart
>> from that, were copied from "BitsStreamGenerator".
>>
>> [However tiny a change, I had made a mistake... and dozens of tests
>> started to fail. Found the typo and all was quiet again...]
>>
>> About "next(int)" being standard, it would be interesting to know
>> what that means.
> 
> Have a look at the source code for the JDK generators, for example.
>> As I indicated quite clearly in one of my first posts about this
>> refactoring
>> 1. all the CM implementations generate random bits in batches
>>of 32 bits, and
>> 2. before returning, the "next(int bits)" method was truncating
>>the generated "int":
>>  return x >>> (32 - bits);
>>
>> In all implementations, that was the only place where the "bits"
>> parameter was used, from which I concluded that the randomness
>> provider does not care if the request was to create less than 32
>> random bits.
>> Taking "nextBoolean()" for example, it looks like a waste of 31
>> bits (or am I missing something?).
> 
> Quite possibly, yes, you are missing something.
>>
>> Of course, if some implementation were able to store the bits not
>> requested by the last call to "next(int)", then I'd understand that
>> we must really provide access to a "next(int)" method.
>>
>> Perhaps that the overhead of such bookkeeping is why the practical
>> algorithms chose to store integers rather than bits (?).
>>
>> As you dismissed my request about CM being able to care for a RNG
>> implementation based on a "long", I don't quite understand the
>> caring for a "next(int)" that serves no more purpose (as of current
>> CM).
>>
> This change is
>>
>> Gilles
>>
>>
>>> Phil
>>>
>>> On 12/28/15 10:23 AM, er...@apache.org wrote:
 Repository: commons-math
 Updated Branches:
   refs/heads/master 7b62d0155 -> 81585a3c4


 MATH-1307

 New base class for RNG implementations.
 The source of randomness is provided through the "nextInt()"
 method (to be defined in subclasses).


 [...]
>>
> [1] http://www.apachecon.com/eu2007/materials/ac2006.2.pdf
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
>> For additional commands, e-mail: dev-h...@commons.apache.org
>>
>>
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
> For additional commands, e-mail: dev-h...@commons.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org