[jira] [Issue Comment Edited] (MATH-650) FastMath has static code which slows the first access to FastMath

2012-02-11 Thread Luc Maisonobe (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13206095#comment-13206095
 ] 

Luc Maisonobe edited comment on MATH-650 at 2/11/12 12:44 PM:
--

I would like to resolve this issue.
I think everybody agreed pre-computation was an improvement, and the remaining 
discussions are about the way the pre-computed tables are loaded. We have two 
competing implementations for that: resources loaded from a binary file and 
literal arrays compiled into the library. In both cases, the data is generated 
beforehand by our own code, so in both cases users who want to re-generate it 
with different settings can do so.

The advantages of resources are that they are more compact (binary) and don't 
clutter the sources with large tables.
The advantages of literal arrays are that they can be checked by mere human 
beings and don't require any support code.
The speed difference between the two implementations exists but is tiny.

Two people have already expressed their preferences (one favoring resources, the 
other favoring literal arrays). I don't have a clear-cut preference myself 
(only a slight bias toward one solution, which I prefer to keep to myself).

I would like to have the opinions of newer developers and users on this before 
selecting one approach. Could someone please provide another opinion?
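To make the two candidate strategies concrete, here is a minimal, self-contained sketch: a literal array compiled into the class file versus the same values read back from a binary stream standing in for a resource file. The table values and the stream layout (an int count followed by raw doubles) are illustrative assumptions, not the actual FastMath data or format.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class TableLoading {

    /** Strategy 1: literal array compiled into the class file. */
    static final double[] LITERAL_TABLE = {1.0, 0.5, 0.25, 0.125};

    /** Strategy 2: the same values read from a binary resource stream. */
    static double[] loadFromStream(DataInputStream in) throws IOException {
        int n = in.readInt();
        double[] table = new double[n];
        for (int i = 0; i < n; i++) {
            table[i] = in.readDouble();
        }
        return table;
    }

    /** Helper standing in for the pre-generated binary resource file. */
    static byte[] encode(double[] table) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(table.length);
        for (double v : table) {
            out.writeDouble(v);
        }
        out.flush();
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] resource = encode(LITERAL_TABLE);
        double[] loaded =
            loadFromStream(new DataInputStream(new ByteArrayInputStream(resource)));
        // Both strategies must yield bit-identical tables.
        System.out.println(Arrays.equals(LITERAL_TABLE, loaded));
    }
}
```

Whichever strategy is chosen, the generated data is identical; the debate above is only about where it lives and how it is materialized at class-load time.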

 FastMath has static code which slows the first access to FastMath
 -

 Key: MATH-650
 URL: https://issues.apache.org/jira/browse/MATH-650
 Project: Commons Math
  Issue Type: Improvement
Affects Versions: Nightly Builds
 Environment: Android 2.3 (Dalvik VM with JIT)
Reporter: Alexis Robert
Priority: Minor
 Fix For: 3.0

 Attachments: FastMathLoadCheck.java, LucTestPerformance.java


 Working on an Android application using Orekit, I've discovered that a simple 
 FastMath.floor() takes about 4 to 5 secs on a 1GHz Nexus One phone (only the 
 first time it's called). I've launched the Android profiling tool (traceview) 
 and the problem seems to be linked with the static portion of FastMath code 
 named // Initialize tables
 The timing resulted in:
 - FastMath.slowexp (40.8%)
 - FastMath.expint (39.2%)
  \- FastMath.quadmult() (95.6% of expint)
 - FastMath.slowlog (18.2%)
 Hoping that would help
 Thanks!
 Alexis Robert

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (MATH-650) FastMath has static code which slows the first access to FastMath

2012-02-11 Thread Luc Maisonobe (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13206188#comment-13206188
 ] 

Luc Maisonobe edited comment on MATH-650 at 2/11/12 5:27 PM:
-

I think the reason literal arrays are faster is that there is no real reading 
(no file to open, no loop, no read...). Everything has already been prepared 
beforehand by the compiler, and the loaded class is most probably already in a 
memory-mapped file. Remember that the arrays are literal and need to be parsed 
only by the compiler, not by the JVM at loading time. In fact, in both cases 
what is loaded is binary.
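Whichever loading strategy wins, the cost can also be kept off the very first FastMath call with the standard initialization-on-demand holder idiom, sketched below. The names and the tiny table are illustrative, not the actual FastMath internals.

```java
public class LazyTables {

    private static class ExpTable {
        // Built once, on the first access to ExpTable.TABLE; the JVM
        // guarantees thread-safe, at-most-once class initialization.
        static final double[] TABLE = compute();

        private static double[] compute() {
            double[] t = new double[16];
            for (int i = 0; i < t.length; i++) {
                t[i] = Math.exp(i);  // stand-in for the real slow pre-computation
            }
            return t;
        }
    }

    /** First call pays the initialization cost; later calls just index the table. */
    public static double expFromTable(int i) {
        return ExpTable.TABLE[i];
    }

    public static void main(String[] args) {
        System.out.println(expFromTable(0)); // 1.0
    }
}
```

Loading LazyTables itself stays cheap; the table is only built when the nested holder class is first touched.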





[jira] [Issue Comment Edited] (MATH-713) Negative value with restrictNonNegative

2011-11-27 Thread Luc Maisonobe (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13157713#comment-13157713
 ] 

Luc Maisonobe edited comment on MATH-713 at 11/27/11 9:29 AM:
--

Could you please check this against the latest development version from the 
subversion repository?
There have been several fixes concerning these coefficients in the simplex 
solver since 2.2.


 Negative value with restrictNonNegative
 ---

 Key: MATH-713
 URL: https://issues.apache.org/jira/browse/MATH-713
 Project: Commons Math
  Issue Type: Bug
Affects Versions: 2.2
 Environment: commons-math-2.2
Reporter: MichaƂ Skrzypczak
  Labels: nonnegative, simplex, solver
   Original Estimate: 3h
  Remaining Estimate: 3h

 Problem: commons-math-2.2 SimplexSolver.
 A variable with a 0 coefficient may be assigned a negative value despite the 
 restrictToNonnegative flag in the call:
 SimplexSolver.optimize(function, constraints, GoalType.MINIMIZE, true);
 Function:
 1 * x + 1 * y + 0
 Constraints:
 1 * x + 0 * y = 1
 Result:
 x = 1; y = -1;
 Probably variables with 0 coefficients are omitted at some point of the 
 computation, and because of that the restrictions do not affect their values.
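The report above can be reproduced with the commons-math 2.x linear optimization API along these lines. This is a sketch assuming commons-math 2.2 on the classpath; the class and constraint setup mirror the reporter's description, and the constraint is read as an equality.

```java
import java.util.ArrayList;
import java.util.Collection;

import org.apache.commons.math.optimization.GoalType;
import org.apache.commons.math.optimization.RealPointValuePair;
import org.apache.commons.math.optimization.linear.LinearConstraint;
import org.apache.commons.math.optimization.linear.LinearObjectiveFunction;
import org.apache.commons.math.optimization.linear.LinearObjectiveFunction;
import org.apache.commons.math.optimization.linear.Relationship;
import org.apache.commons.math.optimization.linear.SimplexSolver;

// Minimize f(x, y) = x + y subject to x = 1, with restrictToNonNegative = true.
// With the reported bug, y comes back as -1 even though negative values
// were supposed to be forbidden.
public class Math713Repro {
    public static void main(String[] args) throws Exception {
        LinearObjectiveFunction f =
            new LinearObjectiveFunction(new double[] {1, 1}, 0);
        Collection<LinearConstraint> constraints = new ArrayList<LinearConstraint>();
        constraints.add(
            new LinearConstraint(new double[] {1, 0}, Relationship.EQ, 1));

        RealPointValuePair solution =
            new SimplexSolver().optimize(f, constraints, GoalType.MINIMIZE, true);
        System.out.println("x = " + solution.getPoint()[0]
                         + ", y = " + solution.getPoint()[1]);
    }
}
```

With the fix, the expected solution is x = 1, y = 0, since y is restricted to non-negative values and carries a positive objective coefficient.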





[jira] [Issue Comment Edited] (MATH-705) DormandPrince853 integrator leads to revisiting of state events

2011-11-09 Thread Luc Maisonobe (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13147240#comment-13147240
 ] 

Luc Maisonobe edited comment on MATH-705 at 11/9/11 7:58 PM:
-

The reason for this strange behavior is that g function evaluations are based 
on the integrator-specific interpolator.

Each integration method has its specific algorithm and preserves a rich internal 
data set. From this data set, it is possible to build an interpolator which is 
specific to the integrator and in fact shares part of the data set (they 
reference the same arrays). So integrator and interpolator are tightly bound 
together.

For embedded Runge-Kutta methods like Dormand-Prince 8(5,3), this data set 
corresponds to one state vector value and several state vector derivatives 
sampled throughout the step. When the step is accepted after error estimation, 
the state value is set to the value at the end of the step and the interpolator 
is called. So the equations of the interpolator are written in such a way that 
interpolation is backward: we start from the end state and roll back to the 
beginning of the step. This explains why, when we roll all the way back to the 
step start, we may find a state that is not exactly the one we started from, 
due to both the integration order and the interpolation order.

For Gragg-Bulirsch-Stoer, the data set corresponds to one state vector value 
and derivatives at several orders, all taken at the step middle point. When the 
step is accepted after error estimation, the interpolator is called before the 
state value is set to the value at the end of the step. So the equations of the 
interpolator are written in such a way that interpolation is forward: we start 
from the start state and go on towards the end of the step. This explains why, 
when we go all the way to the step end, we may find a state that is not exactly 
the one that will be used for the next step, due to both the integration order 
and the interpolation order.

So one integrator type is more consistent at step start and has more error at 
step end, while the other has the reversed behavior.

In any case, the interpolation that is used (and in fact the integration data 
set it is based upon) is not error free. The error is related to the step size.

We could perhaps rewrite some interpolators by preserving both the start state 
s(t[k]) and the end state s(t[k+1]) and switching between two half models as 
follows:
  i(t) = s(t[k])   + forwardModel(t[k], t)    if t <= (t[k] + t[k+1])/2
and
  i(t) = s(t[k+1]) + backwardModel(t, t[k+1]) if t > (t[k] + t[k+1])/2

This would make the interpolator more consistent with the integrator at both 
step start and step end, and perhaps reduce this problem. It would however not 
be perfect, as it would introduce a small error at the junction point. I'm not 
sure whether it would be easy or not; we would have to review all interpolators 
and all integrators for that. All models are polynomial ones.

Note that the problem should not appear for Adams methods (once they are 
considered validated...), because in this case it is the interpolator that is 
built first, and the integrator is in fact an application of the interpolator 
at the step end! So interpolator and integrator are by definition always 
perfectly consistent with each other.

What do you think ?

Should we leave this problem alone and consider that we are in the grey zone of 
expected numerical inaccuracy due to integration/interpolation orders, or 
should we attempt the two half-models trick?
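The two half-model switch proposed above can be sketched as follows. This is a toy illustration with linear stand-ins for the real polynomial interpolation models, not actual Commons Math code.

```java
public class TwoHalfModels {

    final double tStart, tEnd;   // step boundaries t[k], t[k+1]
    final double sStart, sEnd;   // states s(t[k]), s(t[k+1])

    TwoHalfModels(double tStart, double sStart, double tEnd, double sEnd) {
        this.tStart = tStart;
        this.sStart = sStart;
        this.tEnd = tEnd;
        this.sEnd = sEnd;
    }

    // Stand-in forward model: offset accumulated from the step start.
    double forwardModel(double from, double t) {
        return (t - from) * (sEnd - sStart) / (tEnd - tStart);
    }

    // Stand-in backward model: offset rolled back from the step end.
    double backwardModel(double t, double to) {
        return (t - to) * (sEnd - sStart) / (tEnd - tStart);
    }

    /** Interpolate, switching models at the step midpoint. */
    double interpolate(double t) {
        double midpoint = 0.5 * (tStart + tEnd);
        if (t <= midpoint) {
            return sStart + forwardModel(tStart, t);   // exact at step start
        } else {
            return sEnd + backwardModel(t, tEnd);      // exact at step end
        }
    }

    public static void main(String[] args) {
        TwoHalfModels interp = new TwoHalfModels(0.0, 1.0, 1.0, 3.0);
        System.out.println(interp.interpolate(0.0)); // exactly the start state
        System.out.println(interp.interpolate(1.0)); // exactly the end state
    }
}
```

By construction the interpolation matches the stored state exactly at both step boundaries; the residual inconsistency moves to the midpoint, where the two half models meet.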



[jira] [Issue Comment Edited] (MATH-695) Incomplete reinitialization with some events handling

2011-10-25 Thread Luc Maisonobe (Issue Comment Edited) (JIRA)

[ 
https://issues.apache.org/jira/browse/MATH-695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13135351#comment-13135351
 ] 

Luc Maisonobe edited comment on MATH-695 at 10/25/11 7:28 PM:
--

As I work with Pascal (we share the same office at work), I have seen how the 
bug occurs and am trying to set up a simplified test case that reproduces it. 
It seems that there is a combination of conditions that is not handled 
properly. We have seen the bug occurring when both of the following conditions 
are true:

* there are several events occurring in the same integration step
* when one of the earliest event occurrences is triggered, it returns 
RESET_DERIVATIVES

In this case, the acceptStep method in AbstractIntegrator returns early from 
inside the while loop, and the remaining events that were expected to occur 
later on are left in an inconsistent state with respect to the integrator. The 
t0 and g0 fields in the corresponding EventState instance still contain values 
from the beginning of the step; they do not reflect the fact that the event has 
been triggered. This implies that when the next step is started with the 
updated derivatives, evaluateStep tries to catch up from t0 to the current t 
and calls the g function at times that do not belong to the current step.

Up to now, I have not been able to set up a simplified test case that exhibits 
this, but I'm working on it.
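The stale-cache problem described above can be illustrated with a simplified sketch. The names mimic, but are not, the actual Commons Math EventState class; the point is only that t0 and g0 must be refreshed when another event truncates the step.

```java
import java.util.function.DoubleUnaryOperator;

public class EventStateSketch {
    final DoubleUnaryOperator g;  // the event switching function
    double t0;                    // time at the start of the current step
    double g0;                    // g value cached at t0

    EventStateSketch(DoubleUnaryOperator g, double tInit) {
        this.g = g;
        reinitializeBegin(tInit);
    }

    /**
     * Refresh the cached values. This is what must also happen for the
     * remaining events when an earlier event returns RESET_DERIVATIVES
     * and integration restarts mid-step.
     */
    void reinitializeBegin(double t) {
        t0 = t;
        g0 = g.applyAsDouble(t);
    }

    /** A sign change between t0 and t signals an event occurrence. */
    boolean evaluateStep(double t) {
        return g0 * g.applyAsDouble(t) < 0;
    }

    public static void main(String[] args) {
        EventStateSketch state = new EventStateSketch(t -> t - 2.0, 0.0);
        System.out.println(state.evaluateStep(1.0)); // false: no sign change yet
        // Suppose another event truncated the step at t = 1.0 and reset the
        // derivatives: without this call, t0/g0 still describe the old step.
        state.reinitializeBegin(1.0);
        System.out.println(state.evaluateStep(3.0)); // true: g changes sign
    }
}
```

Skipping the reinitializeBegin call after the truncated step leaves t0 lagging behind the current time, which is exactly the catch-up behavior described in the comment.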

 Incomplete reinitialization with some events handling
 -

 Key: MATH-695
 URL: https://issues.apache.org/jira/browse/MATH-695
 Project: Commons Math
  Issue Type: Bug
Affects Versions: 3.0
Reporter: Pascal Parraud
 Attachments: events.patch


 I get a bug with event handling: I track 2 events that occur in the same 
 step; when the first one is accepted, it resets the state, but the 
 reinitialization is not complete and the second one becomes unable to find 
 its way.
 I can't give my context, which is rather large, but I tried a patch that 
 works for me; unfortunately it breaks the unit tests.
