I'd agree here. This is actually a very nice example of a system that might be called "chaotic", though "chaos" is, even mathematically, a very vague term:
1) The iteration will never leave [-2, 2].
2) It won't converge, because all three fixed points are unstable ( |f'(x_s)| > 1 ).

So your example is really not calculating any particular number. You could, however, consider it a calculation of the series itself. The question then is repeatability, and that is not something you can answer by looking at the hardware alone. Even if you run the same primitive operations, the results can differ: floating-point representation violates the associativity and distributivity laws of real numbers. The rounding error of a computation therefore depends on the ordering of its operations, even when they are algebraically equivalent — this matters whenever you come close to the precision limits, roughly 1e-7 for floats and 2e-16 for doubles. Since different compilers will order the computation differently, you cannot really expect to reproduce a diverging series.

Standard texts are:

http://www.amazon.com/Nonlinear-Dynamics-And-Chaos-Applications/dp/0738204536
http://www.amazon.com/Accuracy-Stability-Numerical-Algorithms-Nicholas/dp/0898715210

ranko

On Wednesday, February 5, 2014 12:22:54 PM UTC-5, Konrad Hinsen wrote:
>
> --On 5 Feb 2014 05:17:13 -0800 Glen Fraser <hola...@gmail.com> wrote:
>
> > My benchmark iteratively runs a function 100M times: g(x) <-- sin(2.3x)
> > + cos(3.7x), starting with x of 0.
>
> A quick look at the series you are computing suggests that it has chaotic
> behavior. Another quick look shows that neither of the two values that
> you see after 100M iterations is a fixed point. I'd need to do a careful
> numerical analysis to be sure, but I suspect that you are computing
> a close-to-random number: any numerical error at some stage is amplified
> in the further computation.
>
> If you get identical results from different languages, this suggests that
> they all end up using the same numerical code (probably the C math
> library).
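Both effects are easy to check numerically: the map stays bounded (|sin| + |cos| can never exceed 2), a tiny perturbation gets amplified until the two trajectories are unrelated, and reordering algebraically equivalent float operations changes the result. A quick Python sketch — the 1e-12 perturbation and the 1000-step count are arbitrary illustrative choices:

```python
import math

def g(x):
    # The benchmark's map: g(x) = sin(2.3*x) + cos(3.7*x)
    return math.sin(2.3 * x) + math.cos(3.7 * x)

# Two trajectories starting a mere 1e-12 apart.
a, b = 0.0, 1e-12
for _ in range(1000):
    a, b = g(a), g(b)

# Both iterates stay inside [-2, 2] (|sin| + |cos| <= 2 always),
# but the initial 1e-12 separation has been hugely amplified:
# the trajectories no longer track each other.
print(a, b, abs(a - b))

# Floating-point addition is not associative, so the ordering of
# algebraically equivalent operations changes the result:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```

The same amplification applies to the rounding error introduced when two compilers order the sum differently — which is why 100M iterations can end at 0.054... in one runtime and 0.247... in another.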
> I suggest you try your Python code under Jython; perhaps
> that will reproduce the Clojure result by also relying on the JVM
> standard library.
>
> > In the other languages, I always got the result 0.0541718..., but in
> > Clojure I get 0.24788989.... I realize this is a contrived case, but
> > doing an identical sequence of 64-bit floating-point operations on the
> > same machine should give the same answer.
>
> Unfortunately not. Your reasoning would be true if everyone adopted
> IEEE float operations, but in practice nobody does, because the main
> objective is speed, not predictability. The Intel hardware is close
> to IEEE, but not fully compatible, and it offers some parameters that
> libraries can play with to get different results from the same operations.
>
> Konrad.

--
You received this message because you are subscribed to the Google Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
To unsubscribe from this group, send email to clojure+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/clojure?hl=en