I am also posting a copy of this to the gnucap developer list.

On Thursday 07 December 2006 11:01, John Doe wrote:
> "-ffloat-store" is the slowest possible way to ensure 64 bit.
> It writes to memory after every math operation.  To ensure
> 64 bit in the FPU use the code from the link I mentioned:
> http://www.wrcad.com/linux_numerics.txt
>
> #ifdef linux
> #include <fpu_control.h>
> #endif
>
> int main(int argc, char **argv)
> {
> #ifdef linux
>     /*
>      * This puts the X86 FPU in 64-bit precision mode.  The
>      * default under Linux is to use 80-bit mode, which produces
>      * subtle differences from FreeBSD and other systems, e.g.,
>      * (int)(1000*atof("0.3")) is 300 in 64-bit mode, 299 in 80-bit
>      * mode.
>      */
>     fpu_control_t cw;
>     _FPU_GETCW(cw);
>     cw &= ~_FPU_EXTENDED;
>     cw |= _FPU_DOUBLE;
>     _FPU_SETCW(cw);
> #endif
>     ...
> }
>
> This is much faster.
Thanks, you are correct.  It runs at the same speed as with only -O2,
and has 6 test differences relative to the AMD.

One of the differences was the time step I mentioned before.  In one
test it chose different time stepping.  This is not an error.  I guess
it just shows that the issue doesn't go away completely; you can only
minimize it.

Another difference was a Fourier analysis of a pure sine wave.  We all
know that there is one frequency and nothing anywhere else, but when
you calculate it there is a small noise component at every frequency.
In either case, the noise is about 300 dB below the desired component,
but with insignificant variations.

I made a plugin, so I could try it without recompiling or relinking.
That was the easiest way to try it.

> Also, I would never use "-ffast-math" as
> it is not IEEE compliant.

That alone does not say to never use it, but rather to be aware of the
problems.  I would not use it in this case because it appears to have
problems and provides no benefit.

Now the big question: should it be forced to 64-bit mode?  I am not
sure of the answer here.  In favor of 64-bit mode, it is more
predictable, in the sense of a better match in text-based comparisons.
In favor of 80-bit mode, it is in theory more accurate, and less prone
to rounding errors that might be caused by something like poor
conditioning of the matrix.  It could be argued that 80-bit mode is an
improvement.

As far as ng-spice is concerned, someone else can make that decision.
For gnucap, I try to keep machine-specific stuff out of the core, and
consider increased precision to be an advantage.  I see your point
about consistency, meaning it should at least be available as an
option.  The ability to turn it on and off is valuable in evaluating
the robustness of algorithms.

I now know how to do it.  Thank you for making me look at it again.  I
will make it available as a plug-in.  I have not decided whether it
will be installed by default or not.
I am leaning toward not installing it by default.  This is consistent
with other defaults, such as turning on all of the mixed-mode and what
is now known as "fast-spice" options by default.  Comments???

I also made a plugin that converts it to single precision, just to see
what would happen.  I don't recommend it.  A lot of tests fail with
real errors, and it didn't run any faster.  I think the limited range
of single precision was a bigger factor than the reduced number of
digits, but I did not do any more than run the test suite.  It wasn't
just reduced accuracy.  There were convergence failures, obviously
incorrect results, time stepping problems, ...  It might be useful to
developers for checking the robustness of algorithms, but to a regular
user it makes the whole thing useless.

_______________________________________________
Gnucap-devel mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/gnucap-devel
