[sage-support] Re: Quantlib (SWIG) and Sage - Experimental package
This works excellently. Thanks William.

On Oct 25, 12:40 am, William Stein [EMAIL PROTECTED] wrote:

On Fri, Oct 24, 2008 at 2:58 PM, tomanizer [EMAIL PROTECTED] wrote:

> Hi All,
>
> I have successfully compiled the new quantlib_swig-0.9.6 and
> quantlib-0.9.6 packages from Sage 3.0.1. A number of functions work
> fine from the Sage notebook. Unfortunately the important QuantLib
> Date() function is causing problems. I am trying to run
>
>     from QuantLib import *
>     calendar = TARGET()
>     todaysDate = Date(6, October, 2001)
>
> and I get the error:
>
>     Traceback (most recent call last):
>       File "<stdin>", line 1, in <module>
>       File "/root/.sage/sage_notebook/worksheets/tomanizer/18/code/3.py", line 7, in <module>
>         todaysDate = Date(Integer(6), October, Integer(2001))
>       File "/filesrv/Sage/sage-3.0.2-debian32-intelx86-i686-Linux/local/lib/python2.5/site-packages/sympy/plotting/", line 1, in <module>
>       File "/filesrv/Sage/sage-3.0.2-debian32-intelx86-i686-Linux/local/lib/python2.5/site-packages/QuantLib/QuantLib.py", line 203, in __init__
>         this = _QuantLib.new_Date(*args)
>     NotImplementedError: Wrong number of arguments for overloaded function 'new_Date'.
>       Possible C/C++ prototypes are:
>         Date()
>         Date(Day, Month, Year)
>         Date(BigInteger)
>         Date(std::string const &, std::string const &)
>
> which is the error SWIG throws when the wrong parameter types are
> passed to a function. However, this example is from the QuantLib SWIG
> help file, so I assume it should work through the QuantLib SWIG
> bindings. My suspicion is that the error is related to Sage's handling
> of unicode. Does anyone else have this problem? Is there a workaround?

The problem is the Sage preparser turning integer literals into Sage
Integers, which the SWIG/QuantLib interface doesn't understand. Here are
some workarounds:

Use r after the number to make it raw (not preparsed):

sage: todaysDate = Date(6r, October, 2001r)

Explicitly cast to int:

sage: todaysDate = Date(int(6), October, int(2001))

Turn off numeric literal preparsing:

sage: Integer = int; RealNumber = float
sage: todaysDate = Date(6, October, 2001)
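Putting the three workarounds together, a minimal Sage session might look
like the sketch below (assuming the QuantLib SWIG bindings import cleanly;
note the trailing r is Sage-preparser syntax, not plain Python):

sage: from QuantLib import *
sage: calendar = TARGET()
sage: # 1. raw literals: the trailing r suppresses the Integer() wrapping
sage: todaysDate = Date(6r, October, 2001r)
sage: # 2. explicit casts back to Python int
sage: todaysDate = Date(int(6), October, int(2001))
sage: # 3. disable literal preparsing for the whole session by shadowing
sage: #    the names the preparser inserts
sage: Integer = int; RealNumber = float
sage: todaysDate = Date(6, October, 2001)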
[sage-support] radius of convergence and inequalities
Dear Support Team!

I've got two independent questions. First, is it possible to calculate the
radius of convergence of a PowerSeries, and second, is there a method like
solve() for inequalities? I have already searched the reference manual but
had no luck.

Thanks,
Tom

P.S.: Please excuse my bad English; it's not my mother tongue.
[sage-support] Re: radius of convergence and inequalities
[EMAIL PROTECTED] wrote:
> Dear Support Team! I've got two independent questions. First, is it
> possible to calculate the radius of convergence of a PowerSeries, and
> second, is there a method like solve() for inequalities?

The optional qepcad package can solve inequalities, as well as expressions
involving quantifiers (for all, there exists, etc.). You can see the
documentation by typing qepcad?; you'll need to install the qepcad
optional spkg to use it.

Thanks,
Jason
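For the radius-of-convergence question the thread names no dedicated
PowerSeries method, but the ratio test can be scripted symbolically; and an
inequality can be handed to the qepcad interface. The following is only a
sketch: the coefficient formula a is a made-up example, and the qepcad call
mirrors the pattern in the interface's documentation (it requires the
optional qepcad spkg):

sage: # Ratio test: R = lim |a_n / a_{n+1}|; these coefficients are
sage: # positive, so the absolute value can be dropped
sage: var('n')
sage: a = lambda k: 1/(k + 1)        # hypothetical coefficients (of -log(1-x)/x)
sage: limit(a(n)/a(n + 1), n=oo)     # expect radius of convergence 1
1

sage: # An inequality example via the optional qepcad interface
sage: var('x y')
sage: qf = qepcad_formula
sage: qepcad(qf.exists(y, x^2 + y^2 < 1))  # equivalent quantifier-free condition on x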
[sage-support] Re: How to detect memory leaks?
Dear Michael and all others,

I tried 3 approaches to track my problem down. Summary: it seems that my
extension class is deallocated (__dealloc__ is called) but not deleted
(__del__ is not called); see approach 3 below.

1. On Oct 24, 6:03 pm, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
> You need to rebuild Python after exporting SAGE_VALGRIND=yes - otherwise
> pymalloc is used and as is the valgrind log is useless in some regards.

Thanks. I rebuilt Sage 3.1.4 with SAGE_VALGRIND=yes, installed the
optional valgrind spkg, and tried again. After computing F=F5(); G=F(I)
I had

sage: get_memory_usage()
1125.95703125

and after repeating the same computation 10 more times in a loop, it was

sage: get_memory_usage()
1197.0703125

This time, valgrind found a tiny bit of unreachable memory, see
http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.25789

==25789== LEAK SUMMARY:
==25789==    definitely lost: 697 bytes in 17 blocks.
==25789==      possibly lost: 399,844 bytes in 1,033 blocks.
==25789==    still reachable: 38,600,299 bytes in 332,837 blocks.
==25789==         suppressed: 337,860 bytes in 5,348 blocks.

However, this still does not explain the loss of 71 MB reported by
get_memory_usage.

2. > Debugging these is hard and valgrind will not help much in that
> case. Much more useful could be Guppy.

I installed Guppy. hpy told me that after the first round of my
computation I had (indicating only those items that increased):

Index  Count   %   Size   %  Cumulative   %  Kind (class / dict of class)
    1     42  15   4264  16       13480  50  tuple
    4     19   7   2280   8       21416  79  unicode
    5      3   1   1608   6       23024  85  dict (no owner)
    9     11   4    560   2       26312  97  str

But after 10 more runs I got:

Index  Count   %   Size   %  Cumulative   %  Kind (class / dict of class)
    1     55  15   7424  20       16640  45  unicode
    2      4   1   4960  13       21600  59  dict (no owner)
    3     44  12   4408  12       26008  71  tuple
    7     46  13   1104   3       34208  93  int
    8     14   4    704   2       34912  95  str

Unfortunately I did not find a Guppy tutorial. In particular, I don't
know how to find out where the increased size of the dicts and unicodes
comes from.

3. Is the following method a correct way of testing whether instances of
my extension classes are deleted at all?

- In the __init__ method of my class DecoratedPolynomial, I increased two
  globally defined counters by one.
- I provided a custom __del__ method that did nothing but reduce the
  first counter by one.
- I provided a custom __dealloc__ method that simply reduced the second
  counter by one.

Result: the __init__ method was called 429 times (in 11 runs of the
computation, deleting everything at the end), and so was the __dealloc__
method. But the __del__ method was not called *at all*!

So, could this be at the core of the problem? I thought that a __dealloc__
method is called only after the __del__ method has run, or am I mistaken?

To avoid misunderstanding: in my original code I did not provide a custom
__del__ or __dealloc__, but expected Cython to do the job.

Cheers,
Simon
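For reference, the silent __del__ in approach 3 is expected behaviour:
Cython extension types (cdef classes) never invoke __del__; __dealloc__ is
their only teardown hook, so an uncalled __del__ does not by itself
indicate a leak. A complementary probe is to count live instances through
the garbage collector. This is a hedged sketch using the thread's own
F5/I setup; it assumes DecoratedPolynomial instances are GC-tracked
(untracked extension types will not appear in gc.get_objects()):

sage: import gc
sage: def live_instances(name):
....:     gc.collect()                  # free anything only held by cycles
....:     return sum(1 for o in gc.get_objects()
....:                if type(o).__name__ == name)
sage: before = live_instances('DecoratedPolynomial')
sage: F = F5(); G = F(I)                # one run of the computation
sage: del F, G
sage: live_instances('DecoratedPolynomial') - before  # growth across runs => leaked instances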
[sage-support] Re: How to detect memory leaks?
On Oct 25, 12:46 pm, Simon King [EMAIL PROTECTED] wrote:
> Dear Michael and all others,

Hi Simon,

> I tried 3 approaches to track my problem down. [snip]
> This time, valgrind found a tiny bit of unreachable memory, see
> http://sage.math.washington.edu/home/SimonKing/f5/sage-memcheck.25789
> [snip]
> However, this still does not explain the loss of 71 MB reported by
> get_memory_usage.

This is pretty much as expected, but now that pymalloc is no longer used
the log is likely much more readable, since a lot of false positives are
gone.

> Unfortunately I did not find a Guppy tutorial. In particular, I don't
> know how to find out where the increased size of the dicts and unicodes
> comes from.

Yep, that is somewhat of a black art. I am planning to do a talk at SD 11
about Guppy, since my project there involves debugging a similar problem,
but I have nothing obvious to contribute here. Is the latest version of
f5.pyx in the wiki so I can play around with it?

> 3. Is the following method a correct way of testing whether instances of
> my extension classes are deleted at all? [snip]
> Result: the __init__ method was called 429 times (in 11 runs of the
> computation, deleting everything at the end), and so was the __dealloc__
> method. But the __del__ method was not called *at all*!

Mhh, some Python god needs to comment here. But once any Python object
goes out of scope and its reference count is zero, it gets deallocated.
You could sprinkle your code with manual reference count checks and see if
any one of them goes up and is never decremented when it should be. But
that is also a black art IMHO :)

> So, could this be at the core of the problem? I thought that a
> __dealloc__ method is called only after the __del__ method has run, or
> am I mistaken?
>
> To avoid misunderstanding: in my original code I did not provide a
> custom __del__ or __dealloc__, but expected Cython to do the job.
>
> Cheers,
> Simon

Cheers,

Michael
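The manual reference-count checks Michael suggests can be done from the
Sage prompt with sys.getrefcount. A minimal sketch; the variable p is a
hypothetical DecoratedPolynomial instance, and note that getrefcount
always reports one extra reference for its own argument:

sage: import sys
sage: p = G[0]            # hypothetical: grab one DecoratedPolynomial from G
sage: sys.getrefcount(p)  # actual references + 1 (the call's own argument)
sage: # ... run the suspect code path, then re-check: a count that only
sage: # ever grows points at a reference that is never released
sage: sys.getrefcount(p)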
[sage-support] Re: How to detect memory leaks?
Hi Michael,

On Oct 25, 11:28 pm, mabshoff [EMAIL PROTECTED] dortmund.de wrote:
> [snip]
> > Unfortunately I did not find a Guppy tutorial. In particular, I don't
> > know how to find out where the increased size of the dicts and
> > unicodes comes from.
>
> Yep, that is somewhat of a black art. I am planning to do a talk at
> SD 11 about Guppy, since my project there involves debugging a similar
> problem, but I have nothing obvious to contribute here. Is the latest
> version of f5.pyx in the wiki so I can play around with it?

At http://sage.math.washington.edu/home/SimonKing/f5/f5.pyx is the latest
version, i.e., the one with counters in __init__, __del__ and __dealloc__
(which of course should eventually be removed).

Cheers,
Simon