Re: [Numpy-discussion] Writing new ufuncs
On Mon, May 12, 2008 at 12:37 AM, Anne Archibald [EMAIL PROTECTED] wrote:

> 2008/5/11 Robert Kern [EMAIL PROTECTED]:
>> Perhaps, but ufuncs only allow 0 or 1 for this value, currently.
>
> That's a shame; minus infinity is the identity for maximum too.
>
>> Also, I was wrong about using PyUFunc_ff_f. Instead, use PyUFunc_ff_f_As_dd_d.
>
> Hmm. Well, I tried implementing logsum(), and it turns out neither PyUFunc_dd_d nor any of the other functions -- mostly flagged as being part of the ufunc API and described in the automatically generated ufunc_api.txt -- are exported. All are static and not described in any header I could find. That said, I know scipy defines its own ufuncs, so it must either reimplement these or have some way I hadn't thought of to get at them.

They are not exported as symbols. They are exported to other extension modules by #defining them to an element in an array, just like the rest of the numpy C API. import_ufunc() sets up all of those #defines. They are automatically generated into the file __ufunc_api.h.

> I've attached a patch to add the ufunc logsum to numpy. It's a bit nasty for several reasons:
>
> * I wasn't sure where to put it, so I put it in _compiled_base.c in numpy/lib/src. I was very hesitant to touch umathmodule.c.src with its sui generis macro language.

The place to add it would be in code_generators/generate_umath.py, which generates __umath_generated.c.

> * I'm not sure what to do about log1p -- it seems to be available in spite of HAVE_LOG1P not being defined. In any case, if it doesn't exist, it seems crazy to implement it again here.

Then maybe our test for HAVE_LOG1P is not correct. I don't think we can rely on its omnipresence, though.

> * Since PyUFunc_dd_d does not seem to be exported, I just cut-n-pasted its code here. Obviously not a solution, but what are extension writers supposed to do?

See above.

> Do we want a ufunc version of logsum() in numpy? I got the impression that in spite of some people's doubts as to its utility, there was a strong contingent of users who do want it.

Well, many of us were simply contesting Chuck's contention that one shouldn't need the log representation. I would probably toss it into scipy.special, myself.

> This plus a logdot() would cover most of the operations one might want to do on numbers in log representation. (Maybe subtraction?) It could be written in pure Python, for the most part, and reduce() would save a few exps and logs at the cost of a temporary, but accumulate() remains a challenge. (I don't expect many people feel the need for reduceat().)
>
> Anne
>
> P.S. It was surprisingly difficult to persuade numpy to find my tests. I suppose that's what the proposed transition to nose is for? -A

Yes.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
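[Editorial note: the pure-Python logsum()/logdot() pair discussed above could look roughly like the following sketch. The function names come from the thread; the implementation details (shift-by-maximum for stability, broadcasting in logdot) are assumptions, not code from Anne's patch.]

```python
import numpy as np

def logsum(a, b):
    # log(exp(a) + exp(b)), computed without overflowing either exp
    a, b = np.broadcast_arrays(a, b)
    hi, lo = np.maximum(a, b), np.minimum(a, b)
    return hi + np.log1p(np.exp(lo - hi))

def logdot(a, b):
    # Matrix product of log-represented matrices:
    #   out[i, j] = logsum over k of a[i, k] + b[k, j]
    s = a[:, :, None] + b[None, :, :]          # shape (m, k, n)
    hi = s.max(axis=1, keepdims=True)
    return np.squeeze(hi + np.log(np.exp(s - hi).sum(axis=1, keepdims=True)),
                      axis=1)

# Sanity check against the linear-domain result
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(np.exp(logdot(np.log(A), np.log(B))), A.dot(B)))  # True
```

As Anne notes, a reduce() over logsum would save exps and logs at the cost of a temporary like `s` above.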
Re: [Numpy-discussion] Going toward time-based release ?
On Sun, May 11, 2008 at 10:42 PM, Charles R Harris [EMAIL PROTECTED] wrote:

> I dunno. I tend to agree with Mike Ressler that Numpy really shouldn't change very quickly. We can add new features now and then, or move more stuff to type-specific functions, but those aren't really user visible. Most work, I think, should go into tests, cleaning up the code base and documenting it, and fixing bugs. I also wonder how good a position we are in if Travis decides that there are other things he would rather spend his time on. Documentation and clean code will help with that. So I really don't see that we need much in the way of scheduled releases except to crack the whip on us lazy souls who let things ride until we don't have a choice. If we do have scheduled releases, then I think it will also be necessary to draw up a plan for each release, i.e., clean up and document ufuncobject.c.src, and so on.

I don't think there is anyone who believes that NumPy should change very quickly. Let's try to drop that discussion. I am worried that if people keep bringing up this straw-man argument, our users will get the impression that we are preparing to break a lot of their code. Besides the changes to MA, there have been very few discussions about changing code. The discussions that have happened have been met with numerous comments by several developers that we need to be extremely conservative about making API changes. Even for the matrices discussion, it seems that very few people are pushing for us to make any changes, and of those that are, most are thinking carefully about how to cause as little code breakage as possible. I know that just recently some matrices changes were made that have caused some problems, but I am going to roll back those changes. I think the main issue is that we need to have more regular releases of NumPy and SciPy.

For NumPy, you are completely correct that most (if not all) of the work should be focused on tests, cleaning up the code base and documenting it, and fixing bugs. The overriding reason that necessitates a 1.2 release at the end of August is the move to the nose testing framework. I feel that this change is extremely important and want it to take place as quickly as possible. Personally, I don't have any other major changes that I will champion getting into 1.2. The only other major change that has been put forth so far is the matrix change, which I personally don't think should change the default behavior in the 1.2 release.

I also really like your suggestion to draw up plans for each release. And I would love it if you would be willing to take the lead on something for 1.2 like cleaning up and documenting ufuncobject.c.src. I would encourage you to start thinking about whether you could commit to something like that and, if so, creating a ticket for it briefly describing your plan.

The other thing to consider is how best to coordinate NumPy and SciPy releases. There are occasionally times when we need to add or fix things in a NumPy release before the next SciPy release. While we shouldn't expect to see many changes in NumPy, I believe that there is a *slightly* greater tolerance for changes in SciPy. As you will recall, the whole MA change arose from my push to get rid of scipy.sandbox. While in retrospect the change could have been handled better, it is nice that that code has been removed from scipy.sandbox. I also am extremely happy with all the time and effort that Pierre spent on writing the MaskedArray. I want to make it absolutely clear that all the good, hard work was his and that the transition mistakes were mine, not his. I also want to thank Stefan, who spent an entire night preparing for the merge during the December sprint (the time difference between Berkeley and South Africa makes working at the same time logistically difficult for him).
Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 12:43 AM, Jarrod Millman [EMAIL PROTECTED] wrote:

> On Sun, May 11, 2008 at 9:42 PM, Robert Kern [EMAIL PROTECTED] wrote:
>> The semantics of major.minor.micro for numpy got confused because of an early mistake (IMO) wherein we designated 1.1 as a release which would allow code breakage.
>
> I disagree. I didn't realize that the ma change was going to break anyone's code. Once I understood that it did, I decided to call the release 1.1.0 rather than 1.0.5.

That's not what I'm referring to. Before 1.0 was released, people were worried about the stability of the API, and Travis said that 1.0.x would maintain API compatibility, but that 1.1, maybe a year from now, would be the opportunity to make API changes again. I think that statement colored development substantially. We had to add features to 1.0.x releases instead of just bugfixes because 1.1 was going to be this big thing.

> I think that it is reasonable to allow some API breakage in a minor release, but there is no reason to encourage it, and we should absolutely not require it.
>
>> The problem as I see it is that we were abusing the maintenance releases to add a significant number of features, rather than simply fixing bugs. I would very much like to get away from that and follow Python's model of only doing bugfixes/doc-enhancements in micro releases, new features in minor releases, and code breakage never (or, if we absolutely must, only after one full minor release with a deprecation warning).
>
> +1 I absolutely agree.

You keep saying that you agree with me, but then you keep talking about allowing minor code breakage, which I do not agree with. I don't think I am communicating the aversion to code breakage that I am trying to espouse. While, strictly speaking, there is a conceivable reading of the words you use ("I think that it is reasonable to allow some API breakage in a minor release", for example) which I can agree with, I think we're meaning slightly different things.

In particular, I think we're disagreeing on a meta level. I do agree that occasionally we will be presented with times when breaking an API is warranted. As a prediction of future, factual events, we see eye to eye. However, I think we will make better decisions about when breakage is warranted if we treat it as _a priori_ *un*reasonable. Ultimately, we may need to break APIs, but we should feel bad about it. And I mean viscerally bad. It is too easy to convince oneself that the new API is better and cleaner and that, on balance, making the break is better than not doing so. The problem is that the thing we need to balance this against -- the frustration of our users who see their code break, and of all of *their* users -- is something we inherently cannot know. Somehow, we need to internalize this frustration ourselves. Treating the breaking of code as inherently unreasonable, and refusing to accept excuses *even if they are right on the merits*, is the only way I know how to do this. The scales of reason occasionally need a well-placed thumb of unreason.

--
Robert Kern
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 12:17 AM, Jarrod Millman [EMAIL PROTECTED] wrote:

> On Sun, May 11, 2008 at 10:42 PM, Charles R Harris [EMAIL PROTECTED] wrote:
>> [...] If we do have scheduled releases, then I think it will also be necessary to draw up a plan for each release, i.e., clean up and document ufuncobject.c.src, and so on.
>
> [...] I also really like your suggestion to draw up plans for each release. And I would love it if you would be willing to take the lead on something for 1.2 like cleaning up and documenting ufuncobject.c.src. I would encourage you to start thinking about whether you could commit to something like that and, if so, creating a ticket for it briefly describing your plan.

As a start, I think we can just reindent all the C code to Python 3.0 specs, and change the style of if and for loops like so:

    if (yes) {
    }
    else {
    }

And put space around operators in for loops, i.e.,

    for (i = 0; i < 10; i++) {
        whatever;
        needed;
    }

instead of

    for (i=0;i<10;i++){whatever;needed;}

and don't do this:

    if ((a=foo(arg))==0) barf;
    intervening_code;
    do_stuff_with_a;

which makes the spot where a gets initialized almost invisible. It is amazing how much easier the code is to read and understand with these simple changes. But if we plan tasks for the release, then we also have to assign people to the task. That is where it gets sticky.

Chuck
Re: [Numpy-discussion] Uncomfortable with matrix change
Could we add a "from __future__ import something" along with a deprecation warning? This could be used for Tim's new matrix class, or any other API change.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959 voice
7600 Sand Point Way NE  (206) 526-6329 fax
Seattle, WA 98115       (206) 526-6317 main reception
[EMAIL PROTECTED]
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, 2008-05-12 at 00:39 -0600, Charles R Harris wrote:

> Which makes the spot where a gets initialized almost invisible. It is amazing how much easier the code is to read and understand with these simple changes. But if we plan tasks for the release, then we also have to assign people to the task. That is where it gets sticky.

That's exactly why I am suggesting a time-based release; that's the kind of stuff I want to see in numpy (2.*) and scipy (1.*). The way I saw things was: it is OK to do these kinds of changes, but only up to N days before the official release. After this point, the only (and I really mean only) reason is a critical bug. It is really easy to think "hey, let's do this one-line change, that can't possibly break anything, right?", and two days after: "hey, this breaks on windows because VS does not recognize this code". If between the two days you have released numpy, you're screwed. Doing this with a time schedule is easier, I think.

FWIW, I am doing exactly this for scipy.fftpack right now, and that's a lot of relatively boring work (fftpack is a bit special because it has a big list of possible combinations of dependencies with different code paths, and testing all of them is painful, but doing this in numpy.core is not that much easier).

cheers,

David
Re: [Numpy-discussion] Going toward time-based release ?
On Sun, May 11, 2008 at 11:31 PM, Robert Kern [EMAIL PROTECTED] wrote:

> That's not what I'm referring to. Before 1.0 was released, people were worried about the stability of the API, and Travis said that 1.0.x would maintain API compatibility, but that 1.1, maybe a year from now, would be the opportunity to make API changes again. I think that statement colored development substantially. We had to add features to 1.0.x releases instead of just bugfixes because 1.1 was going to be this big thing.

Ah... I see.

>>> I would very much like to get away from that and follow Python's model of only doing bugfixes/doc-enhancements in micro releases, new features in minor releases, and code breakage never (or, if we absolutely must, only after one full minor release with a deprecation warning).
>>
>> +1 I absolutely agree.

To clarify: in particular, I was agreeing that if we absolutely must break code, we should only do it after a full minor release with a deprecation warning. I don't want a repeat of what happened with MA.

> You keep saying that you agree with me, but then you keep talking about allowing minor code breakage, which I do not agree with. I don't think I am communicating the aversion to code breakage that I am trying to espouse. While, strictly speaking, there is a conceivable reading of the words you use ("I think that it is reasonable to allow some API breakage in a minor release", for example) which I can agree with, I think we're meaning slightly different things. In particular, I think we're disagreeing on a meta level. I do agree that occasionally we will be presented with times when breaking an API is warranted. As a prediction of future, factual events, we see eye to eye.

Are you saying that the changes to histogram and median should require waiting until 2.0 -- several years from now? When I say that we may allow minor API breakage, this is the kind of thing I mean. I think that both instances are very reasonable and clean up minor warts.

I also think that in both cases the implementation plan is reasonable. They first provide the new functionality as an option in a new minor release, then in the subsequent release switch the new functionality to the default but leave the old behavior accessible.

The MA situation was handled slightly differently, which is unfortunate. I think that it is a reasonable change to make in a minor release, but we should have provided a warning first. At least we are still providing the old code. Do you think that we should revisit this issue? Since we are importing the new MA from a different location, maybe we should move the old code back to its original location. Would it be reasonable to have:

    from numpy import ma        # --> new code
    from numpy.core import ma   # --> old code w/ a deprecation warning

Thanks,

--
Jarrod Millman
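[Editorial note: a deprecation shim of the kind proposed here could be sketched as below. The module path and message are illustrative assumptions, not what numpy actually shipped; `_import_old_ma` stands in for the import-time code of a hypothetical `numpy/core/ma.py`.]

```python
import warnings

def _import_old_ma():
    # Stand-in for what a `numpy.core.ma` shim module could do at import
    # time: warn once, then expose the old implementation. The function
    # name and message are illustrative, not real numpy code.
    warnings.warn(
        "numpy.core.ma is deprecated; use numpy.ma instead",
        DeprecationWarning,
        stacklevel=2,
    )

# DeprecationWarning is often silenced by default, so surface it here
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _import_old_ma()

print(caught[0].category.__name__)  # DeprecationWarning
```

This gives old importers one full release cycle of warnings before the path is removed, which is the policy being discussed.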
Re: [Numpy-discussion] Going toward time-based release ?
2008/5/12 Jarrod Millman [EMAIL PROTECTED]:

> I agree, but we also have the problem that we don't have adequate tests. I don't think we realized the extent to which MaskedArrays necessitated code rewrites until Charles Doutriaux pointed out how much difficulty the change was causing to their code base.

There's a valuable lesson to be learnt here: unit tests provide a contract between the developer and the user. When I did the MaskedArray merge, I made very sure that we strictly stuck to our contract -- the unit tests for numpy.core.ma (which ran without failure). Unfortunately, according to Charles's experience, that contract was inadequate. We shouldn't be caught with our pants down like that.

The matplotlib guys used Pierre's maskedarray for a while before we did the merge, so we had good reason to believe that it was a vast improvement (and it was). I agree that the best policy would have been to make a point release right before the merge, but realistically, if we had had to wait for such a release, the new maskedarrays would still not have been merged.

Which brings me to my next point: we have very limited (human) resources, but releasing frequently is paramount. To what extent can we automate the release process? I've asked this question before, but I haven't had a clear answer: are the packages currently built automatically? Why don't we get the buildbots to produce nightly snapshot packages, so that when we tell users "try the latest SVN version, it's been fixed there", it doesn't send them into a dark depression?

As for the NumPy unit tests: I have placed coverage reports online (http://mentat.za.net/numpy/coverage). This only covers Python (not extension) code, but having that part 100% tested is not impossible, nor would it take that much effort. The much more important issue is having the C extensions tested, and if anyone can figure out a way to get gcov to generate those coverage reports, I'd be in seventh heaven. Thus far, the only way I know of is to build one large, static Python binary that includes numpy.

Memory errors: Albert Strasheim recently changed his build-client config to run Valgrind on the NumPy code. Surprise, surprise -- we introduced new memory errors since the last release. In the future, when *any* changes are made to the C code:

a) Add a unit test for the change, unless the test already exists (and I suggest we *strongly* enforce this policy).
b) Document your change if it is not immediately clear what it does.
c) Run the test suite through Valgrind, or if you're not on a Linux platform, look at the buildbot (http://buildbot.scipy.org) output.

Finally, our patch acceptance process is poor. It would be good if we could have a more formal system for reviewing incoming and *also* our own patches. I know Ondrej Certik had a review board in place for Sympy at some stage, so we could ask him what their experience was.

So, +1 for more frequent releases, +1 for more tests and +1 for good developer discipline.

Regards
Stéfan
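[Editorial note: the "run the test suite through Valgrind" step and the gcov idea amount to a command recipe along these lines on a Linux box. The suppressions-file name is an assumption (CPython ships a valgrind-python.supp in its Misc/ directory); the gcov flags are the stock GCC ones.]

```shell
# Run the Python test suite under Valgrind's memcheck tool.
# --suppressions quiets the known false positives from Python's
# allocator; adjust the path to wherever valgrind-python.supp lives.
valgrind --tool=memcheck --leak-check=full \
    --suppressions=valgrind-python.supp \
    python -c "import numpy; numpy.test()"

# For gcov-based C coverage, the extensions must be compiled with
# coverage instrumentation first, then the tests run as usual:
CFLAGS="-fprofile-arcs -ftest-coverage" python setup.py build
```

The catch Stéfan mentions is that gcov reports coverage per compiled object, which is awkward for dynamically loaded extension modules; hence the static-binary workaround.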
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 12:09 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:

> Which brings me to my next point: we have very limited (human) resources, but releasing frequently is paramount. To what extent can we automate the release process? I've asked this question before, but I haven't had a clear answer: are the packages currently built automatically? Why don't we get the buildbots to produce nightly snapshot packages, so that when we tell users "try the latest SVN version, it's been fixed there", it doesn't send them into a dark depression?

The packages aren't built automatically. We just changed the package build process. As of now, David C. is building the Windows binaries and Chris B. is building the Mac OS X binaries. In terms of getting the releases out, this is not a very important problem. The main issue is getting the code stable enough to make a release. However, your suggestion to have nightly binaries autogenerated would be more useful than telling users to try the latest SVN. I assume that we should be able to have these binaries auto-generated; David and Chris should be able to provide a more detailed answer to this than I can. However, my concern is that, I believe, we currently need some oversight to ensure that the binaries don't have silly problems. Regardless of whether we automate the process or still have a manual aspect to it, it is a good idea to start creating binaries for release candidates or test releases.

> Memory errors: Albert Strasheim recently changed his build-client config to run Valgrind on the NumPy code. Surprise, surprise -- we introduced new memory errors since the last release. In the future, when *any* changes are made to the C code:
>
> a) Add a unit test for the change, unless the test already exists (and I suggest we *strongly* enforce this policy).
> b) Document your change if it is not immediately clear what it does.
> c) Run the test suite through Valgrind, or if you're not on a Linux platform, look at the buildbot (http://buildbot.scipy.org) output.

I would be happy to see a policy like this adopted. You have my vote.

> Finally, our patch acceptance process is poor. It would be good if we could have a more formal system for reviewing incoming and *also* our own patches. I know Ondrej Certik had a review board in place for Sympy at some stage, so we could ask him what their experience was.

We need to be very careful not to make the process for contributing code too burdensome. The number of developers is increasing and our development community is growing; I don't want to see this process reversed. If we go this route, we may need better tools for creating and reviewing patches.

I would recommend taking it slow. There are a number of suggestions for improving our development process. I would prefer changing only one or just a few aspects at a time. That way we can more easily understand what effect these changes have on our development process. This is another argument for increasing the frequency of releases: it would give us more of an opportunity to review and improve the process.

--
Jarrod Millman
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, 2008-05-12 at 09:09 +0200, Stéfan van der Walt wrote:

> [clip]
> As for the NumPy unit tests: I have placed coverage reports online (http://mentat.za.net/numpy/coverage). This only covers Python (not extension) code, but having that part 100% tested is not impossible, nor would it take that much effort.

Small thing: would it be possible to use only two colors in the figleaf output, e.g. uncovered code red, everything else black? The black bits in between are now slightly distracting.

--
Pauli Virtanen
Re: [Numpy-discussion] Going toward time-based release ?
On 12 May 2008, at 04:59, David Cournapeau wrote:

> Also, time-based releases are by definition predictable, and as such, it is easier to plan upgrades for users.

As long as it does not imply that users have to upgrade every 3 months, because for some users this is impossible and/or undesirable. By 'upgrading' I'm not only referring to numpy/scipy, but also to external packages based on numpy/scipy.

Like Mike, I'm a bit sceptical about the whole idea. The current way doesn't seem broken, so why fix it? The argument that time-based releases avoid bugs caused by putting new untested things late in the release doesn't sound very convincing to me. Isn't this a discipline issue? To me it seems that one can avoid this with feature-based releases too. In fact, won't the time pressure of time-based releases increase the tendency to include untested things?

Just the 2 cents of a user,
Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
Re: [Numpy-discussion] Going toward time-based release ?
Joris De Ridder wrote:

> As long as it does not imply that users have to upgrade every 3 months, because for some users this is impossible and/or undesirable. By 'upgrading' I'm not only referring to numpy/scipy, but also to external packages based on numpy/scipy.

As I said, time-based releases do not imply that we will break something every release. This is really an orthogonal issue. I certainly hope that the recent ma/matrix thing won't happen again. If it were me, I would never have accepted the ma or any matrix change in the numpy 1.* timespan. Concerning packages which depend on numpy, there is not much we can do: if they depend on new features in numpy, you will have to upgrade, but this has nothing to do with the release process.

> Like Mike, I'm a bit sceptical about the whole idea. The current way doesn't seem broken, so why fix it?

If the recent events do not show that something went wrong, I don't know what will :) numpy 1.0.4 was released 6 months ago, and numpy 1.1 has slipped for a long time now. If there is no schedule, it is easy to keep adding code, especially when the release approaches ("I want to see my changes in the next release, let's push some code just before the release"). The plan really is to have a code freeze and hard freeze before each release, and time-based releases are more natural for that, IMO, because contributors can plan and synchronize more easily. Maybe it won't work, but it's worth a try. As Robert mentioned before, the code freeze and hard code freeze are what matters.

cheers,

David
Re: [Numpy-discussion] promises, promises
On Sun, May 11, 2008 at 12:16 PM, Alan G Isaac [EMAIL PROTECTED] wrote:
>> To be specific: I do not recall any place in the NumPy Book where this behavior is promised.

On Sun, 11 May 2008, Robert Kern apparently wrote:
> It's promised in the docstring! "A matrix is a specialized 2-d array that retains it's 2-d nature through operations"

I guess I would say that is too ambiguous. And wrong::

    >>> x = np.mat('1 2;3 4')
    >>> x[0,0]
    1

Cheers,
Alan Isaac
Re: [Numpy-discussion] Uncomfortable with matrix change
On Mon, May 12, 2008 at 1:46 AM, Chris.Barker [EMAIL PROTECTED] wrote:

> Could we add a "from __future__ import something" along with a deprecation warning?

That's a Python language feature. It is not available to us.

--
Robert Kern
Re: [Numpy-discussion] bug in oldnumeric.ma
All,

I fixed the power function in numpy.ma following Anne's suggestion: compute first, mask the problems afterwards. It's a quick-and-dirty fix that crashes if the user has set their error system to raise an exception on invalid (np.seterr(invalid='raise')), but it works otherwise and keeps subclasses (such as TimeSeries). I will have to modify the .__pow__ method so that ma.power is called: right now, a**b calls ndarray(a).__pow__(b), which may yield NaNs and Infs.

What should I do with oldnumeric.ma.power? Try to fix it the same way, or leave the bug? I'm not that enthusiastic about having to debug the old package, but if it's part of the job...
Re: [Numpy-discussion] bug in oldnumeric.ma
2008/5/12 Pierre GM [EMAIL PROTECTED]:

> I fixed the power function in numpy.ma following Anne's suggestion: compute first, mask the problems afterwards. It's a quick-and-dirty fix that crashes if the user has set their error system to raise an exception on invalid (np.seterr(invalid='raise')), but it works otherwise and keeps subclasses (such as TimeSeries). I will have to modify the .__pow__ method so that ma.power is called: right now, a**b calls ndarray(a).__pow__(b), which may yield NaNs and Infs. What should I do with oldnumeric.ma.power? Try to fix it the same way, or leave the bug? I'm not that enthusiastic about having to debug the old package, but if it's part of the job...

We should leave the oldnumeric.ma package alone. Even if its `power` is broken, some packages may depend on it. We'll provide it in 1.2 for backward compatibility, and get rid of it after 1.3.

Regards
Stéfan
Re: [Numpy-discussion] bug in oldnumeric.ma
On Monday 12 May 2008 12:04:24 Stéfan van der Walt wrote: 2008/5/12 Pierre GM [EMAIL PROTECTED]: What should I do with oldnumeric.ma.power? Try to fix it the same way, or leave the bug? I'm not that enthusiastic about having to debug the old package, but if it's part of the job... We should leave the oldnumeric.ma package alone. Even if its `power` is broken, some packages may depend on it. We'll provide it in 1.2 for backward compatibility, and get rid of it after 1.3. OK then, I prefer that... Additional question: when raising to a power in place, NaNs/Infs can show up in the _data part: should I set those invalid data to fill_value or not?
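The __pow__ modification mentioned earlier in the thread can be sketched with a toy subclass. This is not the actual MaskedArray code; DemoMasked is an invented class used only to show the idea of routing ** away from raw ndarray.__pow__:

```python
import numpy as np

class DemoMasked(np.ndarray):
    # Hypothetical ndarray subclass: override __pow__ so that a**b goes
    # through a masking power instead of raw ndarray.__pow__, which can
    # leave NaNs/Infs in the data part.
    def __pow__(self, other):
        with np.errstate(invalid='ignore', divide='ignore'):
            data = np.asarray(self).astype(float) ** other
        return np.ma.masked_array(data, mask=~np.isfinite(data))

a = np.array([-1.0, 2.0]).view(DemoMasked)
r = a ** 0.5  # first entry is invalid and comes back masked
```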
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 2:08 AM, Jarrod Millman [EMAIL PROTECTED] wrote: Are you saying that the changes to histogram and median should require waiting until 2.0--several years from now? When I say that we may allow minor API breakage this is the kind of thing I mean. I think that both instances are very reasonable and clean up minor warts. I also think that in both cases the implementation plan is reasonable. They first provide the new functionality as an option in a new minor release, then in the subsequent release switch the new functionality to the default but leave the old behavior accessible. I think they are tolerable. Like I said, I don't think our yardstick should be reasonableness. Reason is too easily fooled in these instances. The MA situation was handled slightly differently, which is unfortunate. I think that it is a reasonable change to make in a minor release, but we should have provided a warning first. At least, we are providing the old code still. Do you think that we should revisit this issue? Since we are importing the new MA from a different location maybe we should move the old code back to its original location. Would it be reasonable to have: from numpy import ma -- new code from numpy.core import ma -- old code w/ a deprecation warning I think that might be better, yes. -- Robert Kern
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 5:31 AM, Joris De Ridder [EMAIL PROTECTED] wrote: As long as it does not imply that users have to upgrade every 3 months, because for some users this is impossible and/or undesirable. By 'upgrading' I'm not only referring to numpy/scipy, but also to external packages based on numpy/scipy. I can't imagine why anyone would have to upgrade. Could you explain under what circumstances you would see having to upgrade just because there is a new upstream release? I think I must be misunderstanding your concern. Like Mike, I'm a bit sceptical about the whole idea. The current way doesn't seem broken, so why fix it? The argument that time-based releases avoid bugs caused by putting new untested things late in the release doesn't sound very convincing to me. Isn't this a discipline issue? To me it seems that one can avoid this with feature-based releases too. In fact won't the time-pressure of time-based releases increase the tendency to include untested things? As a data point, I have to say that I view the current process as a bit broken. I have been very frustrated trying to get this latest release out, and if you remember, the reason I took over release management last summer was that the current release of NumPy and SciPy didn't work together for several months prior to my releasing 1.0.3.1 and 0.5.2.1. I am a little surprised that anyone would believe that the current system isn't broken. Now I don't think that means the solution is necessarily to move to a time-based release schedule. But since I was hoping to do something like a time-based release for 1.2 anyway, I am happy to try it and see how it works. The concern is obviously not to include untested things; no one wants to do that. If we get to the end of the 3 months and can't release a stable, well-tested release by using the trunk or by discarding some features, I think we would just consider the experiment a failure. There is nothing to force us to release untested code.
If you look at the people in favor of this, you will find they are also some of the most adamant voices about including unit tests for all new code as well. So I don't believe any of us will require that we release 1.2 at the end of three months regardless of the quality. The more likely scenario is that those of us supporting this idea will be able to use the time-based release cycle as a mechanism to ensure high-quality, well-tested code is produced. Again, if this experiment fails, we should know before we actually make the release. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
[Numpy-discussion] Installing numpy/scipy on CentOS 4
Hello, I'm having trouble getting the python-numpy RPM to build under CentOS 4.6. I've already built and installed atlas, lapack3 and refblas3 from the CentOS 5 source RPMS, but numpy won't build correctly. Although the ultimate error may be unrelated to Atlas, clearly not all the atlas dependencies are being satisfied. Only the .so files are supplied by the RPMS, but it seems that some additional lib* files may be needed as well. Side notes; * gcc-g77 is not installed (apparently causes problems), gfortran is installed from the gcc4 RPM. * I ran export LD_LIBRARY_PATH=/usr/lib/atlas/sse2 * I created the missing directories in /var/tmp/python-numpy-1.0.4-build but it's really not clear that this is really the problem * The numpy SPEC file shows a dependency on lapack3 3.1, but rpmbuild doesn't complain about this, and it seems everyone uses lapack 3.0.x * Upgrading to CentOS 5 is not an option Can anyone provide any insight as to what exactly is missing from Atlas and how to correctly resolve this dependency? Any other tips are appreciated. rpmbuild output below. Chris # rpmbuild -vvv -bi SPECS/python-numpy.spec Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.7373 + umask 022 + cd /usr/src/redhat/BUILD + cd /usr/src/redhat/BUILD + rm -rf numpy-1.0.4 + /bin/gzip -dc /usr/src/redhat/SOURCES/numpy-1.0.4.tar.gz + tar -xf - + STATUS=0 + '[' 0 -ne 0 ']' + cd numpy-1.0.4 ++ /usr/bin/id -u + '[' 0 = 0 ']' + /bin/chown -Rhf root . ++ /usr/bin/id -u + '[' 0 = 0 ']' + /bin/chgrp -Rhf root . + /bin/chmod -Rf a+rX,u+w,g-w,o-w . + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.7373 + umask 022 + cd /usr/src/redhat/BUILD + cd numpy-1.0.4 + export 'CFLAGS=-O2 -g -march=i386 -mcpu=i686 -fPIC' + CFLAGS='-O2 -g -march=i386 -mcpu=i686 -fPIC' + python setup.py config_fc --fcompiler=gnu95 build Running from numpy source directory. 
non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_4422 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in /usr/lib/atlas/sse2 libraries f77blas,cblas,atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib/atlas libraries f77blas,cblas,atlas not found in /usr/lib/sse2 libraries f77blas,cblas,atlas not found in /usr/lib NOT AVAILABLE /usr/src/redhat/BUILD/numpy-1.0.4/numpy/distutils/system_info.py:1340: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. 
warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /usr/local/lib FOUND: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 FOUND: libraries = ['blas'] library_dirs = ['/usr/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas/sse2 libraries lapack_atlas not found in /usr/lib/atlas/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas libraries lapack_atlas not found in /usr/lib/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in /usr/lib/atlas/sse2 libraries lapack_atlas not found in /usr/lib/atlas/sse2 libraries f77blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib/atlas libraries lapack_atlas not found in /usr/lib/atlas libraries f77blas,cblas,atlas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib/sse2 libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE
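As the warning in the log above says, the ATLAS search path can be widened in numpy/distutils/site.cfg. A sketch of such a fragment, assuming ATLAS lives in /usr/lib/atlas/sse2 as in this report (the exact keys supported vary by numpy version; see numpy's site.cfg.example for the authoritative list):

```ini
[atlas]
# Directories searched for the ATLAS libraries, ahead of the defaults.
library_dirs = /usr/lib/atlas/sse2:/usr/lib
include_dirs = /usr/include/atlas
```

Alternatively, the ATLAS environment variable mentioned in the warning can be set to the library directory before running setup.py.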
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 9:18 AM, Robert Kern [EMAIL PROTECTED] wrote: The MA situation was handled slightly differently, which is unfortunate. I think that it is a reasonable change to make in a minor release, but we should have provided a warning first. At least, we are providing the old code still. Do you think that we should revisit this issue? Since we are importing the new MA from a different location maybe we should move the old code back to its original location. Would it be reasonable to have: from numpy import ma -- new code from numpy.core import ma -- old code w/ a deprecation warning I think that might be better, yes. I would like to hear from more people about whether they think this would be a useful solution to the current complaints about the new MA implementation. In particular, I would like to hear from at least the following people, please: Travis, Stefan, Pierre, and David. -- Jarrod Millman
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
On Mon, May 12, 2008 at 11:50 AM, Chris Miller [EMAIL PROTECTED] wrote: Hello, I'm having trouble getting the python-numpy RPM to build under CentOS 4.6. I've already built and installed atlas, lapack3 and refblas3 from the CentOS 5 source RPMS, but numpy won't build correctly. Although the ultimate error may be unrelated to Atlas, clearly not all the atlas dependencies are being satisfied. Only the .so files are supplied by the RPMS, but it seems that some additional lib* files may be needed as well. Side notes; * gcc-g77 is not installed (apparently causes problems), gfortran is installed from the gcc4 RPM. * I ran export LD_LIBRARY_PATH=/usr/lib/atlas/sse2 * I created the missing directories in /var/tmp/python-numpy-1.0.4-build but it's really not clear that this is really the problem * The numpy SPEC file shows a dependency on lapack3 3.1, but rpmbuild doesn't complain about this, and it seems everyone uses lapack 3.0.x * Upgrading to CentOS 5 is not an option Can anyone provide any insight as to what exactly is missing from Atlas and how to correctly resolve this dependency? Any other tips are appreciated. rpmbuild output below. 
running build_scripts creating build/scripts.linux-i686-2.3 Creating build/scripts.linux-i686-2.3/f2py adding 'build/scripts.linux-i686-2.3/f2py' to scripts changing mode of build/scripts.linux-i686-2.3/f2py from 644 to 755 + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.28086 + umask 022 + cd /usr/src/redhat/BUILD + cd numpy-1.0.4 + /usr/lib/rpm/brp-compress + /usr/lib/rpm/brp-strip + /usr/lib/rpm/brp-strip-static-archive + /usr/lib/rpm/brp-strip-comment-note Processing files: python-numpy-1.0.4-4.1 error: File not found by glob: /var/tmp/python-numpy-1.0.4-build/usr/bin/* error: File not found by glob: /var/tmp/python-numpy-1.0.4-build/usr/lib/python*/site-packages/numpy error: File not found by glob: /var/tmp/python-numpy-1.0.4-build/usr/lib/python*/site-packages/numpy*.egg-info It looks like nothing actually executed python setup.py install to put the files into /var/tmp/python-numpy-1.0.4-build/. I suspect a problem in the spec file. Unfortunately, I am not familiar with building RPMs, so I don't know where you got the spec file from or where you can get a good one. -- Robert Kern
Re: [Numpy-discussion] Going toward time-based release ?
Jarrod Millman wrote: I can't imagine why anyone would have to upgrade. Could you explain under what circumstances you would see having to upgrade just because there is a new upstream release? I think I must be misunderstanding your concern. One circumstance in which you would need to upgrade is if you distribute software with a numpy dependency. If your user base upgrades to the latest numpy release, and that latest release breaks your code, you will have unhappy users. -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 5:41 AM, David Cournapeau [EMAIL PROTECTED] wrote: Joris De Ridder wrote: Like Mike, I'm a bit sceptical about the whole idea. The current way doesn't seem broken, so why fix it? If the recent events do not show that something went wrong, I don't know what will :)... The plan really is to have a code freeze and hard freeze between each release, and time-based releases are more natural for that IMO, because contributors can plan and synchronize more easily. Maybe it won't work, but it's worth a try; as Robert mentioned before, the code freeze and hard code freeze are what matters. I agree with this, so I think I'll have to apologize for misunderstanding the precise intent of the original message. Code freezing and plenty of testing to a schedule are a good thing. My only concern is that people don't feel compelled to push out a new release simply because the calendar rolled over (Fedora cough; Ubuntu, cough, cough). If there is a compelling feature set or bug fix, then by all means, set the schedule and go for it. Just don't fire up a release solely to show signs of life. Mike P.S. I'll take that median change in 1.1, wink :-) -- [EMAIL PROTECTED]
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Robert Kern wrote: It looks like nothing actually executed python setup.py install to put the files into /var/tmp/python-numpy-1.0.4-build/. I suspect a problem in the spec file. Unfortunately, I am not familiar with building RPMs, so I don't know where you got the spec file from or where you can get a good one. You're right about that. There was an OS flavor conditional that was not being evaluated properly. I commented these out and nailed the proper commands and it compiled. I'm still concerned about the missing library warnings, but I'll give it a try and see if it works properly. Thanks! Chris
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 10:30 AM, Pierre GM [EMAIL PROTECTED] wrote: numpy.oldnumeric.ma is not bug-free, however: Charles recently pointed out a problem with power, which failed to work properly with a float exponent. We fixed the problem in numpy.ma, but left the old package unmodified. In short, numpy.oldnumeric.ma is no longer supported, and left for convenience. My question was whether there was some way we could, by making some very minor changes, make the transition more gradual. Specifically, I was suggesting two things: 1. It seemed to me that a lot of the users of the old ma implementation called it from np.core. Since the new implementation doesn't get called from there, would it make sense to have the old implementation reside in np.core rather than in np.oldnumeric? 2. Should we add an import warning to the old implementation explaining the change? Something like np.core.ma (or np.oldnumeric.ma) is deprecated. It is no longer being supported, so it will no longer receive bug fixes. Please consider using np.ma. In 1.3, np.core.ma is moving to np.oldnumeric.ma. Any comments on whether this would be helpful or useful? Is there anything else we should consider to ease the pains caused by this transition? Thanks, -- Jarrod Millman
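The import warning in suggestion (2) could be a one-liner at the top of the shim module. A minimal standalone sketch (_warn_old_ma is an invented helper standing in for the module-level code; the message is the one proposed above):

```python
import warnings

def _warn_old_ma():
    # What importing the old module could emit under suggestion (2).
    warnings.warn(
        "np.core.ma is deprecated. It is no longer being supported, "
        "so it will no longer receive bug fixes. Please consider using np.ma.",
        DeprecationWarning,
        stacklevel=2,
    )

# Demonstrate that the warning fires with the right category.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _warn_old_ma()
```

In the real shim the warnings.warn call would simply sit at module top level, followed by a star-import of the old implementation.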
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 10:02 AM, Christopher Hanley [EMAIL PROTECTED] wrote: One circumstance in which you would need to upgrade is if you distribute software with a numpy dependency. If your user base upgrades to the latest numpy release, and that latest release breaks your code, you will have unhappy users. I see, the issue is whether you (plural) will need to update your code base to support your users who may have updated to a new NumPy/SciPy release. This concern really goes to whether we should ever break code with our releases, which is orthogonal to whether we should try using a time-based release cycle. It is very clear that our users are not happy with the amount of API breaks in 1.1. All I can say is that I am sorry that the current release is going to break some code bases out there. I am trying to figure out if there is a way to mitigate the problems caused by this release and would be happy to hear comments about how we could best reduce them. In particular, it would be useful if I could get some feedback on my suggestion about the MA transition. Thanks, -- Jarrod Millman
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
On Mon, May 12, 2008 at 12:29 PM, Chris Miller [EMAIL PROTECTED] wrote: Robert Kern wrote: It looks like nothing actually executed python setup.py install to put the files into /var/tmp/python-numpy-1.0.4-build/. I suspect a problem in the spec file. Unfortunately, I am not familiar with building RPMs, so I don't know where you got the spec file from or where you can get a good one. You're right about that. There was an OS flavor conditional that was not being evaluated properly. I commented these out and nailed the proper commands and it compiled. I'm still concerned about the missing library warnings, but I'll give it a try and see if it works properly. Can you show us $ ls -l /usr/lib/atlas/sse2 ? -- Robert Kern
Re: [Numpy-discussion] Going toward time-based release ?
On Monday 12 May 2008 13:48:55 you wrote: 1. It seemed to me that a lot of the users of the old ma implementation called it from np.core. Since the new implementation doesn't get called from there, would it make sense to have the old implementation reside in np.core rather than in np.oldnumeric? That sounds like a good idea if np.oldnumeric is to disappear very soon. If not, maybe we could have np.core.ma point to np.oldnumeric.ma (or just do a from np.oldnumeric.ma import *) 2. Should we add an import warning to the old implementation explaining the change? Something like np.core.ma (or np.oldnumeric.ma) is deprecated. It is no longer being supported, so it will no longer receive bug fixes. Please consider using np.ma. In 1.3, np.core.ma is moving to np.oldnumeric.ma. Sounds good. Any comments on whether this would be helpful or useful? Is there anything else we should consider to ease the pains caused by this transition? A wiki page on www.scipy.org where we would (1) describe the API changes (it's now hidden in the DeveloperZone); (2) suggest some simple solutions to common problems. (1) is easy to do; (2) would be updated when needed.
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Robert Kern wrote: Can you show us $ ls -l /usr/lib/atlas/sse2 # ls -l /usr/lib/atlas/sse2 total 3868 -rw-r--r-- 1 root root 3705240 May 11 13:02 libblas.so.3.0 -rw-r--r-- 1 root root 243466 May 11 13:02 liblapack.so.3.0 Chris
Re: [Numpy-discussion] ticket 788: possible blocker
To close out this thread: With r5155 Travis fixed the problem, so the ticket is closed. Thank you! Eric Eric Firing wrote: I have added a patch to the ticket. I believe it fixes the problem. It
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
On Mon, May 12, 2008 at 1:10 PM, Chris Miller [EMAIL PROTECTED] wrote: Robert Kern wrote: Can you show us $ ls -l /usr/lib/atlas/sse2 # ls -l /usr/lib/atlas/sse2 total 3868 -rw-r--r-- 1 root root 3705240 May 11 13:02 libblas.so.3.0 -rw-r--r-- 1 root root 243466 May 11 13:02 liblapack.so.3.0 Okay, can you also do the following: $ ls -l /usr/lib/lib{lapack,f77blas,cblas,atlas}.* $ ls -l /usr/lib/atlas/lib{lapack,f77blas,cblas,atlas}.* Is there an atlas-devel RPM that you also need to install? -- Robert Kern
Re: [Numpy-discussion] Going toward time-based release ?
Hi, As Fernando mentioned, we are considering moving to a time-based release process with IPython1. Obviously, IPython1 is a very different project than numpy, but I figured it might be useful to state some of the other reasons we are thinking about going this direction: 1. It stops feature creep - oh, just this one more thing and then we will release We really struggle with this in IPython1. 2. It is a way of saying to the community regularly this project is not dead, we are fixing bugs and adding new features all the time. For those of us who follow the lists, this is not as big of a deal, but _many_ users are not on the lists. When they see that there hasn't been a release in 6 months, what do they think? This is marketing. 3. It gets bug fixes and better tested code into users' hands sooner. Think of all the bugs that have been fixed since 1.0.4 was released in November! 4. It gives developers regular deadlines which (hopefully) motivate them to contribute code more regularly. An aside (this holds true for Numpy, Scipy and IPython): I have noticed that all of these projects are very slow to bump up the version numbers (for example, IPython0 is only at 0.8.x). I think this is unfortunate because it leaves us with no choice but to have API breakage + huge new features in minor releases. Why not be more bold and start bumping up the version numbers more quickly to reflect that lots of code is being written? My theory - we are all perfectionists and we are afraid to release NumPy 2.0 / SciPy 1.0 / IPython0 1.0 because we know the code is not perfect yet. I think Sage (they are 3.0 at this point and started long after scipy/numpy/ipython) is a great counterexample of a project that is not afraid to bump the version numbers up quickly to reflect fast moving code. My two cents. Cheers, Brian On Sun, May 11, 2008 at 8:59 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, I would like to know how people feel about going toward a time-based release process for numpy (and scipy).
By time-based release, I mean: - releases of numpy are time-based, not feature based. - a precise schedule is fixed, and the release manager(s) try to enforce this schedule. Why? I already suggested the idea a few months ago, and I relaunch it because I believe the recent masked array + matrix issues could have been somewhat avoided with such a process (from a release point of view, of course). With a time-based release, there is a period where people can write to the release branch and try new features, and a freeze period where only bug fixes are allowed (and normally, no API changes are allowed). Also, time-based releases are by definition predictable, and as such, it is easier to plan upgrades for users, and to plan breaks for developers (for example, if we release say every 3 months, we would allow one or two releases to warn about future incompatible changes, before breaking them for real: people would know it means 6 months to change their code). The big drawback is of course that someone has to do the job. I like the way bzr developers do it: every new release, someone else volunteers to do the release, so it is not always the same person who does the boring job. Do other people see this suggestion as useful? If yes, we would have to decide on: - a release period (3 months sounds like a reasonable period to me?) - a schedule within a release (API breaks would only be allowed in the first month, code addition would be allowed up to two months, and only bug fixes the last month, for example). - who does the process (if nobody steps in, I would volunteer for the first round, if only for seeing how/if it works). cheers, David
Re: [Numpy-discussion] Going toward time-based release ?
Jarrod Millman wrote: [...] It is very clear that our users are not happy with the amount of API breaks in 1.1. All I can say, is that I am sorry that the current release is going to break some code bases out there. I am trying to figure out if there is a way to mitigate the problems caused by this release and would be happy to hear comments about how we could best reduce the problems caused by this release. In particular, it would be useful if I could get some feedback on my suggestion about the MA transition. Jarrod, As one who pushed for the MA transition, I appreciate your suggestion. It may have one unintended consequence, however, that may cause more trouble than it saves: it may lead to more situations where the ma versions are unintentionally mixed. This will probably work if an old_ma array ends up as input for a new_ma function, but the reverse often will not work correctly. (Pierre GM tried to make his MaskedArray handle old-style ma arrays gracefully, but there were no changes to old ma to make the reverse true.) Here is an illustration (untested, but should show the problem):

module1.py:
---
import numpy.core.ma as ma  # with your suggestion, imports old_ma

def dummy(arr):
    return ma.sin(arr)

script.py:
---
from module1 import dummy
from pylab import *  # now ma is new_ma

x = ma.array([1, 2, 3], mask=[True, False, False])
plot(dummy(x))

The earlier strategy, of putting the old version solely in oldnumeric, is somewhat less likely to cause the problem because it requires anyone wanting the old version to deliberately select it--so at least the person is then aware that something has changed. Eric
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Robert Kern wrote: Okay, can you also do the following: $ ls -l /usr/lib/lib{lapack,f77blas,cblas,atlas}.* $ ls -l /usr/lib/atlas/lib{lapack,f77blas,cblas,atlas}.* -rw-r--r-- 1 root root 5083708 May 9 16:10 /usr/lib/liblapack.a lrwxrwxrwx 1 root root 18 May 11 10:47 /usr/lib/liblapack.so - liblapack.so.3.0.0 lrwxrwxrwx 1 root root 18 May 11 10:47 /usr/lib/liblapack.so.3 - liblapack.so.3.0.0 -rw-r--r-- 1 root root 3831921 May 9 16:10 /usr/lib/liblapack.so.3.0.0 On the f77 stuff, I read specifically to delete the gcc-f77 package as that conflicts with gcc-gfortran. The latter we are using is actually gcc4 not gcc3 (fyi). I see in Atlas that G77 is defined as gfortran, so this may not be an issue. Is there an atlas-devel RPM that you also need to install? The Source RPM from the ashigabou repository does not generate a -devel RPM. I used the CentOS5 SRPMS here : http://download.opensuse.org/repositories/home:/ashigabou/CentOS_5/src/ Looks like the devel generation stuff in the spec file is incomplete and commented out. I did try to install Atlas from source at one point, but numpy still had issues finding libraries. Chris
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
On Mon, May 12, 2008 at 2:19 PM, Chris Miller [EMAIL PROTECTED] wrote: Robert Kern wrote: Okay, can you also do the following: $ ls -l /usr/lib/lib{lapack,f77blas,cblas,atlas}.* $ ls -l /usr/lib/atlas/lib{lapack,f77blas,cblas,atlas}.* -rw-r--r-- 1 root root 5083708 May 9 16:10 /usr/lib/liblapack.a lrwxrwxrwx 1 root root 18 May 11 10:47 /usr/lib/liblapack.so - liblapack.so.3.0.0 lrwxrwxrwx 1 root root 18 May 11 10:47 /usr/lib/liblapack.so.3 - liblapack.so.3.0.0 -rw-r--r-- 1 root root 3831921 May 9 16:10 /usr/lib/liblapack.so.3.0.0 On the f77 stuff, I read specifically to delete the gcc-f77 package as that conflicts with gcc-gfortran. The latter we are using is actually gcc4 not gcc3 (fyi). I see in Atlas that G77 is defined as gfortran, so this may not be an issue. It's still called libf77blas regardless. Well, you don't have ATLAS installed. Or if you do, it's an extremely weird installation of ATLAS. Is there an atlas-devel RPM that you also need to install? The Source RPM from the ashigabou repository does not generate a -devel RPM. I used the CentOS5 SRPMS here : http://download.opensuse.org/repositories/home:/ashigabou/CentOS_5/src/ Looks like the devel generation stuff in the spec file is incomplete and commented out. I did try to install Atlas from source at one point, but numpy still had issues finding libraries. Can you show me a list of files in the RPM that you installed? I don't know the rpm command off-hand. Since these are David Cournapeau's RPMs, perhaps he can chime in, here. -- Robert Kern
Re: [Numpy-discussion] Going toward time-based release ?
This is a patch for the previous message; I got distracted and failed to finish a sentence. Eric Firing wrote: [...] versions are unintentionally mixed. This will probably work if an old_ma array ends up as input for a new_ma function, but the reverse often will not work correctly. (Pierre GM tried to make his MaskedArray handle old-style ma arrays gracefully, but there were no changes to old ma to make the reverse true.)
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Robert Kern wrote: On Mon, May 12, 2008 at 2:19 PM, Chris Miller [EMAIL PROTECTED] wrote: On the f77 stuff, I read specifically to delete the gcc-f77 package, as it conflicts with gcc-gfortran. The compiler we are using is actually gcc4, not gcc3 (FYI). I see that in ATLAS G77 is defined as gfortran, so this may not be an issue. It's still called libf77blas regardless. Well, you don't have ATLAS installed. Or if you do, it's an extremely weird installation of ATLAS. Can you show me a list of files in the RPM that you installed? I don't know the rpm command off-hand. Since these are David Cournapeau's RPMs, perhaps he can chime in here. I think you hit the nail on the head: we need the -devel package, which would contain the files in /usr/lib. All this RPM has is the shared objects: # rpm -ql atlas /usr/lib/atlas/sse2/libblas.so.3.0 /usr/lib/atlas/sse2/liblapack.so.3.0 David, do you have an ATLAS spec file that builds the -devel package? Chris
Re: [Numpy-discussion] Going toward time-based release ?
Brian Granger wrote: Hi, As Fernando mentioned, we are considering moving to a time-based release process with IPython1. Obviously, IPython1 is a very different project than numpy, but I figured it might be useful to state some of the other reasons we are thinking about going this direction: 1. It stops feature creep - "oh, just this one more thing and then we will release." We really struggle with this in IPython1. 2. It is a way of saying to the community regularly "this project is not dead; we are fixing bugs and adding new features all the time." For those of us who follow the lists, this is not as big of a deal, but _many_ users are not on the lists. When they see that there hasn't been a release in 6 months, what do they think? This is marketing. 3. It gets bug fixes and better-tested code into users' hands sooner. Think of all the bugs that have been fixed since 1.0.4 was released in November! This is a major point, and it has related aspects: 3a) The only way to get code fully tested is to get it out and in use. People have to try it in real life. There are severe limits to what unit testing can accomplish. A combination of something like daily (or weekly) builds and more frequent scheduled releases would facilitate more effective testing, and would give people more time to find out what works and what breaks, what we need to fix and what they will need to change. 3b) The above is one of the reasons I favored merging the new MA implementation--it needed more exposure, and there seemed to be no other mechanism to get it--but the other is that given the uncertainty regarding release timing it looked like practically now or never. With a release cycle and a policy for changes in place, the merge could have been planned better and started earlier.
I am very sympathetic to the argument for stability, but it has to be balanced against the need for genuine improvement and maintenance, and against the developers' time burden associated with maintaining multiple versions, or support for multiple versions. 4. It gives developers regular deadlines which (hopefully) motivate them to contribute code more regularly. An aside (this holds true for Numpy, Scipy and IPython): I have noticed that all of these projects are very slow to bump up the version numbers (for example, IPython0 is only at 0.8.x). I think this is unfortunate because it leaves us with no choice but to have API breakage + huge new features in minor releases. Why not be more bold and move the version numbers up more quickly to reflect that lots of code is being written? My theory - we are all perfectionists and we are afraid to release NumPy 2.0/SciPy 1.0/IPython0-1.0 because we know the code is not perfect yet. I think Sage (they are at 3.0 at this point and started long after scipy/numpy/ipython) is a great counterexample of a project that is not afraid to bump the version numbers up quickly to reflect fast-moving code. My two cents. All good points. Thank you. Eric Cheers, Brian On Sun, May 11, 2008 at 8:59 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, I would like to know how people feel about going toward a time-based release process for numpy (and scipy). By time-based release, I mean: - releases of numpy are time-based, not feature-based. - a precise schedule is fixed, and the release manager(s) try to enforce this schedule. Why? I already suggested the idea a few months ago, and I am relaunching it because I believe the recent masked array + matrix issues could have been somewhat avoided with such a process (from a release point of view, of course).
With a time-based release, there is a period where people can write to the release branch and try new features, and a freeze period where only bug fixes are allowed (and normally, no API changes are allowed). Also, time-based releases are by definition predictable, and as such, it is easier to plan upgrades for users, and to plan breaks for developers (for example, if we release say every 3 months, we would allow one or two releases to warn about future incompatible changes, before breaking them for real: people would know it means 6 months to change their code). The big drawback is of course that someone has to do the job. I like the way bzr developers do it; for every new release, someone else volunteers to do the release, so it is not always the same person doing the boring job. Do other people see this suggestion as useful? If yes, we would have to decide on: - a release period (3 months sounds like a reasonable period to me?) - a schedule within a release (API breaks would only be allowed in the first month, code additions would be allowed up to two months, and only bug fixes in the last month, for example). - who does the process (if nobody steps in, I would volunteer for the first round, if only for seeing
Re: [Numpy-discussion] ticket 788: possible blocker
2008/5/12 Eric Firing [EMAIL PROTECTED]: To close out this thread: With r5155 Travis fixed the problem, so the ticket is closed. Strange, when I look over that patch, my keyboard automatically holds in Shift and starts typing 1 through 9: (*@#*@#[EMAIL PROTECTED](*)@# I ask you with tears in my eyes: where is the regression test? Stéfan
Re: [Numpy-discussion] Going toward time-based release ?
On Mon, May 12, 2008 at 12:57 PM, Stéfan van der Walt [EMAIL PROTECTED] wrote: I think that's a good idea. If I recall correctly, numpy.core.ma had been exposed as numpy.ma, and some code uses it. Having an old and a new version in places where previously only the old version used to be won't solve the problem. The two MaskedArray modules are (supposed to be) API compatible, and we shouldn't see any breakage (Charles' experience was unfortunate, but that's been sorted out now); therefore, I'd expose the new masked arrays as numpy.ma and add a warning to the release message, which also refers anyone with problems to numpy.oldnumeric.ma (which, btw, has a number of bugs itself, which were discovered while coding the new masked arrays). Do you think that the release notes already capture this: http://projects.scipy.org/scipy/numpy/milestone/1.1.0 If not, please feel free to suggest some additional language. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Robert, I looked in the build directory from ATLAS and it is indeed generating the additional libraries. I copied them over to /usr/lib and rebuilt the numpy RPM. Indeed the behavior is different, and numpy sees the libraries. Of course some new oddities have entered the equation. Specifically this: * Lapack library (from ATLAS) is probably incomplete: size of /usr/lib/liblapack.so is 3742k (expected 4000k) Follow the instructions in the KNOWN PROBLEMS section of the file numpy/INSTALL.txt. * The INSTALL file is missing :-( I wonder if this is really a problem or just the result of stripping the binary (assuming RPM stripped it). I rebuilt lapack3 but the binary is still under 4MB. Let me know what you think. Below is the rest of the relevant output. BTW, thanks for your responsiveness to this issue. Chris F2PY Version 2_4422 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2 Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib'] language = c [snip] lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib/atlas libraries lapack_atlas not found in /usr/lib/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info
Setting PTATLAS=ATLAS /usr/src/redhat/BUILD/numpy-1.0.4/numpy/distutils/system_info.py:955: UserWarning: * Lapack library (from ATLAS) is probably incomplete: size of /usr/lib/liblapack.so is 3742k (expected 4000k) Follow the instructions in the KNOWN PROBLEMS section of the file numpy/INSTALL.txt. * warnings.warn(message) Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/lib'] language = c
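For reference, the "probably incomplete" warning above comes from a simple file-size heuristic in numpy.distutils.system_info: an ATLAS-only liblapack contains just a handful of optimized routines, so it is much smaller than a full LAPACK build. A sketch of the idea (the 4000k threshold is taken from the warning text; the function name is invented):

```python
import os

def lapack_probably_complete(path, expected_kb=4000):
    """Return True if liblapack at `path` looks like a full LAPACK.

    Mirrors the heuristic behind the system_info UserWarning: a library
    well under ~4000k is probably the ATLAS-only subset.
    """
    size_kb = os.path.getsize(path) / 1024.0
    return size_kb >= expected_kb
```

As Chris suspects, symbol stripping can shrink the file and trip this check even when the library is complete, which is why it is only a warning.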
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
On Mon, May 12, 2008 at 3:44 PM, Chris Miller [EMAIL PROTECTED] wrote: Robert, I looked in the build directory from Atlas and it is indeed generating the additional libraries. I copied them over to /usr/lib and rebuilt the numpy RPM. Indeed the behavior is different, and numpy sees the libraries. Of course some new oddities have entered the equation. Specifically this : * Lapack library (from ATLAS) is probably incomplete: size of /usr/lib/liblapack.so is 3742k (expected 4000k) Follow the instructions in the KNOWN PROBLEMS section of the file numpy/INSTALL.txt. * The INSTALL file is missing :-( I wonder if this is really a problem or just the result of stripping the binary (assuming RPM stripped it). I rebuilt lapack3 but the binary is still under 4MB. Let me know what you think. Hmm. That text got copied over from scipy without thinking way back in the day. Anyways, see this: http://svn.scipy.org/svn/scipy/trunk/INSTALL.txt Basically, ATLAS only provides a few optimized LAPACK functions. To get a complete LAPACK, you need a FORTRAN LAPACK built first, then replace the relevant object files with the ATLAS-optimized versions. But you're probably correct that it's just symbol stripping. -- Robert Kern
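The merge Robert describes is traditionally done with `ar`: extract ATLAS's few optimized object files and replace the corresponding members of a full netlib liblapack.a. A hedged sketch of that procedure (the function name and paths are invented; the real steps are in scipy's INSTALL.txt):

```python
import glob
import os
import shutil
import subprocess

def merge_atlas_lapack(netlib_a, atlas_a, out_a, workdir):
    """Sketch of the INSTALL.txt procedure: copy the full netlib
    liblapack.a, then overwrite the handful of members that ATLAS
    ships optimized versions of."""
    shutil.copy(netlib_a, out_a)
    # extract ATLAS's object files into a scratch directory
    subprocess.check_call(["ar", "x", os.path.abspath(atlas_a)], cwd=workdir)
    members = glob.glob(os.path.join(workdir, "*.o"))
    # replace the corresponding members of the merged archive
    subprocess.check_call(["ar", "r", os.path.abspath(out_a)] + members)
```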
[Numpy-discussion] Test from work
This is a test -teo
Re: [Numpy-discussion] Going toward time-based release ?
2008/5/12 Jarrod Millman [EMAIL PROTECTED]: On Mon, May 12, 2008 at 12:05 PM, Eric Firing [EMAIL PROTECTED] wrote: As one who pushed for the MA transition, I appreciate your suggestion. It may have one unintended consequence, however, that may cause more trouble than it saves: it may lead to more situations where the ma versions are unintentionally mixed. This will probably work if an old_ma array ends up as input for a new_ma function, but the reverse often will not work correctly. Good point. I now agree that it is best to avoid having the old code accessible via np.core.ma. If the old code is only accessible via np.oldnumeric, then I don't see any reason to add a deprecation warning since it is kind of implied by placing it in np.oldnumeric. Does it make sense to make the *new* code available in numpy.core.ma, optionally with a DeprecationWarning? I realize it was supposedly never advertised as being there, but given the state of numpy documentation many users will have found it there; at least matplotlib did. Anne
Re: [Numpy-discussion] Going toward time-based release ?
Anne Archibald wrote: 2008/5/12 Jarrod Millman [EMAIL PROTECTED]: On Mon, May 12, 2008 at 12:05 PM, Eric Firing [EMAIL PROTECTED] wrote: As one who pushed for the MA transition, I appreciate your suggestion. It may have one unintended consequence, however, that may cause more trouble than it saves: it may lead to more situations where the ma versions are unintentionally mixed. This will probably work if an old_ma array ends up as input for a new_ma function, but the reverse often will not work correctly. Good point. I now agree that it is best to avoid having the old code accessible via np.core.ma. If the old code is only accessible via np.oldnumeric, then I don't see any reason to add a deprecation warning since it is kind of implied by placing it in np.oldnumeric. Does it make sense to make the *new* code available in numpy.core.ma, optionally with a DeprecationWarning? I realize it was supposedly never advertised as being there, but given the state of numpy documentation many users will have found it there; at the least matplotlib did. If it can be done cleanly with a DeprecationWarning, then I don't see any problem offhand. Eric Anne
Re: [Numpy-discussion] ticket 788: possible blocker
Stéfan van der Walt wrote: 2008/5/12 Eric Firing [EMAIL PROTECTED]: To close out this thread: With r5155 Travis fixed the problem, so the ticket is closed. Strange, when I look over that patch, my keyboard automatically holds in Shift and starts typing 1 through 9: (*@#*@#[EMAIL PROTECTED](*)@# I ask you with tears in my eyes: where is the regression test? Stefan, Maybe Travis could whip one up instantly, but I can't; I never figured out what the difference is between the dtypes of the ndarray that triggered the bug and the more common ones that don't, so I simply don't know how to make a test case. I'm sure I *could* figure it out, but it might take quite a bit of time--and in the grand scheme of things, I don't think that would be time particularly well-spent. I can't make it a priority. Eric
Re: [Numpy-discussion] ticket 788: possible blocker
On Mon, May 12, 2008 at 6:00 PM, Eric Firing [EMAIL PROTECTED] wrote: Stéfan van der Walt wrote: 2008/5/12 Eric Firing [EMAIL PROTECTED]: To close out this thread: With r5155 Travis fixed the problem, so the ticket is closed. Strange, when I look over that patch, my keyboard automatically holds in Shift and starts typing 1 through 9: (*@#*@#[EMAIL PROTECTED](*)@# I ask you with tears in my eyes: where is the regression test? Stefan, Maybe Travis could whip one up instantly, but I can't; I never figured out what is the difference between the dtypes of the ndarray that triggered the bug and the more common ones that don't, so I simply don't know how to make a test case. I'm sure I *could* figure it out, but it might take quite a bit of time--and in the grand scheme of things, I don't think that would be time particularly well-spent. I can't make it a priority. The pickled array should be sufficient. -- Robert Kern
Re: [Numpy-discussion] ticket 788: possible blocker
Robert Kern wrote: On Mon, May 12, 2008 at 6:00 PM, Eric Firing [EMAIL PROTECTED] wrote: Stéfan van der Walt wrote: 2008/5/12 Eric Firing [EMAIL PROTECTED]: To close out this thread: With r5155 Travis fixed the problem, so the ticket is closed. Strange, when I look over that patch, my keyboard automatically holds in Shift and starts typing 1 through 9: (*@#*@#[EMAIL PROTECTED](*)@# I ask you with tears in my eyes: where is the regression test? Stefan, Maybe Travis could whip one up instantly, but I can't; I never figured out what is the difference between the dtypes of the ndarray that triggered the bug and the more common ones that don't, so I simply don't know how to make a test case. I'm sure I *could* figure it out, but it might take quite a bit of time--and in the grand scheme of things, I don't think that would be time particularly well-spent. I can't make it a priority. The pickled array should be sufficient. I did not realize it was OK to include test data files in the tests subdirectory, but now I see that there is one, testdata.fits. Eric
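The regression test Robert suggests has a simple shape: pickle the offending array into the tests data directory once, then assert that unpickling preserves it. The sketch below uses an ordinary array as a stand-in; the real test would load the actual ticket-788 array from a data file next to the test module instead of constructing one.

```python
import pickle
import numpy as np

def check_pickle_roundtrip(arr):
    """Assert that a pickle round trip preserves dtype and contents."""
    restored = pickle.loads(pickle.dumps(arr))
    assert restored.dtype == arr.dtype
    assert (restored == arr).all()

# Stand-in for the ticket-788 data; any dtype that once failed would go here.
check_pickle_roundtrip(np.arange(5, dtype=np.float32))
```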
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Chris Miller wrote: Hello, I'm having trouble getting the python-numpy RPM to build under CentOS 4.6. I've already built and installed atlas, lapack3 and refblas3 from the CentOS 5 source RPMS, but numpy won't build correctly. Although the ultimate error may be unrelated to Atlas, clearly not all the atlas dependencies are being satisfied. Only the .so files are supplied by the RPMS, but it seems that some additional lib* files may be needed as well. The problem is that rpm is a mess, and each distribution needs special casing because they can't agree on what to put where. If you look at the rpm spec file, you will see that it is special cased for CENTOS/RHEL. I used version 5 because that's the only one available on the build system. You could change that to version 4 and see what happens. cheers, David
Re: [Numpy-discussion] Installing numpy/scipy on CentOS 4
Robert Kern wrote: It's still called libf77blas regardless. Well, you don't have ATLAS installed. Or if you do, it's an extremely weird installation of ATLAS. It is weird, but there is a rationale to the weirdness :) It is basically impossible to build an rpm for ATLAS, because every build produces a different binary. Because there is no atlas binary, I cannot depend on it, and so numpy/scipy depends on netlib blas/lapack. For people who still want to use atlas, I provided a source rpm that people can use if they want. Instead of building atlas the normal way, it builds full blas and lapack libraries, which are drop-in replacements for netlib blas/lapack, while using atlas optimizations. It means that atlas-specific optimizations in numpy/scipy won't be used, but, well, that's better than nothing, and it is certainly faster for many operations than netlib. It works pretty well on opensuse 10, fedora 6, 7, and 8, as well as RHEL 5, both 32 and 64 bits, so I would be surprised if there was a problem specific to centos related to atlas. All this is documented on the wiki. I am open to suggestions for better instructions: http://www.scipy.org/Installing_SciPy/Linux cheers, David
Re: [Numpy-discussion] ticket 788: possible blocker
Eric Firing wrote: Stéfan van der Walt wrote: 2008/5/12 Eric Firing [EMAIL PROTECTED]: To close out this thread: With r5155 Travis fixed the problem, so the ticket is closed. Strange, when I look over that patch, my keyboard automatically holds in Shift and starts typing 1 through 9: (*@#*@#[EMAIL PROTECTED](*)@# I ask you with tears in my eyes: where is the regression test? Stefan, Maybe Travis could whip one up instantly, but I can't; I think Stefan is asking me, not you. I don't think you should feel any sense of guilt. I was the one who closed the ticket sans regression test. I tend to still be of the opinion that a bug fix without a regression test is better than no bug fix at all. Obviously, checking whether future changes actually fix a bug (without introducing new ones) is a strong argument for regression tests, and I gratefully accept all tests submitted. -Travis