Re: [Numpy-discussion] Does float16 exist?

2008-01-09 Thread David M. Cooke

On Jan 9, 2008, at 00:00, Robert Kern wrote:

 Charles R Harris wrote:

 I see that there are already a number of parsers available for
 Python; SPARK, for instance, is included in the 2.5.1 distribution.

 No, it isn't.


It's used to generate the new parser in 2.5 (Parser/spark.py). It's  
just not included when installed :)

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread David M. Cooke

On Jan 8, 2008, at 04:36, David Cournapeau wrote:

 Ondrej Certik wrote:
 Hi,

 if you want to play with Mercurial now (without forcing everyone else
 to leave svn), I suggest this:

 http://cheeseshop.python.org/pypi/hgsvn

 I tried that and it works. It's a very easy way to create a hg mirror
 at your computer. And then you can take this
 as the official upstream repository (which you don't have write  
 access
 to). Whenever someone commits
 to the svn, you just do hgpullsvn and it updates your mercurial repo.

 Then you just clone it and create branches, for example the scons
 branch can be easily managed like this.
 Then you prepare patches, against your official local mercurial
 mirror, using for example
 hg export, or something, those patches should be possible to apply
 against the svn repository as well.
 You send them for review and then (you or someone else) commit them
 using svn, then you'll hgpullsvn your local mercurial mirror and
 merge the changes to all your other branches.

 The main problem of this approach is that it is quite heavy on the svn
 server; that's why it would be better if the mirrors are done only  
 once,
 and are publicly available, I think. Besides, it is easier (and  
 faster)
 to do the mirrors locally (or from the file:// method, or from a svn
 dump; both mercurial and bzr have methods to import from those)


At least for mercurial's convert command, it's a one-time thing -- you  
can't update a created repo from svn.

AFAIK, all the tools can specify a svn revision to start from, if you  
don't need history (or just recent history).
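
For anyone who wants to try this, the workflow Ondrej describes looks
roughly like the following (a sketch; the svn URL is an assumption,
check the project pages):

$ easy_install hgsvn                  # provides hgimportsvn and hgpullsvn
$ hgimportsvn http://svn.scipy.org/svn/numpy/trunk numpy-hg
$ cd numpy-hg && hgpullsvn            # pull new svn commits in as changesets
$ cd .. && hg clone numpy-hg numpy-work   # private branch for your changes
$ cd numpy-work
$ hg export tip > my-change.patch     # patch to send for review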

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread David M. Cooke
On Jan 8, 2008, at 07:16, David Cournapeau wrote:

 David M. Cooke wrote:
 AFAIK, all the tools can specify a svn revision to start from, if you
 don't need history (or just recent history).

 Are you sure? bzr-svn does not do it (logically, since bzr-svn can
 pull/push), and I don't see any option from the convert extension from
 mercurial. I don't have hgpullsvn at hand, I don't remember having  
 seen
 the option either.


Thought they did; hgimportsvn from hgsvn does, and so does tailor.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Error importing from numpy.matlib

2008-01-05 Thread David M. Cooke
On Jan 3, 2008, at 18:29, Steve Lianoglou wrote:

 Anyway, somewhere in my codebase (for a long time now) I'm doing:

 from numpy.matlib import *

 Now, when I try to use this code, or just type that in the
 interpreter, I get this message:

 AttributeError: 'module' object has no attribute 'pkgload'

 This doesn't happen when I do:

 import numpy.matlib as M

 Anyway, can anyone shed some light on this?


This is the behaviour you'd get if the module's __all__ attribute  
lists objects which don't exist in the module. Looks like a regression  
in r4659; fixed in SVN now as r4674.
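
The failure mode is easy to reproduce with a toy module (names
hypothetical):

# mymod.py: __all__ names something the module doesn't define
__all__ = ['foo', 'pkgload']

def foo():
    return 42

>>> from mymod import *     # consults __all__, so it fails
Traceback (most recent call last):
  ...
AttributeError: 'module' object has no attribute 'pkgload'
>>> import mymod as M       # a plain import never reads __all__
>>> M.foo()
42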

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Moving away from svn ?

2008-01-05 Thread David M. Cooke
On Jan 4, 2008, at 13:58, Fernando Perez wrote:

 My vote so far is for hg, for performance reasons but also partly
 because sage and sympy already use it, two projects I'm likely to
 interact a lot with and that are squarely in line with the
 ipython/numpy/scipy/matplotlib world.  Since they went first and made
 the choice, I'm happy to let that be a factor in my decision.  I'd
 rather use a tool that others in the same community are also using,
 especially when the choice is a sound one on technical merit alone.

 Just my 1e-2...


+1 on mercurial. It's what I use these days (previously, I used darcs,  
which I still like for its patch-handling semantics, but its  
dependence on Haskell, and the dreaded exponential-time merge are a  
bit of a pain).

One thing that can help is an official Mercurial mirror of the  
subversion repository. IIRC, sharing changegroups or pulling patches  
between hg repos requires that they have a common ancestor repo (as  
opposed to two developers independently converting the svn repo).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] RAdian -- degres conversion

2007-12-14 Thread David M. Cooke
On Dec 14, 2007, at 14:33, Christopher Barker wrote:

 HI all,

 Someone on the wxPython list just pointed out that the math module now
 includes angle-conversion utilities:

 >>> degrees.__doc__
 'degrees(x) -> converts angle x from radians to degrees'
 >>> radians.__doc__
 'radians(x) -> converts angle x from degrees to radians'

 Not a big deal, but handy. As I generally like to think of numpy as a
 superset of the math module, perhaps it should include these too.


Done.
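
A quick check of the new functions (scalar and array reprs may differ
slightly between versions):

>>> import numpy as np
>>> np.degrees(np.pi)
180.0
>>> np.radians(np.array([0., 90., 180.]))
array([ 0.        ,  1.57079633,  3.14159265])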

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David M. Cooke
On Dec 10, 2007, at 10:30, Matthieu Brucher wrote:
 2007/12/10, Alexander Michael [EMAIL PROTECTED]:
  On Dec 10, 2007 6:48 AM, David Cournapeau [EMAIL PROTECTED] wrote:
  Hi,
 
  Several people reported problems with numpy 1.0.4 (See #627 and
  #628, but also other problems mentioned on the ML, which I cannot
  find). They were all solved, as far as I know, by a binary I produced
  (simply using mingw + netlib BLAS/LAPACK, no ATLAS). Maybe it would be
  good to use those instead? (I can recompile them if there is a special
  thing to do to build them)

 Do I understand correctly that you are suggesting removing ATLAS from
 the Windows distribution? Wouldn't this make numpy very slow? I know
 on RHEL5 I see a very large improvement between the basic BLAS/LAPACK
 and ATLAS. Perhaps we should make an alternative Windows binary
 available without ATLAS just for those having problems with ATLAS?
 That's why David proposed the netlib version of BLAS/LAPACK and not  
 the default implementation in numpy.

 I would agree with David ;)


Our versions of BLAS/LAPACK are f2c'd versions of the netlib 3.0
BLAS/LAPACK (actually, of Debian's version of these -- they include
several fixes that weren't upstream).

So netlib's versions aren't going to be any faster, really. And
netlib's BLAS is slow. Now, if there is a BLAS that's easier to
compile than ATLAS on windows, that'd be an improvement.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] numpy distutils patch

2007-11-19 Thread David M. Cooke
On Nov 18, 2007, at 23:30, Jarrod Millman wrote:

 Hello,

 I never got any reply about the 'fix' for distutils.util.split_quoted
 in numpy/distutils/ccompiler.py.  Can anyone confirm whether this fix
 is correct or necessary?  If so, I would like to submit a patch
 upstream for this.

My opinion is that it's not necessary, or correct. The fix leaves
quotes in if there is no whitespace, so '"Hi"' is converted to
['"Hi"'], while '"Hi there"' becomes ['Hi there']. I can't see when
you'd want that behaviour.
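
The inconsistency is easy to see in the interpreter (module paths as
described in this thread; treat this as a sketch):

>>> from distutils.util import split_quoted             # stock version
>>> split_quoted('"Hi" "Hi there"')
['Hi', 'Hi there']
>>> from numpy.distutils.ccompiler import split_quoted  # patched version
>>> split_quoted('"Hi" "Hi there"')
['"Hi"', 'Hi there']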

Also, it's only used by ccompiler (numpy.distutils.ccompiler replaces  
the version in distutils.ccompiler). numpy.distutils.fcompiler  
*doesn't* use this version, it uses distutils.utils.split_quoted.  
Since we run into more variety in terms of command lines with the  
Fortran compilers than the C compilers I think, and haven't been  
bitten by supposedly-bad quoting problems, I'll say we don't need our  
version.

 On Oct 29, 2007 2:17 AM, Jarrod Millman [EMAIL PROTECTED] wrote:
 Hey,

 I was looking at numpy/distutils/ccompiler.py and noticed that it has
 a fix for distutils.util.split_quoted.

 Here is the relevant code from split_quoted in  
 numpy.distutils.ccompiler:
 ---
 def split_quoted(s):

 snip

  if _has_white_re.search(s[beg+1:end-1]):
      s = s[:beg] + s[beg+1:end-1] + s[end:]
      pos = m.end() - 2
  else:
      # Keeping quotes when a quoted word does not contain
      # white-space. XXX: send a patch to distutils
      pos = m.end()

 snip
 ---

 Here is the relevant code from split_quoted in distutils.util:
 ---
 def split_quoted(s):

 snip

      s = s[:beg] + s[beg+1:end-1] + s[end:]
      pos = m.end() - 2

 snip
 ---

 Does anyone know if a patch was ever submitted upstream?  If not, is
 there any reason that a patch shouldn't be submitted?

 Thanks,

 --
 Jarrod Millman
 Computational Infrastructure for Research Labs
 10 Giannini Hall, UC Berkeley
 phone: 510.643.4014
 http://cirl.berkeley.edu/






-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Making a minimalist NumPy

2007-11-04 Thread David M. Cooke
On Nov 4, 2007, at 15:51, Benjamin M. Schwartz wrote:

 NumPy is included in the OLPC operating system, which is very  
 constrained in
 space.  Therefore, it would be nice to remove some subpackages to  
 save a few
 megabytes.  For example, the system does not include any Fortran  
 code or
 compiler, so f2py (3.6 MB) seems superfluous.  I also think the  
 distutils
 subpackage (1.9M) is probably not necessary.  Therefore, I have two  
 questions.

 1. Which packages do you think are necessary to have a functioning  
 NumPy?

 2. What is the easiest way to make (or get) a minimal NumPy  
 installation?  For
 example, would the scons/autoconf branch make this easier?


The *biggest* single optimisation for space that you could make is not
to have both .pyc and .pyo files. AFAIK, the only difference between
the two now is that .pyo files don't have asserts included. Testing on
build 625 of the OLPC running in VMWare, that removes about 3MB from
the numpy package right there (and even more when done globally --
about 25MB). [btw, the Python test/ directory would be another 14MB.]

After that,

1) remove f2py -- as you say, no Fortran, no need (2.2MB)
2) remove test directories: find . -name tests -type d -exec rm -rf '{}' \; (1MB)
3) remove distutils (1MB)

Now, it's down to about 3.5 MB; there's not much more that can be done  
after that.

While I'm confident that removing the f2py and test directories can be  
done safely (by just removing the directories), I'm not so sure about  
numpy.distutils -- it really depends on what other software you're  
using. It could be trimmed a bit: the Fortran compiler descriptions in  
distutils/fcompiler/ could be removed, and system_info.py could be cut  
down. Although you can make up for the extra space it uses by removing  
Numeric (2MB).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] vectorizing loops

2007-11-01 Thread David M. Cooke
On Nov 1, 2007, at 08:56, Francesc Altet wrote:

 On Wednesday 31 October 2007, Timothy Hochberg wrote:
 On Oct 31, 2007 3:18 AM, Francesc Altet [EMAIL PROTECTED] wrote:

 [SNIP]

 Incidentally, all the improvements of the PyTables flavor of
 numexpr have been reported to the original authors, but, for the
 sake of keeping numexpr simple, they decided to implement only some
 of them. However, people are encouraged to try out the PyTables
 flavor from:

 My original concern was that we'd start overflowing the code cache
 and slow things down. Some experiments seemed to confirm this on some
 processors, but it then turned out those were in error. At least
 that's my recollection. Because of that, and because PyTables is, as
 far as I know, the major user of numexpr, I suspect I'd be fine
 putting those changes in now. I say suspect since it's been a long
 time since I looked at the relevant patches, and I should probably
 look at those again before I commit myself. I just haven't had the
 free time and motivation to go back and look at the patches.  I can't
 speak for David though.

 If I remember correctly, another point that you (especially David) were
 not very keen to include was the support for strings, arguing that
 numexpr is meant mainly to deal with numerical data, not textual.
 However, our experience is that adding this support was not too
 complicated (mainly due to NumPy flexibility), and it can be helpful in
 some instances.  For one, we use it for evaluating expressions
 like 'array_string == some_string', which can be very convenient
 in the middle of potentially complex boolean
 expressions that you want to evaluate *fast*.

 At any rate, we would be glad if you would integrate our patches
 into the main numexpr, as there is not much sense in having different
 implementations of numexpr (especially when it seems that there are
 not many users out there).  So, count on us for any questions you may
 have in this regard.

Well, I don't have much time to work on it, but if you make sure your
patches on the scipy Trac apply cleanly, I'll have a quick look at them
and apply them. Since you've had them working in production code, they
should be good ;-)

Another issue is that numexpr is still in the scipy sandbox, so only  
those who enable it will use it (or use it through PyTables). One  
problem with moving it out is that Tim reports the compile times on  
Windows are ridiculous (20 mins!). Maybe numexpr should become a  
scikit? It certainly doesn't need the rest of scipy.
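
For context, the kind of usage being discussed: numexpr compiles a string
expression and evaluates it in one pass over the data. A minimal sketch,
using the standalone package name (at the time it lived in the scipy
sandbox):

>>> import numpy as np
>>> import numexpr as ne
>>> a, b, c, d = [np.random.rand(1000000) for _ in range(4)]
>>> r = ne.evaluate("a*b + c*d")   # avoids the large temporaries of a*b + c*d
>>> np.allclose(r, a*b + c*d)
True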

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] [SciPy-dev] adopting Python Style Guide for classes

2007-10-04 Thread David M. Cooke
Alan G Isaac [EMAIL PROTECTED] writes:

 To help me understand, might someone offer some examples of
 NumPy names that really should be changed?

Internal classes, like:
- nd_grid, etc. in numpy/lib/index_tricks.py
- masked_unary_operation, etc. in numpy/core/ma.py

Things we probably wouldn't change:
- array-like things, such as numpy.lib.polynomial.poly1d
- ufuncs implemented in Python like vectorize
- distributions from scipy.stats

In numpy, outside of tests, distutils, and f2py, there are actually not
that many classes defined in Python (as opposed to C). In scipy, most
non-testing classes are already CamelCase (one exception is weave).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] adopting Python Style Guide for classes

2007-10-02 Thread David M. Cooke
On Tue, Oct 02, 2007 at 09:12:43AM +0200, Pearu Peterson wrote:
 
 
 Jarrod Millman wrote:
  Hello,
  
 ..
  Please let me know if you have any major objections to adopting the
  Python class naming convention.
 
 I don't object.

Me either.

  2.  Any one adding a new class to NumPy would use CapWords.
  3.  When we release NumPy 1.1, we will convert all (or almost all)
  class names to CapWords.  There is no reason to worry about the exact
  details of this conversion at this time.  I just would like to get a
  sense whether, in general, this seems like a good direction to move
  in.  If so, then after we get steps 1 and 2 completed we can start
  discussing how to handle step 3.
 
 After fixing the class names in tests, how many classes use
 camelcase style in numpy/distutils? How many of them are implementation
 specific and how many of them are exposed to users? I think having these
 statistics would be essential to making any decisions. E.g. would we
 need to introduce warnings for the next few releases of numpy/scipy
 when a camelcase class is used by user code, or not?

In numpy/distutils, there are the classes in the command/* modules (but note
that distutils uses the same lower_case convention, so I'd say keep
them), cpu_info (none of which are user accessible; I'm working in there
now), and system_info (which are documented as user accessible). Poking
through the rest, it looks like only the system_info classes are ones
that we would expect users to subclass. We could document the lower_case
names as deprecated, and alias them to CamelCase versions.
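
A sketch of what such an alias could look like (class names hypothetical):

import warnings

class SystemInfo(object):
    """New CapWords name."""
    pass

class system_info(SystemInfo):
    """Deprecated lower_case alias; warns on instantiation."""
    def __init__(self, *args, **kwargs):
        warnings.warn("system_info is deprecated; use SystemInfo",
                      DeprecationWarning, stacklevel=2)
        SystemInfo.__init__(self, *args, **kwargs)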

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


[Numpy-discussion] As seen on PyPI -- a new bindings generator

2007-09-25 Thread David M. Cooke
Just saw this come up on PyPI:

https://launchpad.net/pybindgen/


Python Bindings Generator
PyBindGen is a Python module that is geared to generating C/C++ code
that binds a C/C++ library for Python. It does so without extensive use
of either C++ templates or C pre-processor macros. It has modular
handling of C/C++ types, and can be easily extended with Python plugins.
The generated code is almost as clean as what a human programmer would
write.


Looks interesting, esp. for C++ wrappers that Pyrex can't do.
Also, it uses Waf (http://code.google.com/p/waf/) instead of distutils,
which looks like an interesting alternative.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] how to include numpy headers when building an extension?

2007-09-11 Thread David M. Cooke
David Cournapeau [EMAIL PROTECTED] writes:

 Travis E. Oliphant wrote:
 Christopher Barker wrote:
   
 I know this has been discussed, but why doesn't numpy put its includes 
 somewhere that distutils would know where to find it?
   
 
 I think one answer is because distutils doesn't have defaults that play 
 well with eggs.   NumPy provides very nice extensions to distutils which 
 will correctly add the include directories you need.
   
 Concerning numpy.distutils, is anyone working on improving it? There 
 was some work started, but I have not seen any news on this front: am I 
 missing something ? I would really like to have the possibility to 
 compile custom extensions to be used through ctypes, and didn't go very 
 far just by myself, unfortunately (understanding distutils is not 
 trivial, to say the least).

I work on it off and on. As you say, it's not trivial :-) It also has
a tendency to be fragile, so large changes are harder. Something will work
for me, then I merge it into the trunk, and it breaks on half-a-dozen
platforms that I can't test on :-) So, it's slow going.

I've got a list of my current goals at
http://scipy.org/scipy/numpy/wiki/DistutilsRevamp.
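
For reference, the two usual ways to get the numpy headers into a build
(a sketch; numpy.get_include() is the helper for the first form):

# setup.py with plain distutils: pass the include directory explicitly
import numpy
from distutils.core import setup, Extension

setup(name='mymod',
      ext_modules=[Extension('mymod', ['mymod.c'],
                             include_dirs=[numpy.get_include()])])

# or build through numpy.distutils, which should add the directories
# for you:
# from numpy.distutils.core import setup, Extension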

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Anyone have a well-tested SWIG-based C++ STL valarray <=> numpy.array typemap to share?

2007-09-07 Thread David M. Cooke
Christopher Barker [EMAIL PROTECTED] writes:

 Joris De Ridder wrote:
 A related question, just out of curiosity: is there a technical  
 reason why Numpy has been coded in C rather than C++?

 There was a fair bit of discussion about this back when the numarray 
 project started, which was a re-implementation of the original Numeric.

 IIRC, one of the drivers was that C++ support was still pretty 
 inconsistent across compilers and OSs, particularly if you wanted to 
 really get the advantages of C++, by using templates and the like.

 It was considered very important that the numpy code base be very portable.

One of the big problems has always been that the C++ application
binary interface (ABI) has historically not been all that stable: all
the C++ libraries your program used would have to be compiled by the
same version of the compiler. That includes Python. You couldn't
import an extension module written in C++ compiled with g++ 3.3, say,
at the same time as one compiled with g++ 4.0, and your Python would
have to been linked with the same version.

While the ABI issues (at least on Linux with GCC) are better now, it's
still something of a quagmire.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Anyone have a well-tested SWIG-based C++ STL valarray <=> numpy.array typemap to share?

2007-09-07 Thread David M. Cooke
Bill Spotz [EMAIL PROTECTED] writes:

 On Sep 5, 2007, at 11:38 AM, Christopher Barker wrote:

 Of course, it should be possible to write C++ wrappers around the core
 ND-array object, if anyone wants to take that on!

 boost::python has done this for Numeric, but last I checked, they  
 have not upgraded to numpy.

Even then, their wrappers went through the Python interface, not the C
API. So, it's no faster than using Python straight.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] fast putmask implementation

2007-08-16 Thread David M. Cooke
On Thu, Aug 16, 2007 at 04:39:02PM -1000, Eric Firing wrote:
 As far as I can see there is no way of using svn diff to deal with 
 this automatically, so in the attached revision I have manually removed 
 chunks resulting solely from whitespace.

 Is there a better way to handle this problem?  A better way to make diffs?  
 Or any possibility of routinely cleaning the junk out of the svn source 
 files?  (Yes, I know--what is junk to me probably results from what others 
 consider good behavior of the editor.)

'svn diff -x -b' might work better (-b gets passed to diff, which makes
it ignore space changes). Or svn diff -x -w to ignore all whitespace.

Me, I hate trailing ws too (I've got Emacs set up so that it gets
highlighted as red, which makes me angry :). The hard tabs in C code are
in keeping with the style used in the C Python sources (Emacs even has a
'python' C style -- do C-c . python).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Buildbot for numpy

2007-06-16 Thread David M. Cooke
On Sat, Jun 16, 2007 at 10:11:55AM +0200, Stefan van der Walt wrote:
 Hi all,
 
 Short version
 =
 
 We now have a numpy buildbot running at
 
 http://buildbot.scipy.org
 
 While we are still working on automatic e-mail notifications, the
 system already provides valuable feedback -- take a look at the
 waterfall display:
 
 http://buildbot.scipy.org
 
 If your platform is not currently on the list, please consider
 volunteering a machine as a build slave.  This machine will be
 required to run the buildbot client, and to build a new version of
 numpy whenever changes are made to the repository.  (The machine does
 not have to be dedicated to this task, and can be your own
 workstation.)

Awesome. I've got an iBook (PPC G4) running OS X that can be used as a slave
(it's just being a server right now).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] question about numpy

2007-06-15 Thread David M. Cooke
On Tue, Jun 12, 2007 at 11:58:28AM -0400, Xuemei Tang wrote:
 Dear Sir/Madam,
 
 I met a problem when I installed numpy. I installed numpy by the command
 "python setup.py install". Then I tested it by python -c 'import numpy;
 numpy.test()'. But it doesn't work. There is an error message:
 Running from numpy source directory.

^ don't do that :)

Instead, change out of the source directory, and rerun.
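
That is:

$ cd ..      # anywhere outside the numpy source tree will do
$ python -c 'import numpy; numpy.test()'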

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1

2007-06-14 Thread David M. Cooke
On Thu, Jun 14, 2007 at 04:17:04PM +0200, Albert Strasheim wrote:
 Hello all
 
 On Thu, 14 Jun 2007, Matthieu Brucher wrote:
 
  
  cc_exe = 'icc -g -fomit-frame-pointer -xT -fast'
  
  Just some comments on that :
  - in release mode, you should not use '-g', it slows down the execution of
  your program
 
 Do you have a reference that explains this in more detail? I thought -g 
 just added debug information without changing the generated code?

I had a peek at the icc manual. For icc, -g by itself implies -O0 and
-fno-omit-frame-pointer, so it will be slower. However, -g -O2
-fomit-frame-pointer shouldn't be any slower than without the -g.

For gcc, -g does what you said.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


[Numpy-discussion] Overview of extra build options in svn's numpy.distutils

2007-06-12 Thread David M. Cooke
Hi,

Looks like I broke more stuff than I intended when I merged my  
distutils-revamp branch. Not that I didn't expect something to break,  
this stuff is fragile!

The main purpose of the merge was to allow the user to configure more  
stuff regarding how things are compiled with Fortran. For instance,  
here's my ~/.pydistutils.cfg

[config_fc]
fcompiler=gnu95
f77exec=gfortran-mp-4.2
f90exec=gfortran-mp-4.2
#opt = -g -Wall -O2
f77flags=-g -Wall -O

(I use gfortran 4.2 on my MacBook, installed using MacPorts.) Other  
options to set are listed in numpy/distutils/fcompiler/__init__.py,  
in the FCompiler class.

distutils config_fc key, [environment variable]

distutils flags for config_fc section

compiler - Fortran compiler to use (the numpy.distutils name, like  
gnu95 or intel)
noopt - don't compile with optimisations
noarch - don't compile with host architecture optimisations
debug - compile with debug optimisations
verbose - spew more stuff to the console when doing distutils stuff

executables:

f77exec [F77] - executable for Fortran 77
f90exec [F90] - executable for Fortran 90
ldshared [LDSHARED] - executable for shared libraries for Fortran
ld [LD] - executable for linker for Fortran
ar [AR] - library archive maker (for .a files)
ranlib [RANLIB] - some things need ranlib run over the libraries.

flags:

f77flags [F77FLAGS] - compiler flags for Fortran 77 compiler
f90flags [F90FLAGS] - compiler flags for Fortran 90 compiler
freeflags [FREEFLAGS] - compiler flags for free-format Fortran 90
opt [FOPT] - optimisation flags for all Fortran compilers (used if  
noopt is false)
arch [FARCH] - architecture-specific flags for Fortran (used if  
noarch is false)
fdebug [FDEBUG] - debug-specific flags for Fortran (used if debug is  
true)
fflags [FFLAGS] - extra compiler flags
ldflags [LDFLAGS] - extra linker flags
arflags [ARFLAGS] - extra library archiver flags


There's also more central logic for finding executables, which should  
be more flexible, and takes care of, for instance, using the  
specified F77 compiler for the linker if the linker isn't specified,  
or the F90 compiler if either isn't, etc.

Some of this type of stuff should be done for the C compiler, but  
isn't, as that would be messier with regards to hooking into Python's  
distutils.

Personally, I think Python's distutils is a poorly laid-out
framework that is in need of serious refactoring. However, that's a
lot of work, and I'm not going to do it right away...

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] numpy r3857 build problem

2007-06-03 Thread David M. Cooke
On Sat, Jun 02, 2007 at 08:14:54PM -0400, Christopher Hanley wrote:
 Hi,
 
 I cannot build the latest version of numpy in svn (r3857) on my Intel 
 MacBook running OSX 10.4.9.  I'm guessing that the problem is that a 
 fortran compiler isn't found.  Since NUMPY doesn't require FORTRAN I 
 found this surprising.  Has there been a change in policy?  I'm 
 attaching the build log to this message.

Fixed in r3858

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] build problem on Windows (was: build problem on RHE3 machine)

2007-05-30 Thread David M. Cooke
On Thu, May 31, 2007 at 02:32:21AM +0200, Albert Strasheim wrote:
 Hello all
 
 - Original Message - 
 From: David M. Cooke [EMAIL PROTECTED]
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Sent: Thursday, May 31, 2007 2:08 AM
 Subject: Re: [Numpy-discussion] build problem on Windows (was: build
 problem on RHE3 machine)
 
 
  On Wed, May 30, 2007 at 03:06:04PM +0200, Albert Strasheim wrote:
  Hello
 
  I took a quick look at the code, and it seems like new_fcompiler(...) is 
  too
  soon to throw an error if a Fortran compiler cannot be detected.
 
  Instead, you might want to return some kind of NoneFCompiler that throws 
  an
  error if the build actually tries to compile Fortran code.
 
  Maybe it's fixed now :-P new_fcompiler will return None instead of
  raising an error. build_ext and build_clib should handle it from there
  if they need the Fortran compiler.
 
 Almost there, but not quite:
 
 snipped some output
 don't know how to compile Fortran code on platform 'nt'
 Running from numpy source directory.
 Traceback (most recent call last):
[snip]
   File C:\home\albert\work2\numpy\numpy\distutils\command\config.py, line 
 31, in _check_compiler
 self.fcompiler.customize(self.distribution)
 AttributeError: 'NoneType' object has no attribute 'customize'

Try it now.
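
The shape of the fix, roughly (a sketch, not the actual change):

# numpy/distutils/command/config.py, in _check_compiler:
if self.fcompiler is not None:
    self.fcompiler.customize(self.distribution)
# with no Fortran compiler found, it's only an error if Fortran code
# actually has to be compiled later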

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] build problem on RHE3 machine

2007-05-29 Thread David M. Cooke
On May 29, 2007, at 08:56, Albert Strasheim wrote:

 Hello all

 - Original Message -
 From: David M. Cooke [EMAIL PROTECTED]
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Sent: Friday, May 25, 2007 7:50 PM
 Subject: Re: [Numpy-discussion] build problem on RHE3 machine


 On Fri, May 25, 2007 at 12:45:32PM -0500, Robert Kern wrote:
 David M. Cooke wrote:
 On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote:
 I'm still having problems on Windows with r3828. Build command:

 python setup.py -v config --compiler=msvc build_clib --compiler=msvc
 build_ext --compiler=msvc bdist_wininst

 Can you send me the output of

 python setup.py -v config_fc --help-fcompiler

 And what fortran compiler are you trying to use?

 If he's trying to build numpy, he shouldn't be using *any* Fortran
 compiler.

 Ah true. Still, config_fc will say it can't find one (and that
 should be fine).
 I think the bug has to do with how it searches for a compiler.

 I see there's been more work on numpy.distutils, but I still can't  
 build
 r3841 on a normal Windows system with Visual Studio .NET 2003  
 installed.

 Is there any info I can provide to get this issue fixed?

Anything you've got :) The output of these are hopefully useful to me  
(after removing build/):

$ python setup.py -v build
$ python setup.py -v config_fc --help-fcompiler

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Numpy 1.0.3 install problem. Help!

2007-05-27 Thread David M. Cooke
On Sun, May 27, 2007 at 12:29:30PM -0600, Travis Oliphant wrote:
 Yang, Lu wrote:
  Thanks, Travis. I don't have problems building other applications on the
  same platform.
  Are there any files in the extracted /numpy-1.0.3 in which I can modify
  the path of the C compiler? I have checked all the files in it without
  luck.

 
 The C-compiler that is used is the same one used to build Python.  It is 
 picked up using Python distutils.   So, another problem could be that 
 the compiler used to build Python is not available. 
 
 You should look in the file
 
 $PYTHONDIR/config/Makefile
 
 where $PYTHONDIR is where Python is installed on your system.
 
 There will be a line
 
 CC =
 
 in there.   That's the compiler that is going to be used.  Also, if a
 full path name is not given in the CC line, check your $PATH variable
 when you are building numpy to see which compiler will be picked up.

Also, check that you don't have a CC environment variable defined (i.e.,
echo $CC should be blank), as that will override the Python Makefile
settings.
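
That is, before building:

$ echo $CC      # should print an empty line
$ unset CC      # clear it for this shell if it was set
$ python setup.py build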

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] build problem on RHE3 machine

2007-05-25 Thread David M. Cooke
On Fri, May 25, 2007 at 08:27:47AM -0400, Christopher Hanley wrote:
 Good Morning,
 
 When attempting to do my daily numpy build from svn I now receive the 
 following error.  I am a Redhat Enterprise 3 Machine running Python 2.5.1.
 
libraries lapack_atlas not found in /usr/local/lib
libraries f77blas,cblas,atlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
 numpy.distutils.system_info.atlas_info
NOT AVAILABLE
 
 /data/sparty1/dev/numpy/numpy/distutils/system_info.py:1221: UserWarning:
  Atlas (http://math-atlas.sourceforge.net/) libraries not found.
  Directories to search for the libraries can be specified in the
  numpy/distutils/site.cfg file (section [atlas]) or by setting
  the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
 lapack_info:
libraries lapack not found in /usr/stsci/pyssgdev/Python-2.5.1/lib
libraries lapack not found in /usr/local/lib
FOUND:
  libraries = ['lapack']
  library_dirs = ['/usr/lib']
  language = f77
 
FOUND:
  libraries = ['lapack', 'blas']
  library_dirs = ['/usr/lib']
  define_macros = [('NO_ATLAS_INFO', 1)]
  language = f77
 
 running install
 running build
 running config_cc
 unifing config_cc, config, build_clib, build_ext, build commands 
 --compiler options
 running config_fc
 unifing config_fc, config, build_clib, build_ext, build commands 
 --fcompiler options
 running build_src
 building py_modules sources
 creating build
 creating build/src.linux-i686-2.5
 creating build/src.linux-i686-2.5/numpy
 creating build/src.linux-i686-2.5/numpy/distutils
 building extension numpy.core.multiarray sources
 creating build/src.linux-i686-2.5/numpy/core
 Generating build/src.linux-i686-2.5/numpy/core/config.h
 customize GnuFCompiler
 customize IntelFCompiler
 Could not locate executable ifort
 Could not locate executable ifc
 customize LaheyFCompiler
 Could not locate executable lf95
 customize PGroupFCompiler
 Could not locate executable pgf90
 customize AbsoftFCompiler
 Could not locate executable f90
 customize NAGFCompiler
 Could not locate executable f95
 customize VastFCompiler
 customize GnuFCompiler
 customize CompaqFCompiler
 Could not locate executable fort
 customize IntelItaniumFCompiler
 Could not locate executable ifort
 Could not locate executable efort
 Could not locate executable efc
 customize IntelEM64TFCompiler
 Could not locate executable ifort
 Could not locate executable efort
 Could not locate executable efc
 customize Gnu95FCompiler
 Could not locate executable gfortran
 Could not locate executable f95
 customize G95FCompiler
 Could not locate executable g95
 error: don't know how to compile Fortran code on platform 'posix'
 
 This problem is new this morning.

Hmm, my fault. I'll have a look.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] build problem on RHE3 machine

2007-05-25 Thread David M. Cooke
On Fri, May 25, 2007 at 08:27:47AM -0400, Christopher Hanley wrote:
 Good Morning,
 
 When attempting to do my daily numpy build from svn I now receive the 
 following error.  I am a Redhat Enterprise 3 Machine running Python 2.5.1.
 
libraries lapack_atlas not found in /usr/local/lib
libraries f77blas,cblas,atlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
 numpy.distutils.system_info.atlas_info
NOT AVAILABLE
 
 /data/sparty1/dev/numpy/numpy/distutils/system_info.py:1221: UserWarning:
  Atlas (http://math-atlas.sourceforge.net/) libraries not found.
  Directories to search for the libraries can be specified in the
  numpy/distutils/site.cfg file (section [atlas]) or by setting
  the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
 lapack_info:
libraries lapack not found in /usr/stsci/pyssgdev/Python-2.5.1/lib
libraries lapack not found in /usr/local/lib
FOUND:
  libraries = ['lapack']
  library_dirs = ['/usr/lib']
  language = f77
 
FOUND:
  libraries = ['lapack', 'blas']
  library_dirs = ['/usr/lib']
  define_macros = [('NO_ATLAS_INFO', 1)]
  language = f77
 
 running install
 running build
 running config_cc
 unifing config_cc, config, build_clib, build_ext, build commands 
 --compiler options
 running config_fc
 unifing config_fc, config, build_clib, build_ext, build commands 
 --fcompiler options
 running build_src
 building py_modules sources
 creating build
 creating build/src.linux-i686-2.5
 creating build/src.linux-i686-2.5/numpy
 creating build/src.linux-i686-2.5/numpy/distutils
 building extension numpy.core.multiarray sources
 creating build/src.linux-i686-2.5/numpy/core
 Generating build/src.linux-i686-2.5/numpy/core/config.h
 customize GnuFCompiler
 customize IntelFCompiler
 Could not locate executable ifort
 Could not locate executable ifc
 customize LaheyFCompiler
 Could not locate executable lf95
 customize PGroupFCompiler
 Could not locate executable pgf90
 customize AbsoftFCompiler
 Could not locate executable f90
 customize NAGFCompiler
 Could not locate executable f95
 customize VastFCompiler
 customize GnuFCompiler
 customize CompaqFCompiler
 Could not locate executable fort
 customize IntelItaniumFCompiler
 Could not locate executable ifort
 Could not locate executable efort
 Could not locate executable efc
 customize IntelEM64TFCompiler
 Could not locate executable ifort
 Could not locate executable efort
 Could not locate executable efc
 customize Gnu95FCompiler
 Could not locate executable gfortran
 Could not locate executable f95
 customize G95FCompiler
 Could not locate executable g95
 error: don't know how to compile Fortran code on platform 'posix'
 
 This problem is new this morning.

Could you send me the results of running with the -v flag?
i.e., python setup.py -v build


-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] build problem on RHE3 machine

2007-05-25 Thread David M. Cooke
On Fri, May 25, 2007 at 12:45:32PM -0500, Robert Kern wrote:
 David M. Cooke wrote:
  On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote:
  I'm still having problems on Windows with r3828. Build command:
 
  python setup.py -v config --compiler=msvc build_clib --compiler=msvc 
  build_ext --compiler=msvc bdist_wininst
  
  Can you send me the output of
  
  python setup.py -v config_fc --help-fcompiler
  
  And what fortran compiler are you trying to use?
 
 If he's trying to build numpy, he shouldn't be using *any* Fortran compiler.

Ah true. Still, config_fc will say it can't find one (and that should be fine).
I think the bug has to do with how it searches for a compiler.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] FW: RE: Linux numpy 1.0.1 install failed

2007-05-22 Thread David M. Cooke
On Tue, May 22, 2007 at 01:11:35PM -0400, Gong, Shawn (Contractor) wrote:
 Hi Robert
 The "Running from numpy source directory" message also appears when I
 installed numpy.
 I am running python 2.3.6, not 2.4

Just what it says; the current directory is the directory that the
numpy source is in. If you do 'import numpy' there, it finds the
*source* first, not the installed package.

 You said "It is picking up the partial numpy package in the source".  Do
 you mean it is picking up the partial numpy package from python 2.3.6? 
 
 How do I change out of that directory and try again?

cd ..

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] best way for storing extensible data?

2007-05-18 Thread David M. Cooke
On Fri, May 18, 2007 at 03:10:23PM +0300, dmitrey wrote:
 hi all,
 what is the best way of storing data in a numpy array? (the amount of
 memory to preallocate is unknown)
 Currently I use just a Python list, i.e.
 
 r = []
 for i in xrange(N):  # N is very big
     ...
     r.append(some_value)

In the above, you know how big you need b/c you know N ;-) so empty is
a good choice:

r = empty((N,), dtype=float)
for i in xrange(N):
r[i] = some_value

empty() allocates the array, but doesn't clear it or anything (as
opposed to zeros(), which would set the elements to zero).

If you don't know N, then fromiter would be best:

def ivalues():
while some_condition():
...
yield some_value

r = fromiter(ivalues(), dtype=float)

It'll act like appending to a list, where it will grow the array (by
doubling, I think) when it needs to, so appending each value is
amortized to O(1) time. A list though would use more memory
per element as each element is a full Python object.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?

2007-05-13 Thread David M. Cooke
On Sun, May 13, 2007 at 02:36:39PM +0300, dmitrey wrote:
 i.e. for example from flat array [1, 2, 3] obtain
 array([[ 1.],
[ 2.],
[ 3.]])
 
 I have numpy v 1.0.1
 Thx, D.

Use newaxis:

In [1]: a = array([1., 2., 3.])
In [2]: a
Out[2]: array([ 1.,  2.,  3.])
In [3]: a[:,newaxis]
Out[3]: 
array([[ 1.],
   [ 2.],
   [ 3.]])
In [4]: a[newaxis,:]
Out[4]: array([[ 1.,  2.,  3.]])

When newaxis is used as an index, a new axis of dimension 1 is added.

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-11 Thread David M. Cooke
On Sat, May 12, 2007 at 12:50:10AM +0200, Albert Strasheim wrote:
 Here's another issue with a patch that looks ready to go:
 
 http://projects.scipy.org/scipy/numpy/ticket/509
 
 Enhancement you might consider:
 
 http://projects.scipy.org/scipy/numpy/ticket/375
 
 And this one looks like it can be closed:
 
 http://projects.scipy.org/scipy/numpy/ticket/395
 
 Cheers,
 
 Albert
 
 On Fri, 11 May 2007, Albert Strasheim wrote:
 
  Here are a few tickets that might warrant some attention from someone 
  who is intimately familiar with NumPy's internals ;-)
  
  http://projects.scipy.org/scipy/numpy/ticket/390
  http://projects.scipy.org/scipy/numpy/ticket/405
  http://projects.scipy.org/scipy/numpy/ticket/466
  http://projects.scipy.org/scipy/numpy/ticket/469

I've added a 1.0.3 milestone and set these to them (or to 1.1, according
to Travis's comments).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux

2007-05-04 Thread David M. Cooke
On Sat, May 05, 2007 at 12:34:30AM +0200, Stefan van der Walt wrote:
 On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote:
  Matthieu Brucher wrote:
   Example of the first line of my data file :
   0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf 
   6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf
  
   I'm pretty sure fromfile() is using the standard C fscanf(). That means
   that whether it understands "inf" depends on the C lib. I'm guessing
   that the MS libc doesn't understand the same spelling of "inf" that the
   gcc one does. There may indeed be no literal for the IEEE Inf.
 
 It would be interesting to see how Inf and NaN (vs. inf and nan) are
 interpreted under Windows.
 
 Are there any free fscanf implementations out there that we can
 include with numpy?

There's no need; all that fscanf is being used for is with the single
format string %d (and variants for each type). So that's easily
replaced with type-specific functions (strtol, strtod, etc.). For the
floating-point types, checking first if the string matches inf or nan
patterns would be sufficient.

There's a bug in fromfile anyways: because it passes the separator
directly to fscanf to skip over it, using a % in your separator will not
work.
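
A minimal way to trigger the separator problem (the exact failure mode
will depend on platform and version; this just illustrates the bug
described above):

>>> from numpy import arange, fromfile
>>> arange(5.).tofile('x.txt', sep='%')   # writes 0.0%1.0%2.0%3.0%4.0
>>> fromfile('x.txt', sep='%')            # '%' ends up in the fscanf format
                                          # string, so the read misbehaves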

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux

2007-05-04 Thread David M. Cooke
On Fri, May 04, 2007 at 05:43:00PM -0500, Robert Kern wrote:
 Stefan van der Walt wrote:
  On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote:
  Matthieu Brucher wrote:
  Example of the first line of my data file :
  0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf 
  6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf
  I'm pretty sure fromfile() is using the standard C fscanf(). That means
  that whether it understands "inf" depends on the C lib. I'm guessing
  that the MS libc doesn't understand the same spelling of "inf" that the
  gcc one does. There may indeed be no literal for the IEEE Inf.
  
  It would be interesting to see how Inf and NaN (vs. inf and nan) are
  interpreted under Windows.
 
 I'm pretty sure that they are also rejected. 1.#INF and 1.#QNAN might be
 accepted though since that's what ftoa() gives for those quantities.

So, from some googling, here are the special strings for floats, as
regular expressions. The case of the letters doesn't seem to matter.

positive infinity:
[+]?inf
[+]?Infinity
1\.#INF

negative infinity:
-Inf
-1.#INF
-Infinity

not a number:
s?NaN[0-9]+ (The 's' is for signalling NaNs, the digits are for
diagnostic information. See the decimal spec at
http://www2.hursley.ibm.com/decimal/daconvs.html)
-1\.#IND
1\.#QNAN (Windows quiet NaN?)

There may be more. If we wish to support these, then writing our own
parser for them is probably the only way. I'll do it, I just need a
complete list of what we want to accept :-)
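
As a starting point, a matcher for the spellings listed above might look
like this (a sketch, not an exhaustive list):

import re

_special = re.compile(r'[-+]?(inf(inity)?|1\.#(INF|QNAN|IND)|s?nan[0-9]*)$',
                      re.IGNORECASE)

for s in ['inf', '+Infinity', '-Inf', '-1.#IND', '1.#QNAN', 'NaN123']:
    print s, bool(_special.match(s))   # all of these should match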

On the other side of the coin, I'd argue the string representations of
our float scalars should also be platform-agnostic (standardising on Inf
and NaN would be best, I think).

-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-24 Thread David M. Cooke

On Apr 23, 2007, at 22:04, Warren Focke wrote:

But even C89 required that x == (x/y)*y + (x%y), and that's not the
case here.


Missed that. You're right. We pull the same trick Python does with %  
so that the sign of x % y agrees with the sign of y, but we don't  
follow Python in guaranteeing floor division. To fix that means  
calculating x % y (as x - (x/y)*y) and checking if the sign is the  
same as y.


This would be a backward incompatible change, but it would restore  
the above invariant, which I think is important. Alternatively, we  
could change the modulo so that the invariant holds, but we don't  
agree with Python in either / or %. Comments?
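
Using the numbers quoted below, the invariant check looks like this
(these results reflect the truncating behaviour described in this
thread, so they depend on the numpy version):

>>> import numpy as np
>>> x, y = np.int64(-5), np.int64(100)
>>> (x // y) * y + (x % y)           # truncating //: 0*100 + 95, not -5
95
>>> (-5 // 100) * 100 + (-5 % 100)   # plain Python floors; invariant holds
-5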



On Mon, 23 Apr 2007, David M. Cooke wrote:


On Apr 23, 2007, at 16:41, Christian Marquardt wrote:

On Mon, April 23, 2007 22:29, Christian Marquardt wrote:

Actually,

it happens for normal integers as well:


>>> n = np.array([-5, -100, -150])
>>> n // 100
array([ 0, -1, -1])
>>> -5//100, -100//100, -150//100
(-1, -1, -2)

and finally:

>>> n % 100
array([95,  0, 50])
>>> -5 % 100, -100 % 100, -150 % 100
(95, 0, 50)

So plain python / using long provides consistent results across //
and %, but numpy doesn't...


Python defines x // y as returning the floor of the division, and x %
y has the same sign as y. However, in C89, it is implementation-
defined (i.e., portability-pain-in-the-ass) whether the floor or ceil
is used when the signs of x and y differ. In C99, the result should
be truncated. From the C99 spec, sec. 6.5.5, #6:
When integers are divided, the result of the / operator is the
algebraic quotient with any fractional part discarded. [76] If the
quotient a/b is representable, the expression (a/b)*b + a%b shall
equal a.

Numpy follows whatever the C compiler decides to use, because of
speed-vs.-Python compatibility tradeoff.



  Christian.


On Mon, April 23, 2007 22:20, Christian Marquardt wrote:

Dear all,

this is odd:


>>> import numpy as np
>>> fact = 2825L * 86400L
>>> nn = np.array([-20905000L])
>>> nn
array([-20905000], dtype=int64)
>>> nn[0] // fact
0

But:

>>> long(nn[0]) // fact
-1L

Is this a bug in numpy, or in python's implementation of longs? I would
think both should give the same, really... (Python 2.5, numpy
1.0.3dev3725, Linux, Intel compilers...)

Many thanks for any ideas / advice,

  Christian


-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-23 Thread David M. Cooke

On Apr 23, 2007, at 16:41, Christian Marquardt wrote:

On Mon, April 23, 2007 22:29, Christian Marquardt wrote:

Actually,

it happens for normal integers as well:


>>> n = np.array([-5, -100, -150])
>>> n // 100
array([ 0, -1, -1])
>>> -5//100, -100//100, -150//100
(-1, -1, -2)

and finally:

>>> n % 100
array([95,  0, 50])
>>> -5 % 100, -100 % 100, -150 % 100
(95, 0, 50)

So plain python / using long provides consistent results across //
and %, but numpy doesn't...


Python defines x // y as returning the floor of the division, and x %  
y has the same sign as y. However, in C89, it is implementation- 
defined (i.e., portability-pain-in-the-ass) whether the floor or ceil  
is used when the signs of x and y differ. In C99, the result should  
be truncated. From the C99 spec, sec. 6.5.5, #6:

   When integers are divided, the result of the / operator is the
   algebraic quotient with any fractional part discarded. [76] If the
   quotient a/b is representable, the expression (a/b)*b + a%b shall
   equal a.

Numpy follows whatever the C compiler decides to use, because of  
speed-vs.-Python compatibility tradeoff.




  Christian.


On Mon, April 23, 2007 22:20, Christian Marquardt wrote:

Dear all,

this is odd:


>>> import numpy as np
>>> fact = 2825L * 86400L
>>> nn = np.array([-20905000L])
>>> nn
array([-20905000], dtype=int64)
>>> nn[0] // fact
0

But:

>>> long(nn[0]) // fact
-1L

Is this a bug in numpy, or in python's implementation of longs? I would
think both should give the same, really... (Python 2.5, numpy
1.0.3dev3725, Linux, Intel compilers...)

Many thanks for any ideas / advice,

  Christian


-- 
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Building numpy on Solaris x86 with sun CC and libsunperf

2007-04-19 Thread David M. Cooke

Peter C. Norton wrote:
 Hello all,
 
 I'm trying to build numpy for some of my users, and I can't seem to
 get the [blas_opt] or the [lapack_opt] settings to be honored in my
 site.cfg:
 
 $ CFLAGS=-L$STUDIODIR/lib/ -l=sunperf CPPFLAGS='-DNO_APPEND_FORTRAN' \
 /scratch/nortonp/python-2.5.1c1/bin/python setup.py config
 Running from numpy source directory.
 F2PY Version 2_3649
 blas_opt_info:
 blas_mkl_info:
   libraries mkl,vml,guide not found in /usr/local/lib
   libraries mkl,vml,guide not found in 
 /lang/SunOS.5.i386/studio-11.0/SUNWspro/lib
   NOT AVAILABLE
 [etc, with nothing found]
 
 And after all this, I get
 /projects/python-2.5/numpy-1.0.2/numpy/distutils/system_info.py:1210:
 UserWarning:
 Atlas (http://math-atlas.sourceforge.net/) libraries not found.
 Directories to search for the libraries can be specified in the
 numpy/distutils/site.cfg file (section [atlas]) or by setting
 the ATLAS environment variable.
   warnings.warn(AtlasNotFoundError.__doc__)
 
 and a similar thing happens with lapack. My site.cfg boils down to
 this:
 
 [DEFAULT]
 library_dirs = /usr/local/lib:/lang/SunOS.5.i386/studio-11.0/SUNWspro/lib
 include_dirs = 
 /usr/local/include:/lang/SunOS.5.i386/studio-11.0/SUNWspro/include
 
 [blas_opt]
 libraries = sunperf
 
 [lapack_opt]
 libraries = sunperf
 
 If I mess around with system_info.py I can get setup to acknowledge
 the addition to the list, but it seems from the output that the
 optimized libraries section in the site.cfg is ignored (e.g. never
 added to the class's _lib_names array).
 

Try this instead of blas_opt and lapack_opt:

[blas]
blas_libs = sunperf

[lapack]
lapack_libs = sunperf

 Also, since the lapack and blas libraries are already essentially part
 of libsunperf (already built), do I still need a fortran compiler to
 link/build numpy, or can I just bypass that (somehow) and link the .so
 and go on my merry way?

I think you should be fine with a C compiler. You'd probably have to
fiddle with the definitions of the blas_info and lapack_info classes in
numpy.distutils.system_info -- they hardcode using Fortran.

At some point, system_info will get some more lovin' -- it should be
refactored to be more consistent.

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Problems building numpy and scipy on AIX

2007-04-19 Thread David M. Cooke

Christian Marquardt wrote:
 Dear David,
 
 the svn version of numpy does indeed build cleanly on AIX. Many thanks!
 
 However, the wrapper problem still exists for the C++ compiler, and shows
 up when compiling scipy. Now, I *assume* that SciPy is using the distutils
 as installed by numpy. Do you know where the linker settings for the C++
 compiler might be overwritten? There are two or three C compiler related
 python modules in numpy/distutils... Or would you think that this problem
 is entirely unrelated to the distutils in numpy?

I'm working on a better solution, but the quick fix to your problem is
to look in numpy/distutils/command/build_ext.py. There are two lines
that reference self.compiler.linker_so[0]; change those 0s to 1s, so
it keeps the ld_so_aix script there when switching the linker.

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Question about Optimization (Inline, and Pyrex)

2007-04-18 Thread David M. Cooke

Anne Archibald wrote:
 
 It would be perfectly possible, in principle, to implement an
 ATLAS-like library that handled a variety (perhaps all) of numpy's
 basic operations in platform-optimized fashion. But implementing ATLAS
 is not a simple process! And it's not clear how much gain would be
 available - it would almost certainly be noticeably faster only for
 very large numpy objects (where the python overhead is unimportant),
 and those objects can be very inefficient because of excessive
 copying. And the scope of improvement would be very limited; an
 expression like A*B+C*D would be much more efficient, probably, if the
 whole expression were evaluated at once for each element (due to
 memory locality and temporary allocation). But it is impossible for
 numpy, sitting inside python as it does, to do that.

numexpr (in the scipy sandbox) does something like this: it takes an
expression like A*B+C*D and compiles a small bytecode program that does
the calculation in chunks, minimising temporary variables and the number
of passes through memory. As it is, it's faster than the equivalent
Python expression, and comparable to weave. I've been thinking of making
a JIT for it, but I haven't had the time :)
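
For the curious, using it looks like this (an illustrative sketch;
assumes the sandbox numexpr is importable):

import numpy as np
import numexpr as ne

a = np.random.rand(1000000)
b = np.random.rand(1000000)
c = np.random.rand(1000000)
d = np.random.rand(1000000)

# the expression is compiled once to a small bytecode program, then
# evaluated in cache-sized chunks with few temporaries:
r = ne.evaluate("a*b + c*d")
assert np.allclose(r, a*b + c*d)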

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Problems building numpy and scipy on AIX

2007-04-18 Thread David M. Cooke

Christian Marquardt wrote:
 Hello,
 
 I've run into a problem building numpy-1.0.2 on AIX (using gcc and native
 Fortran compilers). The problem on that platform in general is that the
 process for building shared libraries is different from what's normally
 done (and it's a pain...)

Already fixed in svn :)

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Numpy with uclibc compiled python

2007-04-06 Thread David M. Cooke

David Shepherd wrote:
 Hey all,
 
 We started to try and compile the numpy module on an embedded PowerPC 
 Xilinx board with no luck.  This is one of the errors I get when I try 
 to build the module on-board.  It is due to the fact that the compiler 
 is located in a different location than the original compiler for python 
 (i think).  I made symbolic links to fix the first error.  The second 
 more critical error that I cannot recover from is the Error: 
 Unrecognized opcode: `fldenv'.  There are many more duplicates of this 
 error, but it occurs when the umathmodule.c is being compiled.

Setting CC to your C compiler should work.
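
For example, something like (with powerpc-linux-gcc being the cross
compiler from the log below):

$ CC=powerpc-linux-gcc python setup.py build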

 I am using Binutils 2.17, GCC 3.4.6, and Python 2.5.  Again, python was 
 compiled with uclibc, not the standard libraries.  We require the FFT 
 module for a project that is due in about a week.  Any help would be 
 appreciated.

 multiple assembler opcode errors when it tries to run the entry build:
 
 powerpc-linux-gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC 
 -Ibuild/src.linux-ppc-2.5/numpy/core/src -Inumpy/core/include 
 -Ibuild/src.linux-ppc-2.5/numpy/core -Inumpy/core/src 
 -Inumpy/core/include -I/root/python/include/python2.5 -c 
 build/src.linux-ppc-2.5/numpy/core/src/umathmodule.c -o 
 build/temp.linux-ppc-2.5/build/src.linux-ppc-2.5/numpy/core/src/umathmodule.o
 .
 .
 .
 .
 Error: Unrecognized opcode: `fldenv'

The culprit looks like numpy/core/include/numpy/fenv/fenv.h, which is
odd, as I don't see how it would be included -- it's only used for
cygwin. I would think there would be no floating point environment
support included, as the system fenv.h is only included when one of
__GLIBC__, __APPLE__, or __MINGW32__ is defined.

See if the attached patch helps.

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]
Index: numpy/core/include/numpy/ufuncobject.h
===
--- numpy/core/include/numpy/ufuncobject.h  (revision 3673)
+++ numpy/core/include/numpy/ufuncobject.h  (working copy)
@@ -276,6 +276,13 @@
(void) fpsetsticky(0);  \
}
 
+#elif defined(__UCLIBC__)
+
+#define NO_FLOATING_POINT_SUPPORT
+#define UFUNC_CHECK_STATUS(ret) { \
+ret = 0;  \
+  }
+
 #elif defined(linux) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
 
 #if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__)


Re: [Numpy-discussion] Numpy with uclibc compiled python

2007-04-06 Thread David M. Cooke

David Shepherd wrote:
 You know, looking at the core it looks like it has something to do with 
 the define(linux) statement.  I am running linux, but its a buildroot 
 (uclibc) rootfs.  Let me know if you need anymore information.  Thanks.
 
 Dave
 
 David Shepherd wrote:
 Well, I get further into the compile process, but now it fails on
 another gcc compile operation.  This is essentially the same error
 message I was getting before.  It tries to #include <fenv.h>, but does
 not find it (it shouldn't be including it anyway, as you said).  This
 time it fails when trying to compile _capi.c.  Thanks for all your
 help so far!

Try this updated patch. It replaces the defined(linux) tests with
defined(__GLIBC__).

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]
Index: numpy/core/include/numpy/ufuncobject.h
===
--- numpy/core/include/numpy/ufuncobject.h  (revision 3673)
+++ numpy/core/include/numpy/ufuncobject.h  (working copy)
@@ -276,7 +276,7 @@
(void) fpsetsticky(0);  \
}
 
-#elif defined(linux) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
+#elif defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
 
 #if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__)
 #include <fenv.h>
Index: numpy/numarray/_capi.c
===
--- numpy/numarray/_capi.c  (revision 3673)
+++ numpy/numarray/_capi.c  (working copy)
@@ -224,7 +224,7 @@
 }
 
 /* Likewise for Integer overflows */
-#if defined(linux) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
+#if defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
 #if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__)
 #include <fenv.h>
 #elif defined(__CYGWIN__)
@@ -2937,7 +2937,7 @@
return retstatus;
 }
 
-#elif defined(linux) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
+#elif defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__)
 #if defined(__GLIBC__) || defined(darwin) || defined(__MINGW32__)
 #include <fenv.h>
 #elif defined(__CYGWIN__)


Re: [Numpy-discussion] Numpy with uclibc compiled python

2007-04-06 Thread David M. Cooke

David Shepherd wrote:
 The second part of the patch is failing:
 
 # patch -p0 < ../uclibc-fenv.patch
 patching file numpy/core/include/numpy/ufuncobject.h
 patching file numpy/numarray/_capi.c
 Hunk #1 FAILED at 224.
 Hunk #2 FAILED at 2937.
 2 out of 2 hunks FAILED -- saving rejects to file
 numpy/numarray/_capi.c.rej
 #

Ahh, you're not using a current subversion checkout :-) For your
purposes, you could just change the #if defined(linux) to #if
defined(__GLIBC__) (or #if 0 if that strikes your fancy).

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] how to run the tests.

2007-04-05 Thread David M. Cooke

Christopher Barker wrote:
 Just a quick comment.
 
 I just built 1.0.2 on my OS-X box, and it took me entirely too long to 
 figure out how to run the tests. I suggest something like:
 
 
 after installing, to run the numpy unit tests, you can run:
 
 import numpy
 numpy.test()
 
 
 be added to the readme after the instructions on how to run setup.py 
 install.

I use python -c 'import numpy; numpy.test()' for that. I've added a note
 to the README.

 A question:
 lapack_lite.so is linked against:
  
 /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate
 
 Does that mean the Apple-supplied BLAS/LAPACK is being used?

Yes; the Accelerate framework exists on all installations of OS X, so we
can use it with no problems.

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] .data doesn't account for .transpose()?

2007-03-29 Thread David M. Cooke

Charles R Harris wrote:
 On 3/29/07, Anne Archibald [EMAIL PROTECTED] wrote:

 
 I think the preferred names are C_CONTIGUOUS and F_CONTIGUOUS, for
 instance:
 
 In [2]:eye(2).flags['C_CONTIGUOUS']
 Out[2]:True
 
 In [3]:eye(2).T.flags['F_CONTIGUOUS']
 Out[3]:True
 
 However, that may only be in svn at the moment. C_CONTIGUOUS is an
 alias for CONTIGUOUS, and F_CONTIGUOUS is an alias for F. I think the
 new names are clearer than before.

FWIW, you can use attributes on flags too:

In [1]: eye(2).flags.c_contiguous
Out[1]: True
In [2]: eye(2).T.flags.f_contiguous
Out[2]: True

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] New Trac feature: TracReSTMacro

2007-03-20 Thread David M. Cooke
On Mon, Mar 19, 2007 at 12:54:51PM -0500, Jeff Strunk wrote:
 Good afternoon,
 
 By request, I have installed the TracReSTMacro on the numpy, scipy, and 
 scikits tracs. This plugin allows you to display ReST formatted text directly 
 from svn.
 
 For example, http://projects.scipy.org/neuroimaging/ni/wiki/ReadMe in its 
 entirety is:
 [[ReST(/ni/trunk/README)]]

Hmm, I'm getting an Internal Server Error on
http://projects.scipy.org/scipy/numpy/wiki/NumPyCAPI
which has the content
[[ReST(/numpy/trunk/doc/CAPI.txt)]]

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] New Trac feature: TracReSTMacro

2007-03-20 Thread David M. Cooke
On Tue, Mar 20, 2007 at 12:57:45PM -0500, Jeff Strunk wrote:
 On Tuesday 20 March 2007 11:54 am, David M. Cooke wrote:
  On Mon, Mar 19, 2007 at 12:54:51PM -0500, Jeff Strunk wrote:
   Good afternoon,
  
   By request, I have installed the TracReSTMacro on the numpy, scipy, and
   scikits tracs. This plugin allows you to display ReST formatted text
   directly from svn.
  
   For example, http://projects.scipy.org/neuroimaging/ni/wiki/ReadMe in its
   entirety is:
   [[ReST(/ni/trunk/README)]]
 
  Hmm, I'm getting an Internal Server Error on
  http://projects.scipy.org/scipy/numpy/wiki/NumPyCAPI
  which has the content
  [[ReST(/numpy/trunk/doc/CAPI.txt)]]
 
 The content was:
 [[ReST(/numpy/trunk/numpy/doc/CAPI.txt)]]
 
 It should have been:
 [[ReST(/trunk/numpy/doc/CAPI.txt)]]
 
 I fixed it in the database, and the page works.
 
 Thanks,
 Jeff

Ahh, ok. I was trying to puzzle it out from your neuroimaging example.

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] correct way to specify type in array definition

2007-03-15 Thread David M. Cooke
On Thu, Mar 15, 2007 at 11:23:36AM +0100, Francesc Altet wrote:
 On Thu, 15 Mar 2007 at 06:01 -0400, Brian Blais wrote:
  Hello,
  
  Can someone tell me the preferred way to specify the type of an array?
  I want it to be a float array, no matter what is given (say, integers).
  I can do:
  
  a=numpy.array([1,2,3],numpy.dtype('float'))
  
  or
  
  a=numpy.array([1,2,3],type(1.0))
  
  or perhaps many others.  Is there a way that is recommended?
 
 Well, this depends on your preferences, I guess, but I like to be
 explicit, so I normally use:
 
 a=numpy.array([1,2,3], numpy.float64)
 
 but, if you are a bit lazy to type, the next is just fine as well:
 
 a=numpy.array([1,2,3], 'f8')
 

I just do

a = numpy.array([1,2,3], dtype=float)

The Python types int, float, and bool translate to numpy.int_,
numpy.double, and numpy.bool (i.e., the C equivalents of the Python
types; note that int_ is a C long).
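
You can check the translation directly (illustrative):

>>> import numpy
>>> numpy.dtype(int) == numpy.dtype(numpy.int_)
True
>>> numpy.dtype(float) == numpy.dtype(numpy.double)
True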

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] Which dtype are supported by numexpr ?

2007-03-14 Thread David M. Cooke
On Wed, 14 Mar 2007 13:02:10 +0100
Francesc Altet [EMAIL PROTECTED] wrote:

 The info above is somewhat inexact. I was talking about the enhanced
 numexpr version included in PyTables 2.0 (see [1]). The original version of
 numexpr (see [2]) doesn't have support for int64 on 32-bit platforms and
 also neither does for strings. Sorry for the confusion.

What are you doing with strings in numexpr? Ivan Vilata i Balaguer
submitted a patch to add them, but I rejected it as I didn't have any
use-cases for strings that needed the speed of numexpr or justified the
added complexity.

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] [PATCH] a fix for compiling numexpr with MSVC Toolkit 2003

2007-03-09 Thread David M. Cooke
On Fri, 9 Mar 2007 16:11:28 +
[EMAIL PROTECTED] wrote:

 Hi,
 
 I've done a patch for allowing compiling the last version of numexpr with
 the MSVC Toolkit 2003 compiler on Windows platforms. You can fetch it
 from:
 
 http://www.pytables.org/trac/changeset/2514/trunk

Checked in. It didn't match up; you've got about 150 more lines in your
version than scipy's version.

 BTW, I understand now why Tim Hochberg was so worried about the time
 that it takes to compile numexpr on Win platforms. On my Pentium4 @ 2
 GHz, using the MSVC Toolkit 2003, compiling numexpr takes approx. 20
 minutes (!). With the same machine and using GCC under Linux it takes
 no more than 1 minute. Mmmm, I think it's time to look at the MINGW
 compiler (GCC based).

Wow! I wonder if lowering the optimisation level would help.

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] numpy.linalg.qr bug on 64-bit platforms

2007-03-08 Thread David M. Cooke

On Mar 7, 2007, at 04:57 , Lars Bittrich wrote:


On Monday 05 March 2007 08:01, Steffen Loeck wrote:
Has there been any progress in solving this problem? I get the same
error message and have no idea how to solve it.


I do not understand those code parts very well but I think the values
passed to the lapack routines must be integer and not long integer on
64-bit architecture. A few tests with the attached patch worked well.


Yeah, this problem is kind of ugly; we should have a test for the  
integer size in Fortran compilers. However, I don't know how to do  
that. We've been pretty lucky so far: INTEGER seems to be a C int on  
everything. I know this is true for GNU Fortran (g77, gfortran) on  
Linux, on both 32-bit and 64-bit platforms. But I don't think there's  
any guarantee that it would be true in general.


--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]





Re: [Numpy-discussion] building an array using smaller arrays

2007-03-01 Thread David M. Cooke

On Mar 1, 2007, at 13:33 , Rudolf Sykora wrote:


Hello,

since no one has reacted to my last e-mail yet (for several days), I
feel the need to ask again (since I still do not know a good answer).

Please help me.

 Hello everybody,
 I wonder how I could most easily accomplish the following:

Say I have something like:
 a = array( [1, 2] )
and I want to use this array to build another array in the following
sense:
 b = array( [[1, 2, 3, a], [5, a, 6, 7], [0, 2-a, 3, 4]])  # this
doesn't work


 I would like to obtain
 b = array( [[1, 2, 3, 1, 2], [5, 1, 2, 6, 7], [0, 1, 0, 3, 4]] )

 I know a rather complicated way but believe there must be an easy one.

 Thank you very much.

 Ruda

I would need some sort of flattening operator...
The solution I know is very ugly:

 b = array(( concatenate(([1, 2, 3], a)), concatenate(([5], a, [6,  
7])), concatenate(([0], 2-a, [3, 4])) ))


Define a helper function:

from numpy import array, asarray

def row(*args):
    res = []
    for a in args:
        a = asarray(a)
        if len(a.shape) == 0:
            # a scalar (0-d) argument
            res.append(a)
        elif len(a.shape) == 1:
            # a 1-d argument: splice its elements in
            res += a.tolist()
        else:
            raise ValueError("arguments to row must be row-like")
    return array(res)

then

b = array([ row(1,2,3,a), row(5,a,6,7), row(0,2-a,3,4) ])
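
(For what it's worth, numpy's r_ helper already does this kind of mixed
scalar/array concatenation, so the same array can be built without a
custom function -- a sketch:)

from numpy import array, r_

a = array([1, 2])
b = array([ r_[1, 2, 3, a], r_[5, a, 6, 7], r_[0, 2 - a, 3, 4] ])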

--
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]





Re: [Numpy-discussion] what goes wrong with cos(), sin()

2007-02-22 Thread David M. Cooke
On Feb 21, 2007, at 14:54 , Christopher Barker wrote:

 Anne Archibald wrote:
 Or, to see more clearly, try taking (on a pocket calculator, say)
 sin(3.14) (or even sin(pi)).

 This is an interesting point. I took a class from William Kahan once
 (pass/fail, thank god!), and one question he posed to us was:

 How many digits of pi are used in an HP calculator?

FWIW,

There are two data types for reals (at least on the HP 28 and 48
series, and others in that line): a 12 decimal digit real used for
communicating with the user, and an extended 15 decimal digit real
used internally. All calculations are done in base 10.

The exponent e for the 12-digit real is in the range -500 < e < 500,
and for the 15-digit, -50000 < e < 50000.

AFAIK, most of HP's calculators are like this.
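
Incidentally, here's why Kahan's question has an answer at all (an
illustration, not from the original discussion): for x near pi,
sin(x) ~= pi - x, so the value sin returns exposes the digits of pi
beyond the ones you keyed in.

import math

# On a base-10 machine like the HPs above, the typed decimal is stored
# exactly, so sin(3.14159265358979) returns pi's next digits:
#     pi = 3.14159265358979 32384626433...
#     sin(3.14159265358979) ~= 3.2384626433e-15
# With binary doubles the input itself is rounded first, so the match
# is blurred, but the result is still roughly pi - x:
print math.sin(3.14159265358979)   # about 3.2e-15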

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]



Re: [Numpy-discussion] Forcing the use of unoptimized blas/lapack even when atlas is present

2007-02-19 Thread David M. Cooke
On Feb 19, 2007, at 11:04 , Robert Kern wrote:

 David Cournapeau wrote:
 Hi there,

 I am developing a build tool to automatically build the whole
 numpy/scipy/matplotlib set from sources, including dependencies, and
 one of the problems I have is forcing which blas/lapack version is
 used when building numpy and scipy.
 I thought that doing a BLAS=blaslib LAPACK=lapacklib python setup.py
 config was enough when building numpy, but numpy still wants to use
 atlas. I would like to avoid using site.cfg if possible, as I want to
 build everything automatically,

 Set ATLAS=0, I believe.

Not quite, you need
LAPACK=None BLAS=None

(ATLAS=None is only needed if ATLAS is being looked for specifically,  
i.e., system_info.atlas_info is used instead of  
system_info.lapack_opt_info in the setup.py. AFAIK, that's only used  
when debugging ATLAS installs in scipy).
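
(If you want to double-check what system_info picks up in the current
environment, a quick probe -- illustrative, not from the original
message:)

>>> from numpy.distutils.system_info import get_info
>>> get_info('lapack_opt')   # {} when nothing usable is found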

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]



Re: [Numpy-discussion] SVN Build, optimized libraries, site.cfg, windows

2007-02-16 Thread David M. Cooke
On Feb 11, 2007, at 22:51 , Satrajit Ghosh wrote:

 Hi,

 I'm also not quite clear whether the optimized FFTW and UMFPACK
 libraries are being used or required in numpy at all, as show_config()
 doesn't report it.

 I see that fftw and umfpack are being used for scipy.

 I have attached my site.cfg. Any help would be much appreciated.

No, they're only in there for scipy (and for other packages that  
would like to use them). They're not required for Numpy.

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]



Re: [Numpy-discussion] Exported symbols and code reorganization.

2007-01-10 Thread David M. Cooke
On Jan 10, 2007, at 13:52 , Charles R Harris wrote:
 On 1/10/07, David M. Cooke [EMAIL PROTECTED] wrote: On  
 Jan 7, 2007, at 00:16 , Charles R Harris wrote:
 
  That brings up the main question I have about how to break up the C
  files. I note that most of the functions in multiarraymodule.c, for
  instance, are part of the C-API, and are tagged as belonging to
  either the MULTIARRAY_API or the OBJECT_API. Apparently the build
  system scans for these tags and extracts the files somewhere. So,
  what is this API, is it available somewhere or is the code just
  copied somewhere convenient. As to breaking up the files, the scan
  only covers the code in the two current files, included code from
  broken out parts is not seen. This strikes me as a bit of a kludge,
  but I am sure there is a reason for it. Anyway, I assume the build
  system can be fixed, so that brings up the question of how to break
  up the files. The maximal strategy is to make every API functions,
  with it's helper functions, a separate file. This adds a *lot* of
  files, but it is straight forward and modular. A less drastic
  approach is to start by breaking multiarraymodule into four files:
  the converters, the two apis, and the module functions. My own
  preference is for the bunch of files, but I suspect some will  
 object.

 The code for pulling out the ``MULTIARRAY_API`` and ``OBJECT_API``
 (also ``UFUNC_API``) is in ``numpy/core/code_generators``. Taking
 ``MULTIARRAY_API`` as an example, the ``generate_array_api.py`` is
 run by the ``numpy/core/setup.py`` file to generate the multiarray
 (and object) API. The file ``numpy/core/code_generators/
 array_api_order.txt`` is the order in which the API functions are
 added to the  ``PyArray_API`` array; this is our guarantee that the
 binary API doesn't change when functions are added. The files scanned
are listed in ``numpy/core/code_generators/genapi.py``, which is also
 the module that does the heavy lifting in extracting the tagged
 functions.

 Looked to me like the order could change without causing problems.  
 The include file was also written by the code generator and for  
 extension modules was just a bunch of macros assigning the proper  
 function pointer to the correct name. That brings up another bit,  
 however. At some point I would like to break the include file into  
 two parts, one for inclusion in the other numpy modules and another  
 for inclusion in extension modules, the big #ifdef in the current  
 file offends my sense of esthetics. It should also be possible to  
 attach the function pointers to real function prototype like  
 declarations, which would help extension modules check the code at  
 compile time.

No, the order is necessary for binary compatibility. If PyArray_API[3]
points to function 'A', and PyArray_API[4] points to function 'B',
then, if A and B are reversed in a newer version, any extension module
compiled with the previous version will now call function 'B' instead
of 'A', and vice versa. Adding functions to the end is ok, though.

Instead of using an array, we could instead use a large struct, whose
members are of the right type for the function assigned to them, as in

struct PyArray_API_t {
    PyObject *(*transpose)(PyArrayObject *, PyArray_Dims *);
    PyObject *(*take_from)(PyArrayObject *, PyObject *, int,
                           PyArrayObject *, NPY_CLIPMODE);
};

struct PyArray_API_t PyArray_API = {
    PyArray_Transpose,
    PyArray_TakeFrom,
};

#define PyArray_Transpose (PyArray_API.transpose)

This would give us better type-checking when compiling, and make it  
easier when running under gdb (when your extension crashes when  
calling into numpy, gdb would report the function as something like  
PyArray_API[31], because that's all the information it has). We would  
still have to guarantee the order for binary compatibility. One problem
is that you'll have to make sure that the alignment of the fields  
doesn't change either (something that's not a problem for an array of  
pointers).

Now, I was going to try to remove the order requirement, but never  
got around to it (you can see some of the initial work in
numpy/core/code_generators/genapi.py in the api_hash() routines). The idea is to
have a unique identifier for each function (I use a hash of the name  
and the arguments, but for this example, let's just use the function  
name). An extension module, when compiled, would have a list of  
function names in the order it expects. In the import_array(), it  
would call numpy to give it the addresses corresponding to those names.

As Python code, the above would look something like this:

COMPILED_WITH_NUMPY_VERSION = "1.2"
API_names = ["PyArray_Transpose", "PyArray_TakeFrom"]

def import_array():
    global API
    API = numpy.get_c_api(API_names, COMPILED_WITH_NUMPY_VERSION)

def a_routine():
    API[1](an_array)   # the slot we asked for as "PyArray_TakeFrom"

(Of course, that'd have to be translated to C.) numpy.get_c_api would
be responsible for looking up each requested name and handing back the
matching function pointers, in the order asked for.

Re: [Numpy-discussion] Numpy without BLAS, LAPACK

2006-11-22 Thread David M. Cooke
On 22 Nov 2006 16:44:07 -
[EMAIL PROTECTED] wrote:

 (Reposting to numpy-discussion@scipy.org instead of the SourceForge list.)
 
 
 It is my understanding that Numpy has lite versions of BLAS and LAPACK
 that it will use if it cannot find system libraries. Is it possible to FORCE
 it to use the lite versions rather than existing system libraries?
 
 (Where
 I come from: I'm trying to install numpy in a local directory on a Beowulf
 cluster, which has local BLAS and LAPACK that the numpy setup finds;
 however, these libraries are missing some functions, so import numpy
 fails at numpy.linalg. The fact is that I don't need numpy.linalg at all,
 so I'd be very happy with the lite libraries, rather than going to the
 trouble of recompiling BLAS and LAPACK or ATLAS, etc.)

Set the environment variables ATLAS, BLAS, and LAPACK to 'None'. For instance,

$ ATLAS=None BLAS=None LAPACK=None python setup.py install

Note that numpy.dot() uses BLAS if it's found, so if you compile without it,
it'll be slower. This may impact other routines not in numpy.linalg.

(It's on my list to add a faster xgemm routine when BLAS isn't used...)
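
For what it's worth, a quick way to see what a given build actually
linked against:

>>> import numpy
>>> numpy.show_config()   # absent libraries show up as NOT AVAILABLE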

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]