Re: [Numpy-discussion] record descriptors: trailing unused bytes

2006-09-18 Thread Travis Oliphant
Martin Wiechert wrote:
> Hi list,
>
> does anybody know, if in C in the PyArray_Descr struct it is safe to manually 
> change descr->elsize and descr->alignment as long as these are compatible and 
> descr->elsize is large enough to hold all fields? Of course I mean before any 
> array is constructed using descr.
>   

Well, you should really make a copy of the PyArray_Descr structure and 
then fill it in as desired, unless you are sure that no other arrays are 
using that particular data-type object to describe their own data.  
PyArray_DescrNew gives you a new copy based on an old one (you just get 
a reference to the function pointers in the 'f' member, so don't go 
changing those).

I guess your statement about "before any array is constructed using 
descr" means you are sure that there are no other arrays using the old 
elsize and alignment.

Then it should be fine.  There are not supposed to be any assumptions 
about the data-types except for what is explicitly provided in the 
data-type object (PyArray_Descr *).  So, changing them will change the 
data-type and things should ideally work.
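
For illustration, a minimal sketch in C (the helper name widen_descr is made 
up; it assumes the numpy headers are included and import_array() has been 
called):

#include <numpy/arrayobject.h>

/* Copy a descriptor before editing its size/alignment so that arrays
   already using the old descriptor are unaffected. */
static PyArray_Descr *
widen_descr(PyArray_Descr *old, int new_elsize, int new_alignment)
{
    /* The copy shares the 'f' function pointers with 'old';
       don't change those. */
    PyArray_Descr *descr = PyArray_DescrNew(old);
    if (descr == NULL) {
        return NULL;
    }
    descr->elsize = new_elsize;        /* must be large enough for all fields */
    descr->alignment = new_alignment;  /* must stay compatible with the fields */
    return descr;
}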

-Travis






Re: [Numpy-discussion] buggy buggy bugyy: format and casting ==> BUG in numpy.sum?

2006-09-18 Thread Travis Oliphant
Eric Emsellem wrote:
> Hi again
>
> after some hours of debugging I finally (I think) found the problem:
>
> numpy.sum([[0,1,2],[2,3,4]])
> 12
>
> numpy.sum([[0,1,2],[2,3,4]],axis=0)
> array([2, 4, 6])
>
> numpy.sum([[0,1,2],[2,3,4]],axis=1)
> array([3, 9])
>
>
> Isn't the first line supposed to act as with "axis=0" by default (see
> help numpy.sum!)...???
> Not setting axis=0 it sums everything!
>   
See the Release Notes page on www.scipy.org.  It documents everything 
that has changed.  Several things will break old code as indicated. 

There are several options for keeping old code working:

1) Use the numpy.oldnumeric compatibility layer which keeps the same 
definitions and defaults as Numeric

2) Use a conversion tool (like the recently added fix_default_axis) to 
automatically insert axis=0 arguments in all code where the axis is not 
given explicitly (or to automatically change the import to oldnumeric).

For the future, you must specify which axis you mean for an N-d array, or 
the code will assume you meant to work over the entire N-d array.
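
For example, here is a minimal sketch of the two options above (assuming 
numpy 1.0b or later is installed):

import numpy.oldnumeric as Numeric    # option 1: compatibility layer
Numeric.sum([[0, 1, 2], [2, 3, 4]])   # sums over axis 0, as Numeric did

import numpy                          # option 2: always pass the axis
numpy.sum([[0, 1, 2], [2, 3, 4]], axis=0)   # array([2, 4, 6])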

We all recognize this is a pain to change.  That's why the backward 
compatibility options are available and the tools have been written.  
Believe me, I know what a pain it is.  I have had to keep SciPy and 
Matplotlib working with all the changes to NumPy.



-Travis





Re: [Numpy-discussion] feasability study to migrate from numarray to numpy

2006-09-18 Thread Travis Oliphant
mg wrote:
> Hi all,
>
> I am doing a feasibility study to migrate our Python-based FEM 
> applications from Numarray to Numpy.
>
> First, I tried to install Numpy for Python-2.4 on linux-x86 and 
> linux-x86-64bit, and it all worked fine. Great! Moreover, I easily changed 
> the linked BLAS libraries; I tried ATLAS and GOTO. Great again!
>
> Second, I tried to do the same thing on windows-x86 without success. So my 
> first question is: has Numpy-1.0b5 been tested, and is it supported, on 
> Windows?
>   
Yes, it should work.  Builds for windows were provided.   But, perhaps 
there are configuration issues for your system that we are not handling 
correctly.

> Third, I tried to install Numpy for Python-2.5, which is our standard 
> Python, on linux-x86... and the compilation stopped during the 
> compilation of core/src/multiarraymodule.c. So my second question is: is 
> there a workaround, or is the port to Python-2.5 scheduled yet?
>   
There was a problem with Python 2.5 and NumPy 1.0 that is fixed in SVN.  
Look for NumPy 1.0rc1 to come out soon.
> My third question is: does the tool to migrate numarray-based Python 
> scripts (numpy.numarray.alter_code1) work well? (I suppose yes...)
>   
It needs more testing.  It would be great if you could help us find and 
fix bugs in it.   I don't have a lot of numarray code to test.
> We have created a lot of bindings in order to drive our generic C++ 
> framework from Python scripts. So, about the Numpy API: is it very 
> different from the Numarray API? (We will order the Numpy Guide too.)
>   
It is more similar to the Numeric C-API.  However, the numarray C-API is 
completely supported by including numpy/libnumarray.h so you should be 
able to convert your C code very easily.   Any problems encountered 
should be noted and we'll get them fixed.
> To avoid duplicating large numerical arrays in memory, Numarray allows 
> aliasing the memory of our bindings with Numarray arrays, and we 
> have used this feature intensively. So, I wonder if this is currently 
> supported (or even scheduled)?
I'm pretty sure the answer is yes (because the Numarray C-API is 
supported), though I'm not exactly sure what you mean.  Do you mean that 
you have memory created in the C/C++ framework and then you have an 
array use that memory for its data area?  If that is what you mean, 
then the answer is definitely yes.
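
For illustration, a minimal C sketch of that case (the helper name 
wrap_buffer is made up; it assumes the numpy headers are included and 
import_array() has been called):

#include <numpy/arrayobject.h>

/* Wrap an existing C buffer as a 1-d NumPy array without copying, so
   Python and the C/C++ framework share the same memory.  The array does
   NOT own 'data'; the framework must keep it alive while the array (or
   any view of it) exists. */
static PyObject *
wrap_buffer(double *data, npy_intp n)
{
    return PyArray_SimpleNewFromData(1, &n, NPY_DOUBLE, (void *)data);
}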


-Travis





Re: [Numpy-discussion] Changing byte ordering with astype fails with 0d arrays

2006-09-18 Thread Travis Oliphant
Matthew Brett wrote:
> Hi,
>
> As expected:
>
> In [67]:a = array([1], dtype='<i4')
> In [68]:a.astype('>i4').dtype
> Out[68]:dtype('>i4')
>
> I was also expecting this to work for 0d arrays, but it doesn't:
>
> In [69]:a = array(1, dtype='<i4')
> In [70]:a.astype('>i4').dtype
> Out[70]:dtype('<i4')
>   
The problem is that the astype method is returning an array scalar (it 
used to be that 0-d arrays were "avoided" at all costs).  We've since 
relaxed this requirement, and I think here's another place where it needs 
to be relaxed. 

-Travis




Re: [Numpy-discussion] Segfault on byteswap() on recarrays

2006-09-18 Thread Travis Oliphant
Matthew Brett wrote:
> Hi,
>
> I noticed this works:
>
> In [5]:a = array((1,), dtype=[('one', '<i4')])
> In [6]:a.byteswap()
> Out[6]:
> array((16777216,),
>   dtype=[('one', '<i4')])
> But, extending the recarray leads to a segfault on byteswapping:
>
> In [8]:a = array((1, 2), dtype=[('one', '
> In [9]:a.byteswap()
> Segmentation fault
>   
Great catch.   Fixed in SVN.

-Travis




Re: [Numpy-discussion] Re: Fwd: Collection.findTransformation() never stops

2006-09-18 Thread Travis Oliphant
Dr. Seth Olsen wrote:
>
> Hi MMTKers and NUMPYers,
>
> Bill's answer to my inquiry about the problem I'm having with 
> Collection.findTransformation() (and also, incidentally, with the 
> dgesvd call in Subspace.getBasis(), has convinced me that I can no 
> long use MMTK without changing some of the code over to numpy.  I have 
> already been able to determine that invoking 
> numpy.oldnumeric.alter_code1.convertall() over the MMTK directory tree 
> is not the answer.

Why not?  It should be.  That is the recommended way to begin porting 
code.

If we need to improve alter_code, we cannot do it unless we receive bug 
reports.  Please tell us what difficulty you had.
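
For reference, a minimal sketch of that recommended first step (the directory 
path here is only a placeholder):

import numpy.oldnumeric.alter_code1 as alter_code1
# Rewrites the Numeric-based .py files under the given directory to use numpy.
alter_code1.convertall('/path/to/MMTK')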


-Travis




Re: [Numpy-discussion] Fwd: Collection.findTransformation() never stops

2006-09-18 Thread Travis Oliphant
Dr. Seth Olsen wrote:
>
> Hi Bill,
>
> MMTK has not made the conversion over to the new numpy module.  It is 
> built against the old Numeric code, and the word from its developers 
> is that changing to numpy cannot be a priority now.
>
My suggestion is to *kindly* put pressure on them. 

I've spent at least a hundred hours making it easy for people to port 
Numeric- and Numarray-built code to NumPy.  Because of this, I'm a 
little bit frustrated by this kind of response.  I understand it will 
take time for people to migrate, but it really does not take that long 
to port code to use NumPy.

I've offered to do it for any open source code.   In fact, I just spent 
30 minutes and ported both Scientific Python and MMTK to use numpy.  
I'll send you a patch if you want.  It is true that the result needs 
to be better tested, but I'm confident that any errors which might 
remain in the compatibility layer will be easily fixable (and we need 
people who are willing to do the tests to fix them).

I'd rather not do this, but if necessary we can easily create an SVN 
tree of third-party packages ported to use NumPy if the package-owners 
are not willing to do it.   Keeping Numeric packages around except for 
legacy systems will only make things harder.

I'll repeat the same offer I've made before:  I will gladly give my book 
and my help to any open source library author who will make porting to 
NumPy a priority for their package.  Note, however, my (free) ports to 
use NumPy do not use any "numerix-style" layer.  The library is 
converted to work with NumPy alone.  In other words, I won't spend any 
more "spare" time supporting 3 array packages.

Best regards,

-Travis




Re: [Numpy-discussion] Fwd: Collection.findTransformation() never stops

2006-09-19 Thread Travis Oliphant
Dr. Seth Olsen wrote:
>
> Hi Travis,
>
> I would very happily accept the Scientific and MMTK patches.  Thank 
> you very much for the offer.

Look at http://www.scipy.org/Porting_to_NumPy

for information (and patches) on how to convert ScientificPython and 
MMTK to use NumPy.  I managed to get the codes to build and install but 
they are not tested.  Any problems you encounter would be useful to know 
about.

You can patch the code by changing to the top-level directory and entering

patch -p1 < patch_file

If it works for you, please email the developers and let them know how 
easy it can be (hopefully).

Best,

-Travis




Re: [Numpy-discussion] Numpy fails to build with Python 2.5

2006-09-19 Thread Travis Oliphant
William Grant wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> mg wrote:
>   
>> Hi,
>>
>> I have the same problem with Python-2.5 and Python-2.6a, with which I 
>> cannot compile Numpy-1.0b5 or Numpy from svn. (I suppose your version of 
>> Scipy is based on one of these Numpys.)
>> I posted my problem yesterday. The answer from Travis was that it will be 
>> corrected in Numpy-1.0rc1, which will come soon.
>>
>> regards,
>> Mathieu.
>> 
>
> Yes, numpy 1.0b5 is the one... Approximately how soon is soon? Before or
> after the 28th?
>   
Before.  It was supposed to come out this weekend.  It will be today 
or tomorrow.

-Travis




Re: [Numpy-discussion] what's the difference between npy_intp and size_t?

2006-09-19 Thread Travis Oliphant
Martin Wiechert wrote:
> Hi list,
>
> Please forgive my ignorance: Is there any difference between npy_intp and 
> size_t? Aren't both "ints just big enough to be safe for pointer arithmetic 
> even on 64-bit architectures"?
>   

size_t is unsigned
npy_intp is signed

It is basically the same as Py_ssize_t (which is not available until 
Python 2.5).  
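
For illustration, a minimal C sketch of why the signed type is what you want 
for array sizes and strides (the function name is made up, and it assumes a 
contiguous array of doubles):

#include <numpy/arrayobject.h>

void scale_inplace(PyArrayObject *arr, double factor)
{
    npy_intp i;
    npy_intp n = PyArray_SIZE(arr);        /* element count fits in npy_intp */
    double *data = (double *)PyArray_DATA(arr);

    /* npy_intp is signed, so it can also hold negative strides and the
       usual -1 error return values, unlike size_t. */
    for (i = 0; i < n; i++) {
        data[i] *= factor;
    }
}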

-Travis




Re: [Numpy-discussion] [MMTK] Re: Fwd: Collection.findTransformation() never stops

2006-09-19 Thread Travis Oliphant
Konrad Hinsen wrote:
> On Sep 19, 2006, at 5:11, Dr. Seth Olsen wrote:
>
>   
>> Bill's answer to my inquiry about the problem I'm having with  
>> Collection.findTransformation() (and also, incidentally, with the  
>> dgesvd call in Subspace.getBasis()) has convinced me that I can no  
>> longer use MMTK without changing some of the code over to numpy.  I  
>> have already been able to determine that invoking  
>> numpy.oldnumeric.alter_code1.convertall() over the MMTK directory  
>> tree is not the answer.  Has anyone on either of these lists ever  
>> tried this before and, if so, can it be done (without destroying my  
>> sanity)?
>> 
>
> Adapting MMTK to NumPy is likely to be a major effort, in particular  
> for the C modules. 
Well,  I got both ScientificPython and MMTK to compile and import using 
the steps outlined on http://www.scipy.org/Porting_to_NumPy  in about 1 
hour (including time to fix alter_code1 to make the process even easier).

C-modules are actually easier to port because the Numeric C-API is 
totally supported.

> A lot of testing would also be required. If anyone  
> wants to tackle this, 
> I'd be happy to see the results. 
Great.  I can totally understand people not having the time, but it is 
fantastic to hear that the willingness to accept patches is there.

I was able to install ScientificPython and MMTK for NumPy on my system 
using the patches provided on that page.  Is there a test suite that 
can be run? 

Users of MMTK could really help out here.


-Travis





Re: [Numpy-discussion] arr.dtype.kind is 'i' for dtype=unit !?

2006-09-19 Thread Travis Oliphant
Sebastian Haase wrote:
> Hi,
> What are the possible values of 
> arr.dtype.kind ?
>
> It seems that signed and unsigned are considered to be the same "kind" 
>   
> >>> arr = N.arange(10, dtype=N.uint)
> >>> arr.dtype.kind
> 'i'
> >>> arr.dtype.itemsize
> 8
> (OK - this is just showing off our amd64 linux ;-) )
>
> How can I distinguish signed from unsigned without having to list all 
> possible 
> cases explicitly ?
>
>
>   
Hmm.  This is a problem.   There is a 'u' kind for unsigned integers.  

On my system I get 'u' when running the code you just gave.

Can anybody on a 64-bit system confirm?
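
For reference, a minimal check of the two kinds (expected output shown in 
the comments):

import numpy as N
N.arange(10, dtype=N.uint).dtype.kind    # 'u'  -- unsigned integer
N.arange(10, dtype=N.int32).dtype.kind   # 'i'  -- signed integer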

-Travis




Re: [Numpy-discussion] Resolution of tickets.

2006-09-19 Thread Travis Oliphant
Charles R Harris wrote:
> Question,
>
> How does one mark a ticket as fixed? I don't see this field in the 
> ticket views I get, is there a list of accepted fixers?
>
You have to be logged in to the Trac site.  If you have SVN write access 
you should be able to log in.  Then there is a "resolution" section at 
the very bottom.

-Travis




Re: [Numpy-discussion] sorting -inf, nan, inf

2006-09-19 Thread Travis Oliphant
Tim Hochberg wrote:

>A. M. Archibald wrote:
>  
>
>>On 19/09/06, Tim Hochberg <[EMAIL PROTECTED]> wrote:
>>
>>  
>>
>>
>>>I'm not sure where the breakpoint is, but I was seeing failures for all
>>>three sort types with N as high as 1. I suspect that they're all
>>>broken in the presence of  NaNs.  I further suspect you'd need some
>>>punishingly slow n**2 algorithm to be robust in the presence of NaNs.
>>>
>>>  
>>>
>>Not at all. Just temporarily make NaNs compare greater than any other
>>floating-point value and use quicksort (say). You can even do this for
>>python lists without much trouble.
>>  
>>
>>
>I misspoke. What I meant here was keeping the behavior that people think 
>that we already have but don't: NaNs stay in place and everything is 
>sorted around them. And even that's not true, since you could just 
>record where the NaNs are, remove them, sort and put them back. What I 
>was really getting at was, that I'm guessing, and it's just a guess, 
>that (a) none of the fast sorting algorithms do anything sensible unless 
>special cased and (b) one could come up with a naive n**2 sort that does 
>do something sensible without special casing (where sensible means leave 
>the NaNs alone).
>  
>
>>That's actually a viable suggestion for numpy's sorting, although it
>>would be kind of ugly to implement: do a quick any(isnan(A)), and if
>>not, use the fast stock sorting algorithms; if there is a NaN
>>somewhere in the array, use a version of the sort that has a tweaked
>>comparison function so the NaNs wind up at the end and are easy to
>>trim off.
>>
>>But the current situation, silently returning arrays in which the
>>non-NaNs are unsorted, is really bad.
>>  
>>
>>
>If you're going to do isnan anyway, why not just raise an exception? An 
>array with NaNs in it can't be sorted by any common sense definition of 
>sorting. Any treatment of NaNs is going to be arbitrary, so we might as 
>well make the user specify what they want. "In the face of ambiguity, 
>refuse the temptation to guess" and all that.
>
>My favorite solution would be to make sort respect the invalid mode of 
>seterr/geterr. However at the moment it doesn't seem to (in beta4 at 
>least) but neither does add or multiply so those probably need to be 
>looked at again
>  
>
The geterr/seterr stuff changes how IEEE hardware flags are handled in 
ufuncs.  Currently they are not even looked at elsewhere.  Are you 
saying that add and multiply don't respect the invalid flag?   If so, 
then this might be hardware related.  Does the IEEE invalid hardware 
flag get raised on multiplication by nan, or only on creation of nan?   
All the seterr/geterr stuff relies on the hardware flags.   We don't do 
any other checking.
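
For illustration, a minimal sketch of how the seterr 'invalid' mode is meant 
to work for ufuncs (assuming the hardware flags behave as described above):

import numpy as N

old = N.seterr(invalid='raise')           # make 'invalid' operations raise
try:
    N.array([N.inf]) - N.array([N.inf])   # inf - inf produces nan (invalid)
except FloatingPointError:
    print 'invalid operation trapped'
N.seterr(**old)                           # restore the previous settings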


-Travis




Re: [Numpy-discussion] arr.dtype.kind is 'i' for dtype=unit !?

2006-09-19 Thread Travis Oliphant
Sebastian Haase wrote:

>OK - I'm really sorry !!
>I also get 'u' -- I had a typo there ... 
>
>But what is the complete list of kind values ?
>  
>
It's in the array interface specification:

http://numpy.scipy.org/array_interface.shtml
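
For quick reference, a small check of some of the kind codes defined there 
(this list is from the spec and is not exhaustive: 'b' boolean, 'i' signed 
integer, 'u' unsigned integer, 'f' float, 'c' complex, 'S' string, 'U' 
unicode, 'V' void/record, 'O' object):

import numpy as N
[N.dtype(t).kind for t in (N.bool_, N.int32, N.uint32, N.float64, N.complex128)]
# -> ['b', 'i', 'u', 'f', 'c']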

-Travis




Re: [Numpy-discussion] please change mean to use dtype=float

2006-09-19 Thread Travis Oliphant
Sebastian Haase wrote:

>Hello all,
>I just had someone from my lab coming to my desk saying:
>"My god - SciPy is really stupid 
>An array with only positive numbers claims to have a negative mean !! "?
>
>  
>
>I was asking about this before ... the reason was of course that her array was 
>of dtype int32 and had many large values  to cause an overflow (wrap 
>around) .
>
>Now that the default for axis is None (for all functions having an axis 
>argument),
>can we please change dtype to default to float64 !?
>  
>

The default is float64 now (as long as you are not using 
numpy.oldnumeric). 

I suppose more appropriately, we could reduce over float for integer 
data-types when calculating the mean as well (since a floating point is 
returned anyway).
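
For example, a minimal sketch of the integer case (values chosen so that an 
int32 accumulator would wrap around):

import numpy as N

a = N.array([2**30, 2**30, 2**30, 2**30], dtype=N.int32)
a.mean()                   # 1073741824.0 -- accumulated in float64
a.mean(dtype=N.float32)    # an explicitly requested (lower-precision) accumulator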


-Travis




Re: [Numpy-discussion] Division by zero doesn't raise exception in the integer case.

2006-09-19 Thread Travis Oliphant
Charles R Harris wrote:

> Travis,
>
> Is this intentional?
>
> In [77]: arange(5, dtype=int)/0
> Out[77]: array([0, 0, 0, 0, 0])
>
> It looks deliberate because all zeros are returned, but it might be 
> better if it raised an exception.


It is deliberate.   Numarray introduced it (the only difference being 
that by default NumPy has division-by-zero errors turned off).  It's tied 
to the way floating-point division-by-zero is handled.   There is a 
valid argument for having a separate integer-division flag so that you 
can raise exceptions for integer division but not for floating-point 
division.  I'm open to that change for 1.0rc1.

-Travis




Re: [Numpy-discussion] max argmax combo

2006-09-19 Thread Travis Oliphant
Charles R Harris wrote:

>
>
> On 9/18/06, *Bill Baxter* <[EMAIL PROTECTED] 
> > wrote:
>
> On 9/19/06, Charles R Harris <[EMAIL PROTECTED]
> > wrote:
> > On 9/18/06, Bill Baxter <[EMAIL PROTECTED]
> > wrote:
> > > I find myself often wanting both the max and the argmax of an
> array.
> > > (And same for the other arg* functions)
>
> > > You have to do something like
> > > a = rand(10,5)
> > > imax = a.argmax(axis=0)
> > > vmax = a[(imax, range(5))]
> > >
> > I don't generally like overloading return values, the function
> starts to
> > lose its definition and becomes a bit baroque where simply
> changing a
> > keyword value can destroy the viability of the following code.
>
> Agreed.  Seems like the only justification is if you get multiple
> results from one calculation but only rarely want the extra values.
> It doesn't make sense to always return them, but it's also not worth
> making a totally different function.
>
>
> > But I can see the utility of what you want. Hmm,  this problem
> is not unique to argmax.
> > Maybe what we need is a general way to extract values, something
> like
> >
> > extract(a, imax, axis=0)
> >
> > to go along with all the single axis functions.
>
> Yes, I think that would be easier to remember.
>
> It should also work for the axis=None case.
>   imax = a.argmax(axis=None)
>   v = extract(a, imax, axis=None)
>
>
> It shouldn't be too difficult to jig something up given all the 
> example code. I can do that, but I would like more input first. The 
> questions I have are these.
>
> 1) Should it be done?
> 2) Should it be a method? (functions being somewhat deprecated)
> 3) What name should it have?
>
> I think Travis will have to weigh in on this. IIRC, he felt that the 
> number of methods was getting out of hand.


I can support adding a *function* that does both.   It can't be named 
extract (that already exists).   There should be one for all the 
"arg"-like functions.

If somebody doesn't add it before 1.0 final, it can wait for 1.0.1
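
For reference, a minimal sketch of the workaround discussed above, which 
recovers the max values from the argmax result without a second reduction:

import numpy as N

a = N.random.rand(10, 5)
imax = a.argmax(axis=0)                  # indices of the maxima, shape (5,)
vmax = a[imax, N.arange(a.shape[1])]     # corresponding values, shape (5,)
assert (vmax == a.max(axis=0)).all()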

-Travis




Re: [Numpy-discussion] Division by zero doesn't raise exception in the integer case.

2006-09-19 Thread Travis Oliphant
Charles R Harris wrote:

> Travis,
>
> Is this intentional?
>
> In [77]: arange(5, dtype=int)/0
> Out[77]: array([0, 0, 0, 0, 0])
>
> It looks deliberate because all zeros are returned, but it might be 
> better if it raised an exception.


As mentioned before, we translate integer division errors into 
floating-point errors and use the same hardware trapping to trap them if 
the user requests it.  Simulating an "integer-division-by-zero" 
hardware flag is not trivial, as we would have to manage context 
switching ourselves.   So, at least for 1.0, integer and floating-point 
division by zero are going to be handled the same.
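
For illustration, a minimal sketch of trapping it via the shared 'divide' 
error state (assuming the hardware trapping works as described above):

import numpy as N

old = N.seterr(divide='raise')     # request an exception instead of zeros
try:
    N.arange(5, dtype=int) / 0
except FloatingPointError:
    print 'integer division by zero trapped'
N.seterr(**old)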

-Travis




Re: [Numpy-discussion] please change mean to use dtype=float

2006-09-19 Thread Travis Oliphant
Sebastian Haase wrote:

>On Tuesday 19 September 2006 15:48, Travis Oliphant wrote:
>  
>
>>Sebastian Haase wrote:
>>
>>
>
>  
>
>>>can we please change dtype to default to float64 !?
>>>  
>>>
>>The default is float64 now (as long as you are not using
>>numpy.oldnumeric).
>>
>>I suppose more appropriately, we could reduce over float for integer
>>data-types when calculating the mean as well (since a floating point is
>>returned anyway).
>>
>>
>>
>
>Is mean() now always "reducing over" float64? 
>The svn note """Log:
>Fix mean, std, and var methods so that they reduce over double data-type with 
>integer inputs.
>"""
>makes it sound that a float32 input stays float32?
>  
>
Yes, that is true.  Only integer inputs are changed because you are 
going to get a floating point output anyway.

>For mean calculation this might introduce large errors - I usually would 
>require double-precision for *any*  input type ...
>  
>
Of course.  The system is not fool-proof.  I hesitate to arbitrarily 
change this.  The advantage of using single-precision calculation is 
that it is faster.  We do rely on the user who expressly requests these 
things to be aware of the difficulties.
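
For example, a minimal sketch of the distinction (the default accumulator 
follows the input type for floats; dtype overrides it):

import numpy as N

x = N.ones(10, dtype=N.float32)
x.mean()                   # computed with a float32 accumulator by default
x.mean(dtype=N.float64)    # explicitly request double-precision accumulation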

>(don't know how to say this for complex types !? Are here real and imag 
>treated separately / independently ?)
>  
>

There is a complex add performed at a low-level as two separate adds.  
The addition is performed in the precision requested.

-Travis




Re: [Numpy-discussion] please change mean to use dtype=float

2006-09-19 Thread Travis Oliphant
Charles R Harris wrote:
>
> Speed depends on the architecture. Float is a trifle slower than double 
> on my Athlon64, but faster on a PPC750. I don't know about other 
> machines. I think there is a good argument for accumulating in double 
> and converting to float for output if required.

Yes there is.  It's just not what NumPy ever does, so it would be an 
exception in this case and would need a more convincing argument in my 
opinion.  You can always specify the accumulation type yourself with 
the dtype argument.  We are only talking about what the default should 
be.

-Travis




Re: [Numpy-discussion] please change mean to use dtype=float

2006-09-19 Thread Travis Oliphant
Sebastian Haase wrote:
> I still would argue that getting a "good" (smaller rounding errors) answer 
> should be the default -- if speed is wanted, then *that* could be still 
> specified by explicitly using dtype=float32  (which would also be a possible 
> choice for int32 input) . 
>   
So you are arguing for using long double then ;-)

> In image processing we always want means to be calculated in float64 even 
> though input data is always float32 (if not uint16).
>
> Also it is simpler to say "float64 is the default" (full stop.) - instead 
>   
> "float64 is the default unless you have float32" 
>   
"the type you have is the default except for integers".  Do you really 
want float64 to be the default for float96?

Unless we are going to use long double as the default, I'm not 
convinced that we should special-case the "double" type.

-Travis




Re: [Numpy-discussion] Possible inconsisteny in enumerated type mapping

2006-09-20 Thread Travis Oliphant
Francesc Altet wrote:
> Hi,
>
> I'm sending a message here because discussing this in the bug tracker is 
> not very comfortable. This is my last try before giving up, so don't be 
> afraid ;-)
>
> In bug #283 (http://projects.scipy.org/scipy/numpy/ticket/283) I complained 
> about the fact that a numpy.int32 is being mapped in NumPy to NPY_LONG 
> enumerated type and I think I failed to explain well why I think this is a 
> bad thing. Now, I'll try to expose an (real life) example, in the hope that 
> things will make clearer.
>
> Realize that you are coding a C extension that receives NumPy arrays and 
> saves them on disk for later retrieval. Realize also that a user is using 
> your extension on a 32-bit platform. If she passes this extension an array 
> of type 'int32', and the extension tries to read the enumerated type (using 
> array.dtype.num), it will get NPY_LONG.  So, the extension uses this code 
> (NPY_LONG) to save the type (together with the data) on disk. Now, she sends 
> this data file to a teammate who works on a 64-bit machine and tries to read 
> the data using the same extension. The extension would see that the data is 
> of NPY_LONG type and would try to deserialize it, interpreting data elements 
> as 64-bit integers (the size of a NPY_LONG on 64-bit platforms), 
> and this is clearly wrong.
>
>   

In my view, this "real-life" example points to a flaw in the coding 
design that will not be fixed by altering what numpy.int32 maps to under 
the covers.   It is wrong to use a code for the platform c data-type 
(NPY_LONG) as a key to understand data written to disk.   This is and 
always has been a bad idea.  No matter what we do with numpy.int32 this 
can cause problems.  Just because a lot of platforms think an int is 
32-bits does not mean all of them do.  C gives you no such guarantee.   

Notice that pickling of NumPy arrays does not store the "enumerated 
type" as the code.  Instead it stores the data-type object (which itself 
pickles using the kind and element size so that the correct data-type 
object can be reconstructed on the other end --- if it is available at all).

Thus, you should not be storing the enumerated type but instead 
something like the kind and element-size.  
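
For illustration, a minimal C sketch of storing a platform-independent 
description instead of the enumerated code (the function name is made up):

#include <numpy/arrayobject.h>

/* Record kind + itemsize + byte order; these mean the same thing on every
   platform, unlike NPY_LONG. */
void describe_dtype(PyArrayObject *arr, char *kind, int *itemsize, char *byteorder)
{
    PyArray_Descr *d = PyArray_DESCR(arr);
    *kind = d->kind;            /* 'i', 'u', 'f', ... */
    *itemsize = d->elsize;      /* e.g. 4 for int32 on any platform */
    *byteorder = d->byteorder;  /* '<', '>', '=', or '|' */
}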

> Besides this, if your C extension uses a C library that is meant to save 
> data in a platform-independent way (say, HDF5), then having a NPY_LONG will 
> not automatically say which C library datatype it maps to, because the 
> library only has datatypes that are of a definite size on all platforms. So, 
> this is a second problem.
>
>   
Making sure you get the correct data-type is why there are NPY_INT32 and 
NPY_INT64 enumerated types.   You can't code using NPY_LONG and expect 
it will give you the same sizes when moving from 32-bit and 64-bit 
platforms.   That's a problem that has been fixed with the bitwidth 
types.  I don't understand why you are using the enumerated types at all 
in this circumstance.


> Of course there are workarounds for this, but my impression is that they can 
> be avoided with a more sensible mapping between NumPy Python types and NumPy 
> enumerated types, like:
>
> numpy.int32 --> NPY_INT
> numpy.int64 --> NPY_LONGLONG
> numpy.int_  --> NPY_LONG
>
> in all platforms, avoiding the current situation of ambiguous mapping between 
> platforms.
>   

The problem is that C gives us this ambiguous mapping.  You are asking 
us to pretend it isn't there because it "simplifies" a hypothetical case 
so that poor coding practice can be allowed to work in a special case.  
I'm not convinced.

This persists the myth that C data-types have a defined length.  This is 
not guaranteed.  The current system defines data-types with a guaranteed 
length.   Yes, there is ambiguity as to which is "the" underlying c-type 
on certain platforms, but if you are running into trouble with the 
difference, then you need to change how you are coding because you would 
run into trouble on some combination of platforms even if we made the 
change.

Basically, you are asking to make a major change, and at this point I'm 
very hesitant to make such a change without a clear and pressing need 
for it.  Your hypothetical example does not rise to the level of "clear 
and pressing need."  In fact, I see your proposal as a step backwards. 

Now, it is true that we could change the default type that gets first 
grab at int32 to be int (instead of the current long) --- I could see 
arguments for that.  But, since the choice is ambiguous and the Python 
integer type is the c-type long, I let long get first dibs on everything 
as this seemed to work better for code I was wrapping in the past.   I 
don't see any point in changing this choice now and risk code breakage, 
especially when your argument is that it would let users think that a c 
int is always 32-bits.


Best regards,

-Travis





Re: [Numpy-discussion] ufunc.reduce and conversion

2006-09-20 Thread Travis Oliphant
A. M. Archibald wrote:
> Hi,
>
> What are the rules for datatype conversion in ufuncs? Does ufunc(a,b)
> always yield the smallest type big enough to represent both a and b?
> What is the datatype of ufunc.reduce(a)?
>   
This is an unintended consequence of making add.reduce() reduce over at 
least a "long".   I've fixed the code so that only add.reduce and 
multiply.reduce alter the default reducing data-type to be long.  All 
other cases use the data-type of the array as the default.

Regarding your other question on data-type conversion in ufuncs:

1)  If you specify an output array, then the result will be cast to the 
output array data-type.

2)  The actual computation takes place using a data-type that all 
(non-scalar) inputs can be cast to safely (with the exception that we 
assume that long long integers can be "safely" cast to "doubles" even 
though this is not technically true).
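
For example, a minimal sketch of those defaults (expected dtypes noted in 
the comments):

import numpy as N

a = N.arange(5, dtype=N.int8)
N.add.reduce(a)                    # accumulates over at least a long
N.minimum.reduce(a)                # other ufuncs keep the array's int8
N.add.reduce(a, dtype=N.float64)   # an explicit dtype always wins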

-Travis






Re: [Numpy-discussion] immutable arrays

2006-09-20 Thread Travis Oliphant
Martin Wiechert wrote:
> Hi list,
>
> I just stumbled accross NPY_WRITEABLE flag.
> Now I'd like to know if there are ways either from Python or C to make an 
> array temporarily immutable.
>   
Just setting the flag

Python:

  make immutable:
  a.flags.writeable = False

  make mutable again:
  a.flags.writeable = True


C:

  make immutable:
  a->flags &= ~NPY_WRITEABLE

  make mutable again:
  a->flags |= NPY_WRITEABLE


In C you can play with immutability all you want.  In Python you can 
only make something writeable if you either 1) own the data or 2) the 
object that owns the data is itself "writeable"


-Travis




[Numpy-discussion] I've just tagged the tree for 1.0rc1

2006-09-20 Thread Travis Oliphant

There is now a 1.0rc1 tag on the NumPy SVN tree.  I've confirmed it 
builds and passes all tests on my Linux box for Python2.3-Python2.5

-Travis




Re: [Numpy-discussion] immutable arrays

2006-09-21 Thread Travis Oliphant
Martin Wiechert wrote:
> Thanks Travis.
>
> Do I understand correctly that the only way to be really safe is to make a 
> copy and not to export a reference to it?
> Because anybody having a reference to the owner of the data can override the 
> flag?
>   
No, that's not quite correct.   Of course in C, anybody can do anything 
they want to the flags.

In Python, only the owner of the object itself can change the writeable 
flag once it is set to False.   So, if you only return a "view" of the 
array (a.view())  then the Python user will not be able to change the 
flags.

Example:

a = array([1,2,3])
a.flags.writeable = False

b = a.view()

b.flags.writeable = True   # raises an error.

c = a
c.flags.writeable = True  # can be done because c is a direct alias to a.

Hopefully, that explains the situation a bit better.

-Travis











Re: [Numpy-discussion] Tests and code documentation

2006-09-21 Thread Travis Oliphant
Charles R Harris wrote:
> Travis,
>
> A few questions.
>
> 1) I can't find any systematic code testing units, although there seem 
> to be tests for regressions and such. Is there a place we should be 
> putting such tests?
All tests are placed under the tests directory of the corresponding 
sub-package.  They will only be picked up by .test(level < 10) if the 
file name starts with the test_ prefix; .test(level > 10) should pick up 
all test files.   If you want to name something differently but still have 
it run at a test level < 10, then you need to run the test from one of the 
other test files that will be picked up (test_regression.py and 
test_unicode.py are doing that, for example). 
>
> 2) Any plans for code documentation? I documented some of my stuff 
> with doxygen markups and wonder if we should include a Doxyfile as 
> part of the package.
I'm not familiar with Doxygen, but would welcome any improvements to the 
code documentation.
>
> 3) Would you consider breaking out the Converters into a separate .c 
> file for inclusion? The code generator seems to take care of the ordering.
You are right that it doesn't matter in which order the API subroutines are 
placed.  I'm not opposed to more breaking up of the .c files, as long as 
it is clear where things will be located.  The #include strategy is 
necessary to get it all in one Python module, but having smaller .c 
files usually makes for faster editing.   It's the arrayobject.c file 
that is "too large" IMHO, however.   That's where I would look for ways 
to break it up.

The iterobject and the data-type object could be taken out, for example.


-Travis





Re: [Numpy-discussion] Question about recarray

2006-09-21 Thread Travis Oliphant
Lionel Roubeyrie wrote:
> Hi all,
> Is it possible to put masked values into recarrays? I need an array with 
> heterogeneous types of data (datetime objects in the first col, all others 
> are float) but with missing values in some records. For the moment, I haven't 
> found any solution for that. 
Either use "nans" or "inf" for missing values or use the masked array 
object with a complex data-type.   You don't need to use a recarray 
object to get "records".  Any array can have "records".  Therefore, you 
can have a masked array of "records" by creating an array with the 
appropriate data-type.  

It may also be possible to use a recarray as the "array" for the masked 
array object because the recarray is a sub-class of the array.
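
For illustration, a minimal sketch of a plain array with a record data-type, 
using nan to mark missing floats (the field names here are made up; the 
object field could hold datetime objects):

import numpy as N

dt = N.dtype([('when', 'O'), ('value', 'f8')])
a = N.array([(None, 1.5), (None, N.nan)], dtype=dt)
N.isnan(a['value'])     # -> [False, True] marks the missing measurements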

> I have tried with arrays of dtype=object, but I 
> have problem when I want to compute min, max, ... with an error like:
> TypeError: function not supported for these types, and can't coerce safely to 
> supported types.
>   
It looks like the max and min functions are not supported for Object 
arrays.

import numpy as N
N.maximum.types

does not include Object arrays. 

It probably should.

-Travis




Re: [Numpy-discussion] arr.dtype.kind is 'i' for dtype=unit !?

2006-09-21 Thread Travis Oliphant
Matthew Brett wrote:
> Hi,
>
>   
>> It's in the array interface specification:
>>
>> http://numpy.scipy.org/array_interface.shtml
>> 
>
> I was interested in the 't' (bitfield) type - is there an example of
> usage somewhere?
>   
No, it's not implemented in NumPy.  It's just part of the array 
interface specification for completeness.

-Travis





Re: [Numpy-discussion] Question about recarray

2006-09-21 Thread Travis Oliphant
Lionel Roubeyrie wrote:
> find any solution for that. I have tried with arrays of dtype=object, but I 
> have problem when I want to compute min, max, ... with an error like:
> TypeError: function not supported for these types, and can't coerce safely to 
> supported types.
>   
I just added support for min and max methods of object arrays, by adding 
support for Object arrays to the minimum and maximum functions.

-Travis




Re: [Numpy-discussion] please change mean to use dtype=float

2006-09-21 Thread Travis Oliphant
David M. Cooke wrote:

>
>Conclusions:
>
>- If you're going to calculate everything in single precision, use Kahan
>summation. Using it in double-precision also helps.
>- If you can use a double-precision accumulator, it's much better than any of
>the techniques in single-precision only.
>
>- for speed+precision in the variance, either use Kahan summation in single
>precision with the two-pass method, or use double precision with simple
>summation with the two-pass method. Knuth buys you nothing, except slower
>code :-)
>
>After 1.0 is out, we should look at doing one of the above.
>  
>

+1
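
For reference, a minimal pure-Python sketch of the Kahan (compensated) 
summation mentioned above; the real version discussed here would of course 
be written in C inside the reduction loop:

import numpy as N

def kahan_sum(x):
    s = N.float32(0.0)      # single-precision running sum
    c = N.float32(0.0)      # compensation for lost low-order bits
    for v in N.asarray(x, dtype=N.float32):
        y = v - c
        t = s + y
        c = (t - s) - y     # recover what was lost in s + y
        s = t
    return s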





Re: [Numpy-discussion] take behaviour has changed

2006-09-22 Thread Travis Oliphant
Christian Kristukat wrote:
> Bill Baxter  gmail.com> writes:
>
>   
>> Yep, check the release notes:
>> http://www.scipy.org/ReleaseNotes/NumPy_1.0
>> search for 'take' on that page to find out what others have changed as well.
>> --bb
>> 
>
> Ok. Does axis=None then mean that take(a, ind) operates on the flattened 
> array?
> This is at least what it seems to be. I noticed that the ufunc behaves
> differently. a.take(ind) and a.take(ind, axis=0) behave the same, so the 
> default
> argument to axis is 0 rather than None.
>   

What do you mean?  There is no "ufunc" take.  There is a function take 
that just calls the method.  The default arguments for all functions 
that match methods are the same as the methods (which means axis=None).  
However, in oldnumeric (which pylab imports, by the way), the default 
axes are the same as they were in Numeric. 

Also, if you have a 1-d array, then the axis argument doesn't make any 
difference.   Please clarify what you are saying to be sure we don't 
have a bug floating around.



-Travis




Re: [Numpy-discussion] Putmask/take ?

2006-09-22 Thread Travis Oliphant
Stefan van der Walt wrote:
> On Fri, Sep 22, 2006 at 02:17:57AM -0500, Robert Kern wrote:
>   
>>> According to the putmask docstring:
>>>
>>> a.putmask(values, mask) sets a.flat[n] = v[n] for each n where
>>> mask.flat[n] is true. v can be scalar.
>>>
>>> This would mean that 'w' is not of the right length. 
>>>   
>> There are 4 true values in m and 4 values in w. What's the wrong
>> 
> length?
>
> The way I read the docstring, you use putmask like this:
>
> In [4]: x = N.array([1,2,3,4])
>
> In [5]: x.putmask([4,3,2,1],[1,0,0,1])
>
> In [6]: x
> Out[6]: array([4, 2, 3, 1])
>
>   
>>
>> Out[9] and Out[18] should have been the same, but elements 6 and 9 are 
>> flipped. 
>> It's pretty clear that this is a bug in .putmask().
>> 
>
> Based purely on what I read in the docstring, I would expect the above to do
>
> x[0] = w[0]
> x[6] = w[6]
> x[9] = w[9]
> x[11] = w[11]
>
> Since w is of length 4, you'll probably get indices modulo 4:
>
> w[6] == w[2] == -3
> w[9] == w[1] == -2
> w[11] == w[3] == -4
>
> Which seems to explain what you are seeing.
>   

Yes, this does explain what you are seeing.  It is the behavior of 
Numeric's putmask (where this method came from).   It does seem 
counter-intuitive, and I'm not sure what to do with it.  In some sense 
putmask should behave the same as x[m] = w.   But, on the other hand, 
was anybody actually using the "modular" indexing "feature" of putmask?
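
For illustration, a minimal sketch of the difference (values chosen only to 
make the modular behavior visible):

import numpy as N

x = N.arange(12)
m = (x % 3 == 0)                  # True at positions 0, 3, 6, 9
w = N.array([-1, -2, -3, -4])

y = x.copy()
y[m] = w                          # fancy assignment consumes w in order
z = x.copy()
N.putmask(z, m, w)                # uses w[n % len(w)] at each True position n,
                                  # so the slots get w[0], w[3], w[2], w[1]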

Here are our options:

1) "fix-it" and risk breaking code for people who used putmask and the 
modular indexing "feature," 
2) Get rid of it as a method (and keep it as a function so that 
oldnumeric can use it.)
3) Keep everything the way it is.

-Travis




Re: [Numpy-discussion] Always using scientific notation to print

2006-09-22 Thread Travis Oliphant
Brian Granger wrote:

>>You can write a function that formats arrays how you like them and then tell
>>ndarray to use it for __str__ or __repr__ using numpy.set_string_function().
>>
>>
>
>That seems to be a little low level for most users.  Would it be hard
>to have the possibility of specifying a format string?
>  
>
No, that wouldn't be hard.  If you can wait several weeks, add a ticket, 
or better, dig in to the current printing function and add it to the 
set_printoptions code.
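
In the meantime, a minimal sketch using set_string_function (the formatting 
choices here are just an example):

import numpy as N

def my_str(a):
    return N.array2string(a, precision=3, suppress_small=False)

N.set_string_function(my_str, repr=0)   # use it for str() / print
print N.linspace(0, 1, 5)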

-Travis




Re: [Numpy-discussion] Putmask/take ?

2006-09-22 Thread Travis Oliphant
Travis Oliphant wrote:

>Yes, this does explain what you are seeing.  It is the behavior of 
>Numeric's putmask (where this method came from).   It does seem 
>counter-intuitive, and I'm not sure what to do with it.  In some sense 
>putmask should behave the same as x[m] = w.   But, on the other hand, 
>was anybody actually using the "modular" indexing "feature" of putmask?
>
>Here are our options:
>
>1) "fix-it" and risk breaking code for people who used putmask and the 
>modular indexing "feature," 
>2) Get rid of it as a method (and keep it as a function so that 
>oldnumeric can use it.)
>3) Keep everything the way it is.
>  
>

O.K.  putmask is keeping the same behavior (it is intentional behavior 
and has been for a long time).  However, because of the confusion, it is 
being removed as a method (I know this is late, but it's a little-used 
method and was only added by NumPy).  Putmask will be a function.

Also, the remaining put method will have its arguments switched to 
match the function.  IIRC the original switch was made to accommodate 
masked array methods of the same name.  But, this does not really help 
anyway since arr.put for a masked array doesn't even take an indices 
argument, so confusing users who don't even use masked arrays seems 
pointless.

Scream now if you think I'm being unreasonable.  This will be in 1.0rc2  
(yes we will need rc2)

The original poster has still not explained why

a[mask] = values

does not work suitably.


-Travis




Re: [Numpy-discussion] Rationale for atleast_3d

2006-09-22 Thread Travis Oliphant
Bill Baxter wrote:

>26 weeks, 4 days, 2 hours and 9 minutes ago, Zdeněk Hurák asked why
>atleast_3d acts the way it does:
>http://article.gmane.org/gmane.comp.python.numeric.general/4382/match=atleast+3d
>
>He doesn't seem to have gotten any answers.  And now I'm wondering the
>same thing.  Anyone have any idea?
>  
>
This function came from scipy and was written by somebody at Enthought.  
I was hoping they would respond.   The behavior of atleast_3d does make 
sense in the context of atleast_2d and thinking of 3-d arrays as 
"stacks" of 2-d arrays where the stacks are in the last dimension.

atleast_2d converts 1-d arrays to 1xN  arrays

atleast_3d converts 1-d arrays to 1xNx1 arrays so that they can be 
"stacked" in the last dimension.  I agree that this isn't consistent 
with the general notion of "pre-pending" 1's to increase the 
dimensionality of the array.

However, array(a, copy=False, ndmin=3) will always produce arrays with 
the 1's at the beginning.  So atleast_3d is convenient if you like to think 
of 3-d arrays as stacks of 2-d arrays where the last axis is the 
"stacking" dimension.

-Travis




Re: [Numpy-discussion] take behaviour has changed

2006-09-22 Thread Travis Oliphant
Christian Kristukat wrote:
>>> Ok. Does axis=None then mean, that take(a, ind) operates on the
>>> flattened array?
>>>   
Yes, that is correct.

> Sorry, I never really read about what are ufuncs. I thought those are class
> methods of the ndarray objects... Anyway, I was refering to the following
> difference:
>   
> In [7]: a
> Out[7]:
> array([[ 0,  1,  2,  3,  4,  5],
>[ 6,  7,  8,  9, 10, 11]])
>
> In [8]: a.take([0])
> Out[8]: array([[0, 1, 2, 3, 4, 5]])
>
> In [9]: take(a,[0])
> Out[9]: array([0])
>   
Doh!  That is a bug.  take(a, [0]) is correct; a.take([0]) is not.

-Travis




Re: [Numpy-discussion] dtype truth value

2006-09-23 Thread Travis Oliphant
Matthew Brett wrote:
> Hi,
>
> Forgive my ignorance, but why is this?
>
> In [1]:from numpy import *
>
> In [2]:if not dtype('...:print 'Truth value is no'
>...:
>...:
> Truth value is no
>   

Truth value of user-built Python objects is tested by looking at:

1) the __nonzero__ method, which is called to determine the truth value.
2) sequence or mapping behavior: the length is used.  If the length 
is greater than 0, the object is True; otherwise it is False.

For data-type objects, there is no __nonzero__ method, but they do have 
"mapping" behavior so that fields can be extracted from a data-type 
using mapping notation.   The "length" of the data-type is the number of 
defined fields.

Therefore, if the data-type has no defined fields, its length is 0 and 
its truth value is False.

So, you can think of the test as

if dtype(...):
   print "Data-type has fields:"
else:
   print "Data-type does not have fields:"


-Travis




Re: [Numpy-discussion] please restore the way dtype prints to original version

2006-09-25 Thread Travis Oliphant
Christopher Hanley wrote:
> Hi,
>
> Change set 3213 changed the data type printing with an array from 
> something like dtype=int64 to dtype='int64'.  Although this is a small 
> cosmetic change it has broken all of the doctests I have written for 
> numpy code. 
I was changing the way dtypes print and this was an unexpected 
consequence.   It has been restored.  Please try r3215

-Travis




Re: [Numpy-discussion] NumPy types --> Numeric typecodes map?

2006-09-25 Thread Travis Oliphant
Francesc Altet wrote:

>Hi,
>
>Anybody know if there is a map between NumPy types and Numeric
>typecodes? Something like 'typecodes' for numarray:
>  
>
How about

dtype(obj).char?

-Travis




Re: [Numpy-discussion] NumPy types --> Numeric typecodes map?

2006-09-25 Thread Travis Oliphant
Francesc Altet wrote:

>El dl 25 de 09 del 2006 a les 11:08 -0600, en/na Travis Oliphant va
>escriure:
>  
>
>>Francesc Altet wrote:
>>
>>
>>
>>>Hi,
>>>
>>>Anybody know if there is a map between NumPy types and Numeric
>>>typecodes? Something like 'typecodes' for numarray:
>>>  
>>>

Oh, you mean actual Numeric typecodes, not Numeric-like typecodes :-)

dtype(obj).char will not work for the Numeric typecodes that changed.  My 
suggestion is to set up a dictionary-like object which uses dtype(obj).char 
for all but the ones that changed (a small sketch of such a wrapper follows 
the table below).   See the core/numerictypes.py module for dictionary-like 
objects.   Perhaps this would be a good thing to add to numpy/oldnumeric.  
The typecodes that changed (Numeric --> NumPy) are:


'b' --> 'B'
'1' --> 'b'
's' --> 'h'
'w' --> 'H'
'u' --> 'I'
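
A minimal sketch of such a wrapper, going from a NumPy type to the old 
Numeric typecode (the helper name and the inverted table are illustrative, 
not an existing API):

import numpy as np

# NumPy char -> old Numeric typecode, for the codes that changed
_changed = {'B': 'b', 'b': '1', 'h': 's', 'H': 'w', 'I': 'u'}

def numeric_typecode(obj):
    c = np.dtype(obj).char
    return _changed.get(c, c)

print(numeric_typecode(np.uint8))    # 'b'  (Numeric unsigned byte)
print(numeric_typecode(np.float64))  # 'd'  (unchanged)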


-Travis




Re: [Numpy-discussion] typeDict vs sctypeDict

2006-09-26 Thread Travis Oliphant
Francesc Altet wrote:

>Hi,
>
>I've just noted this:
>
>In [12]: numpy.typeNA is numpy.sctypeNA
>Out[12]: True
>
>In [13]: numpy.typeDict is numpy.sctypeDict
>Out[13]: True
>
>Why these aliases? Are sc* replacing their non-sc-prefixed counterparts?
>I'd like to know this so that I can avoid names that could be deprecated in a 
>future.
>  
>
Yes.  The problem is that the type dictionary is really just a scalar-type 
dictionary.  It was created back when scalar types were synonymous 
with data-types.  When the data-types became a separate object, the 
naming became wrong.  

-Travis




Re: [Numpy-discussion] Default values in scalar constructors

2006-09-26 Thread Travis Oliphant
Francesc Altet wrote:

>Hi,
>
>Is there any reason why the next works for some types:
>
>In [26]: numpy.int32()
>Out[26]: 0
>
>In [27]: numpy.float64()
>Out[27]: 0.0
>
>but don't for the next others?
>
>In [28]: numpy.int16()
>---
>TypeError                             Traceback (most recent call last)
>
>/home/faltet/python.nobackup/numpy/ in ()
>
>TypeError: function takes exactly 1 argument (0 given)
>
>  
>

I suppose because int() and float() work.  I really didn't know that.  
If that is the case, then we should have all the scalar objects return 
default values.   Please file a ticket.

-Travis




Re: [Numpy-discussion] reload and f2py

2006-09-26 Thread Travis Oliphant
Fernando Perez wrote:

>On 9/26/06, George Nurser <[EMAIL PROTECTED]> wrote:
>  
>
>>I'm running Python 2.3.5 with recent SVN f2py.
>>
>>Suppose I import an extension I have built with f2py. Then, if I edit
>>the fortran and recompile the extension, I cannot use reload to use
>>the modified version within the same Python session.
>>
>>I believe this is an problem with Python, that reload doesn't work
>>with externally compiled extensions.
>>
>>
>
>As far as I know, you are correct.
>  
>
On many systems there is an "unload" command similar to a shared-library 
load command which Python uses to load shared libraries.   But, this is 
not performed by Python.  I've wondered for awhile if a suitable use of 
such a command in an extension module would allow "reloading" of shared 
libraries.

-Travis




Re: [Numpy-discussion] More about data types in NumPy

2006-09-27 Thread Travis Oliphant
Francesc Altet wrote:
> Hello,
>
> Sorry for being insistent, but I recognize that I'm having a bad time
> with NumPy data type rational. Is there an explanation for this?:
>
> >>> numpy.dtype('i4').type
> <type 'numpy.int32'>
> >>> numpy.dtype('int32').type
> <type 'numpy.int32'>
> >>> numpy.dtype('i4').type == numpy.dtype('int32').type
> True
>
> So far so good, but is the next the intended behaviour?
>
> >>> numpy.typeDict['i4']
> >>> numpy.typeDict['int32']
> >>> numpy.typeDict['i4'] == numpy.typeDict['int32']
>
No, this isn't correct behavior.   This time you've caught an actual 
problem :-)

The typeDict (actually the sctypeDict --- a scalar-type dictionary that 
returns a scalar type given a string, not a data-type object) is only used 
if no other conversion can be found for the object.  It used to be much more 
useful before the data-type objects were formalized last year.

-Travis





Re: [Numpy-discussion] More about data types in NumPy

2006-09-27 Thread Travis Oliphant
Francesc Altet wrote:
> Hello,
>
> Sorry for being insistent, but I recognize that I'm having a bad time
> with NumPy data type rational. Is there an explanation for this?:
>
>   
You're actually talking about the array scalar types, not the data-type 
objects. 

But, more to the point

> >>> numpy.typeDict['i4']
> >>> numpy.typeDict['int32']
> >>> numpy.typeDict['i4'] == numpy.typeDict['int32']
> False
>
>   
On my 32-bit system I get:

>>> numpy.sctypeDict['i4'] is numpy.sctypeDict['int32']
True


Hmm. I don't know why you are getting a different result, but 
perhaps it has to do with the fact that the character alias ('i4') is 
not getting set to the same type as 'int32'.  I just fixed that.   That 
should fix the problem.

-travis






Re: [Numpy-discussion] should I replace asarray with asanyarray in my code ?

2006-09-27 Thread Travis Oliphant
Sebastian Haase wrote:
> Hi,
> This is a vaguely formulated question ...
> When I work with memmap'ed files/arrays I have a derived class 
> that adds special attributes to the array class (referring to the MRC image 
> file format used in medical / microscopy imaging)
>
> What are the pros and cons for asarray() vs. asanyarray()
>
> One obvious con for asanyarray is that its longer and asarray is what I have 
> been using for the last few years  ;-)
>   

asarray() guarantees you have a base-class array.  Thus, you are not 
going to be thwarted by an re-definitions of infix operators, or other 
changed methods or attributes which you might use in your routine.

asanyarray() allows a simple way of making sure your function returns 
any sub-class so that, for example, matrices are passed seamlessly 
through your function (matrix in and matrix out).

However, a big drawback of asanyarray is that you must be sure that the 
way your function is written will not get confused by how a sub-class 
may override the array methods and attributes.   This significantly 
limits the application of asanyarray in my mind, as it is pretty 
difficult to predict what a sub-class *might* do to its methods 
(including the special methods implementing the infix operators). 

A better way to write a function that passes any sub-class is to use 
asarray() so you are sure of the behavior of all methods and "infix" 
operators and then use the __array_wrap__ method of the actual input 
arguments (using __array_priority__ to choose between competing input 
objects).  I expect that a decorator that automates this process will be 
added to NumPy eventually.   Several examples have already been posted 
on this list.

After getting the array result, you call the stored __array_wrap__ 
function which will take a base-class ndarray and return an object of 
the right Python-type (without copying data if possible).  This is how 
the ufuncs work and why they can take sub-classes (actually anything 
with an __array__ method) and the same kind of object.
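
A minimal sketch of that pattern (the function and its normalization step 
are purely illustrative):

import numpy as np

def normalize(x):
    a = np.asarray(x)                    # guaranteed base-class behavior inside
    result = a / a.max()
    wrap = getattr(x, '__array_wrap__', None)
    return wrap(result) if wrap is not None else result

m = np.matrix([[1.0, 2.0], [3.0, 4.0]])
print(type(normalize(m)))                # the matrix sub-class comes back out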

-Travis




[Numpy-discussion] Question answered incorrectly at NumPy Tutorial

2006-09-27 Thread Travis Oliphant

During my NumPy Tutorial at the SciPy conference last month, somebody 
asked the question about the memory requirements of index arrays that I 
gave the wrong impression about.  Here is the context and the correct 
response that should alleviate concerns about large cross-product index 
arrays.

I was noting how copy-based (advanced) indexing using index arrays works 
in multiple-dimensions by creating an array of the same-shape of the 
input index arrays constructed by selecting the elements indicated by 
respective elements of the index arrays.

If a is 2-d, then

a[[10,12,14],[13, 15, 17]]

returns a 1-d array with elements

[a[10,13], a[12,15], a[14,17]].

This is *not* the cross-product that some would expect.  The 
cross-product can be generated using the ix_ function

a[ix_([10,12,14], [13,15,17])]

is equivalent to

a[[[10,10,10],[12,12,12],[14,14,14]], [[13,15,17],[13,15,17],[13,15,17]]]

which will return

[[a[10,13] a[10,15], a[10,17]],
 [a[12,13] a[12,15], a[12,17]],
 [a[14,13] a[14,15], a[14,17]]]

The concern mentioned at the conference was that the cross-product would 
generate large intermediate index arrays for large input arrays to ix_.  
At the time, I think I validated the concern.  However, the concern is 
unfounded.  This is because the cross product function does not actually 
create a large intermediate array, but uses the broadcasting 
implementation of indexing to generate the 2-d indexing array 
"on-the-fly" (much like ogrid and other tools in NumPy).

Notice:

ix_([10,12,14], [13,15,17])

(array([[10],
        [12],
        [14]]), array([[13, 15, 17]]))

The first indexing array is 3x1, while the second is 1x3.  The result 
array will be 3x3, but the 2-d indexing array is never actually stored.
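
A quick check of both forms (the 20x20 array is just an illustrative size):

import numpy as np

a = np.arange(20 * 20).reshape(20, 20)

# element-wise ("zip"-style) fancy indexing: one result per index pair
print(a[[10, 12, 14], [13, 15, 17]])          # [a[10,13] a[12,15] a[14,17]]

# cross-product indexing: the (3,1) and (1,3) index arrays broadcast to 3x3
print(a[np.ix_([10, 12, 14], [13, 15, 17])])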

This is just to set my mind at ease about possible mis-information I spread 
during the tutorial, and to give a little tutorial on advanced indexing.

Best,

-Travis







  





[Numpy-discussion] Release candidate 2.0 will come out mid-week next week

2006-09-27 Thread Travis Oliphant

Hi all,

I'd like to release numpy 1.0rc2 on about October 5 of next week.   
Then, the official release of numpy 1.0 should happen on Monday, October 
17.   Please try to get all fixes and improvements in before then.  
Backward-incompatible changes are not acceptable at this point (unless 
they are minor or actually bug-fixes). I think NumPy has been cooking 
long enough.  Any remaining problems can be fixed with maintenance 
releases.  When 1.0 comes out, we will make a 1.0 release branch where 
bug-fixes should go as well as on the main trunk (I'd love for a way to 
do that automatically).

There are lots of projects that need to start converting to NumPy 1.0 if 
we are going to finally have a "merged" community.  The actual release 
of NumPy 1.0 will indicate that we are now committing to stability. 
Thanks to all that have been contributing so much to the project.

-Travis




Re: [Numpy-discussion] how to compile numpy with visual-studio-2003?

2006-09-28 Thread Travis Oliphant
mg wrote:
> Hello,
>
> I just download the newly Python-2.5 and Numpy-1.0rc1 and all work fine 
> on Linux-x86 32 and 64bit platforms.
> Now, I try to compile the both distributions on WindowsXP with 
> VisualStudio2003. No problem to compile Python-2.5, but i have some 
> troubles with Numpy-1.0rc1 and I didn't find any help in the provided 
> setup.py. So, does someone can tell me how to do it?
>
>   
I don't use VisualStudio2003 on Windows to compile NumPy (I use mingw).  
Tim Hochberg once used a Microsoft compiler to compile a previous 
version of NumPy and some things had to be fixed to make it work.  I'm 
not sure if some incompatibilities have crept in since then or not.  
But, I'd sure like to resolve it if they have.

So, please post what problems you are having.  You may be the first 
person to try a Microsoft compiler with Python 2.5.

-Travis




Re: [Numpy-discussion] storage order and concatenate

2006-09-28 Thread Travis Oliphant
David Cournapeau wrote:
> Hi,
>
>
> What are the rules concerning storage with numpy ? 
The rule is that a numpy array has "strides" which specify how many 
bytes to skip to get to the next element in the array.   That's the 
internal model.  There are no hard and fast rules about storage order.  
Internally, C-order is as good as Fortran-order (except the array 
iterator gives special preference to C-order and all functions for which 
the order can be specified (like zeros) default to C-order).

Thus, the storage order is whatever the strides say it is.  Now, there 
are flags that keep track of whether or not the strides agree with the 2 
recognized special cases of "Fortran-order" (first-index varies the 
fastest) or "C-order" (last-index varies the fastest).  But, this is 
only for convenience.   Very few functions actually require a 
specification of storage order.  Those that allow it default to "C-order".

You can't think of a NumPy array has having a particular storage order 
unless you explicitly request it.  One of the most common ways that 
Fortran-order arrays show up, for example is when a C-order array is 
transposed.  A transpose operation does nothing except flip the strides 
(and therefore the flags) of the array.  This is what is happening in 
concatenate (using axis=1) to give you a Fortran-order array.  
Basically, code equivalent to the following is being run:  
concatenate([X1.T, X2.T]).T
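
A short illustration of the strides/flags view of things (a sketch; the 
printed strides assume a float64 array on a typical 64-bit build):

import numpy as np

a = np.zeros((3, 4))                 # C-order by default
print(a.strides)                     # (32, 8): skip 32 bytes per row, 8 per column
print(a.flags['C_CONTIGUOUS'])       # True
print(a.flags['F_CONTIGUOUS'])       # False

t = a.T                              # transpose: only the strides and flags flip
print(t.strides)                     # (8, 32)
print(t.flags['F_CONTIGUOUS'])       # True

c = np.ascontiguousarray(t)          # explicitly request C-order when needed
print(c.flags['C_CONTIGUOUS'])       # True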

In the second example, you explicitly create the array (and therefore 
the strides) as C-order and then fill it (so it doesn't change on you).  
The first example used array calculations which don't guarantee the 
storage order. 

This is all seamless to the user until you have to interface with 
extension code.  Ideally, you write compiled code that deals with 
strided arrays.  If you can't, then you request an array of the required 
storage-order. 

By the way, for interfacing with ctypes, check out the 
ctypeslib.ndpointer class-creation function for flag checking and the 
require function for automatic conversion of an array to specific 
requirements.

-Travis










Re: [Numpy-discussion] 32/64-bit machines, integer arrays and python ints

2006-09-28 Thread Travis Oliphant
Bill Spotz wrote:
> I am wrapping code using swig and extending it to use numpy.
>
> One class method I wrap (let's call it myElements()) returns an array  
> of ints, and I convert it to a numpy array with
>
>  PyArray_SimpleNew(1,n,'i');
>   

You should probably use NPY_INT instead of 'i' for the type-code.  
> I obtain the data pointer, fill in the values and return it as the  
> method return argument.
>
> In python, it is common to want to loop over this array and treat its  
> elements as integers:
>
>  for row in map.myElements():
>  matrix.setElements(row, [row-1,row,row+1], [-1.0,2.0,-1.0])
>
> On a 32-bit machine, this has worked fine, but on a 64-bit machine, I  
> get a type error:
>
>  TypeError: in method 'setElements', argument 2 of type 'int'
>
> because row is a .
>
> It would be nice if I could get the integer conversion to work  
> automatically under the covers, but I'm not exactly sure how to make  
> that work.
>   

Yeah, it can be confusing at first.  You just have to make sure you are 
matching the right C data-types. I'm not quite sure what the problem 
here is given your description, because I don't know what setElements 
expects.  

My best guess is that it is related to the fact that a Python int uses 
the 'long' c-type.   Thus, you should very likely be using 
PyArray_SimpleNew(1, n, NPY_LONG) instead of NPY_INT so that your integer 
array always matches what Python is using as integers.
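
At the Python level the size mismatch looks like this (a sketch; the printed 
sizes assume a typical 64-bit Linux build):

import numpy as np

print(np.dtype(np.intc).itemsize)   # 4 -- C 'int'  (NPY_INT)
print(np.dtype(np.int_).itemsize)   # 8 -- C 'long' (NPY_LONG), what a Python 2 int maps to
print(np.dtype(np.intp).itemsize)   # 8 -- pointer-sized integer used for indexing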

The other option is to improve your converter in setElements so that it 
can understand any of the array scalar integers and not just the default 
Python integer.  

The reason this all worked on 32-bit systems is probably the array 
scalar corresponding to NPY_INT is a sub-class of the Python integer.  
It can't be on a 64-bit platform because of binary incompatibility of 
the layout.

Hope that helps.


-Travis




Re: [Numpy-discussion] 32/64-bit machines, integer arrays and python ints

2006-09-28 Thread Travis Oliphant
Bill Spotz wrote:

>On Sep 28, 2006, at 12:03 PM, Travis Oliphant wrote:
>
>  
>
>>The other option is to improve your converter in setElements so  
>>that it
>>can understand any of the array scalar integers and not just the  
>>default
>>Python integer.
>>
>>
>
>I think this may be the best approach.
>
>This may be something worthwhile to put in the numpy.i interface  
>file: a set of typemaps that handle a set of basic conversions for  
>those array scalar types for which it makes sense.  I'll look into it.
>  
>
That's a good idea.  Notice that there are some routines for making 
your life easier here. 

You should look at the tp_int function for the gentype array (it 
converts scalars to arrays).  You call the "__int__" special method of 
the scalar to convert it to a Python integer.  You should first check to 
see that it is an integer scalar PyArray_IsScalar(obj, Integer) because 
the "__int__" method coerces to an integer if it is a float (but maybe 
you want that behavior).

There are other functions in the C-API that return the data directly 
from the scalar --- check them out.  The macros in arrayscalar.h are 
useful.

-Travis









Re: [Numpy-discussion] Non-writeable default for numpy.ndarray

2006-09-29 Thread Travis Oliphant
Tim Hochberg wrote:
> Francesc Altet wrote:
>   
> It's not that it's being built from ndarray, it's that the buffer 
> that you are passing it is read only. 
This is correct.
> In fact, I'd argue that allowing 
> the writeable flag to be set to True in this case is actually a bug. 
>   
It's actually intentional.  Strings used as buffers are allowed to be 
writeable.  This is an explicit design decision to allow pickles to load 
without making 2 copies of the memory.   The pickled string that Python 
creates is used as the actual memory for loaded arrays.

Now, I suppose it would be possible to still allow this but be more 
finicky about when a string-used-as-the-memory can be set writeable 
(i.e. we are the only reference to it).  But, this would be a fragile 
solution as well. 

My inclination is to just warn users not to use strings as buffers 
unless they know what they are doing.  The fact that it is read-only by 
default is enough of a guard against "accidentally" altering a string 
you didn't want to alter.
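
For illustration, here is what the read-only default looks like (a sketch; 
whether the flag can be flipped back to True for a string-backed array has 
varied across NumPy versions, so that line is left commented out):

import numpy as np

buf = b"abcdefgh"
a = np.frombuffer(buf, dtype=np.uint8)
print(a.flags.writeable)        # False -- arrays built on string buffers start read-only
# a.flags.writeable = True      # the historically-allowed escape hatch described above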

-Travis




Re: [Numpy-discussion] Incorrect removal of NULL char in buffers

2006-09-29 Thread Travis Oliphant
Francesc Altet wrote:
> Hi,
>
>
> However, for string values, numpy seems to work in a strange way. 
> The numarray have an expected behaviour, IMO:
>
> In [100]: numarray.strings.array(buffer="a\x00b"*4, itemsize=4, shape=3)
> Out[100]: CharArray(['a', '', 'ba'])  
>
>   
I'm not sure why you think this is "expected."   You have 
non-terminating NULLs in this array and yet they are not printing for you.

Just look at the tostring()...

> but numpy  haven't:
>
> In [101]: numpy.ndarray(buffer="a\x00b"*4, dtype="S4", shape=3)
> Out[101]:
> array([aba, ba, bab],
>   dtype='|S4')
>
> i.e. it seems like numpy is striping-off NULL chars before building the 
> object 
> and I don't think this is correct.
>   

Hmmm.  I don't see that at all.  This is what I get (version of numpy is 
1.0.dev3233)

In [33]: numpy.ndarray(buffer="a\x00b"*4, dtype="S4", shape=3)
Out[33]:
array(['a\x00ba', '\x00ba', 'ba\x00b'],
  dtype='|S4')

which to me is very much expected.   I.e. only terminating NULLs are 
stripped off of the strings on printing.   I think you are getting 
different results because string printing used to not include the quotes 
(which had the side-effect of not printing NULLs in the middle of 
strings).  They are still there, just not showing up in your output.

In the end both numarray and numpy have the same data stored 
internally.  It's just a matter of how it is being printed that seems to 
differ a bit.  From my perspective, only NULLs at the end of strings 
should be stripped off and that is the (current) behavior of NumPy.

You are getting different results, because the array-printing for 
strings was recently updated (to insert the quotes so that it makes more 
sense).  Without these changes, I think the NULLs were being stripped 
away on printing.  In other words, something like

print 'a\x00ba'

aba

used to be happening. 

-Travis





Re: [Numpy-discussion] Non-writeable default for numpy.ndarray

2006-09-29 Thread Travis Oliphant
Francesc Altet wrote:

>I see. Thanks for the explanation.
>  
>
You deserve the thanks for the great testing of less-traveled corners of 
NumPy.   It's exactly the kind of thing needed to get NumPy ready for 
release. 

-Travis





Re: [Numpy-discussion] return type diffences of indexed arrays with Intel C++ compiler (Python 2.5)

2006-10-02 Thread Travis Oliphant
Lars Bittrich wrote:

>Hi all,
>
>recently I tried to compile numpy/scipy with Python 2.5 and Intel C++ compiler 
>9.1. At first everything was fine, but the scipy test produced a few errors. 
>The reason was a little difference:
>
>numpy svn(1.0.dev3233) with Intel C compiler (Python 2.5):
>---
>In [1]:from numpy import ones, zeros, integer
>
>In [2]:
>
>In [2]:x = ones(1)
>
>In [3]:i = zeros(1, integer)
>
>In [4]:x[i]
>Out[4]:1.0
>
>numpy svn(1.0.dev3233) with GCC 3.3 (Python 2.3):
>--
>In [1]:from numpy import ones, zeros, integer
>
>In [2]:
>
>In [2]:x = ones(1)
>
>In [3]:i = zeros(1, integer)
>
>In [4]:print x[i]
>Out[4]:array([1])
>
>The Intel version gives me a scalar whereas the gcc version an array. Maybe 
>Python 2.5 is causing this problem but my first guess was the compiler. 
>
>  
>

This is a Python 2.5 issue: the new __index__ method was incorrectly 
implemented and allowed a 1-d array to be interpreted as an index.

This should be fixed in SVN.

-Travis




Re: [Numpy-discussion] Vectorizing code, for loops, and all that

2006-10-02 Thread Travis Oliphant
Albert Strasheim wrote:

>In [571]: x1 = N.random.randn(2000,39)
>
>In [572]: y1 = N.random.randn(64,39)
>
>In [574]: %timeit z1=x1[...,N.newaxis,...]-y1 10 loops, best of 3: 703 ms
>per loop
>
>In [575]: z1.shape
>Out[575]: (2000, 64, 39)
>
>As far as I can figure, this operation is doing 2000*64*39 subtractions.
>Doing this straight up yields the following:
>
>In [576]: x2 = N.random.randn(2000,64,39)
>
>In [577]: y2 = N.random.randn(2000,64,39)
>
>In [578]: %timeit z2 = x2-y2
>10 loops, best of 3: 108 ms per loop
>
>Does anybody have any ideas on why this is so much faster? Hopefully I
>didn't mess up somewhere...
>  
>

I suspect I know why, although the difference seems rather large.  There 
is code optimization that is being taken advantage of in the second 
case.  If you have contiguous arrays (no broadcasting needed), then 1 
C-loop is used for the subtraction (your second case).

In the first case you are using broadcasting to generate the larger 
array.  This requires more complicated looping constructs under the 
covers, which causes your overhead.  Basically, you will have 64*39 1-d 
loops of 2000 elements each in the first example with a bit of 
calculation over-head to reset the pointers before each loop.

In the ufunc code, compare the ONE_UFUNCLOOP case with the 
NOBUFFER_UFUNCLOOP case.  If you want to be sure what is running 
un-comment the fprintf statements so you can tell.

I'm surprised the overhead of adjusting pointers is so high, but then 
again you are probably getting a lot of cache misses in the first case, 
so there is more to it than that; the loops may run more slowly too.
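
For anyone wanting to reproduce the comparison, a small timing sketch (the 
sizes follow the post above; absolute numbers will obviously differ by 
machine):

import numpy as np
from timeit import timeit

x1 = np.random.randn(2000, 39)
y1 = np.random.randn(64, 39)
x2 = np.random.randn(2000, 64, 39)
y2 = np.random.randn(2000, 64, 39)

# broadcast subtraction: (2000,1,39) - (64,39) -> (2000,64,39)
print(timeit(lambda: x1[:, np.newaxis, :] - y1, number=10))
# the same number of subtractions on already-matching contiguous operands
print(timeit(lambda: x2 - y2, number=10))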

-Travis




Re: [Numpy-discussion] Vectorizing code, for loops, and all that

2006-10-02 Thread Travis Oliphant
Travis Oliphant wrote:

>
>I suspect I know why, although the difference seems rather large.  
>
[snip]

>I'm surprised the overhead of adjusting pointers is so high, but then 
>again you are probably getting a lot of cache misses in the first case 
>so there is more to it than that, the loops may run more slowly too.
>  
>

I'm personally bothered that this example runs so much more slowly.  I 
don't think it should.  Perhaps it is unavoidable because of the 
memory-layout issues.  It is just hard to believe that the overhead for 
calling into the loop and adjusting the pointers is so much higher. 

But, that isn't the problem, here.  Notice the following:

x3 = N.random.rand(39,2000)
x4 = N.random.rand(39,64,1)

%timeit z3 = x3[:,None,:] - x4

10 loops, best of 3: 76.4 ms per loop

Hmm... It looks like cache misses are a lot more important than making 
sure the inner loop is taken over the largest number of elements 
(that's the current way ufuncs decide which axis ought to be used as the 
1-d loop). 

Perhaps those inner 1-d loops could be optimized (using prefetch or 
something) to reduce the number of cache misses on the inner 
computation, and the concept of looping over the largest dimension 
(instead of the last dimension) should be re-considered.

Ideas,

-Travis






Re: [Numpy-discussion] ***[Possible UCE]*** Re: Vectorizing code, for loops, and all that

2006-10-02 Thread Travis Oliphant
Albert Strasheim wrote:

>Hello all
>
>  
>
>>-Original Message-
>>From: [EMAIL PROTECTED] [mailto:numpy-
>>[EMAIL PROTECTED] On Behalf Of Travis Oliphant
>>Sent: 03 October 2006 02:32
>>To: Discussion of Numerical Python
>>Subject: Re: [Numpy-discussion] Vectorizing code, for loops, and all that
>>
>>Travis Oliphant wrote:
>>
>>
>>
>>>I suspect I know why, although the difference seems rather large.
>>>
>>>  
>>>
>>[snip]
>>
>>
>>
>>>I'm surprised the overhead of adjusting pointers is so high, but then
>>>again you are probably getting a lot of cache misses in the first case
>>>so there is more to it than that, the loops may run more slowly too.
>>>
>>>
>>>  
>>>
>>I'm personally bothered that this example runs so much more slowly.  I
>>don't think it should.  Perhaps it is unavoidable because of the
>>memory-layout issues.  It is just hard to believe that the overhead for
>>calling into the loop and adjusting the pointers is so much higher.
>>
>>
>
>Firstly, thanks to Tim... I'll try his functions tomorrow.
>
>Meanwhile, I can confirm that the NOBUFFER_UFUNCLOOP case in
>PyUFunc_GenericFunction is getting exercised in the slower case. Here's some
>info on what's happening, courtesy of Rational Quantify:
>
>case NOBUFFER_UFUNCLOOP:
>while (loop->index < loop->size) {
>  for (i=0; i<loop->nargs; i++)
>loop->bufptr[i] = loop->iters[i]->dataptr; [1]
>
>  loop->function((char **)loop->bufptr, &(loop->bufcnt),
> loop->steps, loop->funcdata); [2]
>  UFUNC_CHECK_ERROR(loop);
>
>  for (i=0; i<loop->nargs; i++) {
>  PyArray_ITER_NEXT(loop->iters[i]); [3]
>  }
>  loop->index++;
>}
>break;
>
>[1] 12.97% of function time
>[2] 8.65% of function time
>[3] 62.14% of function time
>
>If statistics from elsewhere in the code would be helpful, let me know, and
>I'll see if I can convince Quantify to cough it up.
>
>  
>
Please run the same test but using

x1 = N.random.rand(39,2000)
x2 = N.random.rand(39,64,1)

z1 = x1[:,N.newaxis,:] - x2


Thanks,

-Travis








Re: [Numpy-discussion] Vectorizing code, for loops, and all that

2006-10-03 Thread Travis Oliphant
Albert Strasheim wrote:
>>> [1] 12.97% of function time
>>> [2] 8.65% of functiont ime
>>> [3] 62.14% of function time
>>>
>>> If statistics from elsewhere in the code would be helpful, let me 
>>> know,
>>>   
>> and
>> 
>>> I'll see if I can convince Quantify to cough it up.
>>>
>>>   
>> Please run the same test but using
>>
>> x1 = N.random.rand(39,2000)
>> x2 = N.random.rand(39,64,1)
>>
>> z1 = x1[:,N.newaxis,:] - x2
>> 
>
> Very similar results to what I had previously:
>
> [1] 10.88%
> [2] 7.25%
> [3] 68.25%
>
>   
Thanks,

I've got some ideas about how to speed this up by eliminating some of 
the unnecessary calculations going on outside of the function loop, but 
there will still be some speed issues depending on how the array is 
traversed once you get above a certain size.   I'm not sure there is any 
way around that, ultimately, due to memory access being slow on most hardware. 

If anyone has any ideas, I'd love to hear them.  I won't be able to 
get to implementing my ideas until at least Friday (also when rc2 will 
be released).


-Travis




[Numpy-discussion] RC2 to be released Friday

2006-10-03 Thread Travis Oliphant

I'm going to have to put off the release of rc2 until Friday.  I'm just too 
busy right now.  That might help us get some speed-ups into the 
NOBUFFER_UFUNCLOOP code as well.

My speed-up ideas are:

1) Only keep track of 1 set of coordinates instead of self->nargs sets 
(1 for each iterator).
2) Keep track of the dataptr for each iterator in the bufptr array (so a 
copy isn't necessary)
3) Not increment the index on each iterator separately.

All of these changes will be made directly in the NOBUFFER_UFUNCLOOP code. 

More generally, it would be nice to take these ideas and push them into 
other areas of the code --- perhaps through the multi-iterator that is 
already present.  Probably, this will have to wait until 1.0.1 though.

-Travis





Re: [Numpy-discussion] ValueError: object too deep for desired array

2006-10-03 Thread Travis Oliphant
[EMAIL PROTECTED] wrote:

>Microsoft Windows XP [Version 5.1.2600]
>(C) Copyright 1985-2001 Microsoft Corp.
>
>C:\Documents and Settings\kenny>cd c:\lameness
>
>C:\lameness>c:\Python24\python.exe templatewindow.py
>a = [[  1.00013175e+00   2.63483019e-05   1.6740e+00   5.22246363e-05
>1.8735e+00  -2.77694969e-07  -1.30756273e-07   1.03202730e-06]
> [  2.63483019e-05   6.95644927e-05  -7.15426839e-07   1.99534228e-05
>5.29400631e-05  -3.07638369e-09  -5.52729618e-06  -7.61431767e-06]
> [  1.6740e+00  -7.15426839e-07   1.00011917e+00   2.50407751e-05
>1.6219e+00  -2.77757947e-07  -1.30856101e-07   1.23058301e-07]
> [  5.22246363e-05   1.99534228e-05   2.50407751e-05   8.21505582e-05
>5.26966037e-05  -6.13563429e-09  -2.76420755e-06  -3.80791858e-06]
> [  1.8735e+00   5.29400631e-05   1.6219e+00   5.26966037e-05
>1.00020132e+00   5.01389982e-05   3.45959412e-05   3.17129503e-05]
> [ -2.77694969e-07  -3.07638369e-09  -2.77757947e-07  -6.13563429e-09
>5.01389982e-05   3.45959412e-05   3.17129503e-05   3.27035490e-05]
> [ -1.30756273e-07  -5.52729618e-06  -1.30856101e-07  -2.76420755e-06
>3.45959412e-05   3.17129503e-05   3.27035490e-05   3.59732704e-05]
> [  1.03202730e-06  -7.61431767e-06   1.23058301e-07  -3.80791858e-06
>3.17129503e-05   3.27035490e-05   3.59732704e-05   4.12184645e-05]]
>F = [[ -1.44231014e+03]
> [ -7.54006917e-02]
> [ -1.44227222e+03]
> [ -7.49199956e-02]
> [ -1.44242446e+03]
> [ -4.24684780e-02]
> [ -2.49566072e-02]
> [ -2.16978637e-02]]
>Traceback (most recent call last):
>  File "C:\lameness\PlotPanel.py", line 476, in OnNo
>self.parent.peaktime()
>  File "C:\lameness\PlotPanel.py", line 64, in peaktime
>self._replot()
>  File "C:\lameness\PlotPanel.py", line 159, in _replot
>self.drawpeak()
>  File "C:\lameness\PlotPanel.py", line 233, in drawpeak
>COE[:,j1-1] = linalg.solve(A,F)
>ValueError: object too deep for desired array
>
>Anyone know what this error could be coming from?  Is it because the
>floating point numbers are too large, memory wise?
>  
>
No, it's because you are trying to fit a 2-d array into a 1-d position. 

linalg.solve(A,F) produces a 2-d array in this instance (because F is a 
2-d array). 

COE[:,j1-1:] = linalg.solve(A,F)

should work

or squeezing the result of the solution to a 1-d array would also work.
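
A small sketch of both fixes (the shapes are illustrative, not the poster's 
actual data):

import numpy as np

A = np.eye(3)
F = np.ones((3, 1))              # 2-d right-hand side, so the solution is 2-d
COE = np.zeros((3, 5))

x = np.linalg.solve(A, F)        # shape (3, 1)
# COE[:, 0] = x                  # fails: a (3, 1) array doesn't fit a (3,) slot
COE[:, 0] = x.squeeze()          # works: squeeze the solution down to 1-d
COE[:, 0:1] = x                  # also works: assign into a matching 2-d slice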

-Travis




Re: [Numpy-discussion] Fastest way of distinguish a numpy scalar of a python scalar?

2006-10-03 Thread Travis Oliphant
Sebastian Haase wrote:

>Hi,
>a noticed the underscore in "numpy.string_"  ...
>I thought the underscores were removed in favour of handling the 
>"from numpy import *" separately via the __all__ variable.
>Is this done only for *some* members of numpy ?
>  
>
Yeah.  The underscores are still on the conflicting type names.

-Travis




Re: [Numpy-discussion] Fastest way of distinguish a numpy scalar of a python scalar?

2006-10-03 Thread Travis Oliphant
Tim Hochberg wrote:

>Francesc Altet wrote:
>  
>
>>Hi,
>>
>>I thought that numpy.isscalar was a good way of distinguishing a numpy scalar 
>>from a python scalar, but it seems not:
>>
>>>>> numpy.isscalar(numpy.string_('3'))
>>True
>>>>> numpy.isscalar('3')
>>True
>>
>>Is there an easy (and fast, if possible) way to check whether an object is a 
>>numpy scalar or a python one?
>>
>It looks like isinstance(x, numpy.generic) works, but I didn't test it 
>extensively.
>
That should definitely work. All the array scalars are in a hierarchy 
inheriting from numpy.generic.  There are also sub-levels 
(numpy.integer, numpy.inexact, etc...)
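
For example (a quick sketch):

import numpy as np

print(isinstance(np.float64(1.0), np.generic))   # True  -- a NumPy array scalar
print(isinstance(1.0, np.generic))               # False -- a plain Python float
print(isinstance(np.int32(1), np.integer))       # True  -- one of the sub-levels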


-Travis




Re: [Numpy-discussion] Problems with Numexpr and discontiguous arrays

2006-10-04 Thread Travis Oliphant
Tim Hochberg wrote:

>David M. Cooke wrote:
>  
>
>>On Wed, 04 Oct 2006 10:19:08 -0700
>>Tim Hochberg <[EMAIL PROTECTED]> wrote:
>>
>>  
>>
>>
>>>Ivan Vilata i Balaguer wrote:
>>>
>>>  
>>>
It seemed that discontiguous arrays worked OK in Numexpr since r1977 or
so, but I have come across some alignment or striding problems which can
be seen with the following code::
  


>>>I looked at this just a little bit and clearly this bit from interp_body 
>>>cannot work in the presence of record arrays:
>>>
>>>//
>>>intp sf1 = sb1 / sizeof(double);\
>>>//...
>>>#define f1 ((double *)x1)[j*sf1]
>>>
>>>
>>>There are clearly some assumptions that sb1 is evenly divisible by 
>>>sizeof(double). Blech!. This is likely my fault, and I expect it won't 
>>>be too horrible to fix, but I don't know that I'll have time immediately.
>>>
>>>  
>>>
>>My thinking is that this should be handled by a copy, so that the opcodes
>>always work on contiguous data. The copy can be another opcode. One advantage
>>of operating on contiguous data is that it's easier to use the processor's
>>vector instructions, if applicable.
>>  
>>
>>
>
>That would be easy to do. Right now the opcodes should work correctly on 
>data that is spaced in multiples of the itemsize on the last axis. Other 
>arrays are copied (no opcode required, it's embedded at the top of 
>interp_body lines 64-80). The record array case apparently slips through 
>the cracks when we're checking whether an array is suitable to be used 
>correctly (interpreter.c 1086-1103). It would certainly not be any 
>harder to only allow contiguous arrays than to correctly deal with 
>record arrays. Only question I have is whether the extra copy will 
>overwhelm the savings of that operating on contiguous data gives.  
>

With record arrays you have to worry about alignment issues.   The most 
complicated part of the ufunc code is to handle that. 

The usual approach is to copy (and possibly byte-swap) at least the axis 
you are working on into a buffer (the copyswapn functions will do 
that using a pretty fast approach for each data-type).  This is 
ultimately how the ufuncs work (though the buffer-size is fixed so the 
data is copied and operated on in chunks).

-Travis




Re: [Numpy-discussion] Hello and my first patch

2006-10-04 Thread Travis Oliphant
Greg Willden wrote:

> Hello All,
> I introduced myself on the Scipy list and I have a feeling that most 
> of the subscribers here are also on Scipy-devel.  Anyway I just 
> submitted my first patch to numpy (ticket #316).  I added the 
> blackman-harris, Nuttall and Flat Top windowing functions and added 
> "See also" sections to the docstrings for all the window functions.

Great contribution.  Thanks a bunch.  I think this will probably go into 
the scipy package, though.  There are already a lot of windows available 
in the scipy.signal window functions. 

The window functions that are in NumPy are there for historical 
purposes only (i.e. compatibility with old MLab).   

On the other hand, the other thought to consider is that since we have 
window functions in NumPy already.  Let's just move them all from 
scipy.signal into NumPy.

-Travis




Re: [Numpy-discussion] Hello and my first patch

2006-10-04 Thread Travis Oliphant
Sebastian Haase wrote:

>If scipy is going to be installable as separate sub-packages maybe
>all window functions can be moved to scipy ;-)
>
>In other words, if the ones in numpy are there only for "historical 
>reasons" maybe they should be cleaned out before the 1.0 release.
>All arguments seem similar to ndimage (which was in numarray and is now 
>in scipy)
>  
>
Not really, because these functions were in *both* Numeric and 
numarray.  That's the trouble.

And the multiple scipy packages situation needs more discussion.  We 
are all ears...

-Travis




Re: [Numpy-discussion] Vectorizing code, for loops, and all that

2006-10-05 Thread Travis Oliphant
Travis Oliphant wrote:

>Albert Strasheim wrote:
>  
>
>>>>[1] 12.97% of function time
>>>>[2] 8.65% of functiont ime
>>>>[3] 62.14% of function time
>>>>
>>>>If statistics from elsewhere in the code would be helpful, let me 
>>>>know,
>>>>  
>>>>
>>>>
>>>and
>>>
>>>  
>>>
>>>>I'll see if I can convince Quantify to cough it up.
>>>>
>>>>  
>>>>
>>>>
>>>Please run the same test but using
>>>
>>>x1 = N.random.rand(39,2000)
>>>x2 = N.random.rand(39,64,1)
>>>
>>>z1 = x1[:,N.newaxis,:] - x2
>>>
>>>  
>>>
>>Very similar results to what I had previously:
>>
>>[1] 10.88%
>>[2] 7.25%
>>[3] 68.25%
>>
>>  
>>
>>
>Thanks,
>
>I've got some ideas about how to speed this up by eliminating some of 
>the unnecessary calculations  going on outside of the function loop, but 
>there will still be some speed issues depending on how the array is 
>traversed once you get above a certain size.   I'm not sure there anyway 
>around that, ultimately, due to memory access being slow on most hardware. 
>  
>

Well, I tried out my ideas and didn't get much improvement (8-10%).  
Then, I finally realized more fully that the slowness was due to the 
loop taking place over an axis which had a very large stride so that the 
memory access was taking a long time. 

Thus, instead of picking the loop axis to correspond to the axis with 
the longest dimension, I've picked the loop axis to be one with the 
smallest sum of strides.

In this particular example, the speed-up is about 6-7 times...

-Travis




Re: [Numpy-discussion] On loop and broadcasting (again)

2006-10-05 Thread Travis Oliphant
David Cournapeau wrote:

>Hi,
>
>   The email from Albert made me look again on some surprising results I 
>got a few months ago when starting my first "serious" numpy project. I 
>noticed that when computing multivariate gaussian densities, centering 
>the data was more expensive than everything else, including 
>exponentiation. Now that I have some experience with numpy, and 
>following the previous discussion, I tried the following script:
>  
>

Try it again with the new code in SVN.

-Travis




Re: [Numpy-discussion] Problems with Numexpr and discontiguous arrays

2006-10-05 Thread Travis Oliphant
Tim Hochberg wrote:

>>>  
>>>  
>>>
>>That would be easy to do. Right now the opcodes should work correctly 
>>on data that is spaced in multiples of the itemsize on the last axis. 
>>Other arrays are copied (no opcode required, it's embedded at the top 
>>of interp_body lines 64-80). The record array case apparently slips 
>>through the cracks when we're checking whether an array is suitable to 
>>be used correctly (interpreter.c 1086-1103). It would certainly not be 
>>any harder to only allow contiguous arrays than to correctly deal with 
>>record arrays. Only question I have is whether the extra copy will 
>>overwhelm the savings of that operating on contiguous data gives.  The 
>>thing to do is probably try it and see what happens.
>>
>>
>
>OK, I've checked in a fix for this that makes a copy when the array is 
>not strided in an even multiple of the itemsize. I first tried copying 
>for all discontiguous array, but this resulted in a large speed hit for 
>vanilla strided arrays (a=arange(10)[::2], etc.), so I was more frugal 
>with my copying. I'm not entirely certain that I caught all of the 
>problematic cases, so let me know if you run into any more issues like this.
>
>  
>
There is an ElementStrides check and a similar requirement flag you can 
use to make sure that you have an array whose strides are multiples of 
its itemsize.
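In Python terms, the condition that check enforces is simply that every 
stride is a whole multiple of the itemsize; a small sketch (the record-array 
field view below is the kind of case that fails it):

import numpy

def has_element_strides(a):
    # True when the data can be stepped through in whole items.
    return all(s % a.itemsize == 0 for s in a.strides)

a = numpy.arange(10)
has_element_strides(a[::2])          # True: an ordinary strided view

r = numpy.zeros(4, dtype=[('x', 'u1'), ('y', 'f8')])
has_element_strides(r['y'])          # False: stride is 9 bytes, itemsize is 8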

-Travis




Re: [Numpy-discussion] Hello and my first patch

2006-10-05 Thread Travis Oliphant
John Hunter wrote:

>>"Robert" == Robert Kern <[EMAIL PROTECTED]> writes:
>>
>>
>
>Robert> IMO, I'd rather see this and similar functions go into
>Robert> scipy. New functions that apply semantics to arrays (in
>Robert> this case, treating them as time series), I think should
>Robert> go into scipy. New functions that treat arrays simply as
>Robert> arrays and are generally useful can probably go into
>Robert> numpy.
>
>I prefer Perry's longstanding suggestion: things that do not add to
>distribution complexity should go into numpy.  If it compiles as
>easily as numpy itself, it should go into numpy where sensible.  
>

I don't think this is as feasible as it sounds at first.  Some people 
complain that NumPy is too big already.  
SciPy is very easy to install on Windows (there is a binary available).  
The only major platform that still gives some trouble is Intel Mac (due 
to the fortran compiler situation).   But, all you need is one person 
who can build it and distribute a binary.

I think a better long-term solution is to understand how to package 
things better by working with people at Enthought so that when you 
advertise to the ex-Matlab user you point him to a "super-package" that 
installs a bunch of other small packages.  This is a more maintainable 
solution as long as we set standards for

1) documentation
2) tests
3) some kind of problem-domain hierarchy

The idea of just lumping more and more things into NumPy itself is not a 
good idea.  What most users want is something that installs easily (like 
Enthon).   How it is packaged is not as important.  What developers need 
is a sane multi-namespace system that can be maintained separately if 
needed.

I think we decided a while ago that the package approach should contain 
indicators as to whether or not a fortran compiler was needed to build 
the system so that dependency on those things could be eliminated if 
needed. 

Do we want to pull scipy apart  into two components:  one that needs 
Fortran to build and another that doesn't? 

Perhaps that is the best way to move forward along with the work on a 
"pylab" super-package.

-Travis




Re: [Numpy-discussion] flat indexing of object arrays

2006-10-05 Thread Travis Oliphant
Martin Wiechert wrote:

>Hi list,
>
>when I try to assign a sequence as an element of an object array via flat 
>indexing only the first element of the sequence is assigned:
>
>  
>
> >>> import numpy
> >>> numpy.version.version
> '1.0rc1.dev3171'
> >>> from numpy import *
> >>> a = ndarray((2,2), object)
> >>> a.flat[2] = (1, 2, 3)
> >>> a.flat[2]
> 1
> >>> a
> array([[None, None],
>        [1, None]], dtype=object)
>
>Is this a feature? Wouldn't a naive user like me expect
>a.flat [2] == (1, 2, 3)?
>
>  
>
You are probably right.  This should be changed.

-Travis




Re: [Numpy-discussion] Hello and my first patch

2006-10-05 Thread Travis Oliphant
A. M. Archibald wrote:

>On 05/10/06, Greg Willden <[EMAIL PROTECTED]> wrote:
>  
>
>>On 10/5/06, Travis Oliphant <[EMAIL PROTECTED]> wrote:
>>
>>
>>>Perhaps that is the best way to move forward along with the work on a
>>>"pylab" super-package.
>>>  
>>>
>>That is exactly what I want.
>>
>>
>
>What is unsatisfactory about installing numpy+scipy+matplotlib? I've
>found they're generally pretty complete (except where no decent python
>alternative exists).
>
>  
>
>>In the end I want a nice collection of functions, logically organized, that
>>let me analyze/filter/plot etc. etc. etc.
>>
>>The key for me is "logically organized".
>>
>>
>
>  
>
There is a structure to it, but it's more organic because of the 
multiple contributors.

weave should be in NumPy but nobody was willing to step up to maintain 
it a year ago.   I may be willing to step up at this point.   I would 
like to see weave in NumPy (maybe not the blitz libraries though...)

I think a hybrid of weave / f2py / ctypes that allows "inlining in 
multiple languages" as well as automatic extension module generation for 
"already-written" code is in order.


-Travis




Re: [Numpy-discussion] flat indexing of object arrays

2006-10-05 Thread Travis Oliphant
Matthew Brett wrote:

>Hi,
>
>On 10/5/06, Martin Wiechert <[EMAIL PROTECTED]> wrote:
>  
>
>>Hi list,
>>
>>when I try to assign a sequence as an element of an object array via flat
>>indexing only the first element of the sequence is assigned:
>>
>>
>
>I've also been having trouble with flat on object arrays.
>
>Is this intended?
>
>In [1]: from numpy import *
>
>In [2]: a = arange(2)
>
>In [3]: a[1]
>Out[3]: 1
>
>In [4]: a.flat[1]
>Out[4]: 1
>
>In [5]: b = array([a], dtype=object)
>
>In [6]: b[1]
>---
>exceptions.IndexError                     Traceback (most recent call last)
>
>/home/mb312/devel_trees/scipy/Lib/io/
>
>IndexError: index out of bounds
>
>In [7]: b.flat[1]
>Out[7]: 1
>
>  
>

This is correct behavior.  Look at the shape of b.  It is being indexed 
correctly.

The problem is that it is ambiguous as to what is wanted when you write

b = array([a], dtype=object).

We have gone through the rounds on this one and the current behavior is 
our best compromise.
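To make the shape point concrete (a small sketch of the case above):

from numpy import arange, array

a = arange(2)
b = array([a], dtype=object)

b.shape        # (1, 2): the outer list adds a leading axis of length 1
b[0, 1]        # 1
b.flat[1]      # 1: flat runs over both elements, so index 1 is valid
# b[1]         # IndexError: axis 0 only has length 1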

-Travis






Re: [Numpy-discussion] Memory errors

2006-10-05 Thread Travis Oliphant
Vikalpa Jetly wrote:

>I am reading a very large array (~9000,11000) of 1 byte image values. I need
>to change values in the array that meet a certain condition so I am running
>something like:
>
>b = numpy.where(a>200,0,1)
>
>to create a new array with the changed values. However, I get a
>"MemoryError" everytime I try this. I have over 3gb of RAM on my machine
>(most of which is available). The process runs fine on smaller datasets. Is
>there a maximum array size that numpy handles? Any alternatives/workarounds?
>
>  
>
The MemoryError is a direct result of the system malloc failing.  Rather 
than use where with two scalars (your resulting array will be int32 
and therefore 4 times larger), use

b = zeros_like(a)
b[a <= 200] = 1

which matches where(a > 200, 0, 1) but keeps the 1-byte data-type and so 
consumes less memory.

-Travis
 



Re: [Numpy-discussion] repmat

2006-10-06 Thread Travis Oliphant
Bill Baxter wrote:

>[There seem to have been some gmail delivery problems that prevented
>my previous mail on this subject from being delivered]
>
>I've proposed that we fix repmat handle arbitrary dimensions before 1.0.
>
>   http://projects.scipy.org/scipy/numpy/ticket/292
>
>I don't think this is particularly controversial, just I'm guessing
>no-one's found the time to look at my proposed fixes.  And
>gmail/sourceforge issues haven't helped either.
>  
>
Thanks for reminding us again. 

I don't think this is too bad of a deal.  I propose to move repmat(A, 
M,N) to matlib.py and replace it in numpy with a function named

tile(a, reps)

That is more general-purpose.   If this is not acceptable, please speak up.
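A quick sketch of why tile is the more general function (using the 
tile(a, reps) semantics described above):

import numpy

a = numpy.array([[1, 2],
                 [3, 4]])

numpy.tile(a, (2, 3)).shape        # (4, 6): the repmat(a, 2, 3) case
numpy.tile(a, (2, 1, 1)).shape     # (2, 2, 2): reps can have any length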

-Travis






Re: [Numpy-discussion] ***[Possible UCE]*** user-defined type example

2006-10-07 Thread Travis Oliphant
Matt Knox wrote:
> Could someone please point me to/provide me with a basic example of 
> creating a user defined type?
>  
> Here is my completely naive attempt which obviously doesn't work...
>
> import numpy
>  
> class myType(numpy.void):
> def __init__(self,val):
> self.val = val
>  
> testType = numpy.dtype(myType)
>
> val1 = myType(5)
> val2 = myType(6)
>  
> foo = numpy.array([val1,val2],testType)
>  
>  
> Any help would be greatly appreciated. Thanks,
>  

A "true" user-defined data-type can only be done in C.

However, you can use the "VOID" data-type to create a type with 
multiple fields.   What do you want to do with your user-defined data-type?
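For example, a fields-based ("void") data-type can be built entirely from 
Python; a small sketch (the field names here are made up):

import numpy

point_t = numpy.dtype([('val', numpy.float64), ('tag', numpy.int32)])

foo = numpy.array([(5.0, 1), (6.0, 2)], dtype=point_t)
foo['val']      # array([ 5.,  6.])
foo[0]          # (5.0, 1), a single "void" record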

-Travis




Re: [Numpy-discussion] tensor product

2006-10-09 Thread Travis Oliphant
Charles R Harris wrote:
> Hmmm,
>
> I notice that there is no longer a tensor product. As it was the only 
> one of the outer, kron bunch that I really wanted, l miss it. In fact, 
> I always thought outer should act like the tensor product for the 
> other binary operators too. Anyway, mind if I put it back?

I'm not opposed to the idea, necessarily.  But, when and where did this 
exist?  I don't remember it.

-Travis




Re: [Numpy-discussion] tensor product

2006-10-09 Thread Travis Oliphant
Charles R Harris wrote:
>
>
> On 10/9/06, *Tim Hochberg* <[EMAIL PROTECTED]> wrote:
>
> 
>
> Is this not the same thing as numpy.multiply.outer(a, b)? (as
> opposed
> to outer(a, b), which appears to pretend that everything is a
> vector --
> I'm not sure what the point of that is).
>
>
> Hmmm, yes, multiply.outer does do that. I thought that outer was short 
> for multiply.outer and that the behaviour had changed. So the question 
> is why outer does what it does.
Unfortunately, I don't know the answer to that.

numpy.outer is the same as Numeric.outerproduct and behaves the same 
way.  I'm not sure of the reason behind it.
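For reference, a short sketch of the difference being discussed (outer 
flattens its inputs; multiply.outer keeps the full dimensionality):

import numpy

a = numpy.ones((2, 3))
b = numpy.ones((4, 5))

numpy.outer(a, b).shape            # (6, 20): inputs are raveled to 1-d first
numpy.multiply.outer(a, b).shape   # (2, 3, 4, 5): a true tensor product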

-Travis




[Numpy-discussion] Numpy rc2 is out

2006-10-09 Thread Travis Oliphant

Release candidate 2 (1.0rc2) is now out.  Thanks for all the great testing 
and fixes that were done between 1.0rc1 and 1.0rc2.

The release date for NumPy 1.0 is Oct. 17.  

There will be a freeze on the trunk starting Monday Oct. 16 so any 
changes should be in by then.  If significant changes are made then we 
will release 1.0rc3 on Oct. 17 and push the release date of NumPy 1.0 to 
Oct 24.

-Travis




Re: [Numpy-discussion] suggestions for slices of matrices

2006-10-09 Thread Travis Oliphant
JJ wrote:
> Hello.
> I haven't been following development too closely
> lately, but I did just download and reinstall the
> current svn version.  For what its worth, I would like
> to again suggest two changes:
>   
Suggestions are nice.  Working code is better.   Many ideas are just too 
difficult to implement (and still work with the system as it exists) and 
so never get done.  I'm not saying these ideas fit into this category, 
but generally if a suggestion is not taken it's very likely seen in that 
light.
> --  If M is a nxm matrix and P and Z are nx1 (or 1xn)
> matrices, then it would be nice if we could write
> M[P==Z,:] to obtain all columns and only those rows
> where P==Z.
This works already if p and z are 1-d arrays.   That seems to be the 
only issue.  You want this to work with p and z being 2-d arrays (i.e. 
matrices).  The problem is there is already a defined behavior for this 
case that would have to be ignored (i.e. special-cased to get what you 
want).  This could be done within the special matrix sub-class of 
course, but I'm not sure it is wise.  Too many special cases make life 
difficult down the road.

It is better to just un-think the ambiguity between 1-d and 2-d arrays 
that was inspired by Matlab and recognize a 1-d situation when you have 
it.   But, that's just my opinion.  I'm not dead-set against 
special-casing in the matrix object if enough matrix-oriented people are 
in favor of it.  But, it would be a feature for a later NumPy (not 1.0).

>   Likewise, for 1xm (or mx1) matrices U and
> V, it would be nice to be able to use M[P==Z,U==V]. 
>   
Same issue as before + cross-product versus element-by-element. 
> Also, it would be nice to use M[P==Z,U==2], for
> example, to obtain selected rows where matrix U is
> equal to a constant.
>   
Again.  Form the cross-product using ix_().

> --  It would be nice to slice a matrix by using
> M[[1,2,3],[3,5,7]], for example.
>   
You can get the cross-product using M[ix_([1,2,3],[3,5,7])].  This was a 
design choice and I think a good one. It's been discussed before.
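A brief sketch of the two idioms being recommended, with made-up data:

import numpy
from numpy import ix_

M = numpy.arange(64).reshape(8, 8)
p = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
z = numpy.array([1, 0, 3, 0, 5, 0, 7, 0])

M[p == z, :]                  # rows 0, 2, 4, 6: boolean selection, 1-d arrays
M[ix_([1, 2, 3], [3, 5, 7])]  # 3x3 cross-product of those rows and columns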

> I believe this would help make indexing more user
> friendly.  In my humble opinion, I think indexing is a
> weak spot in numpy. 
I'm sorry you see it that way.  I think indexing is a strength of 
numpy.  It's a little different than what you are used to with Matlab, 
perhaps, but it is much more general-purpose and capable (there is one 
weak spot in that certain boolean indexing operations use more memory 
than they need to, but that is a separate issue...).  The Matlab behavior 
can always be created in a sub-class.


Best regards,

-Travis




Re: [Numpy-discussion] 1.0rc2: imag attribute of numpy.float64 causes segfault

2006-10-10 Thread Travis Oliphant
Peter Bienstman wrote:
> This is on an AMD64 platform:
>
> Python 2.4.3 (#1, Sep 27 2006, 14:14:48)
> [GCC 4.1.1 (Gentoo 4.1.1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy
> >>> print numpy.__version__
> 1.0rc2
> >>> a = numpy.float64(1.0)
> >>> a
> 1.0
> >>> a.real
> 1.0
> >>> a.imag
> Segmentation fault
>
Thanks for the test.   Fixed in SVN r3299

-Travis




Re: [Numpy-discussion] Constructing an array from memory address

2006-10-10 Thread Travis Oliphant
Daniel Drake wrote:

>Hi,
>
>I have an area of memory which is shared between processes (it is
>actually a shared memory segment). The address of this memory is stored
>in a python long variable, which I pass to various custom C/C++ python
>modules.
>
>I would like to construct a numpy array in this area. Is there any way I
>can tell numpy to use a specific address (stored as a long) to use as
>storage for the array? 
>

This functionality is already available.   There are two ways to do it. 

1) Create an object with an __array_interface__ attribute that returns a 
dictionary with shape, typestr, and data fields.   The data field should 
return the tuple  (memory-address-as-long, 
True-if-read-only-otherwise-False).

Passing this object into the asarray constructor will use the memory given.

2) Use the asbuffer function in numpy.core.multiarray  (it's not exposed 
to numpy yet) to create a buffer object from the memory address and then 
use frombuffer to create an array from the memory.
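A minimal sketch of option 1, using a ctypes buffer as a stand-in for the 
shared-memory segment (in the real case the address would come from the 
shared-memory API):

import ctypes
import numpy

buf = ctypes.create_string_buffer(100 * 8)   # stand-in for the shared segment
addr = ctypes.addressof(buf)

class SharedBlock(object):
    # Exposes the raw address to numpy through the array interface.
    __array_interface__ = {
        'shape': (100,),
        'typestr': '<f8',        # little-endian float64
        'data': (addr, False),   # (address as an integer, read-only flag)
        'version': 3,
    }

a = numpy.asarray(SharedBlock())
a[:] = 0.0                       # writes go straight into that memory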


Good luck,

-Travis




Re: [Numpy-discussion] asmatrix and asarray exception

2006-10-11 Thread Travis Oliphant
Keith Goodman wrote:

>On 10/11/06, Keith Goodman <[EMAIL PROTECTED]> wrote:
>  
>
>>This works:
>>
>> >> M.asmatrix(['a', 'b', None])
>> matrix([[a, b, None]], dtype=object)
>>
>>But this doesn't:
>>
>> >> M.asmatrix(['a', 'b', None, 'c'])
>> TypeError: expected a readable buffer object
>>
>> >> M.__version__
>> '1.0rc1'
>>
>>It also doesn't work for asarray and for tuples.
>>
>>
>
>  
>
It is pretty fragile to rely on NumPy's "detection" of object arrays.  
The problem is that with the introduction of string, unicode, and record 
array styles, what is supposed to be an object array is harder to detect. 

The TypeError propagates up from trying to create a record-array 
(apparently that's what was detected).  You can only create record-array 
items from tuples or objects exposing the buffer interface. 

It's interesting that the detection algorithm gets thrown off by the 
addition of another element.  There may be an easy fix there.
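In the meantime, being explicit about the dtype sidesteps the detection 
step entirely; a possible workaround (sketch, untested against 1.0rc1):

import numpy

a = numpy.array(['a', 'b', None, 'c'], dtype=object)
m = numpy.asmatrix(a)
m.shape        # (1, 4)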

-Travis




Re: [Numpy-discussion] Compile with atlas 3.7.17 fails

2006-10-11 Thread Travis Oliphant
Hanno Klemm wrote:

>Hi, 
>
>I don't know if this is a bug or just me doing something wrong (I
>suspect the latter). I try to compile numpy-1.0rc1 with python2.5 and
>atlas 3.7.17.
>
>I have build the atlas library myself, it doesn't give any errors
>under make test or make pttest, so it seems to be okay. if I try to
>build numpy I get the following error:
>
>creating build/temp.linux-x86_64-2.5/numpy/core/blasdot
>compile options: '-DATLAS_INFO="\"3.7.17\"" -Inumpy/core/blasdot
>-I/scratch/python2.5/include -Inumpy/core/include
>-Ibuild/src.linux-x86_64-2.5/numpy/core -Inumpy/core/src
>-Inumpy/core/include -I/scratch/python2.5/include/python2.5 -c'
>gcc: numpy/core/blasdot/_dotblas.c
>gcc -pthread -shared
>build/temp.linux-x86_64-2.5/numpy/core/blasdot/_dotblas.o
>-L/scratch/python2.5/lib -lcblas -latlas -o
>build/lib.linux-x86_64-2.5/numpy/core/_dotblas.so
>/usr/bin/ld: /scratch/python2.5/lib/libcblas.a(cblas_dgemm.o):
>relocation R_X86_64_32 can not be used when making a shared object;
>recompile with -fPIC
>  
>

This may be part of your problem.  It looks like the linker is having 
a hard time making use of your compiled extension in a shared library.  
Perhaps you should make sure -fPIC is on when you compile atlas (I'm not 
sure how to do that --- perhaps setting CCFLAGS environment variable to 
include -fPIC would help).

-Travis




Re: [Numpy-discussion] asmatrix and asarray exception

2006-10-11 Thread Travis Oliphant
Charles R Harris wrote:

>
>
> On 10/11/06, *Keith Goodman* <[EMAIL PROTECTED]> wrote:
>
> On 10/11/06, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > This works:
> >
> > >> M.asmatrix(['a', 'b', None])
> > matrix([[a, b, None]], dtype=object)
> >
> > But this doesn't:
> >
> > >> M.asmatrix(['a', 'b', None, 'c'])
> > TypeError: expected a readable buffer object
> >
>
>
> As a side observation, I note that the 'None' is also non-printing:
>
> >>> a = asarray(['a', 'b', None, 'c'], dtype=object)
> >>> a[2]
> >>> str(a[2])
> 'None'
>
> I wonder if this should be changed?

That's Python's decision.  You are getting back the None object when you 
access element a[2], and the interactive prompt never prints a bare None.  
Thus, there is nothing for NumPy to change.

-Travis




Re: [Numpy-discussion] cannot import Numeric

2006-10-11 Thread Travis Oliphant
Carl Wenrich wrote:

> thanks, but actually it's the other applications i want to use that 
> have the 'import Numeric' line in them. i'm sure others have noted 
> this before. what's the normal procedure?


You must install Numeric if a package needs Numeric.  As far as Python 
is concerned NumPy is a separate package. Packages must be "ported" to 
use numpy.  Please encourage the package author to port.  Help is 
available for open source packages.  Just ask on the list.

-Travis




[Numpy-discussion] Things to address for Py3K

2006-10-11 Thread Travis Oliphant

Hi all,

Py3K is undergoing active development.  This gives us an opportunity to 
discuss more significant changes to the language that might improve the 
experience of NumPy users. 

We should form a list and start commenting on the py3k mailing lists 
about what changes would be most  helpful for our community.

Please provide examples of changes to Python that you think might help us.

A couple on my short list

1) Adding a *few* new infix operators.

   a) an extra multiplication operator to distinguish between 
element-by-element and dot
   b) extending 'and' and 'or' to allow element-by-element logical 
operations or adding && and ||

2) Lowering the precedence of & so that a > 8 & a < 10  works as you 
would expect.
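To illustrate item 2 (a is just a sample array; with the current precedence 
you have to parenthesize):

import numpy

a = numpy.arange(20)

(a > 8) & (a < 10)    # works: element-by-element AND of two boolean arrays
# a > 8 & a < 10      # parses as a > (8 & a) < 10, a chained comparison,
                      # which raises "truth value is ambiguous" for arrays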






Re: [Numpy-discussion] Things to address for Py3K

2006-10-11 Thread Travis Oliphant
Christopher Barker wrote:

>Travis Oliphant wrote:
>  
>
>>A couple on my short list
>>
>>1) Adding a *few* new infix operators.
>>
>>   a) an extra multiplication operator to distinguish between 
>>element-by-element and dot
>>   b) extending 'and' and 'or' to allow element-by-element logical 
>>operations or adding && and ||
>>
>>2) Lowering the precedence of & so that a > 8 & a < 10  works as you 
>>would expect.
>>
>>
>
>Maybe this goes without saying, but:
>
>3) Inclusion of an nd-array type in the standard lib!
>
>(or at the very least, an nd-array protocol)
>  
>
Work on an nd-array protocol to extend the buffer protocol is occurring 
right now.  I think this will be better in the end than a standard 
nd-array type. 

I think a multi-dimensional object array would at least be a nice step.

There are enough differences between lists and 1-d arrays though, that 
I'm not sure the accepted multi-dimensional object array would just be 
the NumPy version.  

-Travis




Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
[EMAIL PROTECTED] wrote:

>Hi,
>
>I have recieved the following note from a user:
>
>"""
>In SciPy 0.3.x the ufuncs were overloaded by more "intelligent" versions.
>A very attractive feature was that sqrt(-1) would yield 1j as in Matlab.
>Then you can program formulas directly (e.g., roots of a 2nd order
>polynomial) and the right answer is always achieved. In the Matlab-Python
>battle in mathematics education, this feature is important.
>
>Now in SciPy 0.5.x sqrt(-1) yields nan. A lot of code we have, especially
>for introductory numerics and physics courses, is now broken.
>This has already made my colleagues at the University skeptical to
>Python as "this lack of backward compatibility would never happen in Matlab".
>  
>
This was a consequence of moving scipy_base into NumPy but not exposing 
the scimath library in NumPy.   It would be a very easy thing to put 
from numpy.lib.scimath import *
into the scipy name-space.

I'm supportive of that as a backward-compatibility measure.

>Another problem related to Numeric and numpy is that in these courses we
>use ScientificPython several places, which applies Numeric and will
>continue to do so. You then easily get a mix of numpy and Numeric
>in scripts, which may cause problems and at least extra overhead.
>Just converting to numpy in your own scripts isn't enough if you call
>up libraries using and returning Numeric.
>  
>
>"""
>
>I wonder, what are the reasons that numpy.sqrt(-1) returns nan?
>  
>
Because that is backwards compatible.  You have to construct a 
function-wrapper in order to handle the negative case correctly.  The 
function wrapper is going to be slower.  Thus, it is placed in a 
separate library.

>Could sqrt(-1) made to return 1j again? 
>
Not in NumPy.  But, in scipy it could.

>If not, shouldn't
>  
>
>numpy.sqrt(-1) raise a ValueError instead of returning silently nan?
>  
>
This is user adjustable.  You change the error mode to raise on 
'invalid' instead of passing silently, which is the current default.

-Travis





Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
Sven Schreiber wrote:

>>This is user adjustable.  You change the error mode to raise on 
>>'invalid' instead of pass silently which is now the default.
>>
>>-Travis
>>
>>
>>
>
>Could you please explain how this adjustment is done, or point to the
>relevant documentation.
>  
>

import numpy

numpy.sqrt(-1)                   # returns nan silently (the default)

old = numpy.seterr(invalid='raise')
numpy.sqrt(-1)                   # now raises FloatingPointError

numpy.seterr(**old)              # restores error-modes for the current thread
numpy.sqrt(-1)                   # back to nan







Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
Fernando Perez wrote:

>On 10/11/06, Travis Oliphant <[EMAIL PROTECTED]> wrote:
>
>  
>
>>[EMAIL PROTECTED] wrote:
>>
>>
>>>Could sqrt(-1) made to return 1j again?
>>>
>>>  
>>>
>>Not in NumPy.  But, in scipy it could.
>>
>>
>
>Without taking sides on which way to go, I'd like to -1 the idea of a
>difference in behavior between numpy and scipy.
>
>IMHO, scipy should be within reason a strict superset of numpy.
>  
>
This was not the relationship of scipy to Numeric.

For me, it's the fact that scipy *used* to have the behavior that

scipy.sqrt(-1) returns 1j

and now doesn't; that is the kicker. 

On the other hand, requiring all calls to numpy.sqrt to go through an 
"argument-checking" wrapper is a bad idea as it will slow down other uses.

So, I committed a change to scipy to bring it back into compatibility 
with 0.3.2.



>Gratuitious differences in behavior like this one are going to drive
>us all mad.
>
>There are people who import scipy for everything, others distinguish
>between numpy and scipy, others use numpy alone and at some point in
>their life's code they do
>
>import numpy as N -> import scipy as N
>
>because they start needing stuff not in plain numpy.  Having different
>APIs and behaviors appear there is, I think, a Seriously Bad Idea
>(TM).
>  
>
I think the SBI is mixing numpy and scipy gratuitously (which I admit I 
have done in the past).  I'm trying to repent.

-Travis





Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
Stefan van der Walt wrote:

>I agree with Fernando on this one.
>
>Further, if I understand correctly, changing sqrt and power to give
>the right answer by default will slow things down somewhat.  But is it
>worth sacrificing intuitive usage for speed?
>  
>
For NumPy, yes. 

This is one reason that NumPy by itself is not a MATLAB replacement. 

>N.power(2,-2) == 0
>
>and
>
>N.sqrt(-1) == nan
>
>just doesn't feel right.  
>

Only because your expectations are that NumPy *be* a MATLAB 
replacement.  The problem is that making it one would sacrifice too 
much.   And we all realize that NumPy needs more stuff added on top of it 
to be like IDL/MATLAB, such as SciPy, Matplotlib, IPython, etc.

>Why not then have
>
>N.power(2,-2) == 0.25
>N.sqrt(-1) == 1j
>
>and write a special function that does fast calculation of
>square-roots for positive values?
>  
>

We've already done this.  The special functions are called

numpy.power
numpy.sqrt

(notice that if you do numpy.sqrt(-1+0j) you get the "expected" answer 
emphasizing that numpy does no "argument" checking to determine the output).

The "intuitive" functions (which must do argument checking) are (in 
numpy.lib.scimath) but exported as

scipy.power (actually I need to check that one...)
scipy.sqrt

What could be simpler?  ;-)
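For example (a small sketch; numpy.lib.scimath is the module that backs the 
scipy names here):

import numpy
from numpy.lib import scimath

numpy.sqrt(-1.0)       # nan: the output type matches the (real) input type
numpy.sqrt(-1 + 0j)    # 1j: complex input gives a complex result
scimath.sqrt(-1.0)     # 1j: checks the values and upcasts when needed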

-Travis





Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
[EMAIL PROTECTED] wrote:

>
>On Wed, 11 Oct 2006, Travis Oliphant wrote:
>
>  
>
>>On the other hand requiring all calls to numpy.sqrt to go through an 
>>"argument-checking" wrapper is a bad idea as it will slow down other uses.
>>
>>
>
>Interestingly, in worst cases numpy.sqrt is approximately ~3 times slower
>than scipy.sqrt on negative input but ~2 times faster on positive input:
>
>In [47]: pos_input = numpy.arange(1,100,0.001)
>
>In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
>1000 loops, best of 3: 4.68 ms per loop
>
>In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
>1000 loops, best of 3: 10 ms per loop
>  
>

This is the one that concerns me.  Slowing down everybody who knows they 
have positive values just for the people who don't seems problematic.

>In [50]: neg_input = -pos_input
>
>In [52]: %timeit -n 1000 b=numpy.sqrt(neg_input)
>1000 loops, best of 3: 99.3 ms per loop
>
>In [53]: %timeit -n 1000 b=scipy.sqrt(neg_input)
>1000 loops, best of 3: 29.2 ms per loop
>
>nan's are making things really slow,
>  
>
Yeah, they do.   This actually makes the case for masked arrays, rather 
than using NaNs.


-Travis




Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
Tim Hochberg wrote:

>With python 2.5 out now, perhaps it's time to come up with a with 
>statement context manager. Something like:
>
>from __future__ import with_statement
>import numpy
>
>class errstate(object):
>    def __init__(self, **kwargs):
>        self.kwargs = kwargs
>    def __enter__(self):
>        self.oldstate = numpy.seterr(**self.kwargs)
>    def __exit__(self, *exc_info):
>        numpy.seterr(**self.oldstate)
>
>a = numpy.arange(10)
>a/a                             # ignores divide by zero
>with errstate(divide='raise'):
>    a/a                         # raises an exception on divide by zero
># Would ignore divide by zero again if we got here.
>
>-tim
>
>  
>

This looks great.  I think most people aren't aware of the with 
statement and what it can do (I'm only aware because of your posts, for 
example). 

So, what needs to be added to your example in order to just add it to 
numpy?

-Travis




Re: [Numpy-discussion] incrementing along a diagonal

2006-10-11 Thread Travis Oliphant
David Novakovic wrote:
> Hi,
>
> i'm moving some old perl PDL code to python. I've come across a line
> which changes values in a diagonal line accross a matrix.
>
> matrix.diagonal() returns a list of values, but making changes to these
> does not reflect in the original (naturally).
>
> I'm just wondering if there is a way that i can increment all the values
> along a diagonal?
>   

You can refer to a diagonal using a flattened index with an element skip 
equal to the number of columns plus 1.

Thus,

a.flat[::a.shape[1]+1] += 1

will increment the elements of a along the main diagonal.
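For example (a small sketch with a made-up array):

import numpy

a = numpy.zeros((4, 4), dtype=int)
a.flat[::a.shape[1] + 1] += 1     # touches a[0,0], a[1,1], a[2,2], a[3,3]
a.diagonal()                      # array([1, 1, 1, 1])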

-Travis






Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
Greg Willden wrote:
> On 10/11/06, *Travis Oliphant* <[EMAIL PROTECTED]> wrote:
>
> Stefan van der Walt wrote:
> >Further, if I understand correctly, changing sqrt and power to give
> >the right answer by default will slow things down somewhat.  But
> is it
> >worth sacrificing intuitive usage for speed?
> >
> For NumPy, yes.
>
> This is one reason that NumPy by itself is not a MATLAB replacement.
>
>
> This is not about being a Matlab replacement.
> This is about correctness.
I disagree.   NumPy does the "correct" thing when you realize that sqrt 
is a function that returns the same type as its input.   The field 
over which the operation takes place is defined by the input data-type 
and not the input "values".  Either way can be considered correct 
mathematically.   As Paul said it was a design decision not to go 
searching through the array to determine whether or not there are 
negative numbers in the array.

Of course you can do that if you want and that's what scipy.sqrt does. 
> Numpy purports to handle complex numbers.
> Mathematically, sqrt(-1) is a complex number.
Or, maybe it's undefined if you are in the field of real numbers.  It 
all depends.
> Therefore Numpy *must* return a complex number.
Only if the input is complex.  That is a reasonable alternative to your 
specification.
>
> If Numpy doesn't return a complex number then it shouldn't pretend to 
> support complex numbers.
Of course it supports complex numbers; it just doesn't support automatic 
conversion to complex numbers.  It supports complex numbers the same 
way Python supports them (i.e. you have to use cmath to get sqrt(-1) == 1j).

People can look at this many ways without calling the other way of 
looking at it unreasonable. 

I don't see a pressing need to change this in NumPy, and in fact see 
many reasons to leave it the way it is.   This discussion should move to 
the scipy list because that is the only place where a change could occur.

-Travis.





Re: [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan?

2006-10-11 Thread Travis Oliphant
David Goldsmith wrote:
> Travis Oliphant wrote:
>   
>> [EMAIL PROTECTED] wrote:
>>
>>   
>> 
>>> Could sqrt(-1) made to return 1j again? 
>>>
>>> 
>>>   
>> Not in NumPy.  But, in scipy it could.
>>
>>   
>> 
> Ohmigod!!!  You are definitely going to scare away many, many potential 
> users - if I wasn't obliged to use open source at work, you'd be scaring 
> me away.
Why in the world does it scare you away?  This makes no sense to me.   
If you don't like the scipy version don't use it.   NumPy and SciPy are 
not the same thing.

The problem we have is that the scipy version (0.3.2) already had this 
feature (and Numeric didn't).   What is so new here that is so scary?


-Travis



