Re: [Numpy-discussion] dtype confusion

2007-03-26 Thread Jan Strube

I'm afraid I'm not quite done with this, yet.
There's still more I am confused about, even after reading the manual
(again).
There is something fundamentally weird about record arrays that still
doesn't click for me.
Please have a look at this piece of code:

import numpy as N
newtype = N.dtype([('x', N.float64), ('y', N.float64), ('z', N.float64)])
x = N.zeros((10, 3))
x.dtype = newtype
print x

x = N.zeros((10, 3), dtype=newtype)
print x

I think I understand what's going on here:
x = N.zeros((10, 1), dtype=newtype)
print x

So can I summarize it like this: in the first case a (10, 3) array is
reinterpreted (as with numpy.view) as a record array, while in the second
case I am actually requesting a (10, 3) array of the new dtype, which
itself has three fields? And a record array should not be viewed as a
column-wise array with named columns (although it can be accessed that
way), but rather as an array that groups the columns of each row into a
record, because after turning it into a record array, x is still
C-contiguous and not Fortran-contiguous...
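That summary can be checked directly; a quick sketch of both cases
(variable names are illustrative):

```python
import numpy as np

rec = np.dtype([('x', np.float64), ('y', np.float64), ('z', np.float64)])

a = np.zeros((10, 3))
v = a.view(rec)        # reinterpret the buffer: each 3-float row is one record
print(v.shape)         # (10, 1)

b = np.zeros((10, 3), dtype=rec)   # construct: a 10x3 array *of* records
print(b.shape)         # (10, 3)
print(b['x'].shape)    # (10, 3): 'x' is a field of every element, not a column
```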

Maybe I'm actually starting to understand this a bit.

Anyways, thanks for listening ;-)
Cheers,
   Jan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] New Operators in Python

2007-03-26 Thread René Bastian
Hello,

I am interested in both numarray-type (elementwise) multiplication and
matrix-type multiplication.

But I am not sure that I can buy a Unicode keyboard.

Maybe it would be possible to implement a class with user-definable
signs.

My choice:

a * b  -> numarray-type (elementwise) multiplication
a !* b -> matrix multiplication
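Without new syntax, "user-definable signs" can only recycle Python's
existing operators via a wrapper class. A toy sketch (the class M and
the choice of % as the matrix sign are purely illustrative, not an
existing numpy API):

```python
import numpy as np

class M:
    """Wrapper that keeps '*' elementwise and recycles '%' for matrix multiply."""
    def __init__(self, a):
        self.a = np.asarray(a)
    def __mul__(self, other):
        return M(self.a * other.a)           # elementwise product
    def __mod__(self, other):
        return M(np.dot(self.a, other.a))    # matrix product

a = M([[1, 2], [3, 4]])
b = M([[5, 6], [7, 8]])
ew = (a * b).a   # elementwise: [[5, 12], [21, 32]]
mm = (a % b).a   # matrix:      [[19, 22], [43, 50]]
```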


-- 
René Bastian
http://www.musiques-rb.org
http://pythoneon.musiques-rb.org




[Numpy-discussion] nd_image.affine_transform edge effects

2007-03-26 Thread James Turner
PS... (sorry for all the posts, for anyone who isn't interested...)

> Agreed, it looks like aliasing. Nevertheless, any resampling
> procedure is supposed to deal with this internally, right? Either by
> lowpass filtering (traditional case), or by spline fitting (spline
> case as described by Unser and understood by me) -- it shouldn't be
> letting aliasing bubble through, correct?

In the general case, I don't think it is appropriate for the resampling
procedure to use low-pass filtering internally to avoid artefacts,
except perhaps when downsampling. It probably makes sense for computer
graphics work, but there are cases where the input data are band-limited
to begin with and any degradation in resolution is unacceptable.
Where needed, I think low-pass filtering should either be the
responsibility of the main program or an option. It's not even possible
for the resampling procedure to prevent artefacts in every case, since
the aliasing in a badly undersampled image cannot be removed post
factum (this is for undersampled photos rather than artificial graphics,
which I think are fundamentally different because everything is defined
on the grid, although I haven't sat down and proved it mathematically).
I'm also not sure how the procedure could decide on the level of
smoothing needed for a given dataset without external information.
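The downsampling caveat is easy to demonstrate in one dimension with
plain numpy: decimating a signal that exceeds the new Nyquist limit
leaves a full-strength alias, while even a crude low-pass (a box
average) applied first suppresses it. A sketch:

```python
import numpy as np

# 480 cycles per unit, sampled 1000x per unit; after decimating by 10 the
# effective rate is 100 samples per unit, whose Nyquist limit is 50 cycles.
t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.sin(2 * np.pi * 480 * t)

naive = sig[::10]                             # plain decimation: aliases to 20 cycles
lowpassed = sig.reshape(-1, 10).mean(axis=1)  # box-filter, then decimate

print(np.abs(naive).max())      # near 1: the alias survives at full strength
print(np.abs(lowpassed).max())  # well under 0.1: the filter suppressed it
```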

Of course intermediate-order splines will probably keep everyone happy,
being reasonably robust against ringing effects without causing much
smoothing or interpolation error :-).

By the way, I think you and Stefan might be interested in a medical
imaging paper by Lehmann et al. (1999), which gives a very nice overview
of the properties of different interpolation kernels:

http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/42/17698/00816070.pdf?arnumber=816070

For what it's worth, I'd agree with both of you that the numeric
overflow should be documented if not fixed. It sounds like Stefan has
figured out a solution for it though. If you make sense of the code in
"ni_interpolation.c", Stefan, I'd be very interested in how to make it
calculate one less value at the edges :-).

Cheers,

James.



Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-26 Thread Zachary Pincus
Thanks for the information and the paper link, James. I certainly  
appreciate the perspective, and now see why the anti-aliasing and  
reconstruction filtering might best be left to clients of a  
resampling procedure.

Hopefully at least some of the kinks in the spline interpolation (to  
date: obligate mirror boundary conditions, extra edge values, integer  
overflow) can be smoothed out.
I can't guarantee that I'll have the time or expertise to deal with  
ni_interpolation.c, but I'll try to give it a shot some time here.

Zach


On Mar 26, 2007, at 1:16 AM, James Turner wrote:

> [...]



Re: [Numpy-discussion] New Operators in Python

2007-03-26 Thread dmitrey
Unicode keyboards being sold everywhere is just a matter of time,
and Python's two-symbol operators will soon look obsolete; this will
increase migration from Python to Sun Fortress etc. I took a look at
their Unicode syntax for math formulas:
http://research.sun.com/projects/plrg/faq/NAS-CG.pdf
Compared with Python (or even MATLAB), it looks like a Pentium 4 next to
an Intel 386 processor.
It allows many formulas to be copy-pasted straight from articles,
including "rho", "beta", and other non-ASCII symbols.
Also, implementing Unicode would allow separate operators for, for
example, a MATLAB cross() equivalent (the cross product of vectors).
WBR, D.

René Bastian wrote:
> [...]



Re: [Numpy-discussion] New Operators in Python

2007-03-26 Thread Zachary Pincus
Hi folks,

Sorry to rain on this parade, but unicode variable names and/or other  
syntactic elements have already been rejected for Python 3:

http://www.python.org/dev/peps/pep-3099/
> Python 3000 source code won't use non-ASCII Unicode characters for  
> anything except string literals or comments.
To tell the truth, in my code (for an n=1 example), I use elementwise  
(or broadcasted) multiplication about an order of magnitude more than  
matrix multiplication. Now, there's plenty of linear algebra in my  
work, but it's usually boxed off enough from the rest that converting  
everything to matrices, or using the 'dot' function (etc.), is just  
fine.

Personally, I even prefer the current way that numpy does things --  
I've always *really* disliked matlab's special syntax for elementwise  
multiplication. Now, clearly there are cases where this is handy, but  
at least in looking over my code, I find that those cases are quite  
rare, really.

So, in large part, I really don't feel a strong need for new  
operators in numpy. (And in the rest of python too! The @ decorator  
pie-syntax is quite enough for line-noise characters, in my personal  
opinion. And I think that python-dev pretty well agrees too, based on  
the raging flame wars about adding even that.)

Now, this is of course just my single opinion, but folks should  
recall that even C++, which rarely met a feature that it didn't like,  
drew the line at adding extra syntactic operators to the language  
that existed only to be overridden/implemented by users. (Which is  
precisely what's being proposed here.)

Anyhow, feel free to disagree with me -- I'm no expert here. I'm only  
mentioning this as a public service to make it clear that most of  
what's being proposed in this thread is, for better or worse, 100%  
dead-in-the-water for Python 3, and the rest will have a fairly  
significant uphill battle.

Zach




On Mar 26, 2007, at 2:42 AM, dmitrey wrote:

> [...]



[Numpy-discussion] Modular toolkit for Data Processing 2.1 released!

2007-03-26 Thread Tiziano Zito
MDP version 2.1 and symeig 1.2 have been released!

What's new in version 2.1?
--------------------------

- Fully compatible with NumPy 1.0, the first stable release
  of the descendant of the Numeric Python extension module

- symeig project resumed and updated

- For increased speed, scipy and symeig are automatically used if
  available

- New nodes: Independent Slow Feature Analysis and quadratic forms
  analysis algorithms

- General improvements, several bug fixes, and code cleanups

What is it?
-----------
Modular toolkit for Data Processing (MDP) is a data processing
framework written in Python.

From the user's perspective, MDP consists of a collection of
trainable supervised and unsupervised algorithms that can be combined
into data processing flows. The base of readily available algorithms
includes Principal Component Analysis, two flavors of Independent
Component Analysis, Slow Feature Analysis, Independent Slow Feature
Analysis, and many more.
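As a flavor of the kind of algorithm in that list (this is not MDP's
API, just the core of PCA sketched in plain numpy):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(500, 3) * np.array([5.0, 1.0, 0.1])   # anisotropic point cloud

Xc = X - X.mean(axis=0)                # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)        # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)     # symmetric eigendecomposition
order = np.argsort(evals)[::-1]        # sort components by explained variance
projected = Xc @ evecs[:, order[:2]]   # keep the two leading components
```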

From the developer's perspective, MDP is a framework to make the
implementation of new algorithms easier. MDP takes care of tedious
tasks like numerical type and dimensionality checking, leaving the
developer free to concentrate on the implementation of the training
and execution phases. The new elements then seamlessly integrate with
the rest of the library.

MDP has been written in the context of theoretical research in
neuroscience, but it has been designed to be helpful in any context
where trainable data processing algorithms are used. Its simplicity
on the user side together with the reusability of the implemented
nodes make it also a valid educational tool.

As its user base is increasing, MDP is becoming a common repository
of user-supplied, freely available, Python-implemented data processing
algorithms.

The optional symeig module contains a Python wrapper for the LAPACK
functions to solve the standard and generalized eigenvalue problems
for symmetric (hermitian) positive definite matrices. Those
specialized algorithms give an important speed-up with respect to the
generic LAPACK eigenvalue problem solver used by NumPy.
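For the standard symmetric problem numpy already routes to a specialized
LAPACK path via numpy.linalg.eigh; the generalized problem Ax = lambda*Bx
is what symeig adds on top. A minimal sketch of the standard case:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, v = np.linalg.eigh(A)   # symmetric solver; eigenvalues come back ascending
# For this matrix the exact eigenvalues are 1 and 3.
```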

Resources
---------
Download: http://sourceforge.net/project/showfiles.php?group_id=116959
Homepage: http://mdp-toolkit.sourceforge.net
Mailing list: http://sourceforge.net/mail/?group_id=116959

--

 Tiziano Zito
 Institute for Theoretical Biology
 Humboldt-Universitaet zu Berlin
 Invalidenstrasse, 43
 10115 Berlin, Germany

 Pietro Berkes
 Gatsby Computational Neuroscience Unit
 Alexandra House, 17 Queen Square
 London WC1N 3AR, United Kingdom



[Numpy-discussion] random state not portable?

2007-03-26 Thread Pietro Berkes
Dear Numpy devels,

First of all, thank you for the great job done so far! We've been
updating our MDP module, and had a chance to appreciate it.

We found an issue in the random module: the random state contains an
array of int32 or int64 depending on the architecture, making
it not portable.

For example, try the following on two machines with different
architectures:

Python 2.4.4 (#2, Jan 13 2007, 17:50:26)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import platform
>>> import pickle
>>>
>>> platform.architecture()
('32bit', '')
>>>
>>> state = numpy.random.get_state()
>>> pickle.dump(state, file('/tmp/random_state.pic','w'))
>>> numpy.random.permutation(numpy.arange(10))
array([8, 6, 3, 1, 0, 5, 7, 4, 2, 9])
>>>

Python 2.5 (r25:51908, Mar 19 2007, 13:41:07)
[GCC 4.1.0 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import platform
>>> import pickle
>>>
>>> platform.architecture()
('64bit', 'ELF')
>>>
>>> state = pickle.load(file('/tmp/random_state.pic','r'))
>>> numpy.random.set_state(state)
>>> numpy.random.permutation(numpy.arange(10))
array([3, 9, 4, 8, 1, 6, 2, 0, 5, 7])
>>>

Is there any known workaround?
Thanks!
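For what it's worth, one possible workaround is to normalize the state's
key array to a fixed-width dtype before pickling. A sketch against the
current numpy API (hedged: untested on 2007-era numpy, where set_state
may be stricter; the helper name is illustrative):

```python
import numpy as np

def portable_state(state):
    """Cast the Mersenne Twister key array to uint32 so a pickled state
    loads identically on 32- and 64-bit machines (illustrative helper)."""
    return (state[0], np.asarray(state[1], dtype=np.uint32)) + tuple(state[2:])

np.random.seed(42)
saved = portable_state(np.random.get_state())
a = np.random.permutation(10)

np.random.set_state(saved)      # restore from the normalized state
b = np.random.permutation(10)
assert (a == b).all()           # identical stream after restoring
```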


--

 Tiziano Zito
 Institute for Theoretical Biology
 Humboldt-Universitaet zu Berlin
 Invalidenstrasse, 43
 10115 Berlin, Germany

 Pietro Berkes
 Gatsby Computational Neuroscience Unit
 Alexandra House, 17 Queen Square
 London WC1N 3AR, United Kingdom



Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Colin J. Williams
Bill Baxter wrote:
> On 3/26/07, Colin J. Williams <[EMAIL PROTECTED]> wrote:
>> Bill Baxter wrote:
>>> This may sound silly, but I really think seeing all those brackets is
>>> what makes it feel wrong.   Matlab's  output doesn't put it in your
>>> face that your 4 is really a matrix([[4]]), even though that's what it
>>> is to Matlab.  But I don't see a good way to change that behavior.
>>>
>>> The other thing I find problematic about matrices is the inability to
>>> go higher than 2d.  To me that means that it's impossible to go "pure
>>> matrix" in my code because I'll have to switch back to arrays any time
>>> I want more than 2d (or use a mixed solution like a list of matrices).
>>>  Matlab allows allows >2D.
>>>
>>> --bb
>> "pure matrix" seems to me an area of exploration, does it have any
>> application in numerical computation at this time?
> 
> I'm not sure what you thought I meant, but all I meant by going "pure
> matrix" was having my Numpy code use the 'matrix' type exclusively
> instead of some mix of 'matrix' and the base 'ndarray' type.  
It was a term I had not come across before, but I assumed you were
referring to something like this link - beyond my comprehension:

http://72.14.203.104/search?q=cache:Yu9gbUQEfWkJ:math.ca/Events/winter05/abs/pdf/ma-df.pdf+pure+matrix&hl=en&ct=clnk&cd=4&gl=ca&lr=lang_en

> Things become messy when you mix and match them because you don't know
> any more if an expression like A[1] is going to give you a 1-D thing or
> a 2-D thing, and you can't be sure what A * B will do without always
> coercing A and B.

Yes, to my mind it's best to consider the multi-dimensional array and 
the matrix to be two distinct data types.  In most cases, it's best that 
conversions between the two should be explicit.
> 
>> A list of matrices seems to be a logical structure.
> 
> Yes, and it's the only option if you want to make a list of matrices
> of different shapes, but I frequently have a need for things like a
> list of per-point transformation matrices.  Each column from each of
> those matrices can be thought of as a vector.  Sometimes its
> convenient to consider all the X basis vectors together, for instance,
> which is a simple and efficient M[:,:,0] slice if I have all the data
> in a 3-D array, but it's a slow list comprehension plus matrix
> constructor if I have the matrices in a list -- something like
> matrix([m[:,0] for m in M])
> but that line is probably incorrect.

Logically, this makes sense, where M is a list of matrices.

My guess is that it would be a little faster to build one larger matrix 
and then slice it as needed.
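Bill's comparison can be made concrete with a short sketch (names are
illustrative); the corrected list-comprehension line needs a flatten,
since slicing a matrix column yields an (m, 1) matrix:

```python
import numpy as np

M3 = np.arange(24.0).reshape(4, 3, 2)   # four stacked 3x2 transformation matrices
xs_slice = M3[:, :, 0]                  # all "X basis vectors" at once: shape (4, 3)

mats = [np.matrix(M3[i]) for i in range(4)]                    # same data as a list
xs_list = np.array([np.asarray(m[:, 0]).ravel() for m in mats])

assert xs_slice.shape == (4, 3)
assert (xs_slice == xs_list).all()
```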
> 
>> PyMatrix deals with
>> lists in building a larger matrix from sub-matrices.
>>
>> Suppose that we have matrices A (3, 4), B (3, 6), C (4, 2) and D (4, 8).
>>
>> Then E= M([[A, B], [C, D]]) gives E (7, 10).
> 
> Numpy generally tries to treat all lists and tuples as array literals.
>  That's not likely to change.
That need not be a problem if there is clarity of thinking about the
essential difference between the matrix data type (even if it is built
as a sub-type of the array) and the multi-dimensional array.
> 
> --bb

Colin W.
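The block construction Colin describes for PyMatrix has a direct numpy
counterpart in numpy.bmat; a quick sketch:

```python
import numpy as np

A = np.zeros((3, 4)); B = np.zeros((3, 6))
C = np.zeros((4, 2)); D = np.zeros((4, 8))

E = np.bmat([[A, B], [C, D]])   # assemble a block matrix from sub-matrices
print(E.shape)                  # (7, 10)
```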



Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Colin J. Williams
Alan G Isaac wrote:
> On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:
>> finds itself in basic conflict with the idea that
>> I ought to be able to iterate over the objects in an
>> iterable container.  I mean really, does this not "feel"
>> wrong? ::
>>
>> >>> for item in x: print item.__repr__()
>> ...
>> matrix([[1, 2]])
>> matrix([[3, 4]])
> 
> 
> On Mon, 26 Mar 2007, Bill Baxter apparently wrote:
>> So you're saying this is what you'd find more pythonic?
>> >>> X[1]
>> matrix([2,3])
>> >>> X[:,1]
>> matrix([[3, 4]])
>> Just trying to make it clear what you're proposing.
> 
> 
> No; that is not possible, since a matrix is inherently 2d.
> I just want to get the constituent arrays when I iterate
> over the matrix object or use regular Python indexing, but 
> a matrix when I use matrix/array indexing.  That is ::
> 
> >>> X[1] 
> array([2,3]) 
> >>> X[1,:] 
> matrix([[3, 4]]) 
> 
> That behavior seems completely natural and unsurprising.

Perhaps things would be clearer if we thought of the constituent groups 
of data in a matrix as being themselves matrices.

X[1] could represent the second row of a matrix. A row of a matrix is a 
row vector, a special case of a matrix.  To get an array, I suggest that 
an explicit conversion X[1].A is a clearer way to handle things.

Similarly, X[2, 3] is best returned as a value which is of a Python type.
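For reference, the behaviors under discussion, including the explicit
conversions Colin suggests, look like this in current numpy (a sketch):

```python
import numpy as np

X = np.matrix([[1, 2], [3, 4]])

assert X[1].shape == (1, 2)        # indexing a matrix yields a 1x2 matrix (row vector)
assert X[1].A.shape == (1, 2)      # .A: explicit conversion to a 2-D array
assert X[1].A1.shape == (2,)       # .A1: flattened 1-D array
assert X.A[1].shape == (2,)        # indexing the array view reduces the rank
assert isinstance(X[1, 1], np.integer)   # a single element comes back as a scalar
```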

Colin W.
> 
> 
>> Probably about half the bugs I get from mixing and matching matrix and 
>> array are things like 
>>row = A[i] 
>>... 
>>z = row[2]
>> Which works for an array but not for a matrix. 
> 
> 
> Exactly!
> That is the evidence of a "bad surprise" in the current 
> behavior.  Iterating over a Python iterable should provide 
> access to the contained objects.
> 
> Cheers,
> Alan Isaac



Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Sven Schreiber
Alan G Isaac schrieb:
> Oooops, they should match of course. ::
> >>> X[1]
> array([3,4])
> >>> X[1,:]
> matrix([[3, 4]])
> 
> But again the point is:
> indexing for submatrices should produce matrices.
> Normal Python indexing should access the constituent arrays.
> 

I think this is a tricky business.

There's also the rule "indexing reduces the rank, slicing preserves it".
Numpy-matrices form an exception to this rule, always being 2d, but the
exception is consistently enforced.

Now it seems you want to have an exception from the exception, correct?

Isn't this why the .A1 method for numpy-matrices was introduced even
after 1.0rc?

-sven (matrix fan)


Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Nils Wagner
Charles R Harris wrote:
>
>
> On 3/25/07, *Nils Wagner* <[EMAIL PROTECTED]
> > wrote:
>
> Hi,
>
>
> 
>
> BTW, I can't import scipy.sparse, I get the following error:
>  
> ImportError: cannot import name densetocsr
>
> What am I doing wrong?
>
> Chuck
> 
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>   
This works for me.

Python 2.4.1 (#1, Oct 13 2006, 16:51:58)
[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy import sparse
>>> import scipy.sparse
>>> import scipy
>>> scipy.__version__
'0.5.3.dev2869'

but scipy.test(1) results in
...

==
ERROR: check_matmat (scipy.sparse.tests.test_sparse.test_csc)
--
Traceback (most recent call last):
  File
"/usr/lib64/python2.4/site-packages/scipy/sparse/tests/test_sparse.py",
line 142, in check_matmat
assert_array_almost_equal((a*bsp).todense(), a*b)
  File "/usr/lib64/python2.4/site-packages/numpy/core/defmatrix.py",
line 162, in __mul__
return N.dot(self, other)
ValueError: objects are not aligned

[The same "ValueError: objects are not aligned", raised from
defmatrix.py line 162, also occurs in check_matmat and check_rmatvec
for test_csr and test_dok, in check_rmatvec for test_csc, and in the
sparse mean(axis=0) test, which reaches __mul__ via sparse.py's sum().]

Re: [Numpy-discussion] Modular toolkit for Data Processing 2.1 released!

2007-03-26 Thread Sven Schreiber
Tiziano Zito schrieb:

>  The optional symeig module contains a Python wrapper for the LAPACK
> functions to solve the standard and generalized eigenvalue problems
> for symmetric (hermitian) positive definite matrices. Those
> specialized algorithms give an important speed-up with respect to the
> generic LAPACK eigenvalue problem solver used by NumPy.
> 

This sounds very good. Is symeig completely independent from that other
package, in the sense that it only depends on numpy?

thanks,
Sven


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Colin J. Williams
Alan G Isaac wrote:
>> Alan G Isaac wrote: 
>>> So this :: 
>>> >>> x[1] 
>>> matrix([[1, 0]]) 
>>> feels wrong.  (Similarly when iterating across rows.) 
>>> Of course I realize that I can just :: 
>>> >>> x.A[1] 
>>> array([1, 0]) 
> 
> 
> On Sun, 25 Mar 2007, "Colin J. Williams" apparently wrote: 
>> An array and a matrix are different animals.  Conversion 
>> from one to the other should be spelled out. 
> 
> 
> But you are just begging the question here.
> The question is: when I iterate across matrix rows,
> why am I iterating across matrices and not arrays.
> This seems quite out of whack with general Python practice.
> 
> You cannot just say "conversion should be explicit"
> because that assumes (incorrectly actually) that
> the rows are matrices.  The "conversion should be explicit"
> argument actually cuts in the opposite direction of what
> you appear to believe.

Alan,

Yes, this is where we appear to differ.  I believe that vectors are best
represented as matrices, with a shape of (1, n) or (m, 1).  The choice
between these determines whether we have a row or a column vector.

Thus any (m, n) matrix can be considered as either a collection of 
column vectors or a collection of row vectors.

If the end result is required as an array or a list, this can be done 
explicitly with X[1].A or A[1].tolist().

Here, A is a property of the M (matrix) class.
> 
> Cheers,
> Alan Isaac

A long time ago, you proposed that PyMatrix should provide for matrix 
division in two way, as is done in MatLab.  This was implemented, but 
PyMatrix has not yet been ported to numpy - perhaps this summer.

Regards,

Colin W.



Re: [Numpy-discussion] New Operators in Python

2007-03-26 Thread Steven H. Rogers
Zachary Pincus wrote:
> 
> Anyhow, feel free to disagree with me -- I'm no expert here. I'm only  
> mentioning this as a public service to make it clear that most of  
> what's being proposed in this thread is, for better or worse, 100%  
> dead-in-the-water for Python 3, and the rest will have a fairly  
> significant uphill battle.
> 

Don't disagree about the uphill battle part, but I believe that you're 
overly pessimistic about Unicode.  Unicode variables and operators have 
only been rejected for Python 3.0, though the degree of Unicode support 
is still under discussion.  If the core developers are convinced that 
enough people would use it, and it is implemented in a maintainable way, 
it is certainly possible for 3.1 or 3.2.

# Steve


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan G Isaac
On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote: 
> Perhaps things would be clearer if we thought of the 
> constituent groups of data in a matrix as being themselves 
> matrices. 

This "thinking of" is what you have suggested before.
You need to explain why it is not begging the question.

Cheers,
Alan Isaac






Re: [Numpy-discussion] Adapting algorithm to accept Scipy sparse matrix

2007-03-26 Thread David Koch

Hi again,

I want to select/access several columns from a sparse csc_matrix. The only
way I could think of is the following enormously inefficient algorithm,
which basically initializes a new lil_matrix (for assignments) and loops
over all the specified columns, doing sparse -> dense -> sparse. All this
to work around the inability to use "multi-column" slices on csc_matrices.

def spSelCol(X, A):
    "insert doc string"

    n = size(X, 0)
    d = size(A)
    X = X.tocsc()
    newX = sparse.lil_matrix((d, n))
    for i in range(0, d):
        # sparse -> dense -> sparse: not good!
        newX[i, :] = X[:, A[i]].toarray().flatten()

    return newX.transpose()

Is there any way this operation can be made more efficient, or should I
look elsewhere (the CVXOPT Python toolbox, ...)?

Thanks,


/David


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan G Isaac
> Alan G Isaac schrieb: 
>> >>> X[1] 
>> array([3,4]) 
>> >>> X[1,:] 
>> matrix([[3, 4]]) 

>> But again the point is: 
>> indexing for submatrices should produce matrices. 
>> Normal Python indexing should access the constituent arrays. 

On Mon, 26 Mar 2007, Sven Schreiber apparently wrote: 
> I think this is a tricky business. 
> There's also the rule "indexing reduces the rank, slicing preserves it". 
> Numpy-matrices form an exception to this rule, always being 2d, but the 
> exception is consistently enforced. 
> Now it seems you want to have an exception from the exception, correct? 

What I want: the best design.
I did not claim that the current design is flawed, only that I suspect it is.

Why I wrote: current behavior feels wrong -> suspect design flaw.

What feels wrong: iterating over a container does not give 
access to the contained objects.  This is not Pythonic.

*Symptom* of the underlying problem:
for matrix M, M[0] returns a matrix.

Would the change I suggest mean that the behavior of the 
matrix class deviates less from the array class: yes.

Does this mean the matrix class behavior would be less "consistent"?
I have tried to explain why the answer is "no".

hth,
Alan Isaac





Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Sven Schreiber
Alan G Isaac schrieb:

> 
> What feels wrong: iterating over a container does not give 
> access to the contained objects.  This is not Pythonic.

If you iterate over the rows of the matrix, it feels natural to me to
get the row vectors -- and as you know a 1d-array does not contain the
information of "row-ness", so you need to get a 2d thing to properly
return those "contained objects". So in my view the current behavior
actually does exactly what you're calling for.

(But I admit I'm a 2d fan so much so that I didn't even know that using
a single index is possible with a matrix. I thought one always had to be
explicit about both dimensions... So thanks for pointing this out.
-- BTW, I'm aware that sticking to numpy-matrices exclusively is not
possible. For example, eigenvalues are returned in a 1d-array even if
you pass a matrix, and it's intended behavior.)

cheers,
sven





Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan G Isaac
> Alan G Isaac schrieb: 
>> What feels wrong: iterating over a container does not give 
>> access to the contained objects.  This is not Pythonic. 

On Mon, 26 Mar 2007, Sven Schreiber apparently wrote: 
> If you iterate over the rows of the matrix, it feels 
> natural to me to get the row vectors

Natural in what way?
Again, I am raising the question of what
would be expected from someone familiar with Python.
Abstractly, what do you expect to get when you iterate
over a container?  Seriously.


> But I admit I'm a 2d fan so much so that I didn't even 
> know that using a single index is possible with a matrix.

Exactly.  When you want submatrices, you expect to index for 
them.  EXACTLY!!

Cheers,
Alan Isaac






[Numpy-discussion] Tuning sparse stuff in NumPy

2007-03-26 Thread David Koch

Hi,

I ran some tests on the very same matrices in Matlab/Numpy and it seems that
for sparse matrix multiplication to be faster than dense multiplication -
the degree of sparsity has to be much higher in Numpy than in Matlab. Is
there anything I can tune in the underlying routines? I need good
performance at about 0.02 +- 0.01 nnz elements.

Thanks,

/David



Matrix size: 10k * 1k, times are in seconds:

sparsity: 0.01 nonzero elements
Matlab dense 3.9, sparse 0.09
Numpy dense 2.9, sparse 15.24

sparsity: increase sparsity to 0.001 nnz
Matlab dense 3.9, sparse 0.002
Numpy dense 2.9, sparse 0.27
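A minimal timing sketch along these lines (sizes scaled down from the 10k x 1k case above; assumes a recent NumPy/SciPy and made-up random data):

```python
import time
import numpy as np
from scipy import sparse

# a smaller stand-in for the 10k x 1k matrices in the post
n, m, density = 2000, 400, 0.01
A = sparse.random(n, m, density=density, format='csc', random_state=0)
B = A.toarray()

t0 = time.time(); S = A.T.dot(A); t_sparse = time.time() - t0
t0 = time.time(); D = B.T.dot(B); t_dense = time.time() - t0
print(t_sparse, t_dense)  # timings vary by machine
```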


Re: [Numpy-discussion] Tuning sparse stuff in NumPy

2007-03-26 Thread Robert Cimrman
David Koch wrote:
> Hi,
> 
> I ran some tests on the very same matrices in Matlab/Numpy and it seems
> that
> for sparse matrix multiplication to be faster than dense multiplication -
> the degree of sparsity has to be much higher in Numpy than in Matlab. Is
> there anything I can tune in the underlying routines? I need good
> performance at about 0.02 +- 0.01 nnz elements.

Hi David,

Could you be more specific about which type of sparse matrix storage
you used? They are not equivalent in performance w.r.t. different
aspects: e.g. LIL and DOK matrices are good for populating an empty sparse
matrix, while CSR and CSC should be used for computations.
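A minimal sketch of that division of labor, with made-up entries:

```python
import numpy as np
from scipy import sparse

# populate in LIL format (cheap item assignment)...
M = sparse.lil_matrix((4, 4))
M[0, 1] = 2.0
M[2, 3] = 5.0
M[3, 0] = -1.0

# ...then convert to CSR (or CSC) before doing arithmetic
A = M.tocsr()
y = A.dot(np.ones(4))
print(y)  # [ 2.  0.  5. -1.]
```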

r.



Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Sebastian Haase
On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:
> > Alan G Isaac schrieb:
> >> What feels wrong: iterating over a container does not give
> >> access to the contained objects.  This is not Pythonic.
>
> On Mon, 26 Mar 2007, Sven Schreiber apparently wrote:
> > If you iterate over the rows of the matrix, it feels
> > natural to me to get the row vectors
>
> Natural in what way?
> Again, I am raising the question of what
> would be expected from someone familiar with Python.
> Abstractly, what do you expect to get when you iterate
> over a container?  Seriously.
>
>
> > But I admit I'm a 2d fan so much so that I didn't even
> > know that using a single index is possible with a matrix.
>
> Exactly.  When you want submatrices, you expect to index for
> them.  EXACTLY!!
>
If I may chime in...
I think Sven's argument is on the side saying:
A "matrix" is an object that you expect a certain (mathematical !)
behavior from.
If some object behaves intuitively right -- that's ultimately pythonic !
The clash is, NOT to see a matrix  "just as another container".
Instead a matrix is a mathematical object that has rows and columns.
It is used in a field (lin-alg) where every vector is either a row or
a column vector -- apparently that's a big thing ;-)
The whole reason to add a special matrix class to numpy in the first
place, is to provide a better degree of convenience to lin-alg related
applications.  I would argue that it was just not consistently
considered that this should also come with "a column of a matrix is
something else than a row" -- (1,n) vs. (n,1), and not (n,).

more notes/points:
a) I have never heard about the m.A1 - what is it ?
b) I don't think that if m[1] would return a (rank 2) matrix, that
m[1].A could return a (rank 1) array ...
c) I'm curious if there is a unique way to extend the matrix class
into 3D or ND.
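The (1,n)/(n,1)/(n,) bookkeeping discussed above can be seen directly; a quick sketch using numpy.matrix as it exists today:

```python
import numpy as np

M = np.matrix([[1, 2], [3, 4]])
print(M[0].shape)              # (1, 2) -- a row slice stays a 2-d matrix
print(M[:, 0].shape)           # (2, 1) -- a column slice is (n, 1)
print(np.asarray(M)[0].shape)  # (2,)   -- the plain-array row is 1-d
```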

-Sebastian


Re: [Numpy-discussion] header file location upon installation

2007-03-26 Thread Daniel Wheeler

On Mar 25, 2007, at 9:44 PM, Robert Kern wrote:

For building extensions, either use numpy.distutils, which will  
take care of
everything for you, or use numpy.get_include() to get the directory  
with headers.


get_include() does the trick. Thanks for your help.

--
Daniel Wheeler




Re: [Numpy-discussion] Tuning sparse stuff in NumPy

2007-03-26 Thread David Koch

On 3/26/07, Robert Cimrman <[EMAIL PROTECTED]> wrote:


Could you be more specific on which type of the sparse matrix storage
did you use?




Hi Robert,

I used csc_matrix.


/David


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan G Isaac
On Mon, 26 Mar 2007, Sebastian Haase apparently wrote: 
> A "matrix" is an object that you expect a certain 
> (mathematical !) behavior from.  If some object behaves 
> intuitively right -- that's ultimately pythonic!

The problem is, as I am not the only one to point out,
this particular behavior is NOT intuitively right.


> The clash is, NOT to see a matrix  "just as another 
> container".

But be serious, no object is "just another container".
Again, this just begs the question.
The question is a design question.
E.g., what is the principle of least surprise?


> more notes/points: 
> a) I have never heard about the m.A1 - what is it ?

It returns a 1d array holding the raveled matrix.
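For example (a minimal sketch):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
print(m.A)   # the matrix as a 2-d ndarray
print(m.A1)  # the raveled 1-d ndarray: [1 2 3 4]
```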

> b) I don't think that if m[1] would return a (rank 2) 
> matrix, that m[1].A could return a (rank 1) array ...

It does not, of course.
(But both should, I believe.)

> c) I'm curious if there is a unique way to extend the 
> matrix class into 3D or ND. 

Is that not what an array is for??

Cheers,
Alan Isaac




Re: [Numpy-discussion] Tuning sparse stuff in NumPy

2007-03-26 Thread Robert Cimrman
David Koch wrote:
> On 3/26/07, Robert Cimrman <[EMAIL PROTECTED]> wrote:
>>
>> Could you be more specific on which type of the sparse matrix storage
>> did you use?
> 
> 
> 
> Hi Robert,
> 
> I used csc_matrix.

OK, good. Would you mind measuring csc * csr, csc * csc, csr * csc and
csr * csr? I am curious how this will compare.

r.

ps: this thread might be more appropriate for scipy-user or scipy-dev...


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Colin J. Williams
Alan G Isaac wrote:
> On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote: 
>> Perhaps things would be clearer if we thought of the 
>> constituent groups of data in a matrix as being themselves 
>> matrices. 
> 
> This "thinking of" is what you have suggested before.
> You need to explain why it is not begging the question.
> 
> Cheers,
> Alan Isaac

Perhaps it would now help if you redefined the question.

In an earlier posting, you appeared anxious that the matrix and the 
array behave in the same way.  Since they are different animals, I see 
sameness of behaviour as being lower on the list of desirables than 
fitting the standard ideas of matrix algebra.

Suppose that a is a row vector, b a column vector and A a conforming 
matrix then:
  a * A
  A * b
and  b.T * A are all acceptable operations.

One would expect the iteration over A to return row vectors, represented 
by (1, n) matrices.

Colin W.



[Numpy-discussion] fastest way to do multiplication with diagonal matrices from left or right

2007-03-26 Thread daniel . egloff


Dear list

what is the fastest way to multiply with a diagonal matrix from the left or
right, without building a square matrix from the diagonal?
Here is what I am looking for:

import numpy as N

def diagmult(X, Y):
    """
    Matrix multiplication X*Y where either X or Y is a diagonal matrix.
    """
    if X.ndim == 1 and Y.ndim == 2:
        R = Y.copy()
        for i, d in enumerate(X):
            R[i,:] *= d
        return R
    elif X.ndim == 2 and Y.ndim == 1:
        R = X.copy()
        for i, d in enumerate(Y):
            R[:,i] *= d
        return R
    elif X.ndim == 1 and Y.ndim == 1:
        return X*Y
    else:
        raise ValueError('diagmult dimension mismatch X.ndim = %d, Y.ndim = %d'
                         % (X.ndim, Y.ndim))

Freundliche Grüsse
Daniel Egloff
Zürcher Kantonalbank
Leiter(in) Financial Computing, ZEF

Josefstrasse 222, 8005 Zürich
Telefon 044 292 45 33, Fax 044 292 45 95
Briefadresse: Postfach, 8010 Zürich, http://www.zkb.ch
___

Disclaimer:


Diese Mitteilung ist nur fuer die Empfaengerin / den Empfaenger bestimmt.

Fuer den Fall, dass sie von nichtberechtigten Personen empfangen wird,
bitten wir diese hoeflich, die Mitteilung an die ZKB zurueckzusenden und
anschliessend die Mitteilung mit allen Anhaengen sowie allfaellige Kopien
zu vernichten bzw. zu loeschen. Der Gebrauch der Information ist verboten.


This message is intended only for the named recipient and may contain
confidential or privileged information.

If you have received it in error, please advise the sender by return e-mail
and delete this message and any attachments. Any unauthorised use or
dissemination of this information is strictly prohibited.


Re: [Numpy-discussion] Array of Arrays

2007-03-26 Thread Pauli Virtanen
Alexander Michael kirjoitti:
> On 3/23/07, Nadav Horesh <[EMAIL PROTECTED]> wrote:
>   
>> How about
>>
>>  a = empty((5,7,4))
>>  c = a[...,-1]
>> 
>
> Solely because I want to use the array with code that assumes it is
> working with two-dimensional arrays but yet only performs operations
> on the "outer" two-dimensional array that would be consistent with an
> "inner" array type (i.e. scalar assignment, element-wise
> multiplication, etc.). I own all the code, so perhaps I can replace
> a[mask,-1] with a[mask,-1,...] and such. Hmm. Not bad reminder,
> thanks.
>   

In some cases the 2d code works without modifications also for 
3d-arrays, because in numpy

a[1,2]

is the same as

a[1,2,...]

for a 3d-array.

For example, the examples in your first post run as intended:

 >>> from numpy import *
 >>> a = empty((5,7,4), dtype=float64)
 >>> c = a[:,-1] # shape (5,4) array, i.e. last column of 4 element arrays
 >>> a[0,0] = 2.0
 >>> print a[0,0]
[ 2.  2.  2.  2.]
 >>> a[1,0] = 3.0
 >>> a[0,1] = a[0,0] * a[1,0]
 >>> print a[0,1]
[ 6.  6.  6.  6.]

-- 
Pauli Virtanen


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-26 Thread Zachary Pincus

Hello folks,


Hmm, this is worrisome. There really shouldn't be ringing on
continuous-tone images like Lena  -- right? (And at no step in an
image like that should gaussian filtering be necessary if you're
doing spline interpolation -- also right?)


That's hard to say. Just because it's mainly a continuous-tone image
doesn't necessarily mean it is well sampled everywhere. This depends
both on the subject and the camera optics. Unlike the data I usually
work with, I think everyday digital photographs (probably a photo
scan in the case of Lena) do not generally have the detector sampling
frequency matched to the optical resolution of the image. If that's
true, the presence of aliasing in interpolated images depends on the
structure of the subject and whether the scene has edges or high-
frequency patterns in it.


Based on my reading of the two excellent Unser papers (both the one  
that ndimage's spline code is based on, and the one that Travis  
linked to), it seems like a major point of using splines for  
interpolation is *better* behavior in the case of non-band-limited  
data than the traditional 'anti-aliasing [e.g. lowpass] filter' +  
'resampling' + 'reconstruction [e.g. lowpass] filter' procedure.


That is, based on my (very limited) understanding (I understand the  
basics of the sampling theorem and can follow the Unser paper's  
update thereof, but not much more than that), in the spline case the  
spline fitting step *replaces* the anti-aliasing filter in the above  
procedure as the method for dealing with non-band-limited data. And  
the claim is that it in fact works better in many cases.


So it seems that something is wrong if the spline interpolation tools  
in ndimage only work properly in the band-limited case, or require  
lowpass filtering.




Stefan's rotated Lena example is indeed a bit bizarre on zooming in!
However, the artefacts are clearly localized to distinct edges, so I
suspect this is indeed some kind of aliasing. Moreover, it looks like
Lena has been decimated (reduced in size) prior to the rotation. That
is definitely a good way to get artefacts, unless an anti-aliasing
filter is applied before shrinking the image. My impression is that
this image is probably somewhat undersampled (to understand exactly
what that means, read up on the Sampling Theorem).


Agreed, it looks like aliasing. Nevertheless, any resampling  
procedure is supposed to deal with this internally, right? Either by  
lowpass filtering (traditional case), or by spline fitting (spline  
case as described by Unser and understood by me) -- it shouldn't be  
letting aliasing bubble through, correct?



The first was on Stefan's artificial data which had sharp edges, and
got very nasty ringing artifacts even with 3rd order splines. From
your recollection, is this expected behavior based on splines and the
nature of Stefan's image, or more likely to be a bug?


Your question was aimed at Travis, so I don't want to discourage him
from answering it :-), but looking at this in more detail, I do think
the amplitude of the artefacts here is greater than I might expect due
to ringing with a quadratic b-spline kernel, which I think has minima
with amplitudes <10% of the central peak. There has to be SOME
oscillation, but in Stefan's "rotate_artifacts" example it seems to be
at the level of ~100%. Also, it is not present on one of the inner
edges for some reason. So I do wonder if the algorithm in nd_image is
making this worse than it needs to be.


Good observation! Here are cubic spline fits to a step and a delta  
function, which both have quite small amounts of ringing compared to  
what's observed -- at least on the color images. Maybe 10% ringing in  
each color channel adds up perceptually more than it does  
mathematically?


import numpy, Gnuplot, scipy.interpolate

# step
x = numpy.arange(-10, 10)
y = numpy.where(x > 0, 1, 0)
tck = scipy.interpolate.splrep(x, y, s=0, k=3)
nx = numpy.linspace(-10, 9, 200, True)
ny = scipy.interpolate.splev(nx, tck)
d = Gnuplot.Data(numpy.transpose([x, y]), with='points')
nd = Gnuplot.Data(numpy.transpose([nx,ny]), with='lines')
g = Gnuplot.Gnuplot()
g.plot(d, nd)
ny.min()
# -0.107490563074 -- largest ringing is ~10% of step size

# delta
x = numpy.arange(-10, 10)
y = numpy.where(x == 0, 1, 0)
tck = scipy.interpolate.splrep(x, y, s=0, k=3)
nx = numpy.linspace(-10, 9, 200, True)
ny = scipy.interpolate.splev(nx, tck)
d = Gnuplot.Data(numpy.transpose([x, y]), with='points')
nd = Gnuplot.Data(numpy.transpose([nx,ny]), with='lines')
g = Gnuplot.Gnuplot()
g.plot(d, nd)
ny.min()
# -0.136449610107 -- largest ringing is ~13% of impulse size

(sample plots are attached for reference).

Now let's use ndimage to rotate a step function by 45 degrees, or zoom in on
it:


import numpy, scipy.ndimage
o = numpy.ones((100, 50), dtype=float)
z = numpy.zeros((100, 50), dtype=float)
a = numpy.concatenate([o, z], axis=1)
b = scipy.ndimage.rotate(a, 45, order=3)
b.max()
# 1.08832325055
b.min()
# -0.

Re: [Numpy-discussion] New Operators in Python

2007-03-26 Thread Beliavsky
On Mar 24, 3:48 pm, "Charles R Harris" <[EMAIL PROTECTED]>
wrote:

> > It is somewhat workable as it stands, but I think it would be nicer if
> > we could have some "meta" operator that allowed an alternative
> > definition of major operators.   Something like @*  for example (just
> > picking a character that is already used for decorators).
>
> Yes indeed, this is an old complaint. Just having an infix operator would be
> an improvement:
>
> A dot B dot C

In Fortran 90 and later versions, A*B means element-wise
multiplication, and there exist intrinsic functions such as
matmul(A,B) and dot_product(A,B). One can define operators so that

A .x. B .x. C

is the same as matmul(A,matmul(B,C))

although I am unsure of the order of operations implied by the former
syntax. The Matran http://www.cs.umd.edu/~stewart/matran/Matran.html
package does this. I think Fortran's infix operators are a good
solution but don't know if it can be carried over to Python.



Re: [Numpy-discussion] fastest way to do multiplication with diagonal matrices from left or right

2007-03-26 Thread Pietro Berkes
This is a small function I use to speed up multiplication with diagonal
matrices. I don't know if it's the *fastest* way to do it, but it's
pretty fast.

def mult_diag(d, mtx, left=True):
"""Multiply a full matrix by a diagonal matrix.
This function should always be faster than dot.

Input:
  d -- 1D (N,) array (contains the diagonal elements)
  mtx -- 2D (N,N) array

Output:
  mult_diag(d, mtx, left=True) == dot(diag(d), mtx)
  mult_diag(d, mtx, left=False) == dot(mtx, diag(d))
"""
if left:
return (d*mtx.T).T
else:
return d*mtx
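A quick check of the function above against the dense equivalent (the function is repeated so the sketch is self-contained; the data is made up):

```python
import numpy as np

def mult_diag(d, mtx, left=True):
    """Multiply a full matrix by a diagonal matrix (d holds the diagonal)."""
    if left:
        return (d * mtx.T).T   # scales row i of mtx by d[i]
    else:
        return d * mtx         # scales column j of mtx by d[j]

d = np.array([1.0, 2.0, 3.0])
M = np.arange(9.0).reshape(3, 3)
assert np.allclose(mult_diag(d, M, left=True), np.dot(np.diag(d), M))
assert np.allclose(mult_diag(d, M, left=False), np.dot(M, np.diag(d)))
```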


On Fri, 23 Mar 2007, [EMAIL PROTECTED] wrote:

> 
> 
> Dear list
> 
> what is the fastest way to multiply with a diagonal matrix from left or
> right and without building a square matrix from the diagonal.
> Here is what I am looking for:
> 
> import numpy as N
> 
> def diagmult(X, Y):
>     """
>     Matrix multiplication X*Y where either X or Y is a diagonal matrix.
>     """
>     if X.ndim == 1 and Y.ndim == 2:
>         R = Y.copy()
>         for i, d in enumerate(X):
>             R[i,:] *= d
>         return R
>     elif X.ndim == 2 and Y.ndim == 1:
>         R = X.copy()
>         for i, d in enumerate(Y):
>             R[:,i] *= d
>         return R
>     elif X.ndim == 1 and Y.ndim == 1:
>         return X*Y
>     else:
>         raise ValueError('diagmult dimension mismatch X.ndim = %d, Y.ndim = %d'
>                          % (X.ndim, Y.ndim))
> 
> Freundliche Grüsse
> Daniel Egloff
> Zürcher Kantonalbank
> Leiter(in) Financial Computing, ZEF
> 
> Josefstrasse 222, 8005 Zürich
> Telefon 044 292 45 33, Fax 044 292 45 95
> Briefadresse: Postfach, 8010 Zürich, http://www.zkb.ch
> [corporate disclaimer snipped]


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan G Isaac
On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote: 
> One would expect the iteration over A to return row 
> vectors, represented by (1, n) matrices. 

This is again simple assertion.
**Why** would "one" expect this?
Some people clearly do not.

One person commented that this unexpected behavior was 
a source of error in their code.

Another person commented that they did not even guess that 
such a thing would be possible.

Experience with Python should lead to the ability to 
anticipate the outcome.  Apparently this is not the case.
That suggests a design problem.

What about **Python** would lead us to expect this behavior??

In *contrast*, everyone agrees that for a matrix M,
we should get a matrix from M[0,:].
This is expected and desirable.

Cheers,
Alan Isaac







Re: [Numpy-discussion] fastest way to do multiplication with diagonal matrices from left or right

2007-03-26 Thread Robert Kern
[EMAIL PROTECTED] wrote:
> Dear list
> 
> what is the fastest way to multiply with a diagonal matrix from left or
> right and without building a square matrix from the diagonal.

Use broadcasting to do your work for you.

  from numpy import array, newaxis

  diags = array([...])
  mymatrix = array([[...]])

  # From the right:
  mymatrix * diags

  # From the left:
  diags[:,newaxis] * mymatrix
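A quick sanity check of those broadcasting identities against numpy.diag, with made-up data:

```python
import numpy as np

d = np.array([2.0, 3.0, 4.0])
M = np.arange(9.0).reshape(3, 3)

left = d[:, np.newaxis] * M   # same result as dot(diag(d), M)
right = M * d                 # same result as dot(M, diag(d))

assert np.allclose(left, np.dot(np.diag(d), M))
assert np.allclose(right, np.dot(M, np.diag(d)))
```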

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


Re: [Numpy-discussion] Tuning sparse stuff in NumPy

2007-03-26 Thread David Koch

Ok,

I did and the results are:
csc * csr: 372.601957083
csc * csc: 3.90811300278
csr * csc: 15.3202679157
csr * csr:  3.84498214722

Mhm, quite insightful. Note that in an operation X.transpose() * X, where X
is a csc_matrix, X.transpose() is automatically cast to csr_matrix. A
re-cast to csc makes the whole operation faster. It's still about 1000 times
slower than Matlab but 4 times faster than before.


Note that .transpose() already switches the storage format (csc <-> csr).

On 3/26/07, Robert Cimrman <[EMAIL PROTECTED]> wrote:


David Koch wrote:
> On 3/26/07, Robert Cimrman <[EMAIL PROTECTED]> wrote:
>>
>> Could you be more specific on which type of the sparse matrix storage
>> did you use?
>
>
>
> Hi Robert,
>
> I used csc_matrix.

OK, good. Would you mind measuring csc * csr, csc * csc, csr * csc and
csr * csr? I am curious how this will compare.

r.

ps: this thread might be more appropriate for scipy-user or scipy-dev...


Re: [Numpy-discussion] New Operators in Python

2007-03-26 Thread tan2

On 3/26/07, René Bastian <[EMAIL PROTECTED]> wrote:


May be it would be possible to implement a class with
user "definissable" (?) signs.



Yes, it is possible and has been done.
See this recipe, which defines an Infix operator class that can be used either as:

x |op| y
or:
x <<op>> y


http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/384122
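The core of that recipe is small enough to sketch here (simplified from the recipe; shown with plain Python lists for a dot product, since wrapping it around ndarrays needs care -- ndarray defines | itself):

```python
class Infix(object):
    """x |op| y  ->  op.function(x, y)  (simplified from the ASPN recipe)."""
    def __init__(self, function):
        self.function = function
    def __ror__(self, other):        # x | op  ->  partially applied Infix
        return Infix(lambda y: self.function(other, y))
    def __or__(self, other):         # (x | op) | y  ->  function(x, y)
        return self.function(other)

dot = Infix(lambda a, b: sum(x * y for x, y in zip(a, b)))
print([1, 2, 3] |dot| [4, 5, 6])  # 32
```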

Eh Tan


Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Charles R Harris

On 3/26/07, Nils Wagner <[EMAIL PROTECTED]> wrote:


Charles R Harris wrote:
>
>
> On 3/25/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
>




Ok, things should be working now. In order to get rid of the scipy problems
I just needed to rm the scipy package in site-packages and then reinstall.

I still need to figure out a real fix down in the dot routine because

In [1]: dot(mat(eye(2)),ones(2))
Out[1]: matrix([[ 1.,  1.]])

Which is incorrect. But as long as you use * for matrices I think things are
working correctly, or at least correctly by numpy convention.

Chuck


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Charles R Harris

On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:


On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote:
> One would expect the iteration over A to return row
> vectors, represented by (1, n) matrices.

This is again simple assertion.
**Why** would "one" expect this?
Some people clearly do not.



Well, and what *should* they expect? I think it is expected because the for
iterates over rows and rows of matrices are 1xn. Matrices and arrays, as has
been stated, are different animals. Probably it would have been best to
stick with arrays and I suspect that matrices appeared because of the dearth
of Python operators, in particular to make matrix multiplication simpler. On
the other hand, certain errors slip by when one is implementing matrix
algebra with arrays, but they can be avoided by never using 1-d vectors. So
all this mess results from catering to the matrix community. Matlab has the
opposite problem, multidimensional arrays were tacked on later and they
don't operate properly with everything.

Chuck

One person commented that this unexpected behavior was

a source of error in their code.

Another person commented that they did not even guess that
such a thing would be possible.

Experience with Python should lead to the ability to
anticipate the outcome.  Apparently this is not the case.
That suggests a design problem.

What about **Python** would lead us to expect this behavior??

In *contrast*, everyone agrees that for a matrix M,
we should get a matrix from M[0,:].
This is expected and desirable.

Cheers,
Alan Isaac







Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Travis Oliphant
Charles R Harris wrote:

>
>
> On 3/26/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
>
> Charles R Harris wrote:
> >
> >
> > On 3/25/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> >
>
>  
>
> Ok, things should be working now. In order to get rid of the scipy 
> problems I just needed to rm the scipy package in site-packages and 
> then reinstall.
>
> I still need to figure out a real fix down in the dot routine because


No, don't fix the dot routine.  Remember the dot routine should accept 
mixed 2-d arrays and 1-d arrays because it is more general than matrix 
multiplication.

-Travis



Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan G Isaac
>> On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote: 
>>> One would expect the iteration over A to return row 
>>> vectors, represented by (1, n) matrices. 


> On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote: 
>> This is again simple assertion. 
>> **Why** would "one" expect this? 
>> Some people clearly do not. 


On Mon, 26 Mar 2007, Charles R Harris apparently wrote: 
> Well, and what should they expect.


Since matrices are an iterable Python object,
we *expect* to iterate over the contained objects.
(Arrays.)  I am not sure why this is not evident to all,
but it is surely the sticking point in this discussion.

A matrix is not a container of matrices.
That it acts like one is surprising.
Surprises are bad unless they have a clear justification.
Perhaps a clear justification exists,
but it has not been offered in this discussion.

Cheers,
Alan Isaac




Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Charles R Harris

On 3/26/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:


Charles R Harris wrote:

>
>
> On 3/26/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
>
> Charles R Harris wrote:
> >
> >
> > On 3/25/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> >
>
>  
>
> Ok, things should be working now. In order to get rid of the scipy
> problems I just needed to rm the scipy package in site-packages and
> then reinstall.
>
> I still need to figure out a real fix down in the dot routine because


No, don't fix the dot routine.  Remember the dot routine should accept
mixed 2-d arrays and 1-d arrays because it is more general than matrix
multiplication.



Sure, but it can't return a row vector when multiplying a 1-d vector. It
just ain't mathematically possible no matter how the 1-d vector is
interpreted. It needs to return either a column vector or a 1-d vector.
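For reference, that is what plain arrays already do (a quick check):

```python
import numpy as np

# 2-d array times 1-d array: dot returns a 1-d vector, not a row matrix
r = np.dot(np.eye(2), np.ones(2))
print(r.shape)  # (2,)
```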

Chuck


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Zachary Pincus
> Since matrices are an iterable Python object,
> we *expect* to iterate over the contained objects.
> (Arrays.)  I am not sure why this is not evident to all,
> but it is surely the sticking point in this discussion.
>
> A matrix is not a container of matrices.
> That it acts like one is surprising.
> Surprises are bad unless they have a clear justification.
> Perhaps a clear justification exists,
> but it has not been offered in this discussion.

I think that a clear justification has been offered several times on  
the list recently, though maybe not all in this thread.

Matrices in numpy seem to exist as a construct to facilitate linear  
algebra. One such property is that row- and column-vector slices of  
matrices are (1xN) or (Nx1) matrices, because otherwise common linear  
algebra things -- like how you multiply a row-vector or a column  
vector by a matrix, and whether and when it needs to be transposed --  
do not translate properly from "linear algebra notation" like in  
textbooks and papers.

Once the matrix class is committed to providing row- and column- 
vector slices as other matrices, it makes no sense to have iteration  
over the matrix provide a different data type than slicing.

Basically, my rule of thumb has been to *only* use matrices when I'm  
copying linear algebra operations out of textbooks and papers, and I  
want the notations to align. Doing other, non-linear-algebra  
operations with matrices -- like iterating over their elements -- is  
not really worth it.
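The distinction Zach describes shows up in two lines (np.matrix is used here purely for illustration):

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
m = np.matrix(a)

# Iterating (or row-indexing) a 2-d array yields 1-d arrays...
row_a = next(iter(a))
print(row_a.shape)                       # (3,)

# ...while iterating a matrix yields (1, n) matrices, consistent
# with how matrix slicing behaves (m[0] is also a (1, 3) matrix).
row_m = next(iter(m))
print(type(row_m).__name__, row_m.shape) # matrix (1, 3)
```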

There's a meta question, of course: should things be changed to make  
it "worth it" to do "pythonic" tasks with matrices? Should there be  
special elementwise vs. matrix-operation operators? Should numpy work  
a lot more like matlab? That discussion is on-going in another  
thread, I think.

Zach


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Travis Oliphant
Alan G Isaac wrote:

> >>> On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote: 
> >>>> One would expect the iteration over A to return row 
> >>>> vectors, represented by (1, n) matrices. 
>
> >> On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote: 
> >>> This is again simple assertion. 
> >>> **Why** would "one" expect this? 
> >>> Some people clearly do not. 
>
> > On Mon, 26 Mar 2007, Charles R Harris apparently wrote: 
> > > Well, and what should they expect.
>
> Since matrices are an iterable Python object,
> we *expect* to iterate over the contained objects.
> (Arrays.)  I am not sure why this is not evident to all,
> but it is surely the sticking point in this discussion.
>
> A matrix is not a container of matrices.
> That it acts like one is surprising.
> Surprises are bad unless they have a clear justification.
> Perhaps a clear justification exists,
> but it has not been offered in this discussion.
>

It actually has been offered.   You just don't accept it. 

Matrices are containers of matrices.  

If M is an (mxn) matrix then M[0] is a (1xn) matrix. 

Viewing this 1xn matrix as a 1-d array loses its row-vectorness. 

This seems perfectly logical and acceptable to me.  I'm waiting for a 
better explanation as to why this is not acceptable.

Arguments that rest on what is and what isn't "Pythonic" are not very 
persuasive as this is very often in the eye of the beholder.

-Travis



Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Charles R Harris

On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:


>> On Mon, 26 Mar 2007, "Colin J. Williams" apparently wrote:
>>> One would expect the iteration over A to return row
>>> vectors, represented by (1, n) matrices.


> On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:
>> This is again simple assertion.
>> **Why** would "one" expect this?
>> Some people clearly do not.


On Mon, 26 Mar 2007, Charles R Harris apparently wrote:
> Well, and what should they expect.


Since matrices are an iterable Python object,
we *expect* to iterate over the contained objects.
(Arrays.)  I am not sure why this is not evident to all,
but it is surely the sticking point in this discussion.

Matrices don't contain arrays. Indeed, in linear algebra they are elements
of the tensor product of a vector space with its dual, so one would expect
either row or column vectors depending on the index position.

But this discussion is not really relevant at this point. What happens is a
convention and the numpy convention is out there. It isn't going to change
now, it would break too much code.

Chuck


Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Travis Oliphant
Charles R Harris wrote:

>
>
> On 3/26/07, *Travis Oliphant* <[EMAIL PROTECTED] 
> > wrote:
>
> Charles R Harris wrote:
>
> >
> > On 3/26/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
> >
> > Charles R Harris wrote:
> > >
> > > On 3/25/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
> > >
> > > Hi,
> > >
> >
> > Ok, things should be working now. In order to get rid of the scipy
> > problems I just needed to rm the scipy package in site-packages and
> > then reinstall.
> >
> > I still need to figure out a real fix down in the dot routine
> because
>
>
> No, don't fix the dot routine.  Remember the dot routine should accept
> mixed 2-d arrays and 1-d arrays because it is more general than matrix
> multiplication.
>
>
> Sure, but it can't return a row vector when multiplying a 1-d vector. 
> It just ain't mathematically possible no matter how the 1-d vector is 
> interpreted. It needs to return either a column vector or a 1-d vector.

Ah.. I see what your point is. 

How to fix that.   The problem is choosing the Python type of the output 
array.  It's being selected as matrix right now because of the 
precedence of the matrix subtype, but it should be chosen as array in 
this instance.  Perhaps we should make dot always choose the array as 
the output type??

-Travis



[Numpy-discussion] Fixed scalar coercion model in NumPy

2007-03-26 Thread Travis Oliphant
I've finally made the changes to fix the scalar coercion model problems 
in NumPy 1.0.1

Now, scalar coercion rules only apply when involved types are of the 
same basic "kind".

Thus,

array([1,2,3],int8)*10

returns an int8 array

but

array([1,2,3],int8)*10.0

returns a float64 array.

If you want a float32 array you must use

array([1,2,3],int8)*float32(10.0)
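The three cases above can be written as a quick self-check (the resulting dtypes are the same under both the coercion model described here and current NumPy promotion rules):

```python
import numpy as np

a8 = np.array([1, 2, 3], np.int8)

# Same-kind scalar: the array dtype wins, no upcast to the
# default integer.
assert (a8 * 10).dtype == np.int8

# Scalar of a higher kind (float vs. int): ordinary promotion
# applies and the result is the default float64.
assert (a8 * 10.0).dtype == np.float64

# To get a float32 result, pass an explicit float32 scalar.
assert (a8 * np.float32(10.0)).dtype == np.float32

print("all coercion checks passed")
```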


This is actually a behavioral change which is why I asked earlier if we 
should change it.  However, the previous behavior was not documented 
anywhere and any previous discussion on this point should have been 
interpreted as the behavior now in SVN.

It's also a rare use-case and so should not create too many issues.
In running tests with NumPy and SciPy there appear to be three tests in 
ndimage breaking now.

I really do want to get 1.0.2 out the door soon.  What still needs to be 
fixed before then?

-Travis



Re: [Numpy-discussion] Fixed scalar coercion model in NumPy

2007-03-26 Thread Perry Greenfield
Great!

On Mar 26, 2007, at 4:52 PM, Travis Oliphant wrote:

> I've finally made the changes to fix the scalar coercion model  
> problems
> in NumPy 1.0.1
>
> Now, scalar coercion rules only apply when involved types are of the
> same basic "kind".
[...]


Re: [Numpy-discussion] Fixed scalar coercion model in NumPy

2007-03-26 Thread Travis Oliphant
Perry Greenfield wrote:

>Great!
>
>On Mar 26, 2007, at 4:52 PM, Travis Oliphant wrote:
>
>  
>
>>I've finally made the changes to fix the scalar coercion model  
>>problems
>>in NumPy 1.0.1
>>
>>Now, scalar coercion rules only apply when involved types are of the
>>same basic "kind".
>>
>>

Actually, the rule is better stated:  the scalar coercion rules apply 
when the involved types are of the same basic "kind", or when the scalar 
is of a "lesser" kind than the array.

Thus,

array([1,2,3],float32)*10    produces a float32 array
array([1,2,3],float32)*10.0  produces a float32 array
array([1,2,3],float32)*10.0j produces a complex128 array

-Travis



Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Charles R Harris

On 3/26/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:


Charles R Harris wrote:

>
>
> On 3/26/07, *Travis Oliphant* <[EMAIL PROTECTED]
> > wrote:
>
> Charles R Harris wrote:
>
> >
> > On 3/26/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
> >
> > Charles R Harris wrote:
> > >
> > > On 3/25/07, *Nils Wagner* <[EMAIL PROTECTED]> wrote:
> > >
> > > Hi,
> > >
> >
> > Ok, things should be working now. In order to get rid of the scipy
> > problems I just needed to rm the scipy package in site-packages
and
> > then reinstall.
> >
> > I still need to figure out a real fix down in the dot routine
> because
>
>
> No, don't fix the dot routine.  Remember the dot routine should
accept
> mixed 2-d arrays and 1-d arrays because it is more general than
matrix
> multiplication.
>
>
> Sure, but it can't return a row vector when multiplying a 1-d vector.
> It just ain't mathematically possible no matter how the 1-d vector is
> interpreted. It needs to return either a column vector or a 1-d vector.

Ah.. I see what your point is.

How to fix that.   The problem is choosing the Python type of the output
array.  It's being selected as matrix right now because of the
precedence of the matrix subtype, but it should be chosen as array in
this instance.  Perhaps we should make dot always choose the array as
the output type??



I think that might be the simplest thing, dot overrides subtypes. BTW, here
is another ambiguity

In [6]: dot(array([[1]]),ones(2))
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)

/home/charris/

ValueError: matrices are not aligned

Note that in this case dot acts like the rhs is always a column vector
although it returns a 1-d vector. I don't know that this is a bad thing, but
perhaps we should extend this behaviour to matrices, which would be
different from the now current 1-d is always a *row* vector, i.e.

In [8]: mat([[1]])*ones(2)
Out[8]: matrix([[ 1.,  1.]])
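The asymmetry between the two examples can be reproduced directly; a short sketch (np.matrix is used only to mirror the In[8] example above):

```python
import numpy as np

M = np.matrix([[1]])
v = np.ones(2)

# matrix.__mul__ first converts the 1-d operand to a matrix, and
# a 1-d array converts to a *row* vector of shape (1, 2), so the
# (1,1) x (1,2) product is legal:
print((M * v).shape)   # (1, 2)

# Plain dot never does that conversion: the 1-d operand is
# matched against the *last* axis of the 2-d operand (i.e. it is
# treated like a column vector), so (1,1) against (2,) misaligns.
try:
    np.dot(np.array([[1]]), v)
except ValueError as e:
    print("ValueError:", e)
```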

Chuck


Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Travis Oliphant

> I think that might be the simplest thing, dot overrides subtypes. BTW, 
> here is another ambiguity
>
> In [6]: dot(array([[1]]),ones(2))
> ---------------------------------------------------------------------------
> exceptions.ValueError                     Traceback (most recent call last)
>
> /home/charris/
>
> ValueError: matrices are not aligned
>
> Note that in this case dot acts like the rhs is always a column vector 
> although it returns a 1-d vector. I don't know that this is a bad 
> thing, but perhaps we should extend this behaviour to matrices, which 
> would be different from the now current 1-d is always a *row* vector, i.e.


The rule 1-d is always a *row* vector only applies when converting to a 
matrix. 

In this case, the dot operator does not "convert to a matrix" but uses 
rules for operating with mixed 2-d and 1-d arrays inherited from Numeric.

I'm very hesitant to change those rules.

-Travis



Re: [Numpy-discussion] Fixed scalar coercion model in NumPy

2007-03-26 Thread Stefan van der Walt
On Mon, Mar 26, 2007 at 01:52:28PM -0700, Travis Oliphant wrote:
> I really do want to get 1.0.2 out the door soon.  What still needs to be 
> fixed before then?

The code in ticket 469 still causes a memory error, so that might be
worth fixing.

Cheers
Stéfan


Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Charles R Harris

On 3/26/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:



> I think that might be the simplest thing, dot overrides subtypes. BTW,
> here is another ambiguity
>
> In [6]: dot(array([[1]]),ones(2))
>
> ---------------------------------------------------------------------------
> exceptions.ValueError                     Traceback (most recent call last)
>
> /home/charris/
>
> ValueError: matrices are not aligned
>
> Note that in this case dot acts like the rhs is always a column vector
> although it returns a 1-d vector. I don't know that this is a bad
> thing, but perhaps we should extend this behaviour to matrices, which
> would be different from the now current 1-d is always a *row* vector,
i.e.


The rule 1-d is always a *row* vector only applies when converting to a
matrix.

In this case, the dot operator does not "convert to a matrix" but uses
rules for operating with mixed 2-d and 1-d arrays inherited from Numeric.

I'm very hesitant to change those rules.



I wasn't suggesting that, just noticing that the rule "a 1-d vector on the
right is treated as a column vector" is what dot applies, which is why an
exception was raised in the posted case. If it is traditional for matrix
routines to always treat it as a row vector, so be it.

Chuck


Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Travis Oliphant
Charles R Harris wrote:

>
>
> The rule 1-d is always a *row* vector only applies when converting
> to a
> matrix.
>
> In this case, the dot operator does not "convert to a matrix" but
> uses
> rules for operating with mixed 2-d and 1-d arrays inherited from
> Numeric.
>
> I'm very hesitant to change those rules.
>
>
> I wasn't suggesting that, just noticing that the rule was 1-d vector 
> on right is treated as a column vector by dot, which is why an 
> exception was raised in the posted case. If it is traditional for 
> matrix routines to always treat it as a row vector, so be it.

O.K.  So, the problem is that when I defined matrix multiplication, I 
mistakenly believed that dot always interpreted 1-d arrays as row 
vectors which it is now clear it doesn't.

I think this led to the issues. 

So, making dot always return arrays seems like the needed fix.

-Travis





Re: [Numpy-discussion] defmatrix.py

2007-03-26 Thread Charles R Harris

On 3/26/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:


Charles R Harris wrote:

>
>
> The rule 1-d is always a *row* vector only applies when converting
> to a
> matrix.
>
> In this case, the dot operator does not "convert to a matrix" but
> uses
> rules for operating with mixed 2-d and 1-d arrays inherited from
> Numeric.
>
> I'm very hesitant to change those rules.
>
>
> I wasn't suggesting that, just noticing that the rule was 1-d vector
> on right is treated as a column vector by dot, which is why an
> exception was raised in the posted case. If it is traditional for
> matrix routines to always treat it as a row vector, so be it.

O.K.  So, the problem is that when I defined matrix multiplication, I
mistakenly believed that dot always interpreted 1-d arrays as row
vectors which it is now clear it doesn't.

I think this led to the issues.

So, making dot always return arrays seems like the needed fix.



Sounds good. Do you want to leave the matrix  * operator as it now is?

Chuck


[Numpy-discussion] ATLAS problems

2007-03-26 Thread Justin Bedo
Hi All,

I'm trying to compile NumPy using the ATLAS libraries but am having
some difficulty with it.  I've managed to build numpy such that
lapack_lite.so is linked as follows:

linux-gate.so.1 =>  (0xe000)
libatlas.so => /usr/local/atlas/lib/libatlas.so (0xb79e9000)
libptcblas.so => /usr/local/atlas/lib/libptcblas.so (0xb79c7000)
libptf77blas.so => /usr/local/atlas/lib/libptf77blas.so (0xb79ac000)
libpython2.5.so.1.0 => /usr/lib/libpython2.5.so.1.0 (0xb788f000)
libgfortran.so.1 => /usr/lib/libgfortran.so.1 (0xb7817000)
libm.so.6 => /lib/libm.so.6 (0xb77f1000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb77e5000)
libc.so.6 => /lib/libc.so.6 (0xb76b7000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb769f000)
libdl.so.2 => /lib/libdl.so.2 (0xb769a000)
libutil.so.1 => /lib/libutil.so.1 (0xb7696000)
/lib/ld-linux.so.2 (0x8000)

However despite this, the performance when multiplying matrices is
extremely slow, and only a single thread is used even though
lapack_lite.so is linked with the pt atlas libraries.  Does anybody
know how to fix this problem?

Thanks
Justin


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan Isaac
On Mon, 26 Mar 2007, Charles R Harris wrote: 
> What happens is a convention

Certainly true.

> It isn't going to change now, it would break too much 
> code. 

That is a different kind of argument.
It might be true.

May I see a use case where the desired
return when iterating through a matrix
is rows as matrices?  That has never
been what I wanted.

Thank you,
Alan Isaac






Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan Isaac
On Mon, 26 Mar 2007, Travis Oliphant wrote: 
> It actually has been offered.   You just don't accept it.  
> Matrices are containers of matrices.  
> If M is an (mxn) matrix then M[0] is a (1xn) matrix.  
> Viewing this 1xn matrix as a 1-d array loses its row-vectorness.  
> This seems perfectly logical and acceptable to me.  I'm waiting for a 
> better explanation as to why this is not acceptable.  
> Arguments that rest on what is and what isn't "Pythonic" are not very 
> persuasive as this is very often in the eye of the beholder. 

Again, I only raised a *question* about whether
there might be a design problem here.  My goal
was only to have this explored, and I've tried
to explain why.

The evidence I offer:
- it is surprising (at least to some) that iterating
  over a matrix yields matrices
- I believe it is unexpected (prior to instruction) and that 
  there is a natural more expected behavior
- if that is right, deviation from the expected should have 
  a good justification
- this behavior has tripped up at least a couple people and
  I expect that to happen to many others over time (because 
  I believe the behavior is unexpected)
- when I desire to iterate over a matrix I always want the 
  arrays. (Of course there is a way to have them; that's
  not the point).  I'm interested to see a use case where
  the rows are desired as matrices

As you indicate, none of this constitutes an "argument".
And since nobody appears to agree, I should shut up.
This will be my last post on this subject.

Cheers,
Alan Isaac






Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Bill Baxter
On 3/27/07, Alan Isaac <[EMAIL PROTECTED]> wrote:
> May I see a use case where the desired
> return when iterating through a matrix
> is rows as matrices?  That has never
> been what I wanted.

If you use a row vector convention it make plenty of sense.

AllMyPoints = mat(rand(100,2)) # 100 two-d points
for pt in AllMyPoints:
    xformedPt = pt * TransformMatrix
    # do something to the transformed point

--bb


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Alan Isaac
> On 3/27/07, Alan Isaac <[EMAIL PROTECTED]> wrote: 
>> May I see a use case where the desired 
>> return when iterating through a matrix 
>> is rows as matrices?  That has never 
>> been what I wanted. 


On Tue, 27 Mar 2007, Bill Baxter wrote: 
> AllMyPoints = mat(rand(100,2)) # 100 two-d points 
> for pt in AllMyPoints: 
>     xformedPt = pt * TransformMatrix 
>     # do something to transformed point 


This seems artificial to me for several reasons,
but here is one reason:
AllxformedPts = AllMyPoints * TransformMatrix

Note that I am no longer disputing the convention,
just trying to understand its usefulness.

Cheers,
Alan Isaac




Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Bill Baxter
On 3/27/07, Alan Isaac <[EMAIL PROTECTED]> wrote:
> > On 3/27/07, Alan Isaac <[EMAIL PROTECTED]> wrote:
> >> May I see a use case where the desired
> >> return when iterating through a matrix
> >> is rows as matrices?  That has never
> >> been what I wanted.
>
>
> On Tue, 27 Mar 2007, Bill Baxter wrote:
> > AllMyPoints = mat(rand(100,2)) # 100 two-d points
> > for pt in AllMyPoints:
> >     xformedPt = pt * TransformMatrix
> >     # do something to transformed point
>
>
> This seems artificial to me for several reasons,
> but here is one reason:
> AllxformedPts = AllMyPoints * TransformMatrix

Yeh, I was afraid you'd say that.  :-)
But maybe I've got a lot of points, and I don't feel like making a
copy of the whole set.
Or maybe it's not a linear transform, but instead
 xformedPt = someComplicatedNonLinearThing(pt)

I do stuff like the above quite frequently in my code, although with
arrays rather than matrices.  :-)
For instance in finite elements there's assembling the global
stiffness matrix step where for each node (point) in your mesh you set
some entries in the big matrix K.  Something like

for i,pt in enumerate(points):
    K[shape_fn_indices(i)] = stiffness_fn(pt)

That's cartoon code, but I think you get the idea.  I don't think
there's any good way to make that into one vectorized expression.  The
indices of K that get set depend on the connectivity of your mesh.

> Note that I am no longer disputing the convention,
> just trying to understand its usefulness.

--bb


[Numpy-discussion] two previously unresolved issues

2007-03-26 Thread Stefan van der Walt
Hi,

I just went through my mail archive and found these two minor
outstanding issues.  Thought I'd ask for comments before the new
release:

"""
From: "Charles R Harris" <[EMAIL PROTECTED]>
Subject: Re: [Numpy-discussion] Assign NaN, get zero

On 11/11/06, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
>
> On 11/11/06, Stefan van der Walt <[EMAIL PROTECTED]> wrote:
> > NaN (or inf) is a floating point number, so seeing a zero in integer
> > representation seems correct:
> >
> > In [2]: int(N.nan)
> > Out[2]: 0L
> >
>
> Just to learn myself: Why int(N.nan) should be 0? Is it C behavior?

In [1]: int32(0)/int32(0)
Warning: divide by zero encountered in long_scalars
Out[1]: 0

In [2]: float32(0)/float32(0)
Out[2]: nan

In [3]: int(nan)
Out[3]: 0L

I think it was just a default for numpy. Hmmm, numpy now warns on integer
division by zero, didn't used to.  Looks like a warning should also be
raised when casting nan to integer. It is probably a small bug not to. I
also suspect int(nan) should return a normal python zero, not 0L.
"""


"""
From: "Bill Baxter" <[EMAIL PROTECTED]>
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] linalg.lstsq for complex

Is this code from linalg.lstsq for the complex case correct?

    lapack_routine = lapack_lite.zgelsd
    lwork = 1
    rwork = zeros((lwork,), real_t)
    work = zeros((lwork,), t)
    results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond,
                             0, work, -1, rwork, iwork, 0)
    lwork = int(abs(work[0]))
    rwork = zeros((lwork,), real_t)
    a_real = zeros((m, n), real_t)
    bstar_real = zeros((ldb, n_rhs,), real_t)
    results = lapack_lite.dgelsd(m, n, n_rhs, a_real, m,
                                 bstar_real, ldb, s, rcond,
                                 0, rwork, -1, iwork, 0)
    lrwork = int(rwork[0])
    work = zeros((lwork,), t)
    rwork = zeros((lrwork,), real_t)
    results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond,

The middle call to dgelsd looks unnecessary to me.  At the very least,
allocating a_real and bstar_real shouldn't be necessary since they
aren't referenced anywhere else in the lstsq function.  The lapack
documentation for zgelsd also doesn't mention any need to call dgelsd
to compute the size of the work array.
"""


Cheers
Stéfan