Re: [Numpy-discussion] Indexing with callables (was: Yorick-like functionality)

2009-05-18 Thread josef . pktd
On Mon, May 18, 2009 at 6:22 PM, Robert Kern  wrote:
> On Mon, May 18, 2009 at 13:23, Pauli Virtanen  wrote:
>> Mon, 18 May 2009 09:21:39 -0700, David J Strozzi wrote:
>> [clip]
>>> I also like pointing out that Yorick was a fast, free environment
>>> developed by ~1990, when matlab/IDL were probably the only comparable
>>> games in town, but very few people ever used it.  I think this is a case
>>> study in the triumph of marketing over substance.  It looks like num/sci
>>> py are gaining enough momentum and visibility.  Hopefully the numerical
>>> science community won't be re-inventing this same wheel in 5 years
>>
>> Well, GNU Octave has been around about the same time, and the same for
>> Scilab. Curiously enough, first public version >= 1.0 of all the three
>> seem to have appeared around 1994. [1,2,3] (Maybe something was in
>> the air that year...)
>>
>> So I'd claim this particular wheel has already been reinvented pretty
>> thoroughly :)
>
> It's worth noting that most of numpy's indexing functionality was
> stol^H^H^H^Hborrowed from Yorick in ages past:
>
>  http://mail.python.org/pipermail/matrix-sig/1995-November/000143.html
>

Thanks for the link -- an interesting discussion of the origins of
arrays/matrices in Python.

The end of the matrix-sig is also interesting:
http://mail.python.org/pipermail/matrix-sig/2000-February/003292.html

I needed to check some history: Gauss and Matlab are more than 10
years older, and S is ancient, way ahead of Python.

Josef


Re: [Numpy-discussion] Indexing with callables (was: Yorick-like functionality)

2009-05-18 Thread Robert Kern
On Mon, May 18, 2009 at 13:23, Pauli Virtanen  wrote:
> Mon, 18 May 2009 09:21:39 -0700, David J Strozzi wrote:
> [clip]
>> I also like pointing out that Yorick was a fast, free environment
>> developed by ~1990, when matlab/IDL were probably the only comparable
>> games in town, but very few people ever used it.  I think this is a case
>> study in the triumph of marketing over substance.  It looks like num/sci
>> py are gaining enough momentum and visibility.  Hopefully the numerical
>> science community won't be re-inventing this same wheel in 5 years
>
> Well, GNU Octave has been around about the same time, and the same for
> Scilab. Curiously enough, first public version >= 1.0 of all the three
> seem to have appeared around 1994. [1,2,3] (Maybe something was in
> the air that year...)
>
> So I'd claim this particular wheel has already been reinvented pretty
> thoroughly :)

It's worth noting that most of numpy's indexing functionality was
stol^H^H^H^Hborrowed from Yorick in ages past:

  http://mail.python.org/pipermail/matrix-sig/1995-November/000143.html

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


Re: [Numpy-discussion] Indexing with callables (was: Yorick-like functionality)

2009-05-18 Thread Pauli Virtanen
Mon, 18 May 2009 09:21:39 -0700, David J Strozzi wrote:
[clip]
> I also like pointing out that Yorick was a fast, free environment
> developed by ~1990, when matlab/IDL were probably the only comparable
> games in town, but very few people ever used it.  I think this is a case
> study in the triumph of marketing over substance.  It looks like num/sci
> py are gaining enough momentum and visibility.  Hopefully the numerical
> science community won't be re-inventing this same wheel in 5 years

Well, GNU Octave has been around about the same time, and the same for 
Scilab. Curiously enough, first public version >= 1.0 of all the three
seem to have appeared around 1994. [1,2,3] (Maybe something was in
the air that year...)

So I'd claim this particular wheel has already been reinvented pretty
thoroughly :)


.. [1] http://ftp.lanet.lv/ftp/mirror/x2ftp/msdos/programming/news/yorick.10
.. [2] http://www.scilab.org/platform/index_platform.php?page=history
.. [3] http://en.wikipedia.org/wiki/GNU_Octave#History

-- 
Pauli Virtanen



Re: [Numpy-discussion] Indexing with callables (was: Yorick-like functionality)

2009-05-18 Thread David J Strozzi
>
>The actual list of Yorick functions relevant here appears to be here:
>
>   http://yorick.sourceforge.net/manual/yorick_46.php#SEC46
>   http://yorick.sourceforge.net/manual/yorick_47.php#SEC47
>
>I must say that I don't see many functions missing in Numpy...
>
>David (Strozzi): are these the functions you meant? Are there more?
>
>--
>Pauli Virtanen


Pauli et al,

I see the numpy list is quite active, and I appreciate the discussion 
I started - and then went silent about!

foo(zcen,dif) is indeed syntax sugar, but it can be quite sweet! 
Haven't you seen how lab rats react to feedings of aspartame or other 
sweeteners??

Anyway, the list above is indeed what I had in mind.  It seems a few
are missing, like zcen, and perhaps it's worth adding them to numpy
proper rather than having everyone write their own one-liners (and then
having to deal with them when sharing code).
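
In the meantime, a minimal sketch of zcen as a one-liner (assuming Yorick's
zcen is just the pairwise average of adjacent values; dif is essentially
numpy.diff):

import numpy as np

def zcen(x):
    # zone centers: averages of adjacent elements, length len(x) - 1
    x = np.asarray(x)
    return 0.5 * (x[1:] + x[:-1])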

I leave it to the community's wisdom.  It seems enough smart people 
have thought about the issue.

I also like pointing out that Yorick was a fast, free environment 
developed by ~1990, when matlab/IDL were probably the only comparable 
games in town, but very few people ever used it.  I think this is a 
case study in the triumph of marketing over substance.  It looks like 
num/sci py are gaining enough momentum and visibility.  Hopefully the 
numerical science community won't be re-inventing this same wheel in 
5 years

Dave


Re: [Numpy-discussion] Overlap arrays with "transparency"

2009-05-18 Thread josef . pktd
On Mon, May 18, 2009 at 11:38 AM, Michael S. Gilbert
 wrote:
> On Mon, 18 May 2009 05:37:09 -0700 (PDT), Cristi Constantin wrote:
>> Good day.
>> I am working on this algorithm for a few weeks now, so i tried almost 
>> everything...
>> I want to overlap / overwrite 2 matrices, but completely ignore some values 
>> (in this case ignore 0)
>> Let me explain:
>>
>> a = [
>> [1, 2, 3, 4, 5],
>> [9,7],
>> [0,0,0,0,0],
>> [5,5,5] ]
>>
>> b = [
>> [0,0,9,9],
>> [1,1,1,1],
>> [2,2,2,2] ]
>>
>> Then, we have:
>>
>> a over b = [
>> [1,2,3,4,5],
>> [9,7,1,1],
>> [1,1,1,1,0],
>> [5,5,5,2] ]
>>
>> b over a = [
>> [0,0,9,9,5],
>> [1,1,1,1],
>> [2,2,2,2,0],
>> [5,5,5] ]
>>


If you can convert the list of lists to a common rectangular shape
(masking missing values or assigning nans), then
conditional overwriting is very easy, something like

mask = (b != 0)      # positions where the overlay b is not "transparent"
a[mask] = b[mask]    # "b over a", ignoring zeros in b

but for lists of lists with unequal shape, there might not be anything
faster than looping.
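
A minimal sketch of that rectangular route (the zero-padding below is just one
way to get a common shape; the pad_rows helper is hypothetical, not a numpy
function):

import numpy as np

def pad_rows(rows, nrows, ncols):
    # zero-pad a ragged list of rows into an (nrows, ncols) int array
    out = np.zeros((nrows, ncols), int)
    for i, row in enumerate(rows):
        out[i, :len(row)] = row
    return out

nrows = max(len(a), len(b))
ncols = max(max(len(r) for r in a), max(len(r) for r in b))
A = pad_rows(a, nrows, ncols)
B = pad_rows(b, nrows, ncols)

mask = (B != 0)      # overlay values that are not "transparent"
C = A.copy()
C[mask] = B[mask]    # "b over a", ignoring zeros in b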

Josef


Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Charles R Harris
On Mon, May 18, 2009 at 9:35 AM,  wrote:

> On Mon, May 18, 2009 at 10:55 AM, Charles R Harris
>  wrote:
> >
> >
> > 2009/5/18 Stéfan van der Walt 
> >>
> >> 2009/5/18 Sebastian Walter :
> >> > B = numpy.dot(A.T, A)
> >>
> >> This multiplication should be avoided whenever possible -- you are
> >> effectively squaring your condition number.
> >
> > Although the condition number doesn't mean much unless the columns are
> > normalized. Having badly scaled columns can lead to problems with lstsq
> > because of its default cutoff based on the condition number.
> >
> > Chuck
>
> Do you know if any of the linalg methods, np.linalg.lstsq or
> scipy.linalg.lstsq, do any normalization internally to improve
> numerical accuracy?
>
-
They don't. Although, IIRC, lapack provides routines for doing so. Maybe
there is another least squares routine that does the scaling.


>
> I saw automatic internal normalization (e.g. rescaling) for some
> econometrics methods, and was wondering whether we should do this also
> in stats.models or whether scipy.linalg is already taking care of
> this. I have only vague knowledge of the numerical precision of
> different linear algebra methods.
>

It's a good idea. Otherwise the condition number depends on choice of units
and other such extraneous things.
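
For illustration, a minimal sketch of that kind of rescaling around lstsq
(A and y here stand for the design matrix and right-hand side; this is not
something lstsq does internally):

import numpy as np

norms = np.sqrt((A * A).sum(axis=0))    # column norms; assumes no all-zero columns
coefs, res, rank, sv = np.linalg.lstsq(A / norms, y)
coefs = coefs / norms                   # back to the original column scaling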

Chuck


Re: [Numpy-discussion] Overlap arrays with "transparency"

2009-05-18 Thread Michael S. Gilbert
On Mon, 18 May 2009 05:37:09 -0700 (PDT), Cristi Constantin wrote:
> Good day.
> I am working on this algorithm for a few weeks now, so i tried almost 
> everything...
> I want to overlap / overwrite 2 matrices, but completely ignore some values 
> (in this case ignore 0)
> Let me explain:
> 
> a = [
> [1, 2, 3, 4, 5],
> [9,7],
> [0,0,0,0,0],
> [5,5,5] ]
> 
> b = [
> [0,0,9,9],
> [1,1,1,1],
> [2,2,2,2] ]
> 
> Then, we have:
> 
> a over b = [
> [1,2,3,4,5],
> [9,7,1,1],
> [1,1,1,1,0],
> [5,5,5,2] ]
> 
> b over a = [
> [0,0,9,9,5],
> [1,1,1,1],
> [2,2,2,2,0],
> [5,5,5] ]
> 
> That means, completely overwrite one list of arrays over the other, not 
> matter what values one has, not matter the size, just ignore 0 values on 
> overwriting.
> I checked the documentation, i just need some tips.
> 
> [clip]
> The result is stored inside TempA as list of numpy arrays.
> 
> I would use 2D arrays, but they are slower than Python Lists containing Numpy 
> arrays. I need to do this overwrite in a very big loop and every delay is 
> very important.
> I tried to create a masked array where all "zero" values are ignored on 
> overlap, but it doesn't work. Masked or not, the "transparent" values are 
> still overwritten.
> Please, any suggestion is useful.

Your code will certainly be slow if you do not preallocate memory for
your arrays, and I would suggest using numpy's array class instead of
lists.

import numpy

# assuming a and b are rectangular (rows padded to equal length):
a = numpy.array( a )
b = numpy.array( b )
c = numpy.zeros( ( max( a.shape[0], b.shape[0] ),
                   max( a.shape[1], b.shape[1] ) ), int )


Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread josef . pktd
On Mon, May 18, 2009 at 10:55 AM, Charles R Harris
 wrote:
>
>
> 2009/5/18 Stéfan van der Walt 
>>
>> 2009/5/18 Sebastian Walter :
>> > B = numpy.dot(A.T, A)
>>
>> This multiplication should be avoided whenever possible -- you are
>> effectively squaring your condition number.
>
> Although the condition number doesn't mean much unless the columns are
> normalized. Having badly scaled columns can lead to problems with lstsq
> because of its default cutoff based on the condition number.
>
> Chuck

Do you know if any of the linalg methods, np.linalg.lstsq or
scipy.linalg.lstsq, do any normalization internally to improve
numerical accuracy?

I saw automatic internal normalization (e.g. rescaling) for some
econometrics methods, and was wondering whether we should do this also
in stats.models or whether scipy.linalg is already taking care of
this. I have only vague knowledge of the numerical precision of
different linear algebra methods.

Thanks,

Josef


Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Charles R Harris
2009/5/18 Stéfan van der Walt 

> 2009/5/18 Sebastian Walter :
> > B = numpy.dot(A.T, A)
>
> This multiplication should be avoided whenever possible -- you are
> effectively squaring your condition number.
>

Although the condition number doesn't mean much unless the columns are
normalized. Having badly scaled columns can lead to problems with lstsq
because of its default cutoff based on the condition number.

Chuck


[Numpy-discussion] SciPy 2009 Call for Papers

2009-05-18 Thread Jarrod Millman
==
SciPy 2009 Call for Papers
==

SciPy 2009, the 8th Python in Science conference, will be held
from August 18-23, 2009 at Caltech in Pasadena, CA, USA.

Each year SciPy attracts leading figures in research and scientific
software development with Python from a wide range of scientific and
engineering disciplines. The focus of the conference is both on scientific
libraries and tools developed with Python and on scientific or engineering
achievements using Python.

We welcome contributions from industry as well as from the academic world.
Indeed, industrial research and development as well as academic research
face the challenge of mastering IT tools for exploration, modeling and
analysis.

We look forward to hearing your recent breakthroughs using Python!

Submission of Papers


The program features tutorials, contributed papers, lightning talks, and
birds-of-a-feather sessions. We are soliciting talks and accompanying
papers (either formal academic or magazine-style articles) that discuss
topics which center around scientific computing using Python. These
include applications, teaching, future development directions, and
research. A collection of peer-reviewed articles will be published as
part of the proceedings.

Proposals for talks are submitted as extended abstracts. There are two
categories of talks:

 Paper presentations

 These talks are 35 minutes in duration (including questions). A one page
 abstract of no less than 500 words (excluding figures and references)
 should give an outline of the final paper. Proceeding papers are due two
 weeks after the conference, and may be in a formal academic style, or in
 a more relaxed magazine-style format.

 Rapid presentations

 These talks are 10 minutes in duration. An abstract of between
 300 and 700 words should describe the topic and motivate its
 relevance to scientific computing.

In addition, there will be an open session for lightning talks, during which
any attendee who wishes to do so is invited to give a couple-of-minutes-long
presentation.

If you wish to present a talk at the conference, please create an account
on the website (http://conference.scipy.org). You may then submit an abstract
by logging in, clicking on your profile and following the "Submit an
abstract" link.

Submission Guidelines
-

* Submissions should be uploaded via the online form.
* Submissions whose main purpose is to promote a commercial product or
  service will be refused.
* All accepted proposals must be presented at the SciPy conference by
  at least one author.
* Authors of an accepted proposal can provide a final paper for
  publication in the conference proceedings. Final papers are limited
  to 7 pages, including diagrams, figures, references, and appendices.
  The papers will be reviewed to help ensure the high quality of the
  proceedings.

For further information, please visit the conference homepage:
http://conference.scipy.org.

Important Dates
===

* Friday, June 26: Abstracts Due
* Saturday, July 4: Announce accepted talks, post schedule
* Friday, July 10: Early Registration ends
* Tuesday-Wednesday, August 18-19: Tutorials
* Thursday-Friday, August 20-21: Conference
* Saturday-Sunday, August 22-23: Sprints
* Friday, September 4: Papers for proceedings due

Tutorials
=

Two days of tutorials on the scientific Python tools will precede the
conference. There will be two tracks: one introducing the basic tools to
beginners and one covering more advanced tools. Tutorials will be
announced later.

Birds of a Feather Sessions
===

If you wish to organize a birds-of-a-feather session to discuss some
specific area of scientific development with Python, please contact the
organizing committee.

Executive Committee
===

* Jarrod Millman, UC Berkeley, USA (Conference Chair)
* Gaël Varoquaux, INRIA Saclay, France (Program Co-Chair)
* Stéfan van der Walt, University of Stellenbosch, South Africa
(Program Co-Chair)
* Fernando Pérez, UC Berkeley, USA (Tutorial Chair)


Re: [Numpy-discussion] Problem with correlate

2009-05-18 Thread josef . pktd
2009/5/18 Stéfan van der Walt :
> 2009/5/18 rob steed :
>> This works fine. However, if the arrays have different lengths, we get a 
>> problem.
>>
>> >>> y2=N.array([0,0,0,1])
>> >>> N.correlate(x,y2,'full')
>
> This looks like a bug to me.
>
> In [54]: N.correlate([1, 0, 0, 0], [0, 0, 0, 1],'full')
> Out[54]: array([1, 0, 0, 0, 0, 0, 0])
>
> In [55]: N.correlate([1, 0, 0, 0, 0], [0, 0, 0, 1],'full')
> Out[55]: array([1, 0, 0, 0, 0, 0, 0, 0])
>
> In [56]: N.correlate([1, 0, 0, 0, 0], [0, 0, 0, 0, 1],'full')
> Out[56]: array([1, 0, 0, 0, 0, 0, 0, 0, 0])
>
> In [57]: N.correlate([1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1],'full')
> Out[57]: array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
>

comparing with scipy:
  signal.correlate behaves the same "flipping" way as np.correlate,
  ndimage.correlate keeps the orientation.

>>> np.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0,0],'same')
array([0, 0, 0, 2, 1, 0])
>>> np.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0],'same')
array([1, 2, 0, 0, 0])
>>> np.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0, 0],'full')
array([0, 0, 0, 0, 0, 2, 1, 0, 0, 0])
>>> np.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0],'full')
array([0, 0, 1, 2, 0, 0, 0, 0, 0])
>>>
>>> signal.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0, 0])
array([0, 0, 0, 0, 0, 2, 1, 0, 0, 0])
>>> signal.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0])
array([0, 0, 1, 2, 0, 0, 0, 0, 0])
>>> ndimage.filters.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0, 0],mode='constant')
array([0, 1, 2, 0, 0])
>>> ndimage.filters.correlate([1, 2, 0, 0, 0], [0, 0, 1, 0, 0],mode='constant')
array([1, 2, 0, 0, 0])

Josef


[Numpy-discussion] Overlap arrays with "transparency"

2009-05-18 Thread Cristi Constantin
Good day.
I have been working on this algorithm for a few weeks now, so I have tried
almost everything...
I want to overlap / overwrite 2 matrices, but completely ignore some values
(in this case, ignore 0).
Let me explain:

a = [
[1, 2, 3, 4, 5],
[9,7],
[0,0,0,0,0],
[5,5,5] ]

b = [
[0,0,9,9],
[1,1,1,1],
[2,2,2,2] ]

Then, we have:

a over b = [
[1,2,3,4,5],
[9,7,1,1],
[1,1,1,1,0],
[5,5,5,2] ]

b over a = [
[0,0,9,9,5],
[1,1,1,1],
[2,2,2,2,0],
[5,5,5] ]

That means: completely overwrite one list of arrays over the other, no matter
what values one has, no matter the size, just ignore 0 values on overwriting.
I checked the documentation; I just need some tips.

import numpy as np

TempA = [[]]
#
# One for-loop in here to get the Element data...
    Data = vElem.data  # This is a list of numpy ndarrays.
    #
    for nr_row in range( len(Data) ):  # For each numpy ndarray (row) in Data.
        #
        NData = Data[nr_row]                    # New data, to be written over old data.
        OData = TempA[nr_row:nr_row+1] or [[]]  # Old data: a numpy ndarray, or an empty list.
        OData = OData[0]
        #
        # NData must completely eliminate transparent pixels... here comes
        # the algorithm... No algorithm yet.
        #
        if len(NData) >= len(OData):
            # If new data is longer than old data, old data will be completely overwritten.
            TempA[nr_row:nr_row+1] = [NData]
        else:  # Old data is longer than new data; old data cannot be null.
            TempB = np.copy(OData)
            TempB.put( range(len(NData)), NData )
            # TempB[0:len(NData)-1] = NData  # This raises "ValueError: shape
            # mismatch: objects cannot be broadcast to a single shape"
            TempA[nr_row:nr_row+1] = [TempB]
            del TempB
#
The result is stored inside TempA as a list of numpy arrays.

I would use 2D arrays, but they are slower than Python lists containing numpy
arrays. I need to do this overwrite in a very big loop, and every delay is
very important.
I tried to create a masked array where all "zero" values are ignored on
overlap, but it doesn't work. Masked or not, the "transparent" values are
still overwritten.
Please, any suggestion is useful.

Thank you.




Re: [Numpy-discussion] Problem with correlate

2009-05-18 Thread Stéfan van der Walt
2009/5/18 rob steed :
> This works fine. However, if the arrays have different lengths, we get a 
> problem.
>
> >>> y2=N.array([0,0,0,1])
> >>> N.correlate(x,y2,'full')

This looks like a bug to me.

In [54]: N.correlate([1, 0, 0, 0], [0, 0, 0, 1],'full')
Out[54]: array([1, 0, 0, 0, 0, 0, 0])

In [55]: N.correlate([1, 0, 0, 0, 0], [0, 0, 0, 1],'full')
Out[55]: array([1, 0, 0, 0, 0, 0, 0, 0])

In [56]: N.correlate([1, 0, 0, 0, 0], [0, 0, 0, 0, 1],'full')
Out[56]: array([1, 0, 0, 0, 0, 0, 0, 0, 0])

In [57]: N.correlate([1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1],'full')
Out[57]: array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])

Regards
Stéfan


[Numpy-discussion] Problem with correlate

2009-05-18 Thread rob steed

Hi all,

I have been using numpy.correlate and found something weird. I now think
that there might be a bug.

Correlations should be order dependent, e.g. correlate(x,y) != correlate(y,x) in
general (whereas convolutions are symmetric).

>>> import numpy as N
>>> x = N.array([1,0,0])
>>> y = N.array([0,0,1])

>>> N.correlate(x,y,'full')
array([1, 0, 0, 0, 0])
>>> N.correlate(y,x,'full')
array([0, 0, 0, 0, 1])

This works fine. However, if the arrays have different lengths, we get a 
problem.

>>> y2=N.array([0,0,0,1])
>>> N.correlate(x,y2,'full')
array([0, 0, 0, 0, 0, 1])
>>> N.correlate(y2,x,'full')
array([0, 0, 0, 0, 0, 1])

I believe that somewhere in the code the arrays are re-ordered by their
length. Initially I thought this was because correlate was derived from
convolution, but looking at numpy.core I can see that in fact convolution
derives from correlate. After that it becomes C code, which I haven't
managed to look at yet.
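
For what it's worth, a brute-force 'full' correlation (just a sketch for
reference) reproduces the equal-length outputs above and gives
array([1, 0, 0, 0, 0, 0]) for x and y2, rather than the swapped result:

import numpy as N

def correlate_full(a, v):
    # slide v across a zero-padded copy of a (same convention as the
    # equal-length examples above)
    a, v = N.asarray(a), N.asarray(v)
    pad = N.zeros(len(a) + 2 * (len(v) - 1), dtype=a.dtype)
    pad[len(v) - 1:len(v) - 1 + len(a)] = a
    return N.array([N.dot(pad[k:k + len(v)], v)
                    for k in range(len(a) + len(v) - 1)])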

Am I correct, is this a bug? 

regards

Rob Steed



  


Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Sebastian Walter
2009/5/18 Stéfan van der Walt :
> 2009/5/18 Sebastian Walter :
>> B = numpy.dot(A.T, A)
>
> This multiplication should be avoided whenever possible -- you are
> effectively squaring your condition number.
Indeed.

>
> In the case where you have more rows than columns, use least squares.
> For square matrices use solve.  For large sparse matrices, use GMRES
> or any of the others available in scipy.sparse.linalg.

It is my impression that this is a linear algebra and not a numerics question.



>
> Regards
> Stéfan


Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Stéfan van der Walt
2009/5/18 Sebastian Walter :
> B = numpy.dot(A.T, A)

This multiplication should be avoided whenever possible -- you are
effectively squaring your condition number.

In the case where you have more rows than columns, use least squares.
For square matrices use solve.  For large sparse matrices, use GMRES
or any of the others available in scipy.sparse.linalg.
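
A minimal sketch of those three routes (A, A_sparse and y are placeholders for
the user's matrix and right-hand side):

import numpy as np
from scipy.sparse.linalg import gmres

x, res, rank, sv = np.linalg.lstsq(A, y)   # more rows than columns: least squares
x = np.linalg.solve(A, y)                  # square, full-rank A
x, info = gmres(A_sparse, y)               # large sparse A; info == 0 means converged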

Regards
Stéfan


Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Sebastian Walter
Alternatively, to solve A x = b you could do

import numpy
import numpy.linalg

B = numpy.dot(A.T, A)           # normal equations: (A^T A) x = A^T b
c = numpy.dot(A.T, b)
x = numpy.linalg.solve(B, c)

This is not the most efficient way to do it but at least you know
exactly what's going on in your code.



On Sun, May 17, 2009 at 7:21 PM,   wrote:
> On Sun, May 17, 2009 at 12:14 PM, Quilby  wrote:
>> Right the dimensions I gave were wrong.
>> What do I need to do for m>=n (more rows than columns)?  Can I use the
>> same function?
>>
>> When I run the script written by Nils (thanks!) I get:
>>from numpy.random import rand, seed
>> ImportError: No module named random
>>
>> But importing numpy works ok. What do I need to install?
>
> This should be working without extra install. You could run the test
> suite, numpy.test(), to see whether your install is ok.
>
> Otherwise, you would need to provide more information, numpy version, 
>
> np.linalg.lstsq works for m>n, m<n and m==n, but the solution is
> different in the 3 cases.
> m>n (more observations than parameters) is the standard least squares
> estimation problem.
>
> Josef
>
>
>>
>> Thanks again!
>>
>> On Sun, May 17, 2009 at 1:51 AM, Alan G Isaac  wrote:
>>> On 5/16/2009 9:01 AM Quilby apparently wrote:
 Ax = y
 Where A is a rational m*n matrix (m<=n), and x and y are vectors of
 the right size. I know A and y, I don't know what x is equal to. I
 also know that there is no x where Ax equals exactly y.
>>>
>>> If m<=n, that can only be true if there are not
>>> m linearly independent columns of A.  Are you
>>> sure you have the dimensions right?
>>>
>>> Alan Isaac
>>>