Re: [Numpy-discussion] add .H attribute?

2013-07-22 Thread Alan G Isaac
On 7/22/2013 3:10 PM, Nathaniel Smith wrote:
> Having .T but not .H is an example of this split.


Hate to do this but ...

  Readability counts.
  Special cases aren't special enough to break the rules.
  Although practicality beats purity.

To what extent is the split a rule rather than "just" a convention, and is
there enough practicality here to beat the purity of the split?

Note: this is not a rhetorical question.
However: if you propose A.conjugate().transpose() as providing a teachable
moment about why to use NumPy instead of A' in Matlab, I conclude you
have never taught students like most of mine.  The real world matters.  Since
practicality beats purity, we do have A.conj().T, which is better
but still not as readable as A.H would be.  Or even A.H(), should
that satisfy your objections (and still provide a teachable moment).
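
For concreteness, here is a minimal sketch (purely hypothetical -- plain
ndarrays have no .H) of what the more readable spelling could look like via a
thin subclass; it is only meant to show the readability comparison:

    import numpy as np

    class HArray(np.ndarray):
        # Hypothetical subclass, for illustration only: spell the
        # conjugate transpose as a property.
        @property
        def H(self):
            # conj() copies the data; the trailing .T is just a view of that copy
            return self.conj().T

    A = np.array([[1 + 2j, 3 - 1j], [0, 4j]]).view(HArray)
    print(A.H)                        # reads like the math
    print(A.conjugate().transpose())  # the spelling students balk at
    print(A.conj().T)                 # the shorter spelling we have today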

Alan

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] add .H attribute?

2013-07-22 Thread Bryan Van de Ven
On the other hand, the most salient quality of an unavoidable copy is that it is
unavoidable. For people for whom taking Hermitian conjugates is common, it's not
as though they will stop doing so just because it entails a copy that cannot be
avoided.  Given that a Hermitian conjugate will be taken whenever the problem
demands it: a.H is closer to the mathematical notation, eases migration for
MATLAB users, and does not require everyone to reinvent their own little version
of the same function over and over. All of that seems more compelling than this
particular arbitrary convention, personally.
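
(A quick way to see the point, assuming complex data: the copy already happens
with the spelling we have today, so an a.H would not hide any cost that
a.conj().T does not already carry.)

    import numpy as np

    a = np.arange(4, dtype=complex).reshape(2, 2) * (1 + 1j)

    t = a.T          # plain transpose: a view, no data copied
    h = a.conj().T   # Hermitian transpose: conj() must write new values

    print(t.base is a)   # True  -> t shares a's memory
    print(h.base is a)   # False -> the conjugated data lives in a new buffer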

Bryan 


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] add .H attribute?

2013-07-22 Thread Toder, Evgeny
What if .H is not an attribute, but a method? Is this enough of a warning about 
copying?

Eugene


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] add .H attribute?

2013-07-22 Thread Nathaniel Smith
On Thu, Jul 18, 2013 at 3:18 PM, Stéfan van der Walt  wrote:
> On Sat, Jul 13, 2013 at 7:46 PM, Nathaniel Smith  wrote:
>> Why not just write
>>
>> def H(a):
>>     return a.conj().T
>
> It's hard to convince students that this is the Best Way of doing
> things in NumPy.  Why, they ask, can you do it using a' in MATLAB,
> then?

I guess I'd try to treat it as a teachable moment... the answer points
to a basic difference in numpy versus MATLAB. Numpy operates at a
slightly lower level of abstraction. In MATLAB you're encouraged to
think of arrays as just mathematical matrices and let MATLAB worry
about how to actually represent those inside the computer. Sometimes
it does a good job, sometimes not. In numpy you need to think of
arrays as structured representations of a chunk of memory. There
are disadvantages to this -- e.g. keeping track of which operations return
views and which return copies can be tricky -- but it also gives a lot
of power: views are awesome, you get better interoperability with C
libraries/Cython, better ability to predict which operations are
expensive or cheap, more opportunities to use clever tricks when you
need to, etc.

And one example of this is that transpose and conjugate transpose
really are very different at this level, because one is a cheap stride
manipulation that returns a view, and the other is a (relatively)
expensive data copying operation. The convention in Python is that
attribute access is supposed to be cheap, while function calls serve
as a warning that something expensive might be going on. So in short:
MATLAB is optimized for doing linear algebra and not thinking too hard
about programming; numpy is optimized for writing good programs.
Having .T but not .H is an example of this split.

Also it's a good opportunity to demonstrate the value of making little
helper functions, which is a powerful technique that students
generally need to be taught ;-).
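
(For the curious, a small sketch of that difference at the interpreter -- the
exact stride numbers depend on dtype and memory layout:)

    import numpy as np

    a = np.arange(6, dtype=complex).reshape(2, 3)

    print(a.strides)      # e.g. (48, 16): C-order layout, 16-byte items
    print(a.T.strides)    # (16, 48): same memory read in a different order
    print(a.T.base is a)  # True  -> .T is just a view

    h = a.conj().T        # conj() has to materialize new values
    print(h.base is a)    # False -> the data was copied before the cheap .T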

-n
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fresh performance hits: numpy.linalg.pinv >30% slowdown

2013-07-22 Thread Benjamin Root
On Mon, Jul 22, 2013 at 10:55 AM, Yaroslav Halchenko
wrote:

> At some point I hope to tune up the report with an option to view the
> plot using e.g. nvd3 JS, so it would be easier to pinpoint/analyze
> interactively.
>
>
Shameless plug... the soon-to-be-finalized matplotlib 1.3 has a WebAgg
backend that allows for interactivity.

Cheers!
Ben Root
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fresh performance hits: numpy.linalg.pinv >30% slowdown

2013-07-22 Thread Yaroslav Halchenko
At some point I hope to tune up the report with an option to view the
plot using e.g. nvd3 JS, so it would be easier to pinpoint/analyze
interactively.

On Sat, 20 Jul 2013, Pauli Virtanen wrote:

> 20.07.2013 01:38, Nathaniel Smith wrote:
> > The biggest ~recent change in master's linalg was the switch to gufunc
> > back ends - you might want to check for that event in your commit log.

> That was in mid-April, which doesn't match the location of the uptick
> in the graph.

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fresh performance hits: numpy.linalg.pinv >30% slowdown

2013-07-22 Thread Yaroslav Halchenko

On Fri, 19 Jul 2013, Warren Weckesser wrote:

> Well, this is embarrassing: https://github.com/numpy/numpy/pull/3539

> Thanks for benchmarks!  I'm now an even bigger fan. :)

Great to see that those came in handy!  I meant to provide more detailed
results (benchmarking all recent commits) to pin down the exact point of the
regression, but embarrassingly I made that run outside of the benchmarking
chroot, so consistency was not guaranteed.  Anyway -- rerunning it correctly
now (with recent commits included).

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834   Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bringing order to higher dimensional operations

2013-07-22 Thread Stéfan van der Walt
On Sat, Jul 20, 2013 at 7:44 AM,   wrote:
> related: is there any advantage to np.add.reduce?
> I find it more difficult to read than sum() and still see it used sometimes.

I think ``np.add.reduce`` just falls out of the ufunc
implementation--there's no "per ufunc" choice to remove certain parts
of the API, if I recall correctly.
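
(For anyone following along, a quick illustration that sum() and
np.add.reduce agree, and that the same .reduce machinery comes for free with
every binary ufunc:)

    import numpy as np

    x = np.arange(6).reshape(2, 3)

    print(np.sum(x, axis=0))             # [3 5 7]
    print(np.add.reduce(x, axis=0))      # [3 5 7] -- same result
    print(np.maximum.reduce(x, axis=1))  # [2 5]   -- row-wise max via a ufunc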

Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion