Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-30 Thread Joseph Martinot-Lagarde
Marten van Kerkwijk writes:

> I did a few simple timing tests (see comment in PR), which suggest it is
> hardly worth having the cache. Indeed, if one really worries about speed,
> one should probably use pyFFTW (scipy.fft is a bit faster too, but at least
> for me the way real FFT values are stored is just too inconvenient). So, my
> suggestion would be to do away with the cache altogether.

The problem with FFTW is that its license is more restrictive (GPL), so it
may not be suitable everywhere numpy.fft is.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ndarray.T2 for 2D transpose

2016-04-07 Thread Joseph Martinot-Lagarde
> > For a 1D array a of shape (N,), I expect a.T2 to be of shape (N, 1),
> 
> Why not (1,N)? -- it is not well defined, though I suppose it's not so
> bad to establish a convention that a 1-D array is a "row vector"
> rather than a "column vector".
I like Todd's simple proposal: a.T2 should be equivalent to np.atleast_2d(a).T

> BTW, if transposing a (N,) array gives you a (N,1) array, what does
> transposing a (N,1) array give you?
> 
> (1,N) or (N,) ?
The proposal changes nothing for dims > 1, so (1, N). That means that a.T2.T2
doesn't have the same shape as a.

It boils down to practicality vs. purity, as is often the case!
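A minimal sketch of the proposed equivalence, written with today's NumPy (there is no `.T2` attribute; `np.atleast_2d(a).T` stands in for it):

```python
import numpy as np

a = np.arange(3)                # shape (3,)
# atleast_2d promotes a 1D array to (1, N), so its transpose is the
# (N, 1) column vector the proposal asks for.
col = np.atleast_2d(a).T        # shape (3, 1)

# Applying the operation twice does not round-trip for 1D input:
back = np.atleast_2d(col).T     # shape (1, 3), not (3,)
```

This makes the non-round-trip point above concrete: a.T2.T2 would give a (1, N) array, not the original (N,).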




Re: [Numpy-discussion] ndarray.T2 for 2D transpose

2016-04-07 Thread Joseph Martinot-Lagarde
Alan Isaac writes:

> But underlying the proposal is apparently the
> idea that there be an attribute equivalent to
> `atleast_2d`.  Then call it `d2p`.
> You can now have `a.d2p.T` which is a lot
> more explicit and general than say `a.T2`,
> while requiring only 3 more keystrokes.


How about a.T2d or a.T2D?



Re: [Numpy-discussion] ndarray.T2 for 2D transpose

2016-04-06 Thread Joseph Martinot-Lagarde
Nathaniel Smith writes:

> An alternative that was mentioned in the bug tracker
> (https://github.com/numpy/numpy/issues/7495), possibly by me, would be
> to have arr.T2 act as a stacked-transpose operator, i.e. treat an arr
> with shape (..., n, m) as being a (...)-shaped stack of (n, m)
> matrices, and transpose each of those matrices, so the output shape is
> (..., m, n). And since this operation intrinsically acts on arrays
> with shape (..., n, m) then trying to apply it to a 0d or 1d array
> would be an error.

I think the problem is not that it doesn't raise an error for 1D arrays,
but that it doesn't do anything useful for 1D arrays. Raising an error would
change nothing about the way transpose is used now.

For a 1D array a of shape (N,), I expect a.T2 to be of shape (N, 1), which
is useful when writing formulas, and clearer than a[None].T. Actually I'd
like a.T to do that already, but I guess backward compatibility is more
important.
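For reference, the current spellings of "turn a 1D array into a column vector" that `a.T2` would replace (a sketch; nothing here is a proposed API):

```python
import numpy as np

a = np.arange(4)        # shape (4,)
assert a.T.shape == (4,)   # .T is a no-op on 1D arrays today

col1 = a[None].T        # (1, 4) transposed -> (4, 1)
col2 = a[:, None]       # insert a new axis directly -> (4, 1)
assert col1.shape == col2.shape == (4, 1)
```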




Re: [Numpy-discussion] IDE's for numpy development?

2015-04-15 Thread Joseph Martinot-Lagarde
On 08/04/2015 21:19, Yuxiang Wang wrote:
> I think spyder supports code highlighting in C and that's all...
> There's no way to compile in Spyder, is there?
>
Well, you could write a compilation script using SCons and run it from 
Spyder! :)

But no, Spyder is very Python-oriented and there is no way to compile C 
in Spyder.
For information, the next version should have better support for 
plugins, so it could be done as a third-party extension.

Joseph




Re: [Numpy-discussion] Generalize hstack/vstack --> stack; Block matrices like in matlab

2014-09-08 Thread Joseph Martinot-Lagarde
On 08/09/2014 15:29, Stefan Otte wrote:
> Hey,
>
> quite often I work with block matrices. Matlab offers the convenient notation
>
>  [ a b; c d ]
>
> to stack matrices. The numpy equivalent is kinda clumsy:
>
> vstack([hstack([a,b]), hstack([c,d])])
>
> I wrote the little function `stack` that does exactly that:
>
>  stack([[a, b], [c, d]])
>
> In my case `stack` replaced `hstack` and `vstack` almost completely.
>
> If you're interested in including it in numpy I created a pull request
> [1]. I'm looking forward to getting some feedback!
>
>
> Best,
>   Stefan
>
>
>
> [1] https://github.com/numpy/numpy/pull/5057
>
The outside brackets are redundant: stack([[a, b], [c, d]]) should be 
stack([a, b], [c, d])
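For concreteness, here is the clumsy spelling being replaced next to the nested-list one; if I recall correctly, this proposal eventually landed in NumPy as `np.block` (which kept the outer brackets):

```python
import numpy as np

a = np.ones((2, 2)); b = np.zeros((2, 2))
c = np.zeros((2, 2)); d = np.ones((2, 2))

# Matlab's [a b; c d], spelled two ways:
m1 = np.vstack([np.hstack([a, b]), np.hstack([c, d])])
m2 = np.block([[a, b], [c, d]])   # nested-list syntax
assert np.array_equal(m1, m2)
```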



Re: [Numpy-discussion] Generalize hstack/vstack --> stack; Block matrices like in matlab

2014-09-08 Thread Joseph Martinot-Lagarde
On 08/09/2014 16:41, Sturla Molden wrote:
> Stefan Otte  wrote:
>
>>  stack([[a, b], [c, d]])
>>
>> In my case `stack` replaced `hstack` and `vstack` almost completely.
>>
>> If you're interested in including it in numpy I created a pull request
>> [1]. I'm looking forward to getting some feedback!
>
> As far as I can see, it uses hstack and vstack. But that means a and b have
> to have the same number of rows, c and d must have the same number of rows,
> and hstack((a,b)) and hstack((c,d)) must have the same number of columns.
>
> Thus it requires a regularity like this:
>
> BB
> BB
> CCCDDD
> CCCDDD
> CCCDDD
> CCCDDD
>
> What if we just ignore this constraint, and only require the output to be
> rectangular? Now we have a 'tetris game':
>
> BB
> BB
> BB
> BB
> DD
> DD
>
> or
>
> BB
> BB
> BB
> BB
> BB
> BB

stack([stack([[a], [c]]), b])

>
> This should be 'stackable', yes? Or perhaps we need another stacking
> function for this, say numpy.tetris?
The function should be implemented for its name alone! I like it!
>
> And while we're at it, what about higher dimensions? should there be an
> ndstack function too?
>
>
> Sturla
>




[Numpy-discussion] Multiple comment tokens for loadtxt

2014-09-04 Thread Joseph Martinot-Lagarde
loadtxt currently has a keyword to change the comment token. PR
#4612 [1] makes it possible to define multiple comment tokens for a file.
It is motivated by #2633 [2].

What is your position on this one?

Joseph

  [1] https://github.com/numpy/numpy/pull/4612
  [2] https://github.com/numpy/numpy/issues/2633
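A quick sketch of the feature as described, using the sequence form of the `comments` argument (which `np.loadtxt` does accept in released NumPy):

```python
import numpy as np
from io import StringIO

data = StringIO(
    "# hash-style comment\n"
    "% percent-style comment\n"
    "1 2\n"
    "3 4\n"
)
# `comments` takes a sequence of tokens, not just a single string
arr = np.loadtxt(data, comments=["#", "%"])
assert np.array_equal(arr, [[1.0, 2.0], [3.0, 4.0]])
```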



[Numpy-discussion] 'norm' keyword for FFT functions

2014-09-04 Thread Joseph Martinot-Lagarde
I have an old PR [1] to fix #2142 [2]. The idea is to have a new keyword
for all fft functions to define the normalisation of the fft:
- if 'norm' is None (the default), the normalisation is the current one:
fft() is not normalized and ifft() is normalized by 1/n.
- if norm is "ortho", the direct and inverse transforms are both
normalized by 1/sqrt(n). The results are then unitary.

The keyword name and values are consistent with scipy.fftpack.dct.

Do you feel that it should be merged?

Joseph

  [1] https://github.com/numpy/numpy/pull/3883
  [2] https://github.com/numpy/numpy/issues/2142
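The keyword as merged works as described above; a short sketch with the released API:

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(8)

X = np.fft.fft(x)                  # default: unscaled forward transform
Xo = np.fft.fft(x, norm="ortho")   # both directions scaled by 1/sqrt(n)

assert np.allclose(Xo, X / np.sqrt(8))
# Unitary transform: Parseval's identity holds with no extra factor.
assert np.allclose(np.sum(np.abs(x)**2), np.sum(np.abs(Xo)**2))
```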



Re: [Numpy-discussion] numpy.mean still broken for large float32 arrays

2014-07-24 Thread Joseph Martinot-Lagarde
On 24/07/2014 12:55, Thomas Unterthiner wrote:
> I don't agree. The problem is that I expect `mean` to do something
> reasonable. The documentation mentions that the results can be
> "inaccurate", which is a huge understatement: the results can be utterly
> wrong. That is not reasonable. At the very least, a warning should be
> issued in cases where the dtype might not be appropriate.
>
Maybe the problem is the documentation, then. If this is a common error, 
it could be explicitly documented in the function documentation.
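For reference, the escape hatch the documentation could point to is the `dtype` argument of `mean`, which selects the accumulator precision:

```python
import numpy as np

# float32 accumulation can lose precision on large arrays; requesting a
# wider accumulator is the documented workaround.
arr = np.full(10_000_000, 0.1, dtype=np.float32)
m64 = arr.mean(dtype=np.float64)   # accumulate in float64
assert abs(m64 - 0.1) < 1e-6
```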



Re: [Numpy-discussion] Short-hand array creation in `numpy.mat` style

2014-07-18 Thread Joseph Martinot-Lagarde
On 18/07/2014 22:46, Chris Barker wrote:
> On Fri, Jul 18, 2014 at 1:15 PM, Joseph Martinot-Lagarde wrote:
>
> In addition,
> you have to use AltGr on some keyboards to get the brackets.
>
>
> If it's hard to type square brackets -- you're kind of dead in the water
> with Python anyway -- this is not going to help.
>
> -Chris
>
Welcome to the AZERTY world! ;)

It's not that hard to type, just a bit more involved. My biggest problem 
is that you have to type the opening and closing brackets for each line, 
with a comma in between. It will always be harder and more error-prone 
than a single semicolon, whatever the keyboard.

My use case is not teaching but doing quick'n'dirty computations with a 
few values. Sometimes these values are copy-pasted from a space-separated 
file, or from a printed array in another console. Having to add commas 
and brackets makes simple computations less easy. That's why I often use 
Octave for these.




Re: [Numpy-discussion] Short-hand array creation in `numpy.mat` style

2014-07-18 Thread Joseph Martinot-Lagarde
On 18/07/2014 20:42, Charles G. Waldman wrote:
> Well, if the goal is "shorthand", typing numpy.array(numpy.mat())
> won't please many users.
>
> But the more I think about it, the less I think Numpy should support
> this (non-Pythonic) input mode.  Too much molly-coddling of new users!
> When doing interactive work I usually just type:
>
> >>> np.array([[1,2,3],
> ...   [4,5,6],
> ...   [7,8,9]])
>
> which is (IMO) easier to read:  e.g. it's not totally obvious that
> "1,0,0;0,1,0;0,0,1" represents a 3x3 identity matrix, but
>
> [[1,0,0],
>[0,1,0],
>[0,0,1]]
>
> is pretty obvious.
>
Compare what's comparable:

[[1,0,0],
  [0,1,0],
  [0,0,1]]

vs

"1 0 0;"
"0 1 0;"
"0 0 1"

or

"""
1 0 0;
0 1 0;
0 0 1
"""

[[1,0,0], [0,1,0], [0,0,1]]
vs
"1 0 0; 0 1 0; 0 0 1"

> The difference in (non-whitespace) chars is 19 vs 25, so the
> "shorthand" doesn't seem to save that much.

Well, it's easier to type "" (twice the same character) than [], and you 
have no risk of swapping an opening and a closing bracket. In addition, 
you have to use AltGr on some keyboards to get the brackets. It doesn't 
boil down to a number of characters.

>
> Just my €0.02,
>
> - C
>
>
>
>
> On Fri, Jul 18, 2014 at 10:05 AM, Alan G Isaac  wrote:
>> On 7/18/2014 12:45 PM, Mark Miller wrote:
>>> If the true goal is to just allow quick entry of a 2d array, why not just 
>>> advocate using
>>> a = numpy.array(numpy.mat("1 2 3; 4 5 6; 7 8 9"))
>>
>>
>> It's even simpler:
>> a = np.mat(' 1 2 3;4 5 6;7 8 9').A
>>
>> I'm not putting a dog in this race.  Still I would say that
>> the reason why such proposals miss the point is that
>> there are introductory settings where one would like
>> to explain as few complications as possible.  In
>> particular, one might prefer *not* to discuss the
>> existence of a matrix type.  As an additional downside,
>> this is only good for 2d, and there have been proposals
>> for the new array builder to handle other dimensions.
>>
>> fwiw,
>> Alan Isaac
>>
>




Re: [Numpy-discussion] String type again.

2014-07-17 Thread Joseph Martinot-Lagarde
On 15/07/2014 18:18, Chris Barker wrote:
> (or does HDF support var-length
> elements?)
>
It does: http://www.hdfgroup.org/HDF5/doc/TechNotes/VLTypes.html




Re: [Numpy-discussion] Resolving the associativity/precedence debate for @

2014-03-24 Thread Joseph Martinot-Lagarde
On 22/03/2014 19:13, Nathaniel Smith wrote:
> Hi all,
>
> After 88 emails we don't have a conclusion in the other thread (see
> [1] for background). But we have to come to some conclusion or another
> if we want @ to exist :-). So I'll summarize where the discussion
> stands and let's see if we can find some way to resolve this.
>
> The fundamental question is whether a chain like (a @ b @ c) should be
> evaluated left-to-right (left-associativity) or right-to-left
> (right-associativity).
>
> DATA SOURCE 1:
>
> This isn't a democratic vote, but it's useful to get a sense of
> people's intuitions. Counting messages in the other thread, opinion
> seems to be pretty evenly split:
>
> == "Votes" for right-associativity ==
> Weak-right: [2] [3] [5]
> Tight-right: [4] [6]
> Same-right: [11]
>
> == "Votes" for left-associativity ==
> Same-left: [7] [8] [14] [15] [16]
> Tight-left: [9]
> Weak-left: [12]
>
> There's also the "grouping" option (described in [10]), but that's
> received very little support (just [13]).
>
> DATA SOURCE 2:
>
> Several people have suggested that performance considerations mean
> that right-to-left evaluation is more common in practice than
> left-to-right evaluation. But, if we look at actual usage in Python
> code, that's not what we find: when people call dot() in chains, then
> they're about evenly split, and actually use the left-to-right,
> left-associative order slightly more often than the right-to-left,
> right-associative order:
>http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069578.html
>
> DATA SOURCE 3:
>
> And if we look at other languages, then we find:
>
> == "Votes" for right-associativity ==
> 
>
> == "Votes" for left-associativity ==
> Same-left: Matlab, Julia, IDL, GAUSS
> Tight-left: R

This is a very strong point. Lots of people come to Python with a 
background in these frameworks and would be surprised if Python behaved 
differently from the other mainstream tools.

I'll add that simpler is better and multiplications should behave the 
same way, so I vote for same-left.
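For readers arriving after the fact: same-left is what PEP 465 adopted, so a chain of `@` groups left-to-right like every Python operator except `**`:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
c = np.arange(8).reshape(4, 2)

# @ is left-associative: a @ b @ c means (a @ b) @ c
assert np.array_equal(a @ b @ c, (a @ b) @ c)
```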

>
> And Mathematica uses the "grouping" approach.
>
> ARGUMENTS:
>
> The final outcome of this is that I need to write a piece of text that
> says what our (at least rough) consensus is, and lays out the reasons.
> So long as the "vote" is so evenly split, I can't really do this. But
> I can imagine what the different pieces of text might look like.
>
> THE CASE FOR LEFT-ASSOCIATIVITY:
>
> If I were writing this text in favor of left-associativity, I'd point out:
>
> - "Special cases aren't special enough to break the rules". Every
> single operator in Python besides ** is left-associative (and ** has
> very compelling arguments for right associativity). @ does not have
> similarly compelling arguments. If we were having this debate about
> "*", then it'd probably be much more lopsided towards
> left-associativity. So sure, there's something about @ that makes
> right-associativity *more* appealing than for most other operators.
> But not *much* more appealing -- left-associativity still comes out at
> least slightly ahead in all of the above measures. And there are a lot
> of benefits to avoiding special cases -- it gives fewer rules to
> memorize, fewer rules to remember, etc. So @ may be a special case,
> but it's not special enough.
>
> - Other languages with @ operators almost overwhelmingly use the
> "same-left" rule, and I've never heard anyone complain about this, so
> clearly nothing horrible will happen if we go this way. We have no
> comparable experience for right-associativity.
>
> - Given left-associativity, then there's good agreement about the
> appropriate precedence. If we choose right-associativity then it's
> much less clear (which will then make things harder for experts to
> remember, harder for non-experts to guess, etc.). Note that one of the
> votes for right-associativity even preferred the "same-right" rule,
> which is not even technically possible...
>
> This strikes me as a nice solid case.
>
> THE CASE FOR RIGHT-ASSOCIATIVITY:
>
> If I were writing this text in favor of right-associativity, I'd point out:
>
> - Because matrix multiplication has a tight conceptual association
> with function application/composition, many mathematically
> sophisticated users have an intuition that a matrix expression like
>  R S x
> proceeds from right-to-left, with first S transforming x, and then R
> transforming the result. This isn't universally agreed, but at the
> least this intuition is more common than for other operations like 2 *
> 3 * 4 that everyone reads as going from left-to-right.
>
> - There might be some speed argument, if people often write things
> like "Mat @ Mat @ vec"? But no-one has found any evidence that people
> actually do write such things often.
>
> - There's been discussion of how right-associativity might maybe
> perhaps be nice for non-matmul applications? But I can't use those
> arguments [17] [18].
>
> - .. I got nothin'.
>
> I am fine with any ou

Re: [Numpy-discussion] It looks like Py 3.5 will include a dedicated infix matrix multiply operator

2014-03-16 Thread Joseph Martinot-Lagarde
On 16/03/2014 15:39, Eelco Hoogendoorn wrote:
> Note that I am not opposed to extra operators in python, and only mildly
> opposed to a matrix multiplication operator in numpy; but let me lay out
> the case against, for your consideration.
>
> First of all, the use of matrix semantics relative to arrays
> semantics is extremely rare; even in linear algebra heavy code, arrays
> semantics often dominate. As such, the default of array semantics for
> numpy has been a great choice. Ive never looked back at MATLAB semantics.
>
> Secondly, I feel the urge to conform to a historical mathematical
> notation is misguided, especially for the problem domain of linear
> algebra. Perhaps in the world of mathematics your operation is
> associative or commutes, but on your computer, the order of operations
> will influence both outcomes and performance. Even for products, we
> usually care not only about the outcome, but also how that outcome is
> arrived at. And along the same lines, I don't suppose I need to explain
> how I feel about A@@-1 and the likes. Sure, it isn't to hard to learn or
> infer this implies a matrix inverse, but why on earth would I want to
> pretend the rich complexity of numerical matrix inversion can be mangled
> into one symbol? Id much rather write inv or pinv, or whatever
> particular algorithm happens to be called for given the situation.
> Considering this isn't the num-lisp discussion group, I suppose I am
> hardly the only one who feels so.
>
> On the whole, I feel the @ operator is mostly superfluous. I prefer to
> be explicit about where I place my brackets. I prefer to be explicit
> about the data layout and axes that go into a (multi)linear product,
> rather than rely on obtuse row/column conventions which are not
> transparent across function calls. When I do linear algebra, it is
> almost always vectorized over additional axes; how does a special
> operator which is only well defined for a few special cases of 2d and 1d
> tensors help me with that?

Well, the PEP explains a well-defined logical interpretation for cases 
>2d, using broadcasting. You can vectorize over additional axes.
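A sketch of the broadcasting behavior the PEP describes (as released, `@` treats the last two axes as matrices and broadcasts over the rest):

```python
import numpy as np

rng = np.random.default_rng(0)
mats = rng.standard_normal((5, 3, 4))   # a stack of five 3x4 matrices
vecs = rng.standard_normal((5, 4))

# One call multiplies each matrix by its own vector:
out = mats @ vecs[..., None]            # shape (5, 3, 1)
assert out.shape == (5, 3, 1)
# Same result computed one slice at a time:
assert np.allclose(out[2, :, 0], mats[2] @ vecs[2])
```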

> On the whole, the linear algebra conventions
> inspired by the particular constraints of people working
> with blackboards, are a rather ugly and hacky beast in my opinion, which
> I feel no inclination to emulate. As a sidenote to the contrary; I love
> using broadcasting semantics when writing papers. Sure, your reviewers
> will balk at it, but it wouldn't do to give the dinosaurs the last word
> on what any given formal language ought to be like. We get to define the
> future, and im not sure the set of conventions that goes under the name
> of 'matrix multiplication' is one of particular importance to the future
> of numerical linear algebra.
>
> Note that I don't think there is much harm in an @ operator; but I don't
> see myself using it either. Aside from making textbook examples like a
> gram-schmidt orthogonalization more compact to write, I don't see it
> having much of an impact in the real world.
>
>
> On Sat, Mar 15, 2014 at 3:52 PM, Charles R Harris wrote:
>
>
>
>
> On Fri, Mar 14, 2014 at 6:51 PM, Nathaniel Smith wrote:
>
> Well, that was fast. Guido says he'll accept the addition of '@'
> as an
> infix operator for matrix multiplication, once some details are
> ironed
> out:
> https://mail.python.org/pipermail/python-ideas/2014-March/027109.html
> http://legacy.python.org/dev/peps/pep-0465/
>
> Specifically, we need to figure out whether we want to make an
> argument for a matrix power operator ("@@"), and what
> precedence/associativity we want '@' to have. I'll post two separate
> threads to get feedback on those in an organized way -- this is
> just a
> heads-up.
>
>
> Surprisingly little discussion on python-ideas, or so it seemed to
> me. Guido came out in favor less than halfway through.
> Congratulations on putting together a successful proposal, many of
> us had given up on ever seeing a matrix multiplication operator.
>
> Chuck
>
>
>
>
>
>
>



