Re: [Numpy-discussion] Installing numpy without root access

2014-10-17 Thread Ignat Harczuk
Have you considered virtual environments?

http://docs.python-guide.org/en/latest/dev/virtualenvs/


Inside each environment you can keep a local Python build and install
packages at different versions through pip.
It may not be exactly what you need help with, but it is a good tool to have
so that you run into fewer dependency issues.
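As a minimal sketch (assuming a system `python3` is on the PATH; on older
2014-era machines without the built-in venv module you would use the
standalone `virtualenv` script from the guide above instead):

```shell
# Create an isolated environment in a scratch directory -- no root needed.
# --without-pip keeps this runnable offline; normally you would omit it.
VENV_DIR="$(mktemp -d)/numpy-env"
python3 -m venv --without-pip "$VENV_DIR"

# Inside the environment, pip installs go under $VENV_DIR, not /usr:
#   . "$VENV_DIR/bin/activate"
#   pip install numpy
"$VENV_DIR/bin/python" -c 'import sys; print(sys.prefix)'
```

The printed prefix is the environment directory, confirming that installs
land there rather than in the system tree.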

On Mon, Oct 13, 2014 at 2:52 AM, Lahiru Samarakoon 
wrote:

> Guys, any advice is highly appreciated. I am a little new to building in
> Linux.
> Thanks,
> Lahiru
>
> On Sat, Oct 11, 2014 at 9:43 AM, Lahiru Samarakoon 
> wrote:
>
>> I switched to numpy-1.8.2. Now I am getting the following error. I am using
>> the LAPACK that comes with the ATLAS installation. Can this be a problem?
>>
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>>   File
>> "/home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/__init__.py",
>> line 170, in <module>
>> from . import add_newdocs
>>   File
>> "/home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/add_newdocs.py",
>> line 13, in <module>
>> from numpy.lib import add_newdoc
>>   File
>> "/home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/lib/__init__.py",
>> line 18, in <module>
>> from .polynomial import *
>>   File
>> "/home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/lib/polynomial.py",
>> line 19, in <module>
>> from numpy.linalg import eigvals, lstsq, inv
>>   File
>> "/home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/linalg/__init__.py",
>> line 51, in <module>
>> from .linalg import *
>>   File
>> "/home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/linalg/linalg.py",
>> line 29, in <module>
>> from numpy.linalg import lapack_lite, _umath_linalg
>> ImportError:
>> /home/svu/a0095654/.local/lib/python2.7/site-packages/numpy/linalg/lapack_lite.so:
>> undefined symbol: zgesdd_
>>
>> On Sat, Oct 11, 2014 at 1:30 AM, Julian Taylor <
>> jtaylor.deb...@googlemail.com> wrote:
>>
>>> On 10.10.2014 19:26, Lahiru Samarakoon wrote:
>>> > Red Hat Enterprise Linux release 5.8
>>> > gcc (GCC) 4.1.2
>>> >
>>> > I am also trying to install numpy 1.9.
>>>
>>> That is the broken platform. Please try the master branch or the
>>> maintenance/1.9.x branch; those should work now.
>>>
>>> Are there volunteers to report this to redhat?
>>>
>>> >
>>> > On Sat, Oct 11, 2014 at 12:59 AM, Julian Taylor
>>> > mailto:jtaylor.deb...@googlemail.com>>
>>> > wrote:
>>> >
>>> > On 10.10.2014 18:51, Lahiru Samarakoon wrote:
>>> > > Dear all,
>>> > >
>>> > > I am trying to install numpy without root access. So I am
>>> building from
>>> > > the source.  I have installed atlas which also has lapack with
>>> it.  I
>>> > > changed the site.cfg file as given below
>>> > >
>>> > > [DEFAULT]
>>> > > library_dirs = /home/svu/a0095654/ATLAS/build/lib
>>> > > include_dirs = /home/svu/a0095654/ATLAS/build/include
>>> > >
>>> > >
>>> > > However, I am getting a segmentation fault when importing numpy.
>>> > >
>>> > > Please advise. I also put the build log file at the end of the
>>> email if
>>> > > necessary.
>>> >
>>> >
>>> > Which platform are you working on? Which compiler version?
>>> > We just solved a segfault on import on red hat 5 gcc 4.1.2. Very
>>> likely
>>> > caused by a compiler bug. See
>>> https://github.com/numpy/numpy/issues/5163
>>> >
>>> > The build log is complaining about your atlas being too small;
>>> > possibly the installation is broken?
>>> >
>>> >
>>>
>>>
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Request for enhancement to numpy.random.shuffle

2014-10-17 Thread josef.pktd
On Thu, Oct 16, 2014 at 10:50 PM, Nathaniel Smith  wrote:
> On Fri, Oct 17, 2014 at 2:35 AM,   wrote:
>> On Thu, Oct 16, 2014 at 3:39 PM, Nathaniel Smith  wrote:
>>> On Thu, Oct 16, 2014 at 6:30 PM, Warren Weckesser
>>>  wrote:


 On Thu, Oct 16, 2014 at 12:40 PM, Nathaniel Smith  wrote:
>
> On Thu, Oct 16, 2014 at 4:39 PM, Warren Weckesser
>  wrote:
> >
> > On Sun, Oct 12, 2014 at 9:13 PM, Nathaniel Smith  wrote:
> >>
> >> Regarding names: shuffle/permutation is a terrible naming convention
> >> IMHO and shouldn't be propagated further. We already have a good
> >> naming convention for inplace-vs-sorted: sort vs. sorted, reverse vs.
> >> reversed, etc.
> >>
> >> So, how about:
> >>
> >> scramble + scrambled: shuffle individual entries within each
> >> row/column/..., as in Warren's suggestion.
> >>
> >> shuffle + shuffled: do what shuffle and permutation do now (mnemonic:
> >> these break a 2d array into a bunch of 1d "cards", and then shuffle
> >> those cards).
> >>
> >> permuted remains indefinitely, with the docstring: "Deprecated alias
> >> for 'shuffled'."
> >
> > That sounds good to me.  (I might go with 'randomize' instead of
> > 'scramble',
> > but that's a second-order decision for the API.)
>
> I hesitate to use names like "randomize" because they're less
> informative than they seem -- if asked what this operation does
> to an array, then it would be natural to say "it randomizes the
> array". But if told that the random module has a function called
> randomize, then that's not very informative -- everything in random
> randomizes something somehow.

 I had some similar concerns (hence my original "disarrange"), but
 "randomize" seemed more likely to be found when searching or browsing the
 docs, and while it might be a bit too generic-sounding, it does feel like a
 natural verb for the process.   On the other hand, "permute" and "permuted"
 are even more natural and unambiguous.  Any objections to those?  (The
 existing function is "permutation".)
>>> [...]
 By the way, "permutation" has a feature not yet mentioned here: if the
 argument is an integer 'n', it generates a permutation of arange(n).  In
 this case, it acts like matlab's "randperm" function.  Unless we replicate
 that in the new function, we shouldn't deprecate "permutation".
>>>
>>> I guess we could do something like:
>>>
>>> permutation(n):
>>>
>>> Return a random permutation on n items. Equivalent to permuted(arange(n)).
>>>
>>> Note: for backwards compatibility, a call like permutation(an_array)
>>> currently returns the same as shuffled(an_array). (This is *not*
>>> equivalent to permuted(an_array).) This functionality is deprecated.
>>>
>>> OTOH "np.random.permute" as a name does have a downside: someday we'll
>>> probably add a function called "np.permute" (for applying a given
>>> permutation in place -- the O(n) algorithm for this is useful and
>>> tricky), and having two functions with the same name and very
>>> different semantics would be pretty confusing.
>>
>> I like `permute`. That's the one term I'm looking for first.
>>
>> If np.permute does some kind of deterministic permutation or pivoting,
>> then I wouldn't find it confusing if np.random.permute does "random"
>> permutation.
>
> Yeah, but:
>
> from ... import permute
> # 500 lines later
> def foo(...):
> permute(...)  # what the heck is this
>
> It definitely *can* be confusing; basically everything else in
> np.random has a name that suggests randomness even without seeing the
> full path.

I usually/always avoid importing names from random into the module namespace

np.random.xxx

from numpy.random import power
power(...)

>>> power(5, 3)
array([ 0.93771162,  0.96180884,  0.80191961])

???

and f and beta and gamma, ...

>>> bytes(10)
'\xa3\xf0%\x88\x11\xda\x0e\x81\x0c\x8e'
>>> bytes(5)
'\xb0B\x8e\xa1\x80'


>
> It's not a huge deal, though.
>
>> (I definitely don't like scrambled, sounds like eggs or cable TV that
>> needs to be unscrambled.)
>
> I vote that in this kind of bikeshed we try to restrict ourselves to
> arguments that we can at least pretend are motivated by some
> technical/UX concern ;-). (I guess unscrambling eggs would be
> technically impressive tho ;-))

Ignoring the eggs, it still sounds like cheap encryption, and it is a
word I would never look for when searching for something to implement a
permutation test.

Josef
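To make the two semantics under discussion concrete, here is a sketch with
current NumPy (the in-row variant is hand-rolled; "scrambled" and "permuted"
are only proposed names, not existing functions):

```python
import numpy as np

rng = np.random.RandomState(0)
a = np.arange(12).reshape(3, 4)

# Current behavior: np.random.shuffle on a 2-D array permutes whole rows
# (the 1-d "cards"); entries never move between rows or within a row.
rows_shuffled = a.copy()
rng.shuffle(rows_shuffled)

# The proposed "scramble" semantics: shuffle independently within each row,
# so each row keeps its own entries but in a random order.
scrambled = a.copy()
for row in scrambled:
    rng.shuffle(row)
```

After the first call the rows of `rows_shuffled` are exactly the original
rows in some order; after the loop each row of `scrambled` is a permutation
of the corresponding original row.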


>
> --
> Nathaniel J. Smith
> Postdoctoral researcher - Informatics - University of Edinburgh
> http://vorpus.org

Re: [Numpy-discussion] Changed behavior of np.gradient

2014-10-17 Thread Benjamin Root
I see this as a regression. We don't keep regressions around for backwards
compatibility, we fix them.

Ben

On Thu, Oct 16, 2014 at 10:25 PM, Matthew Brett 
wrote:

> Hi,
>
> On Thu, Oct 16, 2014 at 6:38 PM, Benjamin Root  wrote:
> > That isn't what I meant. Higher order doesn't "necessarily" mean more
> > accurate. The results simply have different properties. The user needs to
> > choose the differentiation order that they need. One interesting effect
> > in data assimilation/modeling is that even-order differentiation can
> > often have detrimental effects while higher odd order differentiation
> > are better, but it is highly dependent upon the model.
> >
> > This change in gradient broke a unit test in matplotlib (for a new
> > feature, so it isn't *that* critical). We didn't notice it at first
> > because we weren't testing numpy 1.9 at the time. I want the feature (I
> > have need for it elsewhere), but I don't want the change in default
> > behavior.
>
> I think it would be a bad idea to revert now.
>
> I suspect, if you revert, then a lot of other code will assume the <
> 1.9.0, >= 1.9.1  behavior.  In that case, the code will work as
> expected most of the time, except when combined with 1.9.0, which
> could be seriously surprising, and often missed.   If you keep the new
> behavior, then it will be clearer that other code will have to adapt
> to this change >= 1.9.0 - surprise, but predictable surprise, if you
> see what I mean...
>
> Matthew


Re: [Numpy-discussion] Changed behavior of np.gradient

2014-10-17 Thread Nathaniel Smith
On 17 Oct 2014 02:38, "Benjamin Root"  wrote:
>
> That isn't what I meant. Higher order doesn't "necessarily" mean more
> accurate. The results simply have different properties. The user needs to
> choose the differentiation order that they need. One interesting effect in
> data assimilation/modeling is that even-order differentiation can often
> have detrimental effects while higher odd order differentiation are
> better, but it is highly dependent upon the model.

To be clear, we aren't talking about different degrees of differentiation,
we're talking about different approximations to the first derivative. I
just looked up the original pull request and it contains a pretty
convincing graph in which the old code has large systematic errors and the
new code doesn't:

  https://github.com/numpy/numpy/issues/3603

I think the claim is that the old code had approximation error that grows
like O(1/n), and the new code has errors like O(1/n**2). (Don't ask me what
n is though.)
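For what it's worth, the boundary-stencil difference is easy to see with the
`edge_order` keyword that later NumPy releases expose (a sketch of the
effect, not the 1.9.0 code itself): the second-order edge formula is exact
for a quadratic, while the first-order one is off by O(h) there.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)   # spacing h = 0.1
y = x**2                        # exact derivative: 2*x

h = x[1] - x[0]
g1 = np.gradient(y, h, edge_order=1)  # one-sided O(h) stencil at the edges
g2 = np.gradient(y, h, edge_order=2)  # one-sided O(h**2) stencil at the edges

err1 = abs(g1[0] - 2 * x[0])  # ~ h for the first-order edge formula
err2 = abs(g2[0] - 2 * x[0])  # ~ 0: the second-order formula is exact here
```

Interior points use the same central difference in both cases; only the two
boundary values change, which is exactly where the matplotlib test presumably
noticed the new behavior.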

> This change in gradient broke a unit test in matplotlib (for a new
> feature, so it isn't *that* critical). We didn't notice it at first
> because we weren't testing numpy 1.9 at the time. I want the feature (I
> have need for it elsewhere), but I don't want the change in default
> behavior.

You say it's bad, the original poster says it's good, how are we poor
maintainers to know what to do? :-) Can you say any more about why you
prefer the so-called lower-accuracy approximations here by default?

-n


[Numpy-discussion] Add an axis argument to generalized ufuncs?

2014-10-17 Thread Stephan Hoyer
Yesterday I created a GitHub issue proposing adding an axis argument to
numpy's gufuncs:
https://github.com/numpy/numpy/issues/5197

I was told I should repost this on the mailing list, so here's the recap:

I would like to write generalized ufuncs (probably using numba), to create
fast functions such as nanmean (signature '(n)->()') or rolling_mean
(signature '(n),()->(n)') that take the axis along which to aggregate as a
keyword argument, e.g., nanmean(x, axis=0) or rolling_mean(x, window=5,
axis=0).

Of course, I could write my own wrapper for this that reorders dimensions
using swapaxes or transpose. But I also think that an "axis" argument to
allow for specifying the core dimensions of gufuncs would be more generally
useful, and we should consider adding it to numpy.

Nathaniel and Jaime added some good points, noting that such an axis
argument should cleanly handle multiple input and output arguments and have
a plan for handling optional dimensions (e.g., (m?,n),(n,p?)->(m?,p?) for
the new dot).

Here are my initial thoughts on the syntax:

(1) Generally speaking, I think the "nested tuple" syntax (e.g., axis=[(0,
1), (2, 3)]) would be most congruous with the axis arguments numpy already
supports.

(2) For gufuncs with simpler signatures, we should support supplying an
integer or an unnested tuple, e.g.,
- axis=0 for (n)->()
- axis=(0, 1) for (n),(m)->() or (n,m)->()
- axis=[(0, 1), 2] for (n,m),(o)->().

(3) If we require a full axis specification for core dimensions, we could
use the axis argument for unambiguous control of optional core dimensions:
e.g., axis=(0, 1) would indicate that you want the "vectorized inner
product" version of the new dot operator, rather than matrix
multiplication, and axis=[(-2, -1), -1] would mean that you want the
"vectorized matrix-vector product". This seems relatively tidy, although I
admit I am not convinced that optional core dimensions are necessary.

(4) We can either include the output axis as part of the signature, or add
another argument "axis_out" or "out_axis". I think I prefer the separate
argument, particularly if we require "axis" to specify all core dimensions,
which may be a good idea even if we don't use "axis" for controlling
optional core dimensions.
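For the simplest case in point (2), the wrapper I have in mind is just a
moveaxis before the core function runs over the trailing dimension (a sketch
with illustrative names, not a proposed NumPy API; `np.moveaxis` is the
modern spelling, `np.rollaxis` on older releases):

```python
import numpy as np

def apply_along_core_axis(core_func, x, axis=-1):
    """Move `axis` to the end (where a gufunc's core dimension lives),
    then apply a '(n)->()' core function over that trailing axis."""
    x = np.asarray(x)
    moved = np.moveaxis(x, axis, -1)  # core dimension goes last
    return core_func(moved)           # core_func reduces the last axis

def nanmean_core(arr):
    # stands in for a '(n)->()' gufunc operating on the last axis
    return np.nanmean(arr, axis=-1)

x = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, 6.0]])
print(apply_along_core_axis(nanmean_core, x, axis=0))
```

A native axis argument on gufuncs would do the equivalent reordering in C,
without each author rewriting this wrapper.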

Cheers,
Stephan


Re: [Numpy-discussion] Add an axis argument to generalized ufuncs?

2014-10-17 Thread Charles R Harris
On Fri, Oct 17, 2014 at 10:56 PM, Stephan Hoyer  wrote:

> Yesterday I created a GitHub issue proposing adding an axis argument to
> numpy's gufuncs:
> https://github.com/numpy/numpy/issues/5197
>
> I was told I should repost this on the mailing list, so here's the recap:
>
> I would like to write generalized ufuncs (probably using numba), to create
> fast functions such as nanmean (signature '(n)->()') or rolling_mean
> (signature '(n),()->(n)') that take the axis along which to aggregate as a
> keyword argument, e.g., nanmean(x, axis=0) or rolling_mean(x, window=5,
> axis=0).
>
> Of course, I could write my own wrapper for this that reorders dimensions
> using swapaxes or transpose. But I also think that an "axis" argument to
> allow for specifying the core dimensions of gufuncs would be more generally
> useful, and we should consider adding it to numpy.
>
> Nathaniel and Jaime added some good points, noting that such an axis
> argument should cleanly handle multiple input and output arguments and have
> a plan for handling optional dimensions (e.g., (m?,n),(n,p?)->(m?,p?) for
> the new dot).
>
> Here are my initial thoughts on the syntax:
>
> (1) Generally speaking, I think the "nested tuple" syntax (e.g., axis=[(0,
> 1), (2, 3)]) would be most congruous with the axis arguments numpy already
> supports.
>
> (2) For gufuncs with simpler signatures, we should support supplying an
> integer or an unnested tuple, e.g.,
> - axis=0 for (n)->()
> - axis=(0, 1) for (n),(m)->() or (n,m)->()
> - axis=[(0, 1), 2] for (n,m),(o)->().
>
> (3) If we require a full axis specification for core dimensions, we could
> use the axis argument for unambiguous control of optional core dimensions:
> e.g., axis=(0, 1) would indicate that you want the "vectorized inner
> product" version of the new dot operator, rather than matrix
> multiplication, and axis=[(-2, -1), -1] would mean that you want the
> "vectorized matrix-vector product". This seems relatively tidy, although I
> admit I am not convinced that optional core dimensions are necessary.
>
> (4) We can either include the output axis as part of the signature, or add
> another argument "axis_out" or "out_axis". I think I prefer the separate
> argument, particularly if we require "axis" to specify all core dimensions,
> which may be a good idea even if we don't use "axis" for controlling
> optional core dimensions.
>
>
You might want to contact Continuum Analytics also. They recently created a
gufunc repository.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Extract Indices of Numpy Array Based on Given Bit Information

2014-10-17 Thread Artur Bercik
Dear Python and Numpy Users:

My data are in the form of '32-bit unsigned integer' as follows:

myData = np.array([1073741824, 1073741877, 1073742657, 1073742709,
1073742723, 1073755137, 1073755189, 1073755969], dtype=np.uint32)

I want to get the index of my data where the following occurs:

Bit No. 0–1
Bit Combination: 00

How can I do it? This is the first time I have come across this type of
problem; please help me.
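One way to do it (a sketch assuming the data really are 32-bit unsigned,
i.e. dtype np.uint32): bits 0-1 are the two least significant bits, so mask
them with `& 0b11` and keep the indices where the result is zero.

```python
import numpy as np

myData = np.array([1073741824, 1073741877, 1073742657, 1073742709,
                   1073742723, 1073755137, 1073755189, 1073755969],
                  dtype=np.uint32)

# Bits 0-1 are 00 exactly when the value masked with 0b11 is zero.
idx = np.where((myData & 0b11) == 0)[0]
print(idx)   # only element 0 (1073741824 = 2**30) has bits 0-1 == 00
```

For a bit field elsewhere in the word, shift first: `(myData >> k) & 0b11`
tests bits k and k+1 the same way.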

Artur