Re: [Numpy-discussion] NumPy 1.12.0b1 released

2016-11-18 Thread Peter Cock
Thanks Nathan,

That makes sense (compile using the oldest version of NumPy
we wish to support).

The information on https://github.com/MacPython/numpy-wheels
will probably be very useful too (I've been meaning to try out
appveyor at some point for Windows builds/testing).

Regards,

Peter

On Fri, Nov 18, 2016 at 2:46 PM, Nathan Goldbaum <nathan12...@gmail.com> wrote:
> Since the NumPy API is forwards compatible, you should use the oldest
> version of NumPy you would like to support to build your wheels with. The
> wheels will then work with any future NumPy versions.
>
> On Fri, Nov 18, 2016 at 9:30 AM Peter Cock <p.j.a.c...@googlemail.com>
> wrote:
>>
>> I have a related question to Matti's,
>>
>> Do you have any recommendations for building standard wheels
>> for 3rd party Python libraries which use both the NumPy Python
>> and C API?
>>
>> e.g. Do we need to do anything special given the NumPy C API
>> itself is versioned? Does it matter which compiler chain we use?
>>
>> Thanks
>>
>> Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.12.0b1 released

2016-11-18 Thread Peter Cock
I have a related question to Matti's,

Do you have any recommendations for building standard wheels
for 3rd party Python libraries which use both the NumPy Python
and C API?

e.g. Do we need to do anything special given the NumPy C API
itself is versioned? Does it matter which compiler chain we use?

Thanks

Peter

On Thu, Nov 17, 2016 at 11:24 PM, Matti Picus <matti.pi...@gmail.com> wrote:
> Congrats to all on the release. Two questions:
>
> Is there a guide to building standard wheels for NumPy?
>
> Assuming I can build standardized PyPy 2.7 wheels for Ubuntu, Win32 and
> OSX64, how can I get them blessed and uploaded to PyPI?
>
> Matti
>
>
> On 17/11/16 07:47, numpy-discussion-requ...@scipy.org wrote:
>>
>> Date: Wed, 16 Nov 2016 22:47:39 -0700
>> From: Charles R Harris<charlesr.har...@gmail.com>
>> To: numpy-discussion<numpy-discussion@scipy.org>, SciPy Users List
>> <scipy-u...@scipy.org>,  SciPy Developers
>> List<scipy-...@scipy.org>,
>> python-announce-l...@python.org
>> Subject: [Numpy-discussion] NumPy 1.12.0b1 released.
>>
>> Hi All,
>>
>> I'm pleased to announce the release of NumPy 1.12.0b1. This release
>> supports Python 2.7 and 3.4 - 3.6 and is the result of 388 pull requests
>> submitted by 133 contributors. It is quite sizeable and rather than put
>> the release notes inline I've attached them as a file and they may also be
>> viewed at Github<https://github.com/numpy/numpy/releases/tag/v1.12.0b1>.
>> Zip files and tarballs may also be found at the Github link. Wheels and
>> source archives may be downloaded from PyPI, which is the recommended
>> method.
>>
>> This release is a large collection of fixes, enhancements, and
>> improvements
>> and it is difficult to select just a few as highlights. However, the
>> following enhancements may be of particular interest
>>
>> - Order of operations in ``np.einsum`` now can be optimized for large
>> speed improvements.
>> - New ``signature`` argument to ``np.vectorize`` for vectorizing with
>> core dimensions.
>> - The ``keepdims`` argument was added to many functions.
>> - Support for PyPy 2.7 v5.6.0 has been added. While not complete, this
>> is a milestone for PyPy's C-API compatibility layer.
>>
>> Thanks to all,
>>
>> Chuck
>
>
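A minimal sketch of the ``np.einsum`` contraction-order optimization highlighted in the release notes above (the ``optimize`` keyword is new in 1.12; the shapes here are arbitrary examples):

```python
import numpy as np

# chained matrix contraction; optimize=True lets einsum pick a cheaper
# contraction order instead of evaluating strictly left to right
a = np.random.rand(8, 16)
b = np.random.rand(16, 32)
c = np.random.rand(32, 8)

out = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
assert np.allclose(out, a.dot(b).dot(c))
```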


Re: [Numpy-discussion] padding options for diff

2016-10-26 Thread Peter Creasey
> Date: Wed, 26 Oct 2016 16:18:05 -0400
> From: Matthew Harrigan <harrigan.matt...@gmail.com>
>
> Would it be preferable to have to_begin='first' as an option under the
> existing kwarg to avoid overlapping?
>
>> if keep_left:
>>     if to_begin is None:
>>         to_begin = np.take(a, [0], axis=axis)
>>     else:
>>         raise ValueError('np.diff(a, keep_left=False, to_begin=None) '
>>                          'can be used with either keep_left or to_begin, '
>>                          'but not both.')
>>
>> Generally I try to avoid optional keyword argument overlap, but in
>> this case it is probably justified.
>>

It works for me. I can't *think* of a case where you could have a
np.diff on a string array and 'first' could be confused with an
element, since you're not allowed diff on strings in the present numpy
anyway (unless wiser heads than me know something!). Feel free to move
the conversation to github btw.

Peter


Re: [Numpy-discussion] padding options for diff

2016-10-26 Thread Peter Creasey
> Date: Wed, 26 Oct 2016 09:05:41 -0400
> From: Matthew Harrigan <harrigan.matt...@gmail.com>
>
> np.cumsum(np.diff(x, to_begin=x.take([0], axis=axis), axis=axis), axis=axis)
>
> That's certainly not going to win any beauty contests.  The 1d case is
> clean though:
>
> np.cumsum(np.diff(x, to_begin=x[0]))
>
> I'm not sure if this means the API should change, and if so how.  Higher
> dimensional arrays seem to just have extra complexity.
>
>>
>> I like the proposal, though I suspect that making it general has
>> obscured that the most common use-case for padding is to make the
>> inverse of np.cumsum (at least that's what I frequently need), and now
>> in the multidimensional case you have the somewhat unwieldy:
>>
>> >>> np.diff(a, axis=axis, to_begin=np.take(a, 0, axis=axis))
>>
>> rather than
>>
>> >>> np.diff(a, axis=axis, keep_left=True)
>>
>> which of course could just be an option upon what you already have.
>>

So my suggestion was intended that you might want an additional
keyword argument (keep_left=False) to make the inverse np.cumsum
use-case easier, i.e. you would have something in your np.diff like:

if keep_left:
    if to_begin is None:
        to_begin = np.take(a, [0], axis=axis)
    else:
        raise ValueError('np.diff(a, keep_left=False, to_begin=None) '
                         'can be used with either keep_left or to_begin, '
                         'but not both.')

Generally I try to avoid optional keyword argument overlap, but in
this case it is probably justified.

Peter
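For concreteness, the inverse-of-cumsum behaviour that ``keep_left`` is meant to provide can be sketched as a standalone wrapper (``diff_keep_left`` is an illustrative name, not an existing NumPy function):

```python
import numpy as np

def diff_keep_left(a, axis=-1):
    # prepend the leading slice along `axis`, so that cumsum of the
    # result reconstructs the original array exactly
    a = np.asarray(a)
    first = np.take(a, [0], axis=axis)
    return np.concatenate([first, np.diff(a, axis=axis)], axis=axis)

a = np.array([[3, 5, 9],
              [2, 4, 8]])
d = diff_keep_left(a, axis=1)
# round trip: cumsum inverts diff_keep_left
assert np.array_equal(np.cumsum(d, axis=1), a)
```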


Re: [Numpy-discussion] padding options for diff

2016-10-25 Thread Peter Creasey
> Date: Mon, 24 Oct 2016 08:44:46 -0400
> From: Matthew Harrigan <harrigan.matt...@gmail.com>
>
> I posted a pull request <https://github.com/numpy/numpy/pull/8206> which
> adds optional padding kwargs "to_begin" and "to_end" to diff.  Those
> options are based on what's available in ediff1d.  It closes this issue
> <https://github.com/numpy/numpy/issues/8132>

I like the proposal, though I suspect that making it general has
obscured that the most common use-case for padding is to make the
inverse of np.cumsum (at least that’s what I frequently need), and now
in the multidimensional case you have the somewhat unwieldy:

>>> np.diff(a, axis=axis, to_begin=np.take(a, 0, axis=axis))

rather than

>>> np.diff(a, axis=axis, keep_left=True)

which of course could just be an option upon what you already have.

Best,
Peter


Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-11 Thread Peter Creasey
> On Sun, Oct 9, 2016 at 12:59 PM, Stephan Hoyer <sho...@gmail.com> wrote:
>
>>
>> I agree with Sebastian and Nathaniel. I don't think we can deviating from
>> the existing behavior (int ** int -> int) without breaking lots of existing
>> code, and if we did, yes, we would need a new integer power function.
>>
>> I think it's better to preserve the existing behavior when it gives
>> sensible results, and error when it doesn't. Adding another function
>> float_power for the case that is currently broken seems like the right way
>> to go.
>>
>

I actually suspect that the amount of code broken by int**int->float
may be relatively small (though extremely annoying for those that it
happens to, and it would definitely be good to have statistics). I
mean, Numpy silently transitioned to int32+uint64->float64 not so long
ago which broke my code, but the world didn’t end.

If the primary argument against int**int->float seems to be the
difficulty of managing the transition, with int**int->Error being the
seen as the required yet *very* painful intermediate step for the
large fraction of the int**int users who didn’t care if it was int or
float (e.g. the output is likely to be cast to float in the next step
anyway), and fail loudly for those users who need int**int->int, then
if you are prepared to risk a less conservative transition (i.e. we
think that latter group is small enough) you could skip the error on
users and just throw a warning for a couple of releases, along the
lines of:

WARNING int**int -> int is going to be deprecated in favour of
int**int->float in Numpy 1.16. To avoid seeing this message, either
use “from numpy import __future_float_power__” or explicitly set the
type of one of your inputs to float, or use the new ipower(x,y)
function for integer powers.

Peter
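For reference, the ``np.float_power`` function proposed earlier in the thread (shipped with NumPy 1.12) sidesteps the problem by always returning floats:

```python
import numpy as np

# float_power always upcasts to float, so negative integer exponents
# are well-defined (requires NumPy >= 1.12)
x = np.arange(1, 5)
out = np.float_power(x, -1)
assert out.dtype.kind == 'f'
assert np.allclose(out, [1.0, 0.5, 1.0 / 3.0, 0.25])
```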


Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

2016-09-02 Thread Peter Creasey
> Date: Wed, 31 Aug 2016 13:28:21 +0200
> From: Michael Bieri <mibi...@gmail.com>
>
> I'm not quite sure which approach is state-of-the-art as of 2016. How would
> you do it if you had to make a C/C++ library available in Python right now?
>
> In my case, I have a C library with some scientific functions on matrices
> and vectors. You will typically call a few functions to configure the
> computation, then hand over some pointers to existing buffers containing
> vector data, then start the computation, and finally read back the data.
> The library also can use MPI to parallelize.
>

Depending on how minimal and universal you want to keep things, I use
the ctypes approach quite often, i.e. treat your numpy inputs an
outputs as arrays of doubles etc using the ndpointer(...) syntax. I
find it works well if you have a small number of well-defined
functions (not too many options) which are numerically very heavy.
With this approach I usually wrap each method in python to check the
inputs for contiguity, pass in the sizes etc. and allocate the numpy
array for the result.

Peter
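As an illustration of this ctypes/``ndpointer`` pattern (using libc's ``memcpy`` as a stand-in for a scientific C routine, since the actual library is not shown here):

```python
import ctypes
import ctypes.util
import numpy as np
from numpy.ctypeslib import ndpointer

# declare the C signature with ndpointer so ctypes type-checks the
# arrays; for a real library you would CDLL your own .so instead
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.memcpy.restype = ctypes.c_void_p
libc.memcpy.argtypes = [
    ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),
    ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),
    ctypes.c_size_t,
]

def copy_doubles(src):
    # python wrapper enforces dtype/contiguity and allocates the
    # result array, as described in the message above
    src = np.ascontiguousarray(src, dtype=np.float64)
    dst = np.empty_like(src)
    libc.memcpy(dst, src, src.nbytes)
    return dst

a = np.linspace(0.0, 1.0, 5)
assert np.array_equal(copy_doubles(a), a)
```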


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-10 Thread Peter Cock
On Fri, Jun 10, 2016 at 7:42 AM, Nathaniel Smith <n...@pobox.com> wrote:
> On Mon, Jun 6, 2016 at 1:17 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>>
>> ...
>>
>> It looks to me like users want floats, while developers want the
>> easy path of raising an error. Darn those users, they just make
>> life sooo difficult...
>
> I dunno, with my user hat on I'd be incredibly surprised / confused /
> annoyed if an innocent-looking expression like
>
>   np.arange(10) ** 2
>
> started returning floats... having exact ints is a really nice feature
> of Python/numpy as compared to R/Javascript, and while it's true that
> int64 can overflow, there are also large powers that can be more
> precisely represented as int64 than float.
>
> -n

I was about to express a preference for (1), preserving integers
on output but treating negative powers as an error. However,
I realised the use case I had in mind does not apply:

Where I've used integer matrices as network topology adjacency
matrices, to get connectivity by paths of n steps you use A**n,
by which I mean A x A x ... A using matrix multiplication. But
in NumPy A**n will do element wise multiplication, so this
example is not helpful.

Charles R Harris <charlesr.har...@gmail.com> wrote:

>1. Integers to negative integer powers raise an error.
>2. Integers to integer powers always results in floats.

As an aside, using boolean matrices can be helpful in the
context of connectivity matrices. How would the proposals
here affect booleans, where there is no risk of overflow?
If we went with (2), using promotion to floats here would
be very odd:

>>> import numpy
>>> A = numpy.array([[False, True, False], [True, False, True],
...                  [True, True, False]], dtype=numpy.bool)
>>> A
array([[False,  True, False],
       [ True, False,  True],
       [ True,  True, False]], dtype=bool)
>>> A*A
array([[False,  True, False],
       [ True, False,  True],
       [ True,  True, False]], dtype=bool)
>>> A**2
array([[False,  True, False],
       [ True, False,  True],
       [ True,  True, False]], dtype=bool)
>>> numpy.dot(A,A)
array([[ True, False,  True],
       [ True,  True, False],
       [ True,  True,  True]], dtype=bool)
>>>

Regards,

Peter
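As a side note on the session above, repeated connectivity steps can also be written with an integer matrix power rather than chained ``np.dot`` calls (standard NumPy, independent of either proposal):

```python
import numpy as np

A = np.array([[False, True, False],
              [True, False, True],
              [True, True, False]])

# paths of exactly 2 steps: a matrix product, not elementwise A**2
two_step = np.dot(A, A)

# the same thing via an integer matrix power, thresholded back to bool
also_two_step = np.linalg.matrix_power(A.astype(int), 2) > 0
assert np.array_equal(two_step, also_two_step)
```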


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Peter Creasey
>
> +1
>
> On Sat, Jun 4, 2016 at 10:22 AM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>> Hi All,
>>
>> I've made a new post so that we can make an explicit decision. AFAICT, the
>> two proposals are
>>
>> Integers to negative integer powers raise an error.
>> Integers to integer powers always results in floats.
>>
>> My own sense is that 1. would be closest to current behavior and using a
>> float exponential when a float is wanted is an explicit way to indicate that
>> desire. OTOH, 2. would be the most convenient default for everyday numerical
>> computation, but I think would more likely break current code. I am going to
>> come down on the side of 1., which I don't think should cause too many
>> problems if we start with a {Future, Deprecation}Warning explaining the
>> workaround.
>>
>> Chuck
>>


+1 (grudgingly)

My thoughts on this are:
(i) Intuitive APIs are better, and power(a,b) suggests to a lot of
(most?) readers that you are going to invoke a function like the C
pow(double x, double y) on every element. Doing positive integer
powers with the same function name suggests a correspondence that is
in practice not that helpful. With a time machine I’d suggest a
separate function for positive integer powers, however...
(ii) I think that ship has sailed, and particularly with e.g. a**3 the
numpy conventions are backed up by quite a bit of code, probably too
much to change without a lot of problems. So I’d go with integer ^
negative integer is an error.

Peter


Re: [Numpy-discussion] linux wheels coming soon

2016-04-04 Thread Peter Cock
On Sun, Apr 3, 2016 at 2:11 AM, Matthew Brett <matthew.br...@gmail.com> wrote:
> On Fri, Mar 25, 2016 at 6:39 AM, Peter Cock <p.j.a.c...@googlemail.com> wrote:
>> On Fri, Mar 25, 2016 at 3:02 AM, Robert T. McGibbon <rmcgi...@gmail.com> 
>> wrote:
>>> I suspect that many of the maintainers of major scipy-ecosystem projects are
>>> aware of these (or other similar) travis wheel caches, but would guess that
>>> the pool of travis-ci python users who weren't aware of these wheel caches
>>> is much much larger. So there will still be a lot of travis-ci clock cycles
>>> saved by manylinux wheels.
>>>
>>> -Robert
>>
>> Yes exactly. Availability of NumPy Linux wheels on PyPI is definitely 
>> something
>> I would suggest adding to the release notes. Hopefully this will help trigger
>> a general availability of wheels in the numpy-ecosystem :)
>>
>> In the case of Travis CI, their VM images for Python already have a version
>> of NumPy installed, but having the latest version of NumPy and SciPy etc
>> available as Linux wheels would be very nice.
>
> We're very nearly there now.
>
> The latest versions of numpy, scipy, scikit-image, pandas, numexpr,
> statsmodels wheels for testing at
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/
>
> Please do test with:
> ...
>
> We would love to get any feedback as to whether these work on your machines.

Hi Matthew,

Testing on a 64bit CentOS 6 machine with Python 3.5 compiled
from source under my home directory:


$ python3.5 -m pip install --upgrade pip
Requirement already up-to-date: pip in ./lib/python3.5/site-packages

$ python3.5 -m pip install
--trusted-host=ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
--find-links=http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
numpy scipy
Requirement already satisfied (use --upgrade to upgrade): numpy in
./lib/python3.5/site-packages
Requirement already satisfied (use --upgrade to upgrade): scipy in
./lib/python3.5/site-packages

$ python3.5 -m pip install
--trusted-host=ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
--find-links=http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
numpy scipy --upgrade
Collecting numpy
  Downloading 
http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/numpy-1.11.0-cp35-cp35m-manylinux1_x86_64.whl
(15.5MB)
100% || 15.5MB 42.1MB/s
Collecting scipy
  Downloading 
http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/scipy-0.17.0-cp35-cp35m-manylinux1_x86_64.whl
(40.8MB)
100% || 40.8MB 53.6MB/s
Installing collected packages: numpy, scipy
  Found existing installation: numpy 1.10.4
Uninstalling numpy-1.10.4:
  Successfully uninstalled numpy-1.10.4
  Found existing installation: scipy 0.16.0
Uninstalling scipy-0.16.0:
  Successfully uninstalled scipy-0.16.0
Successfully installed numpy-1.11.0 scipy-0.17.0


$ python3.5 -c 'import numpy; numpy.test("full")'
Running unit tests for numpy
NumPy version 1.11.0
NumPy relaxed strides checking option: False
NumPy is installed in /home/xxx/lib/python3.5/site-packages/numpy
Python version 3.5.0 (default, Sep 28 2015, 11:25:31) [GCC 4.4.7
20120313 (Red Hat 4.4.7-16)]
nose version 1.3.7
.SKKK

Re: [Numpy-discussion] Reflect array?

2016-03-29 Thread Peter Creasey
> >> On Tue, Mar 29, 2016 at 1:46 PM, Benjamin Root <ben.v.r...@gmail.com>
> >> wrote:
> >> > Is there a quick-n-easy way to reflect a NxM array that represents a
> >> > quadrant into a 2Nx2M array? Essentially, I am trying to reduce the size
> >> > of
> >> > an expensive calculation by taking advantage of the fact that the first
> >> > part
> >> > of the calculation is just computing gaussian weights, which is radially
> >> > symmetric.
> >> >
> >> > It doesn't seem like np.tile() could support this (yet?). Maybe we could
> >> > allow negative repetitions to mean "reflected"? But I was hoping there
> >> > was
> >> > some existing function or stride trick that could accomplish what I am
> >> > trying.
> >> >
> >> > x = np.linspace(-5, 5, 20)
> >> > y = np.linspace(-5, 5, 24)
> >> > z = np.hypot(x[None, :], y[:, None])
> >> > zz = np.hypot(x[None, :int(len(x)//2)], y[:int(len(y)//2), None])
> >> > zz = some_mirroring_trick(zz)
> >>
>
> You can avoid the allocation with preallocation:
>
> nx = len(x) // 2
> ny = len(y) // 2
> zz = np.zeros((len(y), len(x)))
> zz[:ny, -nx:] = np.hypot.outer(y[:ny], x[:nx])
> zz[:ny, :nx] = zz[:ny, :-nx-1:-1]
> zz[-ny:, :] = zz[ny::-1, :]
>
> if nx * 2 != len(x):
>     zz[:ny, nx] = y[::-1]
>     zz[-ny:, nx] = y
> if ny * 2 != len(y):
>     zz[ny, :nx] = x[::-1]
>     zz[ny, -nx:] = x
>
> All of the steps after the call to `hypot.outer` create views. This is
> untested, so you may need to tweak the indices a little.
>

A couple of months ago I wrote a C-code with ctypes to do this sort of
mirroring trick on an (N,N,N) numpy array of fft weights (where you
can exploit the 48-fold symmetry of using interchangeable axes), which
was pretty useful since I had N^3 >> 1e9 and the weight function was
quite expensive. Obviously the (N,M) case doesn't allow quite so much
optimization but if it could be interesting then PM me.

Best,
Peter


Re: [Numpy-discussion] linux wheels coming soon

2016-03-25 Thread Peter Cock
On Fri, Mar 25, 2016 at 3:02 AM, Robert T. McGibbon <rmcgi...@gmail.com> wrote:
> I suspect that many of the maintainers of major scipy-ecosystem projects are
> aware of these (or other similar) travis wheel caches, but would guess that
> the pool of travis-ci python users who weren't aware of these wheel caches
> is much much larger. So there will still be a lot of travis-ci clock cycles
> saved by manylinux wheels.
>
> -Robert

Yes exactly. Availability of NumPy Linux wheels on PyPI is definitely something
I would suggest adding to the release notes. Hopefully this will help trigger
a general availability of wheels in the numpy-ecosystem :)

In the case of Travis CI, their VM images for Python already have a version
of NumPy installed, but having the latest version of NumPy and SciPy etc
available as Linux wheels would be very nice.

Peter

P.S.

As an aside, PyPI seems to be having trouble displaying the main NumPy
page https://pypi.python.org/pypi/numpy at the moment (Error 404 page):

https://bitbucket.org/pypa/pypi/issues/423/version-less-page-for-numpy-broken-error


Re: [Numpy-discussion] linux wheels coming soon

2016-03-24 Thread Peter Cock
On Thu, Mar 24, 2016 at 6:37 PM, Nathaniel Smith <n...@pobox.com> wrote:
> On Mar 24, 2016 8:04 AM, "Peter Cock" <p.j.a.c...@googlemail.com> wrote:
>>
>> Hi Nathaniel,
>>
>> Will you be providing portable Linux wheels aka manylinux1?
>> https://www.python.org/dev/peps/pep-0513/
>
> Matthew Brett will (probably) do the actual work, but yeah, that's the idea
> exactly. Note the author list on that PEP ;-)
>
> -n

Yep - I was partly double checking, but also aware many folk
skim the NumPy list and might not be aware of PEP-513 and
the standardisation efforts going on.

Also in addition to http://travis-dev-wheels.scipy.org/ and
http://travis-wheels.scikit-image.org/ mentioned by Ralf there
is http://wheels.scipy.org/ which I presume will get the new
Linux wheels once they go live.

Is it possible to add a README to these listings explaining
what they are intended to be used for?

P.S. To save anyone else Googling, you can do things like this:

pip install -r requirements.txt --timeout 60 --trusted-host
travis-wheels.scikit-image.org -f
http://travis-wheels.scikit-image.org/

Thanks,

Peter


Re: [Numpy-discussion] linux wheels coming soon

2016-03-24 Thread Peter Cock
Hi Nathaniel,

Will you be providing portable Linux wheels aka manylinux1?
https://www.python.org/dev/peps/pep-0513/

Does this also open up the door to releasing wheels for SciPy
too?

While speeding up "pip install" would be of benefit in itself,
I am particularly keen to see this for use within automated
testing frameworks like TravisCI where currently having to
install NumPy (and SciPy) from source is an unreasonable
overhead.

Many thanks to everyone working on this,

Peter

On Tue, Mar 15, 2016 at 11:33 PM, Nathaniel Smith <n...@pobox.com> wrote:
> Hi all,
>
> Just a heads-up that we're planning to upload Linux wheels for numpy
> to PyPI soon. Unless there's some objection, these will be using
> ATLAS, just like the current Windows wheels, for the same reasons --
> moving to something faster like OpenBLAS would be good, but given the
> concerns about OpenBLAS's reliability we want to get something working
> first and then worry about making it fast. (Plus it doesn't make sense
> to ship different BLAS libraries on Windows versus Linux -- that just
> multiplies our support burden for no reason.)
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org


Re: [Numpy-discussion] proposal: new logspace without the log in the argument

2016-02-18 Thread Peter Creasey
>
> Some questions it'd be good to get feedback on:
>
> - any better ideas for naming it than "geomspace"? It's really too bad
> that the 'logspace' name is already taken.
>
> - I guess the alternative interface might be something like
>
> np.linspace(start, stop, steps, spacing="log")
>
> what do people think?
>
> -n
>
You’ve got to wonder how many people actually use logspace(start,
stop, num) in preference to 10.0**linspace(start, stop, num) - i.e. I
prefer the latter for clarity, and if I wanted performance I’d be
prepared to write something more ugly.

I don’t mind geomspace(), but if you are brainstorming
>>> linlogspace(start, end) # i.e. ‘linear in log-space’
is ok for me too.

Peter
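The equivalence being referred to, for concreteness:

```python
import numpy as np

a = np.logspace(0, 3, 4)           # [1., 10., 100., 1000.]
b = 10.0 ** np.linspace(0, 3, 4)   # the arguably clearer spelling
assert np.allclose(a, b)
```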


Re: [Numpy-discussion] Behavior of np.random.uniform

2016-01-20 Thread Peter Creasey
>> I would also point out that requiring open vs closed intervals (in
>> doubles) is already an extremely specialised use case. In terms of
>> *sampling the reals*, there is no difference between the intervals
>> (a,b) and [a,b], because the endpoints have measure 0, and even with
>> double-precision arithmetic, you are going to have to make several
>> petabytes of random data before you hit an endpoint...
>>
> Petabytes ain't what they used to be ;) I remember testing some hardware
> which, due to grounding/timing issues would occasionally goof up a readable
> register. The hardware designers never saw it because they didn't test for
> hours and days at high data rates. But it was there, and it would show up
> in the data. Measure zero is about as real as real numbers...
>
> Chuck

Actually, your point is well taken and I am quite mistaken. If you
pick some values like uniform(low, low * (1+2**-52)) then you can hit
your endpoints pretty easily. I am out of practice making
pathological tests for double precision arithmetic.

I guess my suggestion would be to add the deprecation warning and
change the docstring to warn that the interval is not guaranteed to be
right-open.

Peter
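A sketch of the pathological case described above, where the interval is only one ulp wide and the nominally open right endpoint becomes reachable through rounding:

```python
import numpy as np

low = 1.0
high = low * (1 + 2**-52)  # exactly one ulp above low
samples = np.random.uniform(low, high, size=10000)
# low + u*(high - low) rounds to either `low` or `high`, so the "open"
# endpoint is hit roughly half the time rather than with measure zero
assert samples.max() == high
assert samples.min() == low
```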


Re: [Numpy-discussion] Behavior of np.random.uniform

2016-01-20 Thread Peter Creasey
+1 for the deprecation warning for low>high, I think the cases where
that is called are more likely to be unintentional rather than someone
trying to use uniform(closed_end, open_end) and you might help users
find bugs -  i.e. the idioms of ‘explicit is better than implicit’ and
‘fail early and fail loudly’ apply.

I would also point out that requiring open vs closed intervals (in
doubles) is already an extremely specialised use case. In terms of
*sampling the reals*, there is no difference between the intervals
(a,b) and [a,b], because the endpoints have measure 0, and even with
double-precision arithmetic, you are going to have to make several
petabytes of random data before you hit an endpoint...

Peter


Re: [Numpy-discussion] How to find indices of values in an array (indirect in1d) ?

2015-12-30 Thread Peter Creasey
>
> In the end, I've only gotten the list comprehension to work as expected
>
> A = [0,0,1,3]
> B = np.arange(8)
> np.random.shuffle(B)
> I = [list(B).index(item) for item in A if item in B]
>
>
> But Mark's and Sebastian's methods do not seem to work...
>


The function you want is also in the open source astronomy package
iccpy ( https://github.com/Lowingbn/iccpy ), which essentially does a
variant of Sebastian’s code (which I also couldn’t quite get working),
and handles a few things like old numpy versions (pre 1.4) and allows
you to specify if B is already sorted.

>>> from iccpy.utils import match
>>> print match(A,B)
[ 1  2  0 -1]

Peter
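For readers who don't want the iccpy dependency, an equivalent pure-NumPy lookup can be sketched with ``argsort`` and ``searchsorted`` (this ``match`` is my own illustrative helper, not iccpy's actual code; it assumes the elements of ``b`` are unique):

```python
import numpy as np

def match(a, b, missing=-1):
    # for each element of `a`, return its index in `b`,
    # or `missing` where it does not occur in `b`
    a = np.asarray(a)
    b = np.asarray(b)
    order = np.argsort(b)
    pos = np.clip(np.searchsorted(b[order], a), 0, len(b) - 1)
    idx = order[pos]
    return np.where(b[idx] == a, idx, missing)

B = np.array([3, 0, 1, 5, 2])
result = match([0, 0, 1, 3, 7], B)
assert list(result) == [1, 1, 2, 0, -1]
```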


Re: [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

2015-12-22 Thread Peter Creasey
>>> The tests pass on my machine, but I see that the TravisCI builds are
>>> giving assertion fails (on my own test) with python 3.3 and 3.5 of the
>>> form:
>>> > assert_almost_equal
>>> > TypeError: Cannot cast array data from dtype('complex128') to
>>> dtype('float64') according to the rule 'safe'
>>>
>
> The problem then is probably here
> <https://github.com/numpy/numpy/pull/6872/files#diff-45aacfd88a495829ee10815c1d02326fL623>
> .
>
> You may want to throw in a PyErr_Clear()
> <https://docs.python.org/3/c-api/exceptions.html#c.PyErr_Clear> when the
> conversion of the fp array to NPY_DOUBLE fails before trying with
> NPY_CDOUBLE, and check if it goes away.
>

Thanks for your tip Jaime, you were exactly right. Unfortunately I
only saw your message after and addressed the problem in a different
way to your suggestion (passing in a flag instead). It'd be great to
have your input on the PR though (maybe github or pm me, to avoid
flooding the mailing list).

Best,
Peter


Re: [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

2015-12-22 Thread Peter Creasey
>> > assert_almost_equal
>> > TypeError: Cannot cast array data from dtype('complex128') to
>> dtype('float64') according to the rule 'safe'
>>
>>
>
> Hi Peter, that error is unrelated to assert_almost_equal. What happens is
> that when you pass in a complex argument `fp` to your modified
> `compiled_interp`, you're somewhere doing a cast that's not safe and
> trigger the error at
> https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/ctors.c#L1930.


Thanks a lot Ralf! The build log I was looking at (
https://travis-ci.org/numpy/numpy/jobs/98198323 ) really confused me
by not mentioning the function call that wrote the error, but now I
think I understand and can recreate the failure in my setup.

Best,
Peter


[Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

2015-12-21 Thread Peter Creasey
Hi all,
I submitted a PR (#6872) for using complex numbers in np.lib.interp.

The tests pass on my machine, but I see that the TravisCI builds are
giving assertion fails (on my own test) with python 3.3 and 3.5 of the
form:
> assert_almost_equal
> TypeError: Cannot cast array data from dtype('complex128') to 
> dtype('float64') according to the rule 'safe'

When I was writing the test I used np.testing.assert_almost_equal with
complex128 as it works in my python 2.7, however having checked the
docstring I cannot tell what the expected behaviour should be (complex
or no complex allowed). Should my test be changed or the
assert_almost_equal?

Best,
Peter
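For context, the behaviour the PR adds (the complex ``fp`` support landed in a later NumPy release) looks like this:

```python
import numpy as np

# complex-valued fp: real and imaginary parts are interpolated together
xp = np.array([0.0, 1.0, 2.0])
fp = np.array([1 + 1j, 3 - 1j, 5 + 0j])
out = np.interp([0.5, 1.5], xp, fp)
assert np.allclose(out, [2 + 0j, 4 - 0.5j])
```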


Re: [Numpy-discussion] Proposal: stop providing official win32 downloads (for now)

2015-12-18 Thread Peter Cock
On Fri, Dec 18, 2015 at 9:12 AM, Nathaniel Smith <n...@pobox.com> wrote:

> Hi all,
>
> I'm wondering what people think of the idea of us (= numpy) stopping
> providing our "official" win32 builds (the "superpack installers"
> distributed on sourceforge) starting with the next release.
>
> These builds are:
>
> - low quality: they're linked to an old & untuned build of ATLAS, so
> linear algebra will be dramatically slower than builds using MKL or
> OpenBLAS. They're win32 only and will never support win64. They're
> using an ancient version of gcc. They will never support python 3.5 or
> later.
>
> - a dead end: there's a lot of work going on to solve the windows
> build problem, and hopefully we'll have something better in the
> short-to-medium-term future; but, any solution will involve throwing
> out the current system entirely and switching to a new toolchain,
> wheel-based distribution, etc.
>
> - a drain on our resources: producing these builds is time-consuming
> and finicky; I'm told that these builds alone are responsible for a
> large proportion of the energy spent preparing each release, and take
> away from other things that our release managers could be doing (e.g.
> QA and backporting fixes).
>
> So the idea would be that for 1.11, we create a 1.11 directory on
> sourceforge and upload one final file: a README explaining the
> situation, a pointer to the source releases on pypi, and some links to
> places where users can find better-supported windows builds (Gohlke's
> page, Anaconda, etc.). I think this would serve our users better than
> the current system, while also freeing up a drain on our resources.
>
> Thoughts?
>
> -n
>


Hi Nathaniel,

Speaking as a downstream library (Biopython) using the NumPy
C API, we have to ensure binary compatibility with your releases.

We've continued to produce our own Windows 32 bit installers -
originally the .exe kind (from python setup.py bdist_wininst) but
now also .msi (from python setup.py bdist_msi).

However, in the absence of an official 64bit Windows NumPy
installer we've simply pointed people at Chris Gohlke's stack
http://www.lfd.uci.edu/~gohlke/pythonlibs/ and will likely also start
to recommend using Anaconda.

This means we don't have any comparable download metrics
to gauge 32 bit vs 64 bit Windows usage, but personally I'm
quite happy for NumPy to phase out their 32 bit Windows
installers (and then we can do the same).

I hope we can follow NumPy's lead with wheel distribution etc.

Thanks,

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FeatureRequest: support for array

2015-12-12 Thread Peter Creasey
> >
> > from itertools import chain
> > def fromiter_awesome_edition(iterable):
> > elem = next(iterable)
> > dtype = whatever_numpy_does_to_infer_dtypes_from_lists(elem)
> > return np.fromiter(chain([elem], iterable), dtype=dtype)
> >
> > I think this would be a huge win for usability. Always getting tripped up by
> > the dtype requirement. I can submit a PR if people like this pattern.
>
> This isn't the semantics of np.array, though -- np.array will look at
> the whole input and try to find a common dtype, so this can't be the
> implementation for np.array(iter). E.g. try np.array([1, 1.0])
>
> I can see an argument for making the dtype= argument to fromiter
> optional, with a warning in the docs that it will guess based on the
> first element and that you should specify it if you don't want that.
> It seems potentially a bit error prone (in the sense that it might
> make it easier to end up with code that works great when you test it
> but then breaks later when something unexpected happens), but maybe
> the usability outweighs that. I don't use fromiter myself so I don't
> have a strong opinion.

I’m -1 on this, from an occasional user of np.fromiter, also for the
np.fromiter([1, 1.5, 2]) ambiguity reason. Pure Python does a great
job of preventing users from hurting themselves with limited-precision
arithmetic; however, if their application makes them care enough about
speed (to be using numpy) and memory (to be using np.fromiter), then
it can almost always be assumed that the resulting dtype was important
enough to be specified.
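For illustration, a sketch of the proposal under discussion (the helper name is hypothetical; np.fromiter itself requires an explicit dtype). It also demonstrates the ambiguity objection: inferring the dtype from the first element alone silently truncates later floats:

```python
import numpy as np
from itertools import chain

def fromiter_inferred(iterable):
    # Peel off the first element, infer a dtype from it alone, then
    # hand everything back to np.fromiter (as in the quoted proposal).
    it = iter(iterable)
    first = next(it)
    dtype = np.asarray(first).dtype
    return np.fromiter(chain([first], it), dtype=dtype)

print(fromiter_inferred([1, 1.5, 2]))   # first element is an int, so
                                        # 1.5 is truncated: [1 1 2]
```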

P
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Where is Jaime?

2015-12-07 Thread Peter Creasey
>> >
>> > Is the interp fix in the google pipeline or do we need a workaround?
>> >
>>
>> Oooh, if someone is looking at changing interp, is there any chance
>> that fp could be extended to take complex128 rather than just float
>> values? I.e. so that I could write:
>>
>> >>> y = interp(mu, theta, m)
>> rather than
>> >>> y = interp(mu, theta, m.real) + 1.0j*interp(mu, theta, m.imag)
>>
>> which *sounds* like it might be simple and more (Num)pythonic.
>
> That sounds like an excellent improvement and you should submit a PR
> implementing it :-).
>
> "The interp fix" in question though is a regression in 1.10 that's blocking
> 1.10.2, and needs a quick minimal fix asap.
>


Good answer - as soon as I hit 'send' I wondered how many bugs get
introduced by people trying to attach feature requests to bug fixes. I
will take a look at the code later and pm you if I get anywhere...
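In the meantime, the two-call workaround quoted above can be wrapped in a small helper; a minimal sketch (the helper name is mine, not from the thread):

```python
import numpy as np

def interp_complex(x, xp, fp):
    """Linear interpolation with complex-valued fp, done as two real
    interpolations over the real and imaginary parts."""
    fp = np.asarray(fp)
    return np.interp(x, xp, fp.real) + 1j * np.interp(x, xp, fp.imag)

xp = np.array([0.0, 1.0])
fp = np.array([0.0 + 0.0j, 2.0 + 4.0j])
print(interp_complex(0.5, xp, fp))  # (1+2j)
```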

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Misleading/erroneous TypeError message

2015-11-24 Thread Peter Creasey
> > I just upgraded my numpy and started to receive a TypeError from one of
> > my codes that relied on the old, less strict, casting behaviour. The error
> > message, however, left me scratching my head when trying to debug something
> > like this:
> >
> > >>> a = array([0],dtype=uint64)
> > >>> a += array([1],dtype=int64)
> > TypeError: Cannot cast ufunc add output from dtype('float64') to
> > dtype('uint64') with casting rule 'same_kind'
> >
> > Where does the 'float64' come from?!?!
> >
>
> The combination of uint64 and int64 leads to promotion to float64 as the
> best option for the combination of signed and unsigned. To fix things, you
> can either use `np.add` with an output argument and `casting='unsafe'` or
> just be careful about using unsigned types.

Thanks for the quick response. I understand there are reasons for the
promotion to float64 (although my expectation would usually be that
Numpy is going to follow C conventions); however, I found the error
a little unhelpful. In particular Numpy is complaining about a dtype
(float64) that it silently promoted to, rather than the dtype that the
user provided, which generally seems like a bad idea. Could Numpy
somehow complain about the original dtypes in this case? Or at least
give a warning about the first promotion (e.g. loss of precision)?
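The workaround suggested in the quoted reply (np.add with an explicit output array and casting='unsafe') looks like this as a sketch:

```python
import numpy as np

a = np.array([0], dtype=np.uint64)
b = np.array([1], dtype=np.int64)

# a += b would raise: uint64 + int64 promotes to float64, and
# float64 -> uint64 is not a 'same_kind' cast.
np.add(a, b, out=a, casting='unsafe')
print(a, a.dtype)  # [1] uint64
```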

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Misleading/erroneous TypeError message

2015-11-24 Thread Peter Creasey
Hi,

I just upgraded my numpy and started to receive a TypeError from one of my
codes that relied on the old, less strict, casting behaviour. The error
message, however, left me scratching my head when trying to debug something
like this:

>>> a = array([0],dtype=uint64)
>>> a += array([1],dtype=int64)
TypeError: Cannot cast ufunc add output from dtype('float64') to
dtype('uint64') with casting rule 'same_kind'

Where does the 'float64' come from?!?!

Peter

PS Thanks for all the great work guys, numpy is a fantastic tool and has
been a lot of help to me over the years!
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Verify your sourceforge windows installer downloads

2015-05-28 Thread Peter Cock
Migrating from SourceForge seems worth considering. I also
agree this is a breach of trust with the open source community.

It is my impression that the GIMP team stopped using SF for
downloads some time ago in favour of using their own website,
leaving the SF account live to maintain the old release downloads:

https://mail.gnome.org/archives/gimp-developer-list/2015-May/msg00098.html

According to the SourceForge blog, they assumed the GIMP for
Windows account was abandoned, and it appears SF decided
to make some money off it as a mirror site offering adware-bundled
versions of the official releases:

http://sourceforge.net/blog/gimp-win-project-wasnt-hijacked-just-abandoned/

We would not want the same thing to happen to NumPy, but on
the other hand deleting all the old releases on SourceForge
would break a vast number of installation scripts/recipes.

Peter

On Thu, May 28, 2015 at 2:35 PM, David Cournapeau courn...@gmail.com wrote:
 IMO, this really begs the question on whether we still want to use
 sourceforge at all. At this point I just don't trust the service at all
 anymore.

 Could we use some resources (e.g. rackspace ?) to host those files ? Do we
 know how much traffic they get so estimate the cost ?

 David

 On Thu, May 28, 2015 at 9:46 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:

 hi,
 It has been reported that sourceforge has taken over the gimp
 unofficial windows downloader page and temporarily bundled the
 installer with unauthorized adware:
 https://plus.google.com/+gimp/posts/cxhB1PScFpe

 As NumPy is also distributing windows installers via sourceforge I
 recommend that when you download the files you verify the downloads
 via the checksums in the README.txt before using them. The README.txt
 is clearsigned with my gpg key so it should be safe from tampering.
 Unfortunately as I don't use windows I cannot give any advice on how
 to do the verification on these platforms. Maybe someone familiar with
 available tools can chime in.

 I have checked the numpy downloads and they still match what I
 uploaded, but as sourceforge does redirect based on OS and geolocation
 this may not mean much.

 Cheers,
 Julian Taylor
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy compilation error

2015-04-12 Thread Peter Kerpedjiev

Dear all,

Upon trying to install numpy using 'pip install numpy' in a virtualenv, 
I get the following error messages:


creating build/temp.linux-x86_64-2.7/numpy/random/mtrand

compile options: '-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 
-D_LARGEFILE64_SOURCE=1 -Inumpy/core/include 
-Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private 
-Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath 
-Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort 
-Inumpy/core/include -I/usr/include/python2.7 
-Ibuild/src.linux-x86_64-2.7/numpy/core/src/private 
-Ibuild/src.linux-x86_64-2.7/numpy/core/src/private 
-Ibuild/src.linux-x86_64-2.7/numpy/core/src/private 
-Ibuild/src.linux-x86_64-2.7/numpy/core/src/private -c'

gcc: numpy/random/mtrand/distributions.c

numpy/random/mtrand/distributions.c: In function ‘loggam’:

numpy/random/mtrand/distributions.c:892:1: internal compiler error: Illegal 
instruction

 }


 ^

Please submit a full bug report,

with preprocessed source if appropriate.

See http://bugzilla.redhat.com/bugzilla for instructions.

Preprocessed source stored into /tmp/ccjkBSd2.out file, please attach this to 
your bugreport.

This leads to the compilation process failing with this error:


Cleaning up...

Command /home/mescalin/pkerp/.virtualenvs/notebooks/bin/python -c "import setuptools;__file__='/home/mescalin/pkerp/.virtualenvs/notebooks/build/numpy/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-c_Cd7B-record/install-record.txt --single-version-externally-managed --install-headers /home/mescalin/pkerp/.virtualenvs/notebooks/include/site/python2.7 failed with error code 1 in /home/mescalin/pkerp/.virtualenvs/notebooks/build/numpy

Traceback (most recent call last):

  File "/home/mescalin/pkerp/.virtualenvs/notebooks/bin/pip", line 9, in <module>

    load_entry_point('pip==1.4.1', 'console_scripts', 'pip')()

  File "/home/mescalin/pkerp/.virtualenvs/notebooks/lib/python2.7/site-packages/pip/__init__.py", line 148, in main

    return command.main(args[1:], options)

  File "/home/mescalin/pkerp/.virtualenvs/notebooks/lib/python2.7/site-packages/pip/basecommand.py", line 169, in main

    text = '\n'.join(complete_log)

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: ordinal not in range(128)


Have any of you encountered a similar problem before?

Thanks in advance,

-Peter



The gcc version is:

[pkerp@fluidspace ~]$ gcc --version

gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-7)

Copyright (C) 2013 Free Software Foundation, Inc.

This is free software; see the source for copying conditions.  There is NO

warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] cPickle.loads and Numeric

2014-02-25 Thread Peter Cock
On Tue, Feb 25, 2014 at 4:41 PM, Alexander Belopolsky ndar...@mac.com wrote:

 On Tue, Feb 25, 2014 at 11:29 AM, Benjamin Root ben.r...@ou.edu wrote:

 I seem to recall reading somewhere that pickles are not intended to be
 long-term archives as there is no guarantee that a pickle made in one
 version of python would work in another version, much less between different
 versions of the same (or similar) packages.

 That's not true about Python core and stdlib.  Python developers strive to
 maintain backward compatibility and any instance of newer python failing to
 read older pickles would be considered a bug.  This is even true across 2.x
 / 3.x line.

 You mileage with 3rd party packages, especially 10+ years old ones may vary.

As an example of a 10+ year old project, Biopython has accidentally
broken some pickled objects from older versions of Biopython.

Accidental breakages aside, I personally would not use pickle for
long-term storage. Domain-specific data formats, or something simple
like tabular data or JSON, seem safer.
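As a toy illustration of that trade-off (not from the original mail): both round-trip, but the pickle blob is only readable from Python, while the JSON text is portable and inspectable decades later.

```python
import json
import pickle

data = {"name": "sample", "values": [1.0, 2.5, 3.0]}

# Pickle round-trips, but ties the archive to Python (and sometimes
# to particular library versions, as with the old Numeric arrays):
blob = pickle.dumps(data)
assert pickle.loads(blob) == data

# JSON is language-neutral plain text, a safer long-term format
# for simple structured/tabular data:
text = json.dumps(data)
assert json.loads(text) == data
```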

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] De Bruijn sequence

2014-02-01 Thread Peter Cock
On Sat, Feb 1, 2014 at 3:40 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:

 On Fri, Jan 24, 2014 at 12:26 AM, Vincent Davis vinc...@vincentdavis.net
 wrote:

 I happen to be working with De Bruijn sequences. Is there any interest in
 this being part of numpy/scipy?

 https://gist.github.com/vincentdavis/8588879

 That looks like an old copy of GPL code from Sage:
 http://git.sagemath.org/sage.git/tree/src/sage/combinat/debruijn_sequence.pyx

 Besides the licensing issue, it doesn't really belong in scipy and certainly
 not in numpy imho.

 Ralf

If it is GPL code that would be a problem, but in terms of scope it might
fit under Biopython given how much De Bruijn graphs are used in
current sequence analysis.

Regards,

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-27 Thread Peter Rennert
First, sorry for not responding to your other replies; there was a jam
in Thunderbird and I did not receive your answers.

The bits() seem to stay alive after deleting the image:

from PySide import QtGui
image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')
a = image.bits()
del image
b = str(a)

Works. But it might still be a Garbage collector thing and it might 
break later. I do not know enough about Python and its buffers to know 
what to expect here.

As a solution I have done something similar to what was proposed earlier,
just that I derived from ndarray and kept the QImage reference in it:

from PySide import QtGui as _qt
import numpy as _np

class MemoryTie(_np.ndarray):
    def __new__(cls, image):
        # Retrieving parameters for _np.ndarray()
        dims = [image.height(), image.width()]

        strides = [image.bytesPerLine(), 0]

        bits = image.bits()

        if image.format() == _qt.QImage.Format_Indexed8:
            dtype = _np.uint8
            strides[1] = 1
        elif image.format() == _qt.QImage.Format_RGB32 \
                or image.format() == _qt.QImage.Format_ARGB32 \
                or image.format() == _qt.QImage.Format_ARGB32_Premultiplied:
            dtype = _np.uint32
            strides[1] = 4
        elif image.format() == _qt.QImage.Format_Invalid:
            raise ValueError("qimageview got invalid QImage")
        else:
            raise ValueError("qimageview can only handle 8- or 32-bit QImages")

        # creation of the ndarray (using the dtype selected above, not a
        # hard-coded uint32); keep a reference to the QImage so its buffer
        # stays alive as long as the array does
        obj = _np.ndarray(dims, dtype, bits, 0, strides, 'C').view(cls)
        obj._image = image

        return obj


Thanks all for your help,

P

On 11/26/2013 10:58 PM, Nathaniel Smith wrote:
 On Tue, Nov 26, 2013 at 2:55 PM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
 Btw, I just wanted to file a bug at PySide, but it might be alright at
 their end, because I can do this:

 from PySide import QtGui

 image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 a = image.bits()

 del image

 a
 #read-write buffer ptr 0x7f5fe0034010, size 1478400 at 0x3c1a6b0
 That just means that the buffer still has a pointer to the QImage's
 old memory. It doesn't mean that following that pointer won't crash.
 Try str(a) or something that actually touches the buffer contents...


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
Hi,

As the title says, I am looking for a way to set, in Python, the base
of an ndarray to an object.

Use case is porting qimage2ndarray to PySide where I want to do 
something like:

In [1]: from PySide import QtGui

In [2]: image = 
QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

In [3]: import numpy as np

In [4]: a = np.frombuffer(image.bits())

-- I would like to do something like:
In [5]: a.base = image

-- to avoid situations such as:
In [6]: del image

In [7]: a
Segmentation fault (core dumped)

The current implementation of qimage2ndarray uses a C function to do

 PyArray_BASE(sipRes) = image;
 Py_INCREF(image);

But I want to avoid having to install compilers, headers etc on target 
machines of my code just for these two lines of code.

Thanks,

P

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
Brilliant thanks, I will try out the little class approach.

On 11/26/2013 08:03 PM, Nathaniel Smith wrote:
 On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert p.renn...@cs.ucl.ac.uk 
 wrote:
 Hi,

 As the title says, I am looking for a way to set, in Python, the base of
 an ndarray to an object.

 Use case is porting qimage2ndarray to PySide where I want to do
 something like:

 In [1]: from PySide import QtGui

 In [2]: image =
 QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 In [3]: import numpy as np

 In [4]: a = np.frombuffer(image.bits())

 -- I would like to do something like:
 In [5]: a.base = image

 -- to avoid situations such as:
 In [6]: del image

 In [7]: a
 Segmentation fault (core dumped)
 This is a bug in PySide -- the buffer object returned by image.bits()
 needs to hold a reference to the original image. Please report a bug
 to them. You will also get a segfault from code that doesn't use numpy
 at all, by doing things like:

 bits = image.bits()
 del image
 anything involving the bits object

 As a workaround, you can write a little class with an
 __array_interface__ attribute that points to the image's contents, and
 then call np.asarray() on this object. The resulting array will have
 your object as its .base, and then your object can hold onto whatever
 references it wants.

 -n
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
I probably did something wrong, but it does not work the way I tried it.
I am not sure if you meant it like this, but I tried to subclass from
ndarray first, but then I do not have access to __array_interface__. Is 
this what you had in mind?

from PySide import QtGui
import numpy as np

class myArray():
    def __init__(self, shape, bits, strides):
        self.__array_interface__ = \
            {'data': bits,
             'typestr': 'i32',
             'descr': [('', 'f8')],
             'shape': shape,
             'strides': strides,
             'version': 3}

image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

b = myArray((image.width(), image.height()), image.bits(), 
(image.bytesPerLine(), 4))
b = np.asarray(b)

b.base
#read-write buffer ptr 0x7fd744c4b010, size 1478400 at 0x264e9f0

del image

b
# booom #
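For the record, the workaround does hold up once the wrapper object keeps the reference itself and uses a valid typestr ('i32' above is not one; 32-bit little-endian unsigned is '<u4'). A minimal sketch, with a plain bytearray standing in for image.bits() so it runs without PySide:

```python
import numpy as np

class BufferView:
    """Object exposing __array_interface__; np.asarray() wraps its
    buffer without copying, and this object can keep the data's real
    owner (e.g. a QImage) alive via an attribute."""
    def __init__(self, owner, buf, shape, typestr, strides):
        self._owner = owner  # keep the owner of the memory alive
        self.__array_interface__ = {
            'data': buf,
            'typestr': typestr,   # e.g. '<u4' for 32-bit pixels
            'shape': shape,
            'strides': strides,
            'version': 3,
        }

raw = bytearray(4 * 6)                    # stand-in for image.bits()
view = BufferView(object(), raw, (2, 3), '<u4', (12, 4))
arr = np.asarray(view)
raw[0:4] = (7).to_bytes(4, 'little')      # mutate the underlying buffer
print(arr[0, 0])                          # the array is a view, not a copy
```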

On 11/26/2013 08:12 PM, Peter Rennert wrote:
 Brilliant thanks, I will try out the little class approach.

 On 11/26/2013 08:03 PM, Nathaniel Smith wrote:
 On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert p.renn...@cs.ucl.ac.uk 
 wrote:
 Hi,

 As the title says, I am looking for a way to set, in Python, the base of
 an ndarray to an object.

 Use case is porting qimage2ndarray to PySide where I want to do
 something like:

 In [1]: from PySide import QtGui

 In [2]: image =
 QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 In [3]: import numpy as np

 In [4]: a = np.frombuffer(image.bits())

 -- I would like to do something like:
 In [5]: a.base = image

 -- to avoid situations such as:
 In [6]: del image

 In [7]: a
 Segmentation fault (core dumped)
 This is a bug in PySide -- the buffer object returned by image.bits()
 needs to hold a reference to the original image. Please report a bug
 to them. You will also get a segfault from code that doesn't use numpy
 at all, by doing things like:

 bits = image.bits()
 del image
 anything involving the bits object

 As a workaround, you can write a little class with an
 __array_interface__ attribute that points to the image's contents, and
 then call np.asarray() on this object. The resulting array will have
 your object as its .base, and then your object can hold onto whatever
 references it wants.

 -n
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PyArray_BASE equivalent in python

2013-11-26 Thread Peter Rennert
Btw, I just wanted to file a bug at PySide, but it might be alright at 
their end, because I can do this:

from PySide import QtGui

image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

a = image.bits()

del image

a
#read-write buffer ptr 0x7f5fe0034010, size 1478400 at 0x3c1a6b0


On 11/26/2013 09:37 PM, Peter Rennert wrote:
 I probably did something wrong, but it does not work how I tried it. I 
 am not sure if you meant it like this, but I tried to subclass from 
 ndarray first, but then I do not have access to __array_interface__. 
 Is this what you had in mind?

 from PySide import QtGui
 import numpy as np

 class myArray():
 def __init__(self, shape, bits, strides):
 self.__array_interface__ = \
 {'data': bits,
  'typestr': 'i32',
  'descr': [('', 'f8')],
  'shape': shape,
  'strides': strides,
  'version': 3}

 image = 
 QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 b = myArray((image.width(), image.height()), image.bits(), 
 (image.bytesPerLine(), 4))
 b = np.asarray(b)

 b.base
 #read-write buffer ptr 0x7fd744c4b010, size 1478400 at 0x264e9f0

 del image

 b
 # booom #

 On 11/26/2013 08:12 PM, Peter Rennert wrote:
 Brilliant thanks, I will try out the little class approach.

 On 11/26/2013 08:03 PM, Nathaniel Smith wrote:
 On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert 
 p.renn...@cs.ucl.ac.uk wrote:
 Hi,

 As the title says, I am looking for a way to set, in Python, the
 base of an ndarray to an object.

 Use case is porting qimage2ndarray to PySide where I want to do
 something like:

 In [1]: from PySide import QtGui

 In [2]: image =
 QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

 In [3]: import numpy as np

 In [4]: a = np.frombuffer(image.bits())

 -- I would like to do something like:
 In [5]: a.base = image

 -- to avoid situations such as:
 In [6]: del image

 In [7]: a
 Segmentation fault (core dumped)
 This is a bug in PySide -- the buffer object returned by image.bits()
 needs to hold a reference to the original image. Please report a bug
 to them. You will also get a segfault from code that doesn't use numpy
 at all, by doing things like:

 bits = image.bits()
 del image
 anything involving the bits object

 As a workaround, you can write a little class with an
 __array_interface__ attribute that points to the image's contents, and
 then call np.asarray() on this object. The resulting array will have
 your object as its .base, and then your object can hold onto whatever
 references it wants.

 -n
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-11-06 Thread Peter Wang
Hi Sebastian,

On Wed, Nov 6, 2013 at 4:55 AM, Sebastian Haase seb.ha...@gmail.com wrote:

 Hi,
 you projects looks really great!
 I was wondering if you are making use of any pre-existing javascript
 plotting library like flot or flotr2 ?
 And if not, what are your reasons ?


We did not use any pre-existing JS plotting library.  At the time we were
exploring our options (and I believe this still to be the case), the
plotting libraries were all architected to (1) have their plots specified
by JS code, (2) interact with mouse/keyboard via DOM events and JS
callbacks; (3) process data locally in the JS namespace.  Flot was one of
the few that actually had any support for callbacks to retrieve data from a
server, but even so, its data model was very limited.

We recognized that in order to support good interaction with a non-browser
language, we would need a JS runtime that was *designed* to sync its object
models with server-side state, which could then be produced and modified by
other languages.  (Python is the first language for Bokeh, of course, but
other languages should be pretty straightforward.)  We also wanted to
structure the interaction model at a higher level, and offer the
configuration of interactions from a non-JS language.

It's not entirely obvious from our current set of initial examples, but if
you use the output_server() mode of bokeh, and you grab the Plot object
via curplot(), you can modify graphical and data attributes of the plot,
and *they are reflected in realtime in the browser*.  This is independent
of whether your plot is in an output cell of an IPython notebook, or
embedded in some HTML page you wrote - the BokehJS library powering those
plots are watching for server-side model updates automagically.

Lastly, most of the JS plotting libraries that we saw took a very
traditional perspective on information visualization, i.e. they treat it
mostly as a rendering task. So, you pass in some configuration and it
rasterizes some pixels on a backend or outputs some SVG.  None of the ones I
looked at used a scene-graph approach to info viz.  Even the venerable
d3.js did not do this; it is a scripting layer over DOM (including SVG),
and its core concepts are the primitives of the underlying drawing system,
and not ones appropriate to the infovis task.  (Only recently did they add
an axis object, and there still is not any reification of coordinate
spaces and such AFAIK.)

The charting libraries that build on top of d3 (e.g. nvd3 and d3-chart)
exist for a reason... but they mostly just use d3 as a fancy SVG rendering
layer.  And, once again, they live purely in Javascript, leaving
server-side data and state management as an exercise to the
scientist/analyst.

FWIW, I have CCed the Bokeh discussion list, which is perhaps a more
appropriate list for further discussion on this topic.  :-)


-Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Peter Wang
On Thu, Oct 24, 2013 at 7:39 AM, Jason Grout jason-s...@creativetrax.com wrote:

 On 10/23/13 6:00 PM, Peter Wang wrote:
 
  The project website (with interactive gallery) is at:
  http://bokeh.pydata.org

 Just a suggestion: could you put the source below each gallery image,
 like matplotlib does in their gallery?  I see lots of pretty plots, but
 I have to go digging in github or somewhere to see how you made these
 plots.  Since Bokeh is (at least partly) about making beautiful plots
 easy, showing off the source code is half of the story.


Thanks for the suggestion - we actually did have that at one point, but
experienced some formatting issues and are working on addressing that today.

-Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Peter Wang
On Thu, Oct 24, 2013 at 6:36 AM, Jason Grout jason-s...@creativetrax.com wrote:

 On 10/24/13 6:35 AM, Jason Grout wrote:
  This looks really cool.  I was checking out how easy it would be to
  embed in the Sage Cell Server [1].  I briefly looked at the code, and it
  appears that the IPython notebook mode does not use nodejs, redis,
  gevent, etc.?  Is that right?

 Or maybe the better way to phrase it is: what are the absolute minimum
 dependencies if all I want to do is to display in the IPython notebook?


You actually should not need nodejs, redis, and gevent if you just want to
embed the full source code of bokeh.js into the IPython notebook itself.
 Also, the data will be baked into the DOM as javascript variables.

You will still have interactivity *within* plots inside a single Notebook,
but they will not drive events back to the server side.  Also, if your data
is large, then the notebook will also get pretty big.  (We will be working
on more efficient encodings in a future release.)

-Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Peter Wang
On Thu, Oct 24, 2013 at 10:11 AM, Jason Grout
jason-s...@creativetrax.com wrote:

 It would be really cool if you could hook into the new IPython comm
  infrastructure to push events back to the server in IPython (this is not
 quite merged yet, but probably ready for experimentation like this).
 The comm infrastructure basically opens up a communication channel
 between objects on the server and the browser.  Messages get sent over
 the normal IPython channels.  The server and browser objects just use
 either send() or an on_message() handler.  See
 https://github.com/ipython/ipython/pull/4195


Yeah, I think we should definitely look into integrating with this
mechanism for when we are embedded in a Notebook.  However, we always want
the underlying infrastructure to be independent of IPython Notebook,
because we want people to be able to build analytical applications on top
of these components.


 Here's a very simple example of the Comm implementation working with
 matplotlib images in the Sage Cell server (which is built on top of the
 IPython infrastructure):  http://sagecell.sagemath.org/?q=fyjgmk (I'd
 love to see a bokeh version of this sort of thing :).


This is interesting, and introducing widgets is already on the roadmap,
tentatively v0.4.  When running against a plot server, Bokeh plots already
push selections back to the server side.  (That's how the linked brushing
in e.g. this example works: https://www.wakari.io/sharing/bundle/pwang/cars)

Our immediate short-term priorities for 0.3 are improving the layout
mechanism, incorporating large data processing into the plot server, and
investigating basic interop with Matplotlib objects.


-Peter


[Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-23 Thread Peter Wang
Hi everyone,

I'm excited to announce the v0.2 release of Bokeh, an interactive web
plotting library for Python.  The long-term vision for Bokeh is to
provide rich interactivity, using the full power of Javascript and
Canvas, to Python users who don't need to write any JS or learn
the DOM.

The full blog post announcement is here:
http://continuum.io/blog/bokeh02

The project website (with interactive gallery) is at:
http://bokeh.pydata.org

And the Git repo is:
https://github.com/ContinuumIO/bokeh


Cheers,
Peter


Re: [Numpy-discussion] Removal of numarray and oldnumeric packages.

2013-09-24 Thread Peter Cock
On Mon, Sep 23, 2013 at 10:53 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
 On Mon, Sep 23, 2013 at 3:18 PM, Charles R Harris wrote:
 On Mon, Sep 23, 2013 at 2:42 PM, Peter Cock wrote:

 Hi Chuck,

 Could you clarify how we'd know if this is a problem in a large package?
 i.e. Is it just Python imports I need to double check, or also C level?


 Just Python level unless you are calling python from C. The packages are
 not normally imported, so you should be able to just grep through. You could
 also apply the patch and see what happens. That might be the best test.

 I take that back. There is a _capi.c and include files for numarray.

 Chuck

Thanks - I just ran our test suite against numpy compiled from
that commit: https://github.com/numpy/numpy/pull/3638

We seem to be OK, so I have no objection to removing this.

Peter


Re: [Numpy-discussion] Removal of numarray and oldnumeric packages.

2013-09-23 Thread Peter Cock
On Mon, Sep 23, 2013 at 6:03 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
 Hi All,

 I have gotten no feedback on the removal of the numarray and oldnumeric
 packages. Consequently the removal will take place on 9/28. Scream now or
 never...

 Chuck

Hi Chuck,

Could you clarify how we'd know if this is a problem in a large package?
i.e. Is it just Python imports I need to double check, or also C level?

Thanks!

Peter


Re: [Numpy-discussion] Binary releases

2013-09-16 Thread Peter Cock
On Mon, Sep 16, 2013 at 8:05 PM, Nathaniel Smith n...@pobox.com wrote:

 Why not just release numpy 1.8 with the old and terrible system? As
 you know I'm 110% in favor of getting rid of it, but 1.8 is ready to
 go and 1.9 is coming soon enough, and the old and terrible system does
 work right now, today. None of the other options have this property.

On the down side, the old and terrible system does not
cover providing pre-built binaries for 64 bit Windows.

Doing that right is important not just for SciPy but for any
other downstream package including C code compiled
against the NumPy C API (and the people doing this
probably will only have access to free compilers).

Regards,

Peter


Re: [Numpy-discussion] Removal of beta and rc files from Sourceforge

2013-09-08 Thread Peter Cock
On Sun, Sep 8, 2013 at 9:07 PM, Charles R Harris wrote:

 On Sun, Sep 8, 2013 at 1:29 PM, Ralf Gommers wrote:

 On Sun, Sep 8, 2013 at 7:26 PM, Charles R Harris wrote:

 Hi All,

 Currently the beta and rc files for numpy versions <= 1.6.1 are still up
 on sourceforge. I think at this point they are at best clutter and propose
 that they be removed. Thoughts?


 +1


 Done. I suppose at some point we can remove some of the releases
 also, but that doesn't seem pressing.

-1 on removing old releases, sometimes they can be useful
for tracing an old bug or repeating an old analysis. They
still have some small value - and don't cost much to leave
online.

Peter


Re: [Numpy-discussion] Reading footer lines

2013-08-13 Thread Peter Cock
On Tue, Aug 13, 2013 at 1:20 PM, Resmi l.re...@gmail.com wrote:
 Hi,

 I've a list of long files of numerical data ending with footer lines
 (beginning with #). I am using numpy.loadtxt to read the numbers, and
 loadtxt ignores these footer lines. I want the numpy code to read one of the
 footer lines and extract words from it. Is there a way to use loadtxt for
 this? If there weren't many files I could have used the line number (which
 keep varying between the files) of the footer line along with linecache.
 Nevertheless there should be a generic way to do this in numpy?

 As a workaround, I've tried using os.system along with grep. And I get the
 following output :

 >>> os.system("grep -e 'tx' 'data.dat' ")
  ## tx =2023.06
 0

 Why is there a 0 in the output? The file has no blank lines.

The os.system function call returns the integer exit status
(error level) of the command invoked; by convention this is zero
for success, and the value is set by the tool (here grep).

 Since I want the value 2023.06 in the numpy code for later use I tried to
 pipe the output to a variable:-
 test = os.system(command)
 But that isn't working as test is getting assigned the value 0.
 Tried subprocess.call(['grep','-e','tx','data.dat']) which is also ending up
 in the same fashion.

 It'll be great if I can get to know (i) a way to read the footer lines (ii)
 to sort out the operation of os.system and subprocess.call output
 processing.

Don't use os.system, instead you should use subprocess:
http://docs.python.org/2/library/subprocess.html

The standard library module commands would also work
but is not cross platform, and is also deprecated in favour
of subprocess: http://docs.python.org/2/library/commands.html
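For illustration, here is a minimal pure-Python sketch (not from the
original post; the file contents and the variable names are made up)
that reads the "#" footer lines directly and extracts the numeric
value, avoiding shelling out to grep altogether:

```python
import os
import re
import tempfile

# Hypothetical data file: numeric rows followed by a "#" footer line,
# mimicking the "## tx =2023.06" footer described in the question.
content = "1.0 2.0\n3.0 4.0\n## tx =2023.06\n"
with tempfile.NamedTemporaryFile("w", suffix=".dat", delete=False) as handle:
    handle.write(content)
    path = handle.name

# loadtxt ignores lines starting with "#"; here we keep only those lines.
with open(path) as handle:
    footers = [line for line in handle if line.startswith("#")]
os.remove(path)

# Pull the value of "tx" out of whichever footer line mentions it.
tx = None
for line in footers:
    match = re.search(r"tx\s*=\s*([-+0-9.eE]+)", line)
    if match:
        tx = float(match.group(1))
print(tx)  # 2023.06
```

The same loop works however many footer lines there are, and the line
number of the footer no longer matters.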

Peter


Re: [Numpy-discussion] multivariate_normal issue with 'size' argument

2013-05-24 Thread Peter Cock
On Fri, May 24, 2013 at 1:59 PM, Emanuele Olivetti
emanu...@relativita.com wrote:
 Interesting. Anyone able to reproduce what I observe?

 Emanuele


Yes, I can reproduce this IndexError under Mac OS X:

$ which python2.7
/usr/bin/python2.7
$ python2.7
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1)
[[ 0.68446902  1.84926031]]
>>> print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 3990, in mtrand.RandomState.multivariate_normal (numpy/random/mtrand/mtrand.c:16663)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.6.1'
>>> quit()

And on a more recent self-compiled Python and NumPy,

$ which python3.3
/Users/pjcock/bin/python3.3
$ python3.3
Python 3.3.1 (default, Apr  8 2013, 17:54:08)
[GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.57))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1))
[[-0.57757621  1.09307893]]
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 4161, in mtrand.RandomState.multivariate_normal (numpy/random/mtrand/mtrand.c:19140)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.7.1'

Peter


Re: [Numpy-discussion] multivariate_normal issue with 'size' argument

2013-05-24 Thread Peter Cock
On Fri, May 24, 2013 at 2:15 PM, Robert Kern robert.k...@gmail.com wrote:
 On Fri, May 24, 2013 at 9:12 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
 On Fri, May 24, 2013 at 1:59 PM, Emanuele Olivetti
 emanu...@relativita.com wrote:
 Interesting. Anyone able to reproduce what I observe?

 Emanuele


 Yes, I can reproduce this IndexError under Mac OS X:

 $ which python2.7
 /usr/bin/python2.7
 $ python2.7
 Python 2.7.2 (default, Oct 11 2012, 20:14:37)
 [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.

 Can everyone please report whether they have a 32-bit build of Python
 or a 64-bit build? That's probably the most relevant factor.

It seems to affect all of 32 bit Windows XP, 64 bit Mac, 32 bit Linux,
and 64 bit Linux
for some versions of NumPy...  Thus far the only non-failure I've seen
is 64 bit Linux,
Python 2.6.6 with NumPy 1.6.2 (other Python/NumPy installs on this
machine do fail).

It's a bit strange - I don't see any obvious pattern.
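As a hedged workaround sketch (an assumption on my part, not something
confirmed in this thread): coercing the size argument to a plain Python
int before the call appears to sidestep the problem, since a NumPy
integer scalar then never reaches the size-handling code:

```python
import numpy as np

# A NumPy integer scalar, as might come out of an earlier computation.
n = np.int64(1)

# Passing int(n) rather than n itself avoids the reported IndexError.
sample = np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2),
                                       size=int(n))
print(sample.shape)  # (1, 2)
```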

Peter

---

Failures:

My Python installs on this Mac all seem to be 64bit (and fail),

$ python3.3
Python 3.3.1 (default, Apr  8 2013, 17:54:08)
[GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.57))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print("%x" % sys.maxsize, sys.maxsize > 2**32)
7fffffffffffffff True
>>> import numpy as np
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1))
[[ 1.80932387  0.85894164]]
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 4161, in mtrand.RandomState.multivariate_normal (numpy/random/mtrand/mtrand.c:19140)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.7.1'
>>> quit()

This also affects NumPy 1.5 so this isn't a recent regression:

$ python3.2
Python 3.2 (r32:88445, Feb 28 2011, 17:04:33)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print("%x" % sys.maxsize, sys.maxsize > 2**32)
7fffffffffffffff True
>>> import numpy as np
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1))
[[ 1.11403341 -1.67856405]]
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 3954, in mtrand.RandomState.multivariate_normal (numpy/random/mtrand/mtrand.c:17234)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.5.0'

$ python3.1
Python 3.1.2 (r312:79147, Nov 15 2010, 16:28:52)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print("%x" % sys.maxsize, sys.maxsize > 2**32)
7fffffffffffffff True
>>> import numpy as np
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1))
[[ 0.3834108  -0.31124203]]
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 3954, in mtrand.RandomState.multivariate_normal (numpy/random/mtrand/mtrand.c:17234)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.5.0'
>>> quit()

And on my 32 bit Windows XP box,

Python 2.7 (r27:82525, Jul  4 2010, 09:01:59) [MSC v.1500 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print("%x" % sys.maxsize, sys.maxsize > 2**32)
('7fffffff', False)
>>> import numpy as np
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1))
[[-0.35072523 -0.58046885]]
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 3954, in mtrand.RandomState.multivariate_normal (numpy\random\mtrand\mtrand.c:17234)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.5.0'


Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600
32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1))
[[-0.00453374  0.2210342 ]]
>>> print(np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=np.int64(1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "mtrand.pyx", line 4142, in mtrand.RandomState.multivariate_normal (numpy\random\mtrand\mtrand.c:19128)
IndexError: invalid index to scalar variable.
>>> np.__version__
'1.7.0rc2'

Here's a couple of runs from an old 32 bit Linux machine which also
shows the problem:

$ python2.7
Python 2.7 (r27:82500, Nov 12 2010, 14:19:08)
[GCC 4.1.2 20070626 (Red Hat 4.1.2-13)] on linux2
Type help

Re: [Numpy-discussion] numpy.scipy.org page 404s

2013-04-26 Thread Peter Cock
On Fri, Apr 26, 2013 at 4:11 PM, Robert Kern robert.k...@gmail.com wrote:
 On Fri, Apr 26, 2013 at 3:20 PM, Christopher Hanley chan...@gmail.com wrote:
 Dear Numpy Webmasters,

 Would it be possible to either redirect numpy.scipy.org to www.numpy.org or
 to the main numpy github landing page?  Currently numpy.scipy.org hits a
 Github 404 page.  As the numpy.scipy.org site still shows up in searches it
 would be useful to have that address resolve to something more helpful.

 Thank you for your time and help,

 $ dig numpy.scipy.org

 ; <<>> DiG 9.8.3-P1 <<>> numpy.scipy.org
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49456
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

 ;; QUESTION SECTION:
 ;numpy.scipy.org. IN A

 ;; ANSWER SECTION:
 numpy.scipy.org. 682 IN CNAME www.numpy.org.
 www.numpy.org. 60 IN CNAME numpy.github.com.
 numpy.github.com. 42982 IN A 204.232.175.78

 ;; Query time: 14 msec
 ;; SERVER: 10.44.0.1#53(10.44.0.1)
 ;; WHEN: Fri Apr 26 16:05:20 2013
 ;; MSG SIZE  rcvd: 103


 Unfortunately, Github can only deal with one CNAME, www.numpy.org. The
 documentation recommends that one redirect the other domains, but
 it's not clear exactly what it is referring to. Having an HTTP server
 with an A record for numpy.scipy.org that just issues HTTP 301
 redirects for everything? I can look into getting that set up.

 https://help.github.com/articles/my-custom-domain-isn-t-working#multiple-domains-in-cname-file


+1 for fixing this - I tried to report this back in February, but
checking the archive my email seems to have gotten lost.

I noticed this from a manuscript in proof when the copy editor
pointed out http://numpy.scipy.org wasn't working.

As http://numpy.scipy.org used to be a widely used URL for
the project, and likely appears in many printed references,
fixing it to redirect to the (relatively new) http://www.numpy.org
would be good.

Peter


[Numpy-discussion] numpy.scipy.org giving 404 error

2013-02-26 Thread Peter Cock
Hello all,

http://numpy.scipy.org is giving a GitHub 404 error.

As this used to be a widely used URL for the project,
and likely appears in many printed references, could
it be fixed to point to or redirect to the (relatively new)
http://www.numpy.org site please?

Thanks,

Peter


Re: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python

2013-02-20 Thread Peter Cock
On Wed, Feb 20, 2013 at 8:27 PM, Todd toddr...@gmail.com wrote:

 I am looking at documentation now, but a couple things from what I seen:

 Are you particularly tied to sourceforge? It seems a lot of python
 development is moving to github, and it makes third party contribution much
 easier.  You can still distribute releases through sourceforge even if you
 use github for revision control.

That's what NumPy has been doing for some time now, the repo is here:
https://github.com/numpy/numpy
http://sourceforge.net/projects/numpy/files/

Is there some misleading documentation still around that gave
you a different impression?

Peter


Re: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python

2013-02-20 Thread Peter Cock
On Wed, Feb 20, 2013 at 11:23 PM, Robert Kern wrote:
 On Wed, Feb 20, 2013 at 11:21 PM, Peter Cock wrote:
 On Wed, Feb 20, 2013 at 8:27 PM, Todd wrote:

 I am looking at documentation now, but a couple things from what I seen:

 Are you particularly tied to sourceforge? It seems a lot of python
 development is moving to github, and it makes third party contribution much
 easier.  You can still distribute releases through sourceforge even if you
 use github for revision control.

 That's what NumPy has been doing for some time now, the repo is here:
 https://github.com/numpy/numpy
 http://sourceforge.net/projects/numpy/files/

 Is there some misleading documentation still around that gave
 you a different impression?

 Todd is responding to a message about PyDSTool, which is developed on
 Sourceforge, not numpy.

Ah - apologies for the noise (and plus one for adopting github).

Peter


Re: [Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc?

2013-02-06 Thread Peter Cock
On Wed, Feb 6, 2013 at 3:46 AM, Ondřej Čertík ondrej.cer...@gmail.com wrote:
 On Tue, Feb 5, 2013 at 12:22 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:
 On Tue, Feb 5, 2013 at 3:01 PM, Peter Cock p.j.a.c...@googlemail.com
 wrote:

 Hello all,

 Will the numpy 1.7.0 'final' be binary compatible with the release
 candidate(s)? i.e. Would it be safe for me to release a Windows
 installer for a package using the NumPy C API compiled against
 the NumPy 1.7.0rc?


 Yes, that should be safe.

 Yes. I plan to release rc2 immediately once

 https://github.com/numpy/numpy/pull/2964

 is merged (e.g. I am hoping for today). The final should then be
 identical to rc2.

 Ondrej

Great - in that case I'll wait a couple of days and use rc2 for this
(just in case there is a subtle difference from rc1).

Thanks,

Peter


[Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc?

2013-02-05 Thread Peter Cock
Hello all,

Will the numpy 1.7.0 'final' be binary compatible with the release
candidate(s)? i.e. Would it be safe for me to release a Windows
installer for a package using the NumPy C API compiled against
the NumPy 1.7.0rc?

I'm specifically interested in Python 3.3, and NumPy 1.7 will be
the first release to support that. For older versions of Python I
can use NumPy 1.6 instead.

Thanks,

Peter


Re: [Numpy-discussion] Building Numpy 1.6.2 for Python 3.3 on Windows

2013-01-10 Thread Peter Cock
On Thu, Jan 10, 2013 at 3:56 PM, klo klo...@gmail.com wrote:
 Hi,

 I run `python3 setup.py config` and then

   python3 setup.py build --compiler=mingw32

 but it picks that I have MSVC 10 and complains about manifests.
 Why, or even better, how to compile with available MinGW compilers?

I reported this issue/bug to the mailing list recently as part of
a discussion with Ralf which lead to various fixes being made
to get NumPy to compile with either mingw32 or MSCV 10.

http://mail.scipy.org/pipermail/numpy-discussion/2012-November/064454.html

My workaround is to change the default compiler for Python 3,
by creating C:\Python33\Lib\distutils\distutils.cfg containing:

[build]
compiler=mingw32

Peter


Re: [Numpy-discussion] Do we want scalar casting to behave as it does at the moment?

2013-01-03 Thread Peter Cock
On Fri, Jan 4, 2013 at 12:11 AM, Dag Sverre Seljebotn
d.s.seljeb...@astro.uio.no wrote:
 On 01/04/2013 12:39 AM, Andrew Collette wrote:
  Nathaniel Smith wrote:
  Consensus in that bug report seems to be that for array/scalar operations 
  like:
 np.array([1], dtype=np.int8) + 1000 # can't be represented as an int8!
  we should raise an error, rather than either silently upcasting the
  result (as in 1.6 and 1.7) or silently downcasting the scalar (as in
  1.5 and earlier).
 
  I have run into this a few times as a NumPy user, and I just wanted to
  comment that (in my opinion), having this case generate an error is
  the worst of both worlds.  The reason people can't decide between
  rollover and promotion is because neither is objectively better.  One

 If neither is objectively better, I think that is a very good reason to
 kick it down to the user. Explicit is better than implicit.

  avoids memory inflation, and the other avoids losing precision.  You
  just need to pick one and document it.  Kicking the can down the road
  to the user, and making him/her explicitly test for this condition, is
  not a very good solution.

 It's a good solution to encourage bug-free code. It may not be a good
 solution to avoid typing.

  What does this mean in practical terms for NumPy users?  I personally
  don't relish the choice of always using numpy.add, or always wrapping
  my additions in checks for ValueError.

 I think you usually have a bug in your program when this happens, since
 either the dtype is wrong, or the value one is trying to store is wrong.
 I know that's true for myself, though I don't claim to know everybody
 elses usecases.

I agree with Dag rather than Andrew, Explicit is better than implicit.
i.e. What Nathaniel described earlier as the apparent consensus.

Since I've actually used NumPy arrays with specific low memory
types, I thought I should comment about my use case in case it
is helpful:

I've only used the low precision types like np.uint8 (unsigned) where
I needed to limit my memory usage. In this case, the topology of a
graph allowing multiple edges held as an integer adjacency matrix, A.
I would calculate things like A^n for paths of length n, and also make
changes to A directly (e.g. adding edges). So an overflow was always
possible, and neither the old behaviour (type preserving but wrapping
on overflow giving data corruption) nor the current behaviour (type
promotion overriding my deliberate memory management) are nice.
My preferences here would be for an exception, so I knew right away.

The other use case which comes to mind is dealing with low level
libraries and/or file formats, and here automagic type promotion
would probably be unwelcome.
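For what it's worth, the kind of guard I end up writing for that
adjacency-matrix use case looks roughly like this (a hedged sketch with
illustrative names, not code from any NumPy release; NumPy itself
raises no error on the in-place wraparound):

```python
import numpy as np

# Small uint8 adjacency matrix, deliberately low precision to save memory.
A = np.zeros((100, 100), dtype=np.uint8)
A[3, 4] = 255  # already at the uint8 maximum

def safe_increment(matrix, i, j, amount=1):
    """Increment matrix[i, j] in place, raising instead of silently wrapping."""
    limit = np.iinfo(matrix.dtype).max
    if matrix[i, j] > limit - amount:
        raise OverflowError("adjacency count would wrap around")
    matrix[i, j] += amount

try:
    safe_increment(A, 3, 4)  # would wrap 255 -> 0, so this raises
except OverflowError as err:
    print(err)
```

An exception raised by NumPy itself would make this manual bookkeeping
unnecessary, which is why I favour that behaviour.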

Regards,

Peter


Re: [Numpy-discussion] Do we want scalar casting to behave as it does at the moment?

2013-01-03 Thread Peter Cock
On Fri, Jan 4, 2013 at 12:39 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
 Since I've actually used NumPy arrays with specific low memory
 types, I thought I should comment about my use case in case it
 is helpful:

 I've only used the low precision types like np.uint8 (unsigned) where
 I needed to limit my memory usage. In this case, the topology of a
 graph allowing multiple edges held as an integer adjacency matrix, A.
 I would calculate things like A^n for paths of length n, and also make
 changes to A directly (e.g. adding edges). So an overflow was always
 possible, and neither the old behaviour (type preserving but wrapping
 on overflow giving data corruption) nor the current behaviour (type
 promotion overriding my deliberate memory management) are nice.
 My preferences here would be for an exception, so I knew right away.

 The other use case which comes to mind is dealing with low level
 libraries and/or file formats, and here automagic type promotion
 would probably be unwelcome.

 Regards,

 Peter

Elsewhere on the thread, Nathaniel Smith n...@pobox.com wrote:

 To be clear: we're only talking here about the case where you have a mix of
 a narrow dtype in an array and a scalar value that cannot be represented in
 that narrow dtype. If both sides are arrays then we continue to upcast as
 normal. So my impression is that this means very little in practical terms,
 because this is a rare and historically poorly supported situation.

 But if this is something you're running into in practice then you may have a
 better idea than us about the practical effects. Do you have any examples
 where this has come up that you can share?

 -n

Clarification appreciated - on closer inspection for my adjacency
matrix example I would not fall over the issue in
https://github.com/numpy/numpy/issues/2878

>>> import numpy as np
>>> np.__version__
'1.6.1'
>>> A = np.zeros((100,100), np.uint8) # Matrix could be very big
>>> A[3,4] = 255 # Max value, setting up next step in example
>>> A[3,4]
255
>>> A[3,4] += 1 # Silently overflows on NumPy 1.6
>>> A[3,4]
0

To trigger the contentious behaviour I'd have to do something
like this:

>>> A = np.zeros((100,100), np.uint8)
>>> B = A + 256
>>> B
array([[256, 256, 256, ..., 256, 256, 256],
   [256, 256, 256, ..., 256, 256, 256],
   [256, 256, 256, ..., 256, 256, 256],
   ...,
   [256, 256, 256, ..., 256, 256, 256],
   [256, 256, 256, ..., 256, 256, 256],
   [256, 256, 256, ..., 256, 256, 256]], dtype=uint16)

I wasn't doing anything like that in my code though, just
simple matrix multiplication and in situ element modification,
for example A[i,j] += 1 to add an edge.

I still agree that for https://github.com/numpy/numpy/issues/2878
an exception sounds sensible.

Peter


Re: [Numpy-discussion] Do we want scalar casting to behave as it does at the moment?

2013-01-03 Thread Peter Cock
On Fri, Jan 4, 2013 at 12:49 AM, Nathaniel Smith n...@pobox.com wrote:
 On 4 Jan 2013 00:39, Peter Cock p.j.a.c...@googlemail.com wrote:
 I agree with Dag rather than Andrew, Explicit is better than implicit.
 i.e. What Nathaniel described earlier as the apparent consensus.

 Since I've actually used NumPy arrays with specific low memory
 types, I thought I should comment about my use case in case it
 is helpful:

 I've only used the low precision types like np.uint8 (unsigned) where
 I needed to limit my memory usage. In this case, the topology of a
 graph allowing multiple edges held as an integer adjacency matrix, A.
 I would calculate things like A^n for paths of length n, and also make
 changes to A directly (e.g. adding edges). So an overflow was always
 possible, and neither the old behaviour (type preserving but wrapping
 on overflow giving data corruption) nor the current behaviour (type
 promotion overriding my deliberate memory management) are nice.
 My preferences here would be for an exception, so I knew right away.

 I don't think the changes we're talking about here will help your use case
 actually; this is only about the specific case where one of your operands,
 itself, cannot be cleanly cast to the types being used for the operation -

Understood - I replied to your other message before I saw this one.

 it won't detect overflow in general. For that you want #593:
 https://github.com/numpy/numpy/issues/593

 On another note, while you're here, perhaps I can tempt you into having a go
 at fixing #593? :-)

 -n

I agree, and have commented on that issue. Thanks for pointing me to
that separate issue.

Peter


Re: [Numpy-discussion] Numpy on Travis with Python 3

2012-11-27 Thread Peter Cock
On Tue, Nov 27, 2012 at 12:27 PM, Thomas Robitaille
thomas.robitai...@gmail.com wrote:
 Hi everyone,

 I'm currently having issues with installing Numpy 1.6.2 with Python
 3.1 and 3.2 using pip in Travis builds - see for example:

 https://travis-ci.org/astropy/astropy/jobs/3379866

 The build aborts with a cryptic message:

 ValueError: underlying buffer has been detached

 Has anyone seen this kind of issue before?

 Thanks for any help,

 Cheers,
 Tom

Hi Tom,

Yes, a similar error has been reported with virtualenv 1.8.3,
see https://github.com/pypa/virtualenv/issues/359

If you were not aware, the TravisCI team are currently actively
trying to get NumPy preinstalled on the virtual machines. As I
write this, Python 2.5, 2.6 and 2.7 now have NumPy while
Python 3.1 is being dropped (it was only ever an unofficially
supported platform), and NumPy under Python 3.2 and 3.3
is still in progress. See:

https://github.com/travis-ci/travis-cookbooks/issues/48
https://github.com/travis-ci/travis-cookbooks/issues/89

This "ValueError: underlying buffer has been detached" was
one of the issues the TravisCI team had faced.

Peter


Re: [Numpy-discussion] In case anybody wants them: py4science.* domains...

2012-11-27 Thread Peter Cock
On Tue, Nov 27, 2012 at 9:25 PM, Anthony Scopatz scop...@gmail.com wrote:
 On Tue, Nov 27, 2012 at 2:08 PM, Fernando Perez fperez@gmail.com
 wrote:

 On Thu, Nov 22, 2012 at 1:30 AM, Peter Cock p.j.a.c...@googlemail.com
 wrote:

  Perhaps http://numfocus.org/ could take them on, or the PSF?
  (even if they don't have a specific use in mind immediately)
  For the short term I'd just have them redirect to www.scipy.org ;)

 I asked on the numfocus list and nobody was really interested, and I
 floated the question at a board meeting and folks also agreed that
 with the limited time/resources numfocus has right now, there were
 more important things to do.

 So I'll just let them lapse, if anybody cares, they'll be open for the
 taking come December 3 :)


 Gah! Sorry for missing this.  I actually think that the redirection idea is
 a really good one.

It seems more worthwhile than just letting a domain squatter use it.

 It can't be that expensive to just maintain these indefinitely.
 I'd rather us have them than someone else.  I'll make a motion for this
 on the NumFOCUS list.

The domain registration cost is minimal, and if NumFOCUS are
already looking after existing domains the extra admin should
be minimal.

Could you keep us informed if the domains still need a home?
Might make a good domain name for a blog... hmm.

Thanks,

Peter


Re: [Numpy-discussion] In case anybody wants them: py4science.* domains...

2012-11-22 Thread Peter Cock
On Thu, Nov 22, 2012 at 4:17 AM, Fernando Perez fperez@gmail.com wrote:
 Hi folks,

 years ago, John Hunter and I bought the py4science.{com, org, info}
 domains thinking they might be useful.  We never did anything with
 them, and with his passing I realized I'm not really in the mood to
 keep renewing them without a clear goal in mind.

 Does anybody here want to do anything with these?  They expire
 December 3, 2012.  I can just let them lapse, but I figured I'd give a
 heads-up in case anybody has a concrete use for them I'd rather
 transfer them than let them go to a domain squatter.

 Basically, if you want them, they're yours to have as of 12/3/12.

 Cheers,

 f

Perhaps http://numfocus.org/ could take them on, or the PSF?
(even if they don't have a specific use in mind immediately)
For the short term I'd just have them redirect to www.scipy.org ;)

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3 with MSVC 2010

2012-11-20 Thread Peter Cock
On Fri, Nov 16, 2012 at 10:08 AM, Christoph Gohlke cgoh...@uci.edu wrote:
 On 11/16/2012 1:28 AM, Peter Cock wrote:
 On Thu, Nov 15, 2012 at 6:15 PM, Christoph Gohlke cgoh...@uci.edu wrote:

 Naturally the file would be named msvc10compiler.py but the name may be
 kept for compatibility reasons. AFAIK msvc10 does not use manifests any
 longer for the CRT dependencies and all the code handling msvc9
 manifests could be removed for Python 3.3. I have been building
 extensions for Python 3.3 with msvc10 and this distutils patch for some
 months and did not notice any issues.


 Sounds like Python 3.3 needs a fix then - have you reported this?
 If not, could you report it (since you know far more about the
 Windows build system than I do)?

 If it will be fixed in Python itself, then perhaps a manual hack like
 this will be enough for NumPy in the short term. Otherwise, maybe
 numpy needs to include its own copy of msvc9compiler.py (or
 msvc10compiler.py)?

 Thanks,

 Peter

 Could be related to http://bugs.python.org/issue16296.

 Christoph

Thanks Christoph, you're probably right this is linked to
http://bugs.python.org/issue16296

Note here's an example of the manifest file, obtained from a hack
to Python 3.3's distutitls/msvc9compiler.py - looks like there are
no MSVC version numbers in here that we would need to
worry about:

build\temp.win32-3.3\Release\numpy\core\src\_dummy.pyd.manifest
<?xml version='1.0' encoding='UTF-8' standalone='yes'?>
<assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
  <trustInfo xmlns='urn:schemas-microsoft-com:asm.v3'>
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level='asInvoker' uiAccess='false' />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>

I tried the patch from http://bugs.python.org/issue16296
applied by hand (in place of Christoph's one line change),
and it also seemed to work. I wanted to double check this,
so started by reverting to an unmodified copy of Python 3.3.

I just removed Python 3.3, and reinstalled it afresh using
python-3.3.0.msi, then updated to the latest commit on the
master branch of numpy, which as it happens was Ralf merging my
fixes to get mingw32 to compile numpy under Python 3.3:
724da615902b9feb140cb6f7307ff1b1c2596a40

Now a clean numpy build under Python 3.3 with MSVC 10
just worked; the error "Broken toolchain: cannot link a simple
C program" has gone. The comments in msvc9compiler.py did
mention this manifest stuff was fragile... but I am puzzled.

My hunch right now is that the order of installation of
MSVC 2010 and Python 3.3 could be important. Either
that, or something else changed on the numpy master
which had an impact?

Regards,

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3 with MSVC 2010

2012-11-16 Thread Peter Cock
On Thu, Nov 15, 2012 at 6:15 PM, Christoph Gohlke cgoh...@uci.edu wrote:

 Naturally the file would be named msvc10compiler.py but the name may be
 kept for compatibility reasons. AFAIK msvc10 does not use manifests any
 longer for the CRT dependencies and all the code handling msvc9
 manifests could be removed for Python 3.3. I have been building
 extensions for Python 3.3 with msvc10 and this distutils patch for some
 months and did not notice any issues.


Sounds like Python 3.3 needs a fix then - have you reported this?
If not, could you report it (since you know far more about the
Windows build system than I do)?

If it will be fixed in Python itself, then perhaps a manual hack like
this will be enough for NumPy in the short term. Otherwise, maybe
numpy needs to include its own copy of msvc9compiler.py (or
msvc10compiler.py)?

Thanks,

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3 with MSVC 2010

2012-11-15 Thread Peter Cock
On Wed, Nov 14, 2012 at 7:35 PM, Christoph Gohlke cgoh...@uci.edu wrote:
 ...
 RuntimeError: Broken toolchain: cannot link a simple C program

 It appears a similar issue was raised before:
 http://mail.scipy.org/pipermail/numpy-discussion/2012-June/062866.html

 Any tips?

 Peter

 Try changing line 648 in Python33\Lib\distutils\msvc9compiler.py to
 `mfinfo = None`.

 http://hg.python.org/cpython/file/tip/Lib/distutils/msvc9compiler.py#l648

 Christoph

Hi Christoph,

That was very precise advice and seems to solve this. Presumably
you've faced something like this before. Is there an open issue for
this in Python itself?

Line 648 didn't seem sensible (and I guess the tip has changed), but
I tried replacing this bit of Python33\Lib\distutils\msvc9compiler.py
which was near by:

# embed the manifest
# XXX - this is somewhat fragile - if mt.exe fails, distutils
# will still consider the DLL up-to-date, but it will not have a
# manifest.  Maybe we should link to a temp file?  OTOH, that
# implies a build environment error that shouldn't go undetected.
mfinfo = self.manifest_get_embed_info(target_desc, ld_args)

with your suggestion of 'mfinfo = None', and this did seem enough
to get NumPy to compile with Python 3.3 and MSCV v10. I could
then build and test a library using NumPy (C and Python APIs), but
I've not yet installed nose to run NumPy's own tests.

Looking at the code for the manifest_get_embed_info method,
I don't see any obvious 9 vs 10 issues like the problems I hit
before. However there are some regular expressions in the
method _remove_visual_c_ref which it calls which look more
likely - looking for two digits when perhaps it needs to be three
under MSVC 10...
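
A quick illustration of that suspicion (these are hypothetical patterns,
not the actual ones in _remove_visual_c_ref): a pattern written for
two-digit runtime names fails to match the three-digit msvcr100.

```python
import re

# Two-digit pattern: fine for msvcr90.dll, too strict for MSVC 10.
old = re.compile(r"msvcr(\d{2})\.dll")
# Allowing two or three digits covers msvcr100.dll as well.
new = re.compile(r"msvcr(\d{2,3})\.dll")

assert old.match("msvcr90.dll").group(1) == "90"
assert old.match("msvcr100.dll") is None      # the suspected failure mode
assert new.match("msvcr100.dll").group(1) == "100"
```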

As a general point, if MSVC 10 is sufficiently different from 9,
does it make sense to introduce distutils/msvc10compiler.py
(subclassing/reusing most of distutils/msvc9compiler.py) in
Python itself? Or in NumPy's distutils?

Thanks,

Peter


Re: [Numpy-discussion] 1.7.0 release

2012-11-14 Thread Peter Cock
On Wed, Nov 14, 2012 at 4:24 AM, Ondřej Čertík ondrej.cer...@gmail.com wrote:

 On Mon, Nov 12, 2012 at 2:27 PM, Ondřej Čertík ondrej.cer...@gmail.com
 wrote:
 [...]
  Here is a list of issues that need to be fixed before the release:
 
  https://github.com/numpy/numpy/issues?milestone=3state=open
 
  If anyone wants to help, we just need to get through them and submit a
  PR for each, or close it if it doesn't apply anymore.
  This is what I am doing now.

 Ok, I went over all the issues, closed fixed issues and sent PRs for about
 6 issues. Just go to the link above to see them all (or go to issues and
 click on NumPy 1.7 milestone) and left comments on most issues.

 Most of the minor problems are fixed. There are only 3 big issues that
 need to be fixed so I set them priority high:


 https://github.com/numpy/numpy/issues?labels=priority%3A+highmilestone=3page=1state=open

 in particular:

 https://github.com/numpy/numpy/issues/568
 https://github.com/numpy/numpy/issues/2668
 https://github.com/numpy/numpy/issues/606

 after that I think we should be good to go. If you have some spare
 cycles, just concentrate on these 3.

 Ondrej


Hi all,

Having looked at the README.txt and INSTALL.txt files on the
branch, I see no mention of which Python 3.x versions are supported:

https://github.com/numpy/numpy/blob/maintenance/1.7.x/README.txt
https://github.com/numpy/numpy/blob/maintenance/1.7.x/INSTALL.txt

Is NumPy 1.7 intended to support Python 3.3?

My impression from this thread is probably yes:
http://mail.scipy.org/pipermail/numpy-discussion/2012-July/063483.html
...
http://mail.scipy.org/pipermail/numpy-discussion/2012-August/063597.html

If so, then under Windows (32 bit at least) where the Python.org
provided Python 3.3 is compiled with MSCV 2010 there are some
problems - see: https://github.com/numpy/numpy/pull/2726
Should an issue be filed for this?

Thanks,

Peter


[Numpy-discussion] Compiling NumPy on Windows for Python 3.3 with MSVC 2010

2012-11-14 Thread Peter Cock
Changing title to reflect the fact this thread is now about using
the Microsoft compiler rather than mingw32 as in the old thread.

On Sat, Nov 10, 2012 at 11:04 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
 On Sat, Nov 10, 2012 at 5:47 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:



 On Tue, Nov 6, 2012 at 6:49 PM, Peter Cock p.j.a.c...@googlemail.com
 wrote:

 Dear all,

 Since the NumPy 1.7.0b2 release didn't include a Windows
 (32 bit) installer for Python 3.3, I am considering compiling it
 myself for local testing. What compiler is recommended?


 Either MSVC or MinGW 3.4.5. For the latter see
 https://github.com/certik/numpy-vendor

 Thanks Ralf,

 I was trying with MSVC 9.0 installed, but got this cryptic error:

C:\Downloads\numpy-1.7.0b2  C:\python33\python setup.py build
...
error: Unable to find vcvarsall.bat

 After sprinkling distutils with debug statements, I found it was
 looking for MSVC v10 (not numpy's fault but the error is most
 unhelpful).

 Presumably Microsoft Visual C++ 2010 Express Edition is
 the appropriate thing to download?
 http://www.microsoft.com/visualstudio/eng/downloads#d-2010-express


I would have tried this earlier, but it required Windows XP SP3
and there was something amiss with the permissions on my
work Windows machine preventing that update. Solved now,
and this file now exists:

C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat

I've tried my branch (focussed on mingw32 fixes), and numpy-1.7.0b2
but both fail at the same point. I did remove the old build directory first.

C:\Downloads\numpy-1.7.0b2c:\python33\python setup.py build
Converting to Python3 via 2to3...
snip
Could not locate executable efc
don't know how to compile Fortran code on platform 'nt'
C:\Program Files\Microsoft Visual Studio 10.0\VC\BIN\cl.exe /c /nologo
/Ox /MD /W3 /GS- /DNDEBUG -Inumpy\core\src\private -Inumpy\core\src
-Inumpy\core -Inumpy\
core\src\npymath -Inumpy\core\src\multiarray -Inumpy\core\src\umath
-Inumpy\core\src\npysort -Inumpy\core\include -Ic:\python33\include
-Ic:\python33\include /Tc_configtest.c /Fo_configtest.obj
Found executable C:\Program Files\Microsoft Visual Studio 10.0\VC\BIN\cl.exe
C:\Program Files\Microsoft Visual Studio 10.0\VC\BIN\link.exe /nologo
/INCREMENTAL:NO _configtest.obj /OUT:_configtest.exe
/MANIFESTFILE:_configtest.exe.manifest
Found executable C:\Program Files\Microsoft Visual Studio 10.0\VC\BIN\link.exe
mt.exe -nologo -manifest _configtest.exe.manifest
-outputresource:_configtest.exe;1
Found executable C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\mt.exe

_configtest.exe.manifest : general error c1010070: Failed to load and
parse the manifest. The system cannot find the file specified.

failure.
removing: _configtest.c _configtest.obj
Traceback (most recent call last):
  File setup.py, line 214, in module
setup_package()
  File setup.py, line 207, in setup_package
configuration=configuration )
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\core.py, line 186
, in setup
return old_setup(**new_attr)
  File c:\python33\lib\distutils\core.py, line 148, in setup
dist.run_commands()
  File c:\python33\lib\distutils\dist.py, line 917, in run_commands
self.run_command(cmd)
  File c:\python33\lib\distutils\dist.py, line 936, in run_command
cmd_obj.run()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build.py,
 line 37, in run
old_build.run(self)
  File c:\python33\lib\distutils\command\build.py, line 126, in run
self.run_command(cmd_name)
  File c:\python33\lib\distutils\cmd.py, line 313, in run_command
self.distribution.run_command(command)
  File c:\python33\lib\distutils\dist.py, line 936, in run_command
cmd_obj.run()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 152, in run
self.build_sources()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 163, in build_sources
self.build_library_sources(*libname_info)
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 298, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 385, in generate_sources
source = func(extension, build_dir)
  File numpy\core\setup.py, line 648, in get_mathlib_info
raise RuntimeError(Broken toolchain: cannot link a simple C program)
RuntimeError: Broken toolchain: cannot link a simple C program

It appears a similar issue was raised before:
http://mail.scipy.org/pipermail/numpy-discussion/2012-June/062866.html

Any tips?

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-12 Thread Peter Cock
On Sun, Nov 11, 2012 at 11:20 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
 On Sun, Nov 11, 2012 at 9:05 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:

 Those changes look correct, a PR would be great.


 I'll do that later this week - but feel free to do it yourself immediately
 if more convenient.


Hi again Ralf,

OK, new branch from the master here:
https://github.com/peterjc/numpy/tree/msvc10

The first two commits are the changes already discussed.
https://github.com/peterjc/numpy/commit/d8ead4c83364f9c7d4543690eb12e4543c608372
https://github.com/peterjc/numpy/commit/1d498f54668a9a87286fa31f2779acbb048edd39

 Fixing the next error also seems straightforward; around line 465 of
 mingw32ccompiler a check is needed for 10.0 (not sure which one)
 and then a line like
 _MSVCRVER_TO_FULLVER['10'] = 10.0.x.x
 needs to be added.

 I'd got that far last night before turning in - my question was
 where do I get the last two parts of the version number. Presumably
 I can just use the values from the msvcr100.dll on my machine?

The third commit provides a fall back value (based on the DLL on
my machine) as done for MSVC v8 and v9, and updates the live
lookup via msvcrt.CRT_ASSEMBLY_VERSION to actually check
what version that is (the current code wrongly assumes it is going
to be for MSCV v9):
https://github.com/peterjc/numpy/commit/24523565b5dbb23d6de0591ef2a4c1d014722c5d

Do you want a pull request for that work so far? Or can you
suggest what I should investigate next since there is at least
one more hurdle to clear before I can build it (build output below).

Regards,

Peter

--


I now get further trying to build with mingw32 (from this branch
from master, not the numpy-1.7.0b2 files this time), an excerpt
from which this line may be note-worthy:

Cannot build msvcr library: msvcr100d.dll not found

This omits the failure to detect ATLAS etc. which seems irrelevant.
I would guess the MismatchCAPIWarning is an unrelated issue
from working from the master branch this time.

C:\repositories\numpyc:\python33\python setup.py build --compiler=mingw32
Converting to Python3 via 2to3...
snip
numpy\core\setup_common.py:86: MismatchCAPIWarning: API mismatch
detected, the C API version numbers have to be updated. Current C api
version is 8, with checksum f4362353e2d72f889fda0128aa015037, but
recorded checksum for C API version 8 in codegen_dir/cversions.txt is
17321775fc884de0b1eda478cd61c74b. If functions were added in the C
API, you have to update C_API_VERSION  in numpy\core\setup_common.py.
MismatchCAPIWarning)
snip
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands
--compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands
--fcompiler options
running build_src
build_src
building py_modules sources
building library npymath sources
Cannot build msvcr library: msvcr100d.dll not found
Unable to find productdir in registry
Checking environ VS100COMNTOOLS
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize IntelVisualFCompiler
Could not locate executable ifort
Could not locate executable ifl
customize AbsoftFCompiler
Could not locate executable f90
customize CompaqVisualFCompiler
Found executable C:\cygwin\bin\DF.exe
Found executable C:\cygwin\bin\DF.exe
customize IntelItaniumVisualFCompiler
Could not locate executable efl
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize G95FCompiler
Could not locate executable g95
customize IntelEM64VisualFCompiler
customize IntelEM64TFCompiler
Could not locate executable efort
Could not locate executable efc
don't know how to compile Fortran code on platform 'nt'
C compiler: gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes

compile options: '-DNPY_MINGW_USE_CUSTOM_MSVCR
-D__MSVCRT_VERSION__=0x1000 -Inumpy\core\src\private -Inumpy\core\src
-Inumpy\core -Inumpy\core\src\npymath -Inumpy\core\src\multiarray
-Inumpy\core\src\umath -Inumpy\core\src\npysort -Inumpy\core\include
-Ic:\python33\include -Ic:\python33\include -c'
gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes
-DNPY_MINGW_USE_CUSTOM_MSVCR -D__MSVCRT_VERSION__=0x1000
-Inumpy\core\src\private -Inumpy\core\src -Inumpy\core
-Inumpy\core\src\npymath -Inumpy\core\src\multiarray
-Inumpy\core\src\umath -Inumpy\core\src\npysort -Inumpy\core\include
-Ic:\python33\include -Ic:\python33\include -c _configtest.c -o
_configtest.o
Found executable C:\cygwin\usr\bin\gcc.exe
g++ -mno-cygwin _configtest.o -lmsvcr100 -o _configtest.exe
Could not locate executable g++
Executable g++ does not exist

failure.
removing: _configtest.exe.manifest _configtest.c _configtest.o
Traceback (most recent call last):
  File setup.py, line 214, in module
setup_package()
  File setup.py, line 207, in setup_package
configuration=configuration )
  File C:\repositories\numpy\build\py3k\numpy\distutils\core.py, line 186, in
setup
return old_setup

Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-12 Thread Peter Cock
 switch and distutils will
default to using mingw32 with Python 3.3, and build and
install seem to work.

I've not yet run the numpy tests yet, but I think this means
my github branches are worth merging:

https://github.com/peterjc/numpy/commits/msvc10

Regards,

Peter

P.S. I am really looking forward to official Windows installers for
NumPy on Python 3.3 on Windows...


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-12 Thread Peter Cock
On Tue, Nov 13, 2012 at 12:36 AM, Peter Cock p.j.a.c...@googlemail.com wrote:

 I've not yet run the numpy tests yet, but I think this means
 my github branches are worth merging:

 https://github.com/peterjc/numpy/commits/msvc10


Hi Ralf,

Pull request filed, assuming this gets applied to the master could
you also flag it for backporting to NumPy 1.7 as well? Thanks:
https://github.com/numpy/numpy/pull/2726

I hope to run the test suite later this week (tomorrow ideally).

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-11 Thread Peter Cock
On Sun, Nov 11, 2012 at 9:05 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:

 Those changes look correct, a PR would be great.


I'll do that later this week - but feel free to do it yourself immediately
if more convenient.

 Fixing the next error also seems straightforward; around line 465 of
 mingw32ccompiler a check is needed for 10.0 (not sure which one)
 and then a line like
 _MSVCRVER_TO_FULLVER['10'] = 10.0.x.x
 needs to be added.

I'd got that far last night before turning in - my question was
where do I get the last two parts of the version number. Presumably
I can just use the values from the msvcr100.dll on my machine?

Thanks,

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-10 Thread Peter Cock
On Sat, Nov 10, 2012 at 5:47 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:



 On Tue, Nov 6, 2012 at 6:49 PM, Peter Cock p.j.a.c...@googlemail.com
 wrote:

 Dear all,

 Since the NumPy 1.7.0b2 release didn't include a Windows
 (32 bit) installer for Python 3.3, I am considering compiling it
 myself for local testing. What compiler is recommended?


 Either MSVC or MinGW 3.4.5. For the latter see
 https://github.com/certik/numpy-vendor

Thanks Ralf,

I was trying with MSVC 9.0 installed, but got this cryptic error:

   C:\Downloads\numpy-1.7.0b2  C:\python33\python setup.py build
   ...
   error: Unable to find vcvarsall.bat

After sprinkling distutils with debug statements, I found it was
looking for MSVC v10 (not numpy's fault but the error is most
unhelpful).

Presumably Microsoft Visual C++ 2010 Express Edition is
the appropriate thing to download?
http://www.microsoft.com/visualstudio/eng/downloads#d-2010-express

Thanks,

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-10 Thread Peter Cock
On Sat, Nov 10, 2012 at 5:47 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:
 On Tue, Nov 6, 2012 at 6:49 PM, Peter Cock wrote:

 Dear all,

 Since the NumPy 1.7.0b2 release didn't include a Windows
 (32 bit) installer for Python 3.3, I am considering compiling it
 myself for local testing. What compiler is recommended?


 Either MSVC or MinGW 3.4.5. For the latter see
 https://github.com/certik/numpy-vendor

 Ralf

I was trying with mingw32 via cygwin with gcc 2.4.4,
which also failed with a cryptic error:

C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\system_info.py:1406: UserW
arning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
  warnings.warn(LapackNotFoundError.__doc__)
lapack_src_info:
  NOT AVAILABLE

C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\system_info.py:1409: UserW
arning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
  warnings.warn(LapackSrcNotFoundError.__doc__)
  NOT AVAILABLE

running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler opti
ons
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler opt
ions
running build_src
build_src
building py_modules sources
building library npymath sources
Traceback (most recent call last):
  File setup.py, line 214, in module
setup_package()
  File setup.py, line 207, in setup_package
configuration=configuration )
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\core.py, line 186
, in setup
return old_setup(**new_attr)
  File c:\python33\lib\distutils\core.py, line 148, in setup
dist.run_commands()
  File c:\python33\lib\distutils\dist.py, line 917, in run_commands
self.run_command(cmd)
  File c:\python33\lib\distutils\dist.py, line 936, in run_command
cmd_obj.run()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build.py,
 line 37, in run
old_build.run(self)
  File c:\python33\lib\distutils\command\build.py, line 126, in run
self.run_command(cmd_name)
  File c:\python33\lib\distutils\cmd.py, line 313, in run_command
self.distribution.run_command(command)
  File c:\python33\lib\distutils\dist.py, line 936, in run_command
cmd_obj.run()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 152, in run
self.build_sources()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 163, in build_sources
self.build_library_sources(*libname_info)
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 298, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\build_src.
py, line 385, in generate_sources
source = func(extension, build_dir)
  File numpy\core\setup.py, line 646, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
  File c:\python33\lib\distutils\command\config.py, line 243, in try_link
self._check_compiler()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\config.py
, line 45, in _check_compiler
old_config._check_compiler(self)
  File c:\python33\lib\distutils\command\config.py, line 98, in _check_compile
r
dry_run=self.dry_run, force=1)
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\ccompiler.py, lin
e 560, in new_compiler
compiler = klass(None, dry_run, force)
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.p
y, line 94, in __init__
msvcr_success = build_msvcr_library()
  File C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.p
y, line 336, in build_msvcr_library
if int(msvcr_name.lstrip('msvcr')) < 80:
AttributeError: 'NoneType' object has no attribute 'lstrip'

I am updating cygwin to see if anything changes...

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-10 Thread Peter Cock
I meant to click on save not send, anyway:

On Sat, Nov 10, 2012 at 11:13 PM, Peter Cock p.j.a.c...@googlemail.com wrote:

 Either MSVC or MinGW 3.4.5. For the latter see
 https://github.com/certik/numpy-vendor

 Ralf

 I was trying with mingw32 via cygwin with gcc 2.4.4,

Typo, gcc 3.4.4

 which also failed with a cryptic error:

 ...
   File 
 C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.p
 y, line 94, in __init__
 msvcr_success = build_msvcr_library()
   File 
 C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.p
 y, line 336, in build_msvcr_library
  if int(msvcr_name.lstrip('msvcr')) < 80:
 AttributeError: 'NoneType' object has no attribute 'lstrip'

I think part of the problem could be in numpy/distutils/misc_util.py
where there is no code to detect MSCV 10,

def msvc_runtime_library():
    """Return name of MSVC runtime library if Python was built with MSVC >= 7"""
    msc_pos = sys.version.find('MSC v.')
    if msc_pos != -1:
        msc_ver = sys.version[msc_pos+6:msc_pos+10]
        lib = {'1300' : 'msvcr70',    # MSVC 7.0
               '1310' : 'msvcr71',    # MSVC 7.1
               '1400' : 'msvcr80',    # MSVC 8
               '1500' : 'msvcr90',    # MSVC 9 (VS 2008)
              }.get(msc_ver, None)
    else:
        lib = None
    return lib

https://github.com/numpy/numpy/blob/master/numpy/distutils/misc_util.py#L353

Under Python 3.3, we have:

Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600
32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.version
'3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600 32 bit (Intel)]'

i.e. It looks to me like that dictionary needs another entry for key '1600'.
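
To check that, here is the same lookup logic run against the Python 3.3
version string above, with the suggested '1600' entry added (a sketch of
the proposed fix, not the committed patch):

```python
# The version string reported by the python.org 3.3.0 Windows build.
version = ("3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) "
           "[MSC v.1600 32 bit (Intel)]")

# Same extraction as numpy.distutils.misc_util.msvc_runtime_library().
msc_pos = version.find('MSC v.')
msc_ver = version[msc_pos + 6:msc_pos + 10]
assert msc_ver == '1600'

# With a '1600' entry the MSVC 10 runtime is found; without it we get None.
lib = {'1300': 'msvcr70', '1310': 'msvcr71', '1400': 'msvcr80',
       '1500': 'msvcr90', '1600': 'msvcr100'}.get(msc_ver, None)
assert lib == 'msvcr100'
```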

Peter


Re: [Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-10 Thread Peter Cock
On Sat, Nov 10, 2012 at 11:24 PM, Peter Cock p.j.a.c...@googlemail.com wrote:

 I think part of the problem could be in numpy/distutils/misc_util.py
 where there is no code to detect MSCV 10,

 def msvc_runtime_library():
     """Return name of MSVC runtime library if Python was built with MSVC >= 7"""
     msc_pos = sys.version.find('MSC v.')
     if msc_pos != -1:
         msc_ver = sys.version[msc_pos+6:msc_pos+10]
         lib = {'1300' : 'msvcr70',    # MSVC 7.0
                '1310' : 'msvcr71',    # MSVC 7.1
                '1400' : 'msvcr80',    # MSVC 8
                '1500' : 'msvcr90',    # MSVC 9 (VS 2008)
               }.get(msc_ver, None)
     else:
         lib = None
     return lib

 https://github.com/numpy/numpy/blob/master/numpy/distutils/misc_util.py#L353

 Under Python 3.3, we have:

 Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600
 32 bit (Intel)] on win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import sys
 >>> sys.version
 '3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600 32 bit 
 (Intel)]'

 i.e. It looks to me like that dictionary needs another entry for key '1600'.


Adding this line seems to help,

   '1600' : 'msvcr100',   # MSVC 10 (aka 2010)

Now my compile gets further, but runs into another issue:

  File c:\python33\lib\distutils\command\config.py, line 246, in try_link
libraries, library_dirs, lang)
  File 
C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\config.py,
line 146, in _link
generate_manifest(self)
  File 
C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.py,
line 562, in generate_manifest
check_embedded_msvcr_match_linked(msver)
  File 
C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.py,
line 541, in check_embedded_msvcr_match_linked
(%d) % (int(msver), maj))
ValueError: Discrepancy between linked msvcr (10) and the one about to
be embedded (1)

My hunch was that something about the new three-digit version
number is breaking things... which appears to be happening here:

def check_embedded_msvcr_match_linked(msver):
    """msver is the ms runtime version used for the MANIFEST."""
    # check msvcr major version are the same for linking and
    # embedding
    msvcv = msvc_runtime_library()
    if msvcv:
        maj = int(msvcv[5:6])
        if not maj == int(msver):
            raise ValueError(
                  "Discrepancy between linked msvcr " \
                  "(%d) and the one about to be embedded " \
                  "(%d)" % (int(msver), maj))

https://github.com/numpy/numpy/blob/master/numpy/distutils/mingw32ccompiler.py#L530

As you can see, to get the major version number from the
string it looks at the first digit. When the string was something
like 81 or 90 that was fine, but now it is 100. Instead it
should look at all the digits up to the final one, i.e. use:

maj = int(msvcv[5:-1])
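
The difference is easy to see on the two runtime names (a minimal check
of the slicing, using the names returned by msvc_runtime_library()):

```python
# Old code: first digit only - correct for msvcr90, wrong for msvcr100.
assert int("msvcr90"[5:6]) == 9
assert int("msvcr100"[5:6]) == 1   # truncated major version

# Proposed fix: everything up to the final digit (the minor version).
assert int("msvcr90"[5:-1]) == 9
assert int("msvcr100"[5:-1]) == 10
```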

Now (finally), I get an understandable (but hopefully wrong)
error message from trying to build NumPy 1.7.0b2 under
Python 3.3 on Windows XP,

  File "numpy\core\setup.py", line 646, in get_mathlib_info
    st = config_cmd.try_link('int main(void) { return 0;}')
  File "c:\python33\lib\distutils\command\config.py", line 246, in try_link
    libraries, library_dirs, lang)
  File
"C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\command\config.py",
line 146, in _link
    generate_manifest(self)
  File
"C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.py",
line 568, in generate_manifest
    manxml = msvc_manifest_xml(ma, mi)
  File
"C:\Downloads\numpy-1.7.0b2\build\py3k\numpy\distutils\mingw32ccompiler.py",
line 484, in msvc_manifest_xml
    % (maj, min))
ValueError: Version 10,0 of MSVCRT not supported yet

Presumably those two changes I have described are worth
committing to the trunk anyway? I can prepare a patch or
pull request, but currently I've been working on this Windows
box remotely and I'd prefer to wait until next week when I
can do it directly on the machine concerned.

Files affected:
numpy/distutils/misc_util.py function msvc_runtime_library
numpy/distutils/mingw32ccompiler.py function check_embedded_msvcr_match_linked

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Compiling NumPy on Windows for Python 3.3

2012-11-06 Thread Peter Cock
Dear all,

Since the NumPy 1.7.0b2 release didn't include a Windows
(32 bit) installer for Python 3.3, I am considering compiling it
myself for local testing. What compiler is recommended?

Thanks,

Peter


Re: [Numpy-discussion] Blaze

2012-10-31 Thread Peter Wang
On Sat, Oct 27, 2012 at 7:14 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Thanks. I've been trying to keep an eye on this but it is difficult to
 find the details. I think the beta was scheduled for sometime in December.


Hey guys,

We're hoping to get a first release out by the end of November, per
Stephen's slides.   It's taken a while to get some of the fundamentals
hashed out to a point that we were happy with, and that could form a solid
core to iterate on.

Cheers,
Peter


[Numpy-discussion] Reminder: Last day of early registration for PyData NYC conference!

2012-10-12 Thread Peter Wang
Hi everyone,

Just a friendly reminder that today is the final day of early
registration for the PyData NYC conference later this month!  We have
a fantastic lineup of talks and workshops on a variety of topics
related to Python for data analysis, including topics that are hard to
find at other conferences (e.g. practical perspectives on Python and
Hadoop, using Python and R, etc.).

http://nyc2012.pydata.org/

Use the discount code numpy for a 20% discount off of registration!

We are also looking for sponsors.  We are proud to feature D. E. Shaw,
JP Morgan, and Appnexus as gold sponsors.  If your company or
organization would like some visibility in front of a few hundred
Python data hackers, please visit our sponsor information page:
http://nyc2012.pydata.org/sponsors/becoming/

Thanks,
Peter


[Numpy-discussion] PyData NYC 2012 Speakers and Talks announced!

2012-09-28 Thread Peter Wang
Hi everyone,

The PyData NYC team and Continuum Analytics are proud to announce the full
lineup of talks and speakers for the PyData NYC 2012 event!  We're thrilled
with the exciting lineup of workshops, hands-on tutorials, and talks about
real-world uses of Python for data analysis.

http://nyc2012.pydata.org/schedule

The list of presenters and talk abstracts are also available, and are
linked from the schedule page.

For those who will be in town on Thursday evening of October 25th, there
will be a special PyData edition of Drinks & Data at Dewey's Flatiron.
 It'll be a great chance to socialize and meet with PyData presenters and
other attendees.  Register here:
http://drinks-and-data-pydata-conf-ny.eventbrite.com/

We're also proud to be part of the NYC DataWeek:
http://oreilly.com/dataweek/?cmp=tw-strata-ev-dr.  The week of October 22nd
is going to be a great time to be in New York!

Lastly, we are still looking for sponsors!  If you want to get your company
recognition in front of a few hundred Python data hackers and hardcore
developers, PyData will be a premier venue to showcase your products or
recruit exceptional talent.  Please visit
http://nyc2012.pydata.org/sponsors/becoming/ to inquire about sponsorship.
 In addition to the conference sponsorship, charter sponsorships for dinner
Friday night, as well as the Sunday Hack-a-thon event are all open.

Please help us promote the conference! Tell your friends, email your meetup
groups, and follow @PyDataConf on Twitter.  Early registration ends in just
a few weeks, so register today!

http://pydata.eventbrite.com/


See you there!

-Peter Wang
Organizer, PyData NYC 2012


Re: [Numpy-discussion] 64bit infrastructure

2012-08-22 Thread Peter
On Wed, Aug 22, 2012 at 11:40 AM, David Cournapeau courn...@gmail.com wrote:
 On Tue, Aug 21, 2012 at 12:15 AM, Chris Barker chris.bar...@noaa.gov wrote:
 On Mon, Aug 20, 2012 at 3:51 PM, Travis Oliphant tra...@continuum.io wrote:
 I'm actually not sure, why.   I think the issue is making sure that
 the release manager can actually build NumPy without having
 to buy a particular compiler.

 The MS Express editions, while not open source, are free-to-use,
 and work fine.

 Not sure what what do about Fortran, though, but that's a scipy, not a
 numpy issue, yes?

 fortran is the issue. Having one or two licenses of say Intel Fortran
 compiler is not enough because it makes it difficult for people to
 build on top of scipy.

 David

For those users/developers/packages using NumPy but not SciPy,
does this matter? Having just official NumPy 64bit Windows packages
would still be very welcome.

Is the problem that whatever route NumPy goes down will have
potential implications/restrictions for how SciPy could proceed?

Peter


Re: [Numpy-discussion] NumPy 1.7 release delays

2012-06-27 Thread Peter Wang
On Wed, Jun 27, 2012 at 9:33 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
 On Wed, Jun 27, 2012 at 7:20 AM, John Hunter jdh2...@gmail.com wrote:
 because the original developers are gone.  So people are loathe to
 upgrade.  It is certainly true that deprecations that have lived for a
 single point release cycle have not been vetted by a large part of the
 user community.

 I'd also venture a guess that many of those installations don't have
 adequate test suites.

Depends how you define adequate.  If these companies stopped being
able to make money, model science, or serve up web applications due to
the lack of a test suite, then they would immediately put all their
efforts on it.  But for users for whom Numpy (and software in general)
is merely a means to an end, the management of technical debt and
future technology risk is merely one component of all the risk factors
facing the organization.  Every hour spent managing code and technical
debt is an hour of lost business opportunity, and that balance is very
conscientiously weighed and oftentimes the decision is not in the
direction of quality of software process.

In my experience, it's a toss up.. most people have reasonable unit
tests and small integration tests, but large scale smoke tests can be
very difficult to maintain or to justify to upper management.  Because
Numpy can be both a component of larger software or a direct tool in
its own right, I've found that it makes a big difference whether an
organization sees code as a means to an end, or an ends unto itself.

-Peter


Re: [Numpy-discussion] Some numpy funcs for PyPy

2012-05-24 Thread Peter
On Thu, May 24, 2012 at 12:32 PM, Dmitrey tm...@ukr.net wrote:
 hi all,
 maybe you're aware of numpypy - numpy port for pypy (pypy.org) - Python
 language implementation with dynamic compilation.

 Unfortunately, numpypy development is very slow due to strict quality
 standards and some other issues, so for my purposes I have provided some
 missing numpypy funcs, in particular

 atleast_1d, atleast_2d, hstack, vstack, cumsum, isscalar, asscalar,
 asfarray, flatnonzero, tile, zeros_like, ones_like, empty_like, where,
 searchsorted
 with axis parameter: nan(arg)min, nan(arg)max, all, any

 and have got some OpenOpt / FuncDesigner functionality working
 faster than in CPython.

 File with this functions you can get here

 Also you may be interested in some info at http://openopt.org/PyPy
 Regards, Dmitrey.

As a NumPy user interested in PyPy it is great to know more people
are trying to contribute in this area. I myself have only filed PyPy
bugs about missing NumPy features rendering the initial numpypy
support useless to me.

On your website you wrote:

 From my (Dmitrey) point of view numpypy development is
 very unfriendly for newcomers - PyPy developers say provide
 code, preferably in interpreter level instead of AppLevel,
 provide whole test coverage for all possible corner cases,
 provide hg diff for code, and then, maybe, it will be committed.
 Probably this is the reason why so insufficient number of
 developers work on numpypy.

I assume that is paraphrased with a little hyperbole, but it
isn't so different from numpy (other than using git), or many
other open source projects. Unit tests are important, and
taking patches without them is risky.

I've been subscribed to the pypy-dev list for a while, but I
don't recall seeing you posting there. Have you tried to submit
any of your work to PyPy yet? Perhaps you should have
sent this message to pypy-dev instead?

(I am trying to be constructive, not critical.)

Regards,

Peter


Re: [Numpy-discussion] Some numpy funcs for PyPy

2012-05-24 Thread Peter
On Thu, May 24, 2012 at 2:07 PM, Dmitrey tm...@ukr.net wrote:

 I had been subscribed IIRC for a couple of months

I don't follow the PyPy IRC so that would explain it. I don't know how
much they use that rather than their mailing list, but both seem a
better place to discuss their handling or external contributions than
on the numpy-discussion and scipy-user lists.

Still, I hope you are able to make some contributions to numpypy,
because so far I've also found PyPy's numpy implementation too
limited for my usage.

Regards,

Peter


Re: [Numpy-discussion] un-silencing Numpy's deprecation warnings

2012-05-22 Thread Peter
On Tue, May 22, 2012 at 9:27 AM, Nathaniel Smith n...@pobox.com wrote:
 So starting in Python 2.7 and 3.2, the Python developers have made
 DeprecationWarnings invisible by default:
  http://docs.python.org/whatsnew/2.7.html#the-future-for-python-2-x
  http://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html
  http://bugs.python.org/issue7319
 The only way to see them is to explicitly request them by running
 Python with -Wd.

 The logic seems to be that between the end-of-development for 2.7 and
 the moratorium on 3.2 changes, there were a *lot* of added
 deprecations that were annoying people, and deprecations in the Python
 stdlib mean "this code is probably sub-optimal but it will still
 continue to work indefinitely". So they consider that deprecation
 warnings are like a lint tool for conscientious developers who
 remember to test their code with -Wd, but not something to bother
 users with.

 In Numpy, the majority of our users are actually (relatively
 unsophisticated) developers, and we don't plan to support deprecated
 features indefinitely. Our deprecations seem to better match what
 Python calls a FutureWarning: warnings about constructs that will
 change semantically in the future.
  http://docs.python.org/library/warnings.html#warning-categories
 FutureWarning is displayed by default, and available in all versions of 
 Python.

 So maybe we should change all our DeprecationWarnings into
 FutureWarnings (or at least the ones that we actually plan to follow
 through on). Thoughts?

 - N

We had the same discussion for Biopython two years ago, and
introduced our own warning class to avoid our deprecations being
silent (and thus almost pointless). It is just a subclass of Warning
(originally we used a subclass of UserWarning).
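
For reference, a minimal sketch of that approach (the class name mirrors
Biopython's; the details are illustrative):

```python
import warnings

class BiopythonDeprecationWarning(Warning):
    """Deprecation warning subclassing Warning, not DeprecationWarning,
    so it stays visible under the Python 2.7+/3.2+ default filters."""
    pass

# Issuing the warning works exactly like the built-in categories:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("old_function() is deprecated; use new_function() instead",
                  BiopythonDeprecationWarning)

assert caught[0].category is BiopythonDeprecationWarning
```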


[Numpy-discussion] physics simulation

2012-04-12 Thread Peter Plantinga
I'm trying to run a physics simulation on a cluster. The original program
is written in fortran, and highly parallelizable. Unfortunately, I've had a
bit of trouble getting f2py to work with the compiler I'm using (absoft
v.10.1). The cluster is running Linux v. 12.

When I type just f2py I get the following error:

 f2py
Traceback (most recent call last):
  File "/opt/absoft10.1/bin/f2py", line 20, in <module>
    from numpy.f2py import main
  File "/usr/local/lib/python2.6/dist-packages/numpy/__init__.py", line
137, in <module>
    import add_newdocs
  File "/usr/local/lib/python2.6/dist-packages/numpy/add_newdocs.py", line
9, in <module>
    from numpy.lib import add_newdoc
  File "/usr/local/lib/python2.6/dist-packages/numpy/lib/__init__.py", line
13, in <module>
    from polynomial import *
  File "/usr/local/lib/python2.6/dist-packages/numpy/lib/polynomial.py",
line 17, in <module>
    from numpy.linalg import eigvals, lstsq
  File "/usr/local/lib/python2.6/dist-packages/numpy/linalg/__init__.py",
line 48, in <module>
    from linalg import *
  File "/usr/local/lib/python2.6/dist-packages/numpy/linalg/linalg.py",
line 23, in <module>
    from numpy.linalg import lapack_lite
ImportError: libaf90math.so: cannot open shared object file: No such file
or directory

It looks like f2py cannot find libaf90math.so, located in
/opt/absoft10.1/shlib. How can I tell f2py where af90math is?

Thanks for the help!
Peter Plantinga
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 67, Issue 43

2012-04-12 Thread Peter Plantinga

 Really you have to have this setup in order to run a fortran executable,
 but the only thing that comes to mind is the LD_LIBRARY_PATH environment
 variable.  LD_LIBRARY_PATH is a colon separated list of paths that is
 searched for dynamic libraries by both regular programs and by Python.


Thanks for the suggestion! I added the library to LD_LIBRARY_PATH and that
fixed it.
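
For the record, a rough sketch of that fix in bash (the library path is the
one from the traceback; adjust for your install):

```shell
# Prepend the Absoft shared-library directory so the dynamic linker
# (and therefore Python/f2py) can find libaf90math.so at import time.
export LD_LIBRARY_PATH="/opt/absoft10.1/shlib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```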

On Thu, Apr 12, 2012 at 4:25 PM, numpy-discussion-requ...@scipy.org wrote:

 --

 Message: 2
 Date: Thu, 12 Apr 2012 15:54:50 -0400
 From: Tim Cera t...@cerazone.net
 Subject: Re: [Numpy-discussion] physics simulation
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
CAO5s+D8PpuOhea4u-StF_o5ESXeckV71_MTGQ=OwbiGhbL77=w...@mail.gmail.com
 
 Content-Type: text/plain; charset=iso-8859-1

 
  It looks like f2py cannot find libaf90math.so, located in
  /opt/absoft10.1/shlib. How can I tell f2py where af90math is?


 Really you have to have this setup in order to run a fortran executable,
 but the only thing that comes to mind is the LD_LIBRARY_PATH environment
 variable.  LD_LIBRARY_PATH is a colon separated list of paths that is
 searched for dynamic libraries by both regular programs and by Python.

 Use...

 env | grep LD_

 to show you the existing LD_LIBRARY_PATH.  To change/append depends on your
 shell.

 Kindest regards,
 Tim

 --

 Message: 3
 Date: Thu, 12 Apr 2012 15:05:12 -0500
 From: Travis Oliphant tra...@continuum.io
 Subject: Re: [Numpy-discussion] YouTrack testbed
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID: 3517e0d6-b8cb-4a3a-a37c-f78860f2d...@continuum.io
 Content-Type: text/plain; charset=iso-8859-1

 This looks good.   Maggie and Bryan are now setting up a Redmine instance
 to try out how hard that is to administer.I have some experience with
 Redmine and have liked what I've seen in the past.  I think the user
 experience that Ralf is providing feedback on is much more important than
 how hard it is to administer.

 NumFocus will dedicate

Re: [Numpy-discussion] Numpy governance update

2012-02-16 Thread Peter Wang
On Feb 16, 2012, at 12:08 AM, Matthew Brett wrote:

 The question is more about what can possibly be done about it. To really
 shift power, my hunch is that the only practical way would be to, like
 Mark said, make sure there are very active non-Continuum-employed
 developers. But perhaps I'm wrong.
 
 It's not obvious to me that there isn't a set of guidelines,
 procedures, structures that would help to keep things clear in this
 situation.

Matthew, I think this is the crux of the issue.

There are two kinds of disagreements which could polarize Numpy development: 
disagreements over vision/values, and disagreements over implementation.  The 
latter can be (and has been) resolved in an ad-hoc fashion because we are all 
consenting adults here, and as long as there is a consensus about the shared 
values (i.e. long-term vision) of the project, we can usually work something 
out.

Disagreements over values and long-term vision are the ones that actually do 
split developer communities, and which procedural guidelines are really quite 
poor at resolving.  In the realm of open source software, value differences 
(most commonly, licensing disagreements) generally manifest as forks, 
regardless of what governance may be in place.  At the end of the day, you 
cannot compel people to continue committing to a project that they feel is 
going the *wrong direction*, not merely the right direction in the wrong way.

In the physical world, where we are forced to share geographic space with 
people who may have vastly different values, it is useful to have a framework 
for resolution of value differences, because a fork attempt usually means 
physical warfare.  Hence, constitutions, checks & balances, impeachment 
procedures, etc. are all there to avoid forking.  But with software, forks are 
not so costly, and not always a bad thing.  Numpy itself arose from merging 
Numeric and its fork, Numarray, and X.org and EGCS are examples of big forks of 
major projects which later became the mainline trunk.  In short, even if you 
*could* put governance in place to prevent a fork, that's not always a Good 
Thing.  Creative destruction is vital to the health of any organism or 
ecosystem, because that is how evolution frequently achieves its greatest 
results.

Of course, this is not to say that I have any desire to see Numpy forked.  What 
I *do* desire is a modular, extensible core of Numpy that will allow the 
experimentation and creative destruction to occur, while minimizing the merge 
effort when people realize that something cool has been developed.  Lowering the 
barrier to entry for hacking on the core array code is not merely for 
Continuum's benefit, but rather will benefit the ecosystem as a whole.

No matter how one feels about the potential conflicts of interest, I think we 
can all agree that the alternative of stagnation is far, far worse.  The only 
way to avoid stagnation is to give the hackers and rebels plenty of room to 
play, while ensuring a stable base platform for end users and downstream 
projects to avoid code churn.  Travis's and Mark's roadmap proposals for 
creating a modular core and an extensible C-level ABI are a key technical 
mechanism for achieving this.

Ultimately, procedures and guidelines are only a means to an end, not an ends 
unto themselves.  Curiously enough, I have not yet seen anyone articulate the 
desire for those *ends* themselves to be written down or manifest as a 
document.  Now, if the Numpy developers want to produce a vision document or 
values statement for the project, I think that would help as a reference 
point for any potential disagreements over the direction of the project as 
commercial stakeholders become involved.  But, of course, the request for such 
a document is itself an unfunded mandate, so it's perfectly possible we may get 
a one-liner like "make Python scientific computing awesome".  :-)


-Peter

Disclaimer: I work with Travis at Continuum.



Re: [Numpy-discussion] Numpy governance update

2012-02-15 Thread Peter Wang
On Feb 15, 2012, at 3:36 PM, Matthew Brett wrote:

 Honestly - as I was saying to Alan and indirectly to Ben - any formal
 model - at all - is preferable to the current situation. Personally, I
 would say that making the founder of a company, which is working to
 make money from Numpy, the only decision maker on numpy - is - scary.

How is this different from the situation of the last 4 years?  Travis was 
President at Enthought, which makes money from not only Numpy but SciPy as 
well.  In addition to employing Travis, Enthought also employees many other key 
contributors to Numpy and Scipy, like Robert and David.  Furthermore, the Scipy 
and Numpy mailing lists and repos and web pages were all hosted at Enthought.  
If they didn't like how a particular discussion was going, they could have 
memory-holed the entire conversation from the archives, or worse yet, revoked 
commit access and reverted changes.

But such things never transpired, and of course most of us know that such 
things would never happen.  I don't see why the current situation is any 
different from the previous situation, other than the fact that Travis actually 
plans on actively developing Numpy again, and that hardly seems scary.


-Peter



Re: [Numpy-discussion] Loading a QuickTime movie (*.mov) as series of arrays

2012-01-18 Thread Peter
Sending this again (sorry Robert, this will be the second time
for you) since I sent from a non-subscribed email address the
first time.

On Sun, Jan 15, 2012 at 7:12 PM, Robert Kern wrote:
 On Sun, Jan 15, 2012 at 19:10, Peter wrote:
 Hello all,

 Is there a recommended (and ideally cross platform)
 way to load the frames of a QuickTime movie (*.mov
 file) in Python as NumPy arrays? ...

 I've had luck with pyffmpeg, though I haven't tried
 QuickTime .mov files:

  http://code.google.com/p/pyffmpeg/

Thanks for the suggestion.

Sadly right now pyffmpeg won't install on Mac OS X,
at least not with the version of Cython I have installed:
http://code.google.com/p/pyffmpeg/issues/detail?id=44

There doesn't seem to have been any activity on the
official repository for some time either.

Peter


[Numpy-discussion] Loading a QuickTime movie (*.mov) as series of arrays

2012-01-15 Thread Peter
Hello all,

Is there a recommended (and ideally cross platform)
way to load the frames of a QuickTime movie (*.mov
file) in Python as NumPy arrays? I'd be happy with
an iterator based approach, but random access to
the frames would be a nice bonus.

My aim is to try some image analysis in Python, if
there is any sound in the files I don't care about it.

I had a look at OpenCV which has Python bindings,
http://opencv.willowgarage.com/documentation/python/index.html
however I had no joy compiling this on Mac OS X
with QuickTime support. Is this the best bet?

Thanks,

Peter


Re: [Numpy-discussion] Long-standing issue with using numpy in embedded CPython

2011-12-09 Thread Peter CYC
Hi Armando,

No comment on the Java thing ;-)

However, 
http://www.opengda.org/documentation/manuals/Diamond_SciSoft_Python_Guide/8.18/contents.html
is more up-to-date and we are on github too:
https://github.com/DiamondLightSource

Peter


On 9 December 2011 13:05, Vicente Sole s...@esrf.fr wrote:
 Quoting Robert Kern robert.k...@gmail.com:

 On Fri, Dec 9, 2011 at 11:00, Yang Zhang yanghates...@gmail.com wrote:

 Thanks for the clarification.  Alas.  So is there no simple workaround
 to making numpy work in environments such as Jepp?

 I don't think so, no.


 It is far from being an optimal solution (in fact I dislike it) but
 there is a couple of research facilities that like the python
 interpreter, they like numpy, but prefer to use java for all their
 graphical interfaces. They have rewritten part of numpy in java in
 order to use it from Jython.

 http://www.opengda.org/documentation/manuals/Diamond_SciSoft_Python_Guide/8.16/scisoftpy.html


 Armando


Re: [Numpy-discussion] binary to ascii

2011-11-29 Thread Peter
On Tue, Nov 29, 2011 at 8:13 PM, Alex Ter-Sarkissov ater1...@gmail.com wrote:
 hi eveyone,

 is there a simple command in numpy similar to matlab char(bin2dec('//some
 binary value//')) to convert binary to characters and back?

 thanks

Would the Python struct library do? You'd get a tuple
back from struct.unpack(...) which you can turn into
a numpy array if you need to.
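
For what it's worth, a sketch of both routes (the chr/int pair mirrors
Matlab's char/bin2dec; the struct calls are the stdlib route suggested
above, with the result wrapped as a NumPy array):

```python
import struct
import numpy as np

# Matlab's char(bin2dec('1001000')) in plain Python:
assert chr(int('1001000', 2)) == 'H'       # binary digit string -> character
assert format(ord('H'), 'b') == '1001000'  # and back again

# The struct route: bytes <-> tuples of integer codes.
packed = struct.pack("4B", 72, 101, 121, 33)  # bytes from integer codes
codes = struct.unpack("4B", packed)           # back to a tuple of ints
arr = np.frombuffer(packed, dtype=np.uint8)   # wrap directly as an array
assert packed.decode("ascii") == "Hey!"
assert arr.tolist() == [72, 101, 121, 33]
```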

Peter


Re: [Numpy-discussion] PyPy guys deserve some help on micronumpy from you, numpy gurus!

2011-11-21 Thread Peter
On Fri, Sep 16, 2011 at 10:49 AM, Peter
numpy-discuss...@maubp.freeserve.co.uk wrote:
 On Thu, Sep 15, 2011 at 11:34 AM, Peter
 numpy-discuss...@maubp.freeserve.co.uk wrote:
 This may not be the best place to ask, but how should a
 python script (e.g. setup.py) distinguish between real NumPy
 and micronumpy? Or should I instead be looking to distinguish
 PyPy versus another Python implementation?

 For anyone interested, over on the pypy-dev list Alex recommended:
 import platform; platform.python_implementation() == 'PyPy'
 http://mail.python.org/pipermail/pypy-dev/2011-September/008315.html

Good news, in PyPy 1.7 they have fixed the namespace clash.
http://morepypy.blogspot.com/2011/11/pypy-17-widening-sweet-spot.html

In PyPy 1.6, "import numpy as np" would give you "fake" numpy,
the micronumpy written by the PyPy team - which was a frustratingly
limited subset of the full NumPy API.

As of PyPy 1.7, you must explicitly use "import numpypy as np" to
get their micronumpy. This makes life simpler for 3rd party libraries
since "import numpy" will just fail, and they can use and test for
the PyPy mini-numpy explicitly if they want to.
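
A 3rd party package could then do something like this (a sketch assuming
the PyPy >= 1.7 naming described above; on CPython the first import simply
fails and the real NumPy is used):

```python
try:
    import numpypy as np   # PyPy's built-in subset (PyPy >= 1.7 only)
    HAVE_MICRONUMPY = True
except ImportError:
    import numpy as np     # the real NumPy on CPython
    HAVE_MICRONUMPY = False

assert hasattr(np, "array")
```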

Cheers,

Peter


[Numpy-discussion] np.dot and scipy sparse matrices

2011-11-09 Thread Peter Prettenhofer
Hi everybody,

I recently got the latest numpy version (2.0.0.dev-7297785) from the
git repo and realized that `np.dot` causes a segfault if its operands
are scipy sparse matrices. Here's some code to reproduce the problem::

   import numpy as np
   from scipy import sparse as sp
   A = np.random.rand(10, 10)
   S = sp.csr_matrix(A)
   _ = np.dot(A, A)   # this works OK
   _ = np.dot(S, S)   # this segfaults!
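
In the meantime, a workaround sketch: the sparse matrix's own dot method
sidesteps np.dot (and the crashing code path) entirely:

```python
import numpy as np
from scipy import sparse as sp

A = np.random.rand(10, 10)
S = sp.csr_matrix(A)

# Use the sparse matrix's own multiplication instead of np.dot:
P = S.dot(S)                                # sparse-sparse product
assert np.allclose(P.toarray(), A.dot(A))   # agrees with the dense result
```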

thanks,
 Peter
-- 
Peter Prettenhofer


Re: [Numpy-discussion] Moving to gcc 4.* for win32 installers ?

2011-10-27 Thread Peter
On Thu, Oct 27, 2011 at 2:02 PM, David Cournapeau courn...@gmail.com wrote:

 Hi,

 I was wondering if we could finally move to a more recent version of
 compilers for official win32 installers. This would of course concern
 the next release cycle, not the ones where beta/rc are already in
 progress.

 Basically, the pros:
  - we will have to move at some point
  - gcc 4.* seem less buggy, especially C++ and fortran.
  - no need to maintain msvcr90 voodoo
 The cons:
  - it will most likely break the ABI
  - we need to recompile atlas (but I can take care of it)
  - the biggest: it is difficult to combine gfortran with visual
 studio (more exactly you cannot link gfortran runtime to a visual
 studio executable). The only solution I could think of would be to
 recompile the gfortran runtime with Visual Studio, which for some
 reason does not sound very appealing :)

 Thoughts ?

Does this make any difference for producing 64bit Windows
installers?

Peter


Re: [Numpy-discussion] Comparing NumPy/IDL Performance

2011-09-26 Thread Peter
On Mon, Sep 26, 2011 at 3:19 PM, Keith Hughitt keith.hugh...@gmail.com wrote:
 Hi all,
 Myself and several colleagues have recently started work on a Python library
 for solar physics, in order to provide an alternative to the current
 mainstay for solar physics, which is written in IDL.
 One of the first steps we have taken is to create a Python port of a popular
 benchmark for IDL (time_test3) which measures performance for a variety of
 (primarily matrix) operations. In our initial attempt, however, Python
 performs significantly poorer than IDL for several of the tests. I have
 attached a graph which shows the results for one machine: the x-axis is the
 test # being compared, and the y-axis is the time it took to complete the
 test, in milliseconds. While it is possible that this is simply due to
 limitations in Python/Numpy, I suspect that this is due at least in part to
 our lack in familiarity with NumPy and SciPy.

 So my question is, does anyone see any places where we are doing things very
 inefficiently in Python?

Looking at the plot there are five stand-out tests: 1, 2, 3, 6 and 21.

Tests 1, 2 and 3 are testing Python itself (no numpy or scipy),
but are things you should be avoiding when using numpy
anyway (don't use loops, use vectorised calculations etc).

This is test 6,

#Test 6 - Shift 512 by 512 byte and store
nrep = 300 * scale_factor
for i in range(nrep):
    c = np.roll(np.roll(b, 10, axis=0), 10, axis=1) #pylint: disable=W0612
timer.log('Shift 512 by 512 byte and store, %d times.' % nrep)

The precise contents of b are determined by the previous tests
(is that deliberate? It makes testing this snippet in isolation hard).
I'm unsure what you are trying to do, or whether it is the best way.

This is test 21, which is just calling a scipy function repeatedly.
Questions about this might be better directed to the scipy
mailing list - also check what version of SciPy etc you have.

n = 2**(17 * scale_factor)
a = np.arange(n, dtype=np.float32)
...
#Test 21 - Smooth 512 by 512 byte array, 5x5 boxcar
for i in range(nrep):
    b = scipy.ndimage.filters.median_filter(a, size=(5, 5))
timer.log('Smooth 512 by 512 byte array, 5x5 boxcar, %d times' % nrep)

After that, tests 10, 15 and 18 stand out. Test 10 is another use
of roll, so whatever advice you get on test 6 may apply. Test 10:

#Test 10 - Shift 512 x 512 array
nrep = 60 * scale_factor
for i in range(nrep):
    c = np.roll(np.roll(b, 10, axis=0), 10, axis=1)
#for i in range(nrep): c = d.rotate(
timer.log('Shift 512 x 512 array, %d times' % nrep)

Test 15 is a loop based version of 16, where Python wins. Test 18
is a loop based version of 19 (log), where the difference is small.

So in terms of numpy speed, your question just seems to be
about numpy.roll and how else one might achieve this result?
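
For what it's worth, the double roll can be checked against an explicit
slice-and-concatenate version (and, in NumPy releases that accept tuple
shifts, a single np.roll call) - a small sketch:

```python
import numpy as np

b = np.arange(512 * 512).reshape(512, 512)

# Two nested rolls, as in the benchmark.
c1 = np.roll(np.roll(b, 10, axis=0), 10, axis=1)

# Single call with tuple shifts (newer NumPy only).
c2 = np.roll(b, (10, 10), axis=(0, 1))

# The same shift spelled out with slicing and concatenation.
c3 = np.concatenate((b[-10:, :], b[:-10, :]), axis=0)
c3 = np.concatenate((c3[:, -10:], c3[:, :-10]), axis=1)

assert np.array_equal(c1, c2)
assert np.array_equal(c1, c3)
```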

Peter


Re: [Numpy-discussion] PyPy guys deserve some help on micronumpy from you, numpy gurus!

2011-09-16 Thread Peter
On Thu, Sep 15, 2011 at 11:34 AM, Peter
numpy-discuss...@maubp.freeserve.co.uk wrote:
 This may not be the best place to ask, but how should a
 python script (e.g. setup.py) distinguish between real NumPy
 and micronumpy? Or should I instead be looking to distinguish
 PyPy versus another Python implementation?

For anyone interested, over on the pypy-dev list Alex recommended:
import platform; platform.python_implementation() == 'PyPy'
http://mail.python.org/pipermail/pypy-dev/2011-September/008315.html
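
A minimal sketch of the check (note the call parentheses -
python_implementation is a function, so comparing the bare name to a
string would always be False):

```python
import platform

impl = platform.python_implementation()  # e.g. 'CPython' or 'PyPy'

if impl == 'PyPy':
    print("PyPy detected - expect micronumpy, not the full NumPy")
else:
    print("Running under", impl)
```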

Peter


Re: [Numpy-discussion] PyPy guys deserve some help on micronumpy from you, numpy gurus!

2011-09-15 Thread Peter
On Thu, Sep 15, 2011 at 11:23 AM, Valery Khamenya khame...@gmail.com wrote:
 Hi
 (replying please Cc to me)
 There is a module micronumpy that appeared at PyPy source tree:
 https://bitbucket.org/pypy/pypy/src/dfae5033127e/pypy/module/micronumpy/
 The great contributions of Justin Peel and Ilya Osadchiy to the micronumpy
 module are reviving, step by step, the functionality of numpy.
 It would be great if some of the numpy gurus could jump in to assist,
 contribute some code and also, perhaps, offer some guidance where things
 go deep into numpy's internals.
 For those who don't know yet much about PyPy:
 PyPy is a fast implementation of Python 2.7.
 As a rule of thumb, PyPy is currently about 4x faster than CPython.
 Certain benchmarks taken from the real life show 20x speed-up and more.
 The successes of PyPy performance are very remarkable http://speed.pypy.org/
 best regards

This may not be the best place to ask, but how should a
python script (e.g. setup.py) distinguish between real NumPy
and micronumpy? Or should I instead be looking to distinguish
PyPy versus another Python implementation?

Peter


Re: [Numpy-discussion] c-api return two arrays

2011-07-29 Thread Peter
On Fri, Jul 29, 2011 at 9:52 AM, Yoshi Rokuko yo...@rokuko.net wrote:

 hey, i have an algorithm that computes two matrices like that:

 A(i,k) = (x(i,k) + y(i,k))/norm
 B(i,k) = (x(i,k) - y(i,k))/norm

 it would be convenient to have the method like that:

 A, B = mod.meth(C, prob=.95)

 is it possible to return two arrays?

 best regards

Yes, return a tuple of two elements. e.g.

def make_range(center, spread):
    return center-spread, center+spread

low, high = make_range(5,1)
assert low == 4
assert high == 6
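
Applied to the matrices in the question, the same pattern looks like
this (a sketch - x, y and norm stand in for whatever your extension
actually computes; at the C level the usual tool for building the
returned two-element tuple is Py_BuildValue):

```python
import numpy as np

def meth(x, y, norm):
    """Return the pair (A, B) as a two-element tuple."""
    A = (x + y) / norm
    B = (x - y) / norm
    return A, B

x = np.random.rand(4, 4)
y = np.random.rand(4, 4)
A, B = meth(x, y, 2.0)

# Sanity check: (x+y)/2 + (x-y)/2 == x
assert np.allclose(A + B, x)
```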


Peter


[Numpy-discussion] Scipy 2011 Convore thread now open

2011-07-12 Thread Peter Wang
Hi folks,

I have gone ahead and created a Convore group for the SciPy 2011 conference:

https://convore.com/scipy-2011/

I have already created threads for each of the tutorial topics, and
once the conference is underway, we'll create threads for each talk,
so that the audience can interact and post questions.  Everyone is welcome
to create topics of their own, in addition to the official
conference topics.

For those who are unfamiliar with Convore, it is a cross between a
mailing list and a very souped-up IRC.  It's usable for asynchronous
discussion, but great for realtime, topical chats.  Those of you who
were at PyCon this year probably saw what a wonderful tool Convore
proved to be for a tech conference.  People used it for everything
from BoF planning to dinner coordination to good-natured heckling of
lightning talk speakers.  I'm hoping that it will be used to similarly
good effect for the SciPy 2011 conference.


Cheers,
Peter


Re: [Numpy-discussion] Current status of 64 bit windows support.

2011-07-06 Thread Peter
On Fri, Jul 1, 2011 at 7:23 PM, Sturla Molden stu...@molden.no wrote:

 On 01.07.2011 19:22, Charles R Harris wrote:
 Just curious as to what folks know about the current status of the
 free windows 64 bit compilers. I know things were dicey with gcc and
 gfortran some two years ago, but... well, two years have passed. This

 Windows 7 SDK is free (as in beer). It is the C compiler used to build
 Python on Windows 64. Here is the download:

 http://www.microsoft.com/download/en/details.aspx?displaylang=enid=3138

 A newer version of the Windows SDK will use a C compiler that links with
 a different CRT than Python uses. Use version 3.5. When using this
 compiler, remember to set the environment variable DISTUTILS_USE_SDK.

 This should be sufficient to build NumPy. AFAIK only SciPy requires a
 Fortran compiler.

 MinGW is still not stable on Windows 64. There are supposedly
 compatibility issues between the MinGW runtime used by libgfortran and
 Python's CRT.  While there are experimental MinGW builds for Windows 64
 (e.g. TDM-GCC), we will probably need to build libgfortran against
 another C runtime for SciPy. A commercial Fortran compiler compatible
 with MSVC is recommended for SciPy, e.g. Intel, Absoft or Portland.


 Sturla

So it sounds like we're getting closer to having official NumPy 1.6.x
binaries for 64 bit Windows (using the Windows 7 SDK), but not quite
there yet? What is the roadblock? I would guess from the comments
on Christoph Gohlke's page the issue is having something that will
work with SciPy... see http://www.lfd.uci.edu/~gohlke/pythonlibs/

I'm interested from the point of view of third party libraries using
NumPy, where we have had users asking for 64bit installers. We
need an official NumPy installer to build against.

Regards,

Peter


Re: [Numpy-discussion] using the same vocabulary for missing value ideas

2011-07-06 Thread Peter
On Wed, Jul 6, 2011 at 4:40 PM, Mark Wiebe mwwi...@gmail.com wrote:
 It appears to me that one of the biggest reasons some of us have been talking
 past each other in the discussions is that different people have different
 definitions for the terms being used. Until this is thoroughly cleared up, I
 feel the design process is tilting at windmills.
 In the interests of clarity in our discussions, here is a starting point
 which is consistent with the NEP. These definitions have been added in a
 glossary within the NEP. If there are any ideas for amendments to these
 definitions that we can agree on, I will update the NEP with those
 amendments. Also, if I missed any important terms which need to be added,
 please propose definitions for them.

That sounds good - I've only been scanning these discussions and it
is confusing.

 NA (Not Available)
     A placeholder for a value which is unknown to computations. That
     value may be temporarily hidden with a mask, may have been lost
     due to hard drive corruption, or gone for any number of reasons.
     This is the same as NA in the R project.

Could you expand that to say how sums and products act with NA
(since you do so for the IGNORE case).
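
To make the question concrete: released NumPy already offers both
behaviours for NaN, and the open question is which one NA should get
(a sketch using NaN as a stand-in for NA):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

# Propagating semantics: one missing value poisons the whole reduction.
assert np.isnan(a.sum())

# Skipping semantics: the missing value is simply left out.
assert np.nansum(a) == 4.0
```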

Thanks,

Peter

