Re: [Numpy-discussion] padding options for diff

2016-10-26 Thread Peter Creasey
> Date: Wed, 26 Oct 2016 16:18:05 -0400
> From: Matthew Harrigan 
>
> Would it be preferable to have to_begin='first' as an option under the
> existing kwarg to avoid overlapping?
>
>> if keep_left:
>>     if to_begin is None:
>>         to_begin = np.take(a, [0], axis=axis)
>>     else:
>>         raise ValueError('np.diff can take either keep_left or '
>>                          'to_begin, but not both.')
>>
>> Generally I try to avoid optional keyword argument overlap, but in
>> this case it is probably justified.
>>

It works for me. I can't *think* of a case where you could have
np.diff on a string array and 'first' could be confused with an
element, since you're not allowed to diff strings in present-day
numpy anyway (unless wiser heads than me know something!). Feel free
to move the conversation to github, btw.
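
For concreteness, a minimal sketch of how the suggested spelling could
be used (to_begin='first' is only a proposal here, not an existing
numpy option):

    d = np.diff(a, axis=axis, to_begin='first')  # hypothetical sentinel value
    np.cumsum(d, axis=axis)                      # would recover a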

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] padding options for diff

2016-10-26 Thread Peter Creasey
> Date: Wed, 26 Oct 2016 09:05:41 -0400
> From: Matthew Harrigan 
>
> np.cumsum(np.diff(x, to_begin=x.take([0], axis=axis), axis=axis), axis=axis)
>
> That's certainly not going to win any beauty contests.  The 1d case is
> clean though:
>
> np.cumsum(np.diff(x, to_begin=x[0]))
>
> I'm not sure if this means the API should change, and if so how.  Higher
> dimensional arrays seem to just have extra complexity.
>
>>
>> I like the proposal, though I suspect that making it general has
>> obscured that the most common use-case for padding is to make the
>> inverse of np.cumsum (at least that's what I frequently need), and now
>> in the multidimensional case you have the somewhat unwieldy:
>>
>> >>> np.diff(a, axis=axis, to_begin=np.take(a, 0, axis=axis))
>>
>> rather than
>>
>> >>> np.diff(a, axis=axis, keep_left=True)
>>
>> which of course could just be an option upon what you already have.
>>

So my suggestion was that you might want an additional keyword
argument (keep_left=False) to make the inverse-of-np.cumsum use-case
easier, i.e. you would have something in your np.diff like:

if keep_left:
    if to_begin is None:
        to_begin = np.take(a, [0], axis=axis)
    else:
        raise ValueError('np.diff can take either keep_left or '
                         'to_begin, but not both.')

Generally I try to avoid optional keyword argument overlap, but in
this case it is probably justified.
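
As a usage sketch (again assuming the proposed keep_left kwarg, which
does not exist in released numpy):

    a = np.array([[1, 3, 6], [2, 4, 9]])
    d = np.diff(a, axis=1, keep_left=True)   # first column kept, then diffs
    np.cumsum(d, axis=1)                     # recovers a
    np.diff(a, keep_left=True, to_begin=0)   # would raise the ValueError above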

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] padding options for diff

2016-10-25 Thread Peter Creasey
> Date: Mon, 24 Oct 2016 08:44:46 -0400
> From: Matthew Harrigan 
>
> I posted a pull request  which
> adds optional padding kwargs "to_begin" and "to_end" to diff.  Those
> options are based on what's available in ediff1d.  It closes this issue
> 

I like the proposal, though I suspect that making it general has
obscured that the most common use-case for padding is to make the
inverse of np.cumsum (at least that’s what I frequently need), and now
in the multidimensional case you have the somewhat unwieldy:

>>> np.diff(a, axis=axis, to_begin=np.take(a, 0, axis=axis))

rather than

>>> np.diff(a, axis=axis, keep_left=True)

which of course could just be an option upon what you already have.

Best,
Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-11 Thread Peter Creasey
> On Sun, Oct 9, 2016 at 12:59 PM, Stephan Hoyer  wrote:
>
>>
>> I agree with Sebastian and Nathaniel. I don't think we can deviate from
>> the existing behavior (int ** int -> int) without breaking lots of existing
>> code, and if we did, yes, we would need a new integer power function.
>>
>> I think it's better to preserve the existing behavior when it gives
>> sensible results, and error when it doesn't. Adding another function
>> float_power for the case that is currently broken seems like the right way
>> to go.
>>
>

I actually suspect that the amount of code broken by int**int->float
may be relatively small (though extremely annoying for those that it
happens to, and it would definitely be good to have statistics). I
mean, Numpy silently transitioned to int32+uint64->float64 not so long
ago which broke my code, but the world didn’t end.

If the primary argument against int**int->float is the difficulty of
managing the transition (int**int->Error being seen as a required yet
*very* painful intermediate step for the large fraction of int**int
users who don't care whether the result is int or float, e.g. because
the output gets cast to float in the next step anyway, while it fails
loudly for those users who do need int**int->int), then if you are
prepared to risk a less conservative transition (i.e. we think that
latter group is small enough) you could skip the error stage and just
emit a warning for a couple of releases, along the lines of:

WARNING int**int -> int is going to be deprecated in favour of
int**int->float in Numpy 1.16. To avoid seeing this message, either
use “from numpy import __future_float_power__” or explicitly set the
type of one of your inputs to float, or use the new ipower(x,y)
function for integer powers.
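
A rough sketch of how that transition warning could be emitted
(illustrative only: int_power_with_warning is a made-up name and this
is not how numpy's ufunc machinery is actually structured):

    import warnings
    import numpy as np

    def int_power_with_warning(x, y):
        # Warn when both operands are integers, then return the proposed
        # future (float) result.
        x, y = np.asarray(x), np.asarray(y)
        if (np.issubdtype(x.dtype, np.integer)
                and np.issubdtype(y.dtype, np.integer)):
            warnings.warn("int ** int will return float in a future "
                          "release; cast one operand to float to make "
                          "that explicit.", FutureWarning)
        return np.power(x.astype(float), y)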

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

2016-09-02 Thread Peter Creasey
> Date: Wed, 31 Aug 2016 13:28:21 +0200
> From: Michael Bieri 
>
> I'm not quite sure which approach is state-of-the-art as of 2016. How would
> you do it if you had to make a C/C++ library available in Python right now?
>
> In my case, I have a C library with some scientific functions on matrices
> and vectors. You will typically call a few functions to configure the
> computation, then hand over some pointers to existing buffers containing
> vector data, then start the computation, and finally read back the data.
> The library also can use MPI to parallelize.
>

Depending on how minimal and universal you want to keep things, I use
the ctypes approach quite often, i.e. treat your numpy inputs and
outputs as arrays of doubles etc. using the ndpointer(...) syntax. I
find it works well if you have a small number of well-defined
functions (not too many options) which are numerically very heavy.
With this approach I usually wrap each method in python to check the
inputs for contiguity, pass in the sizes etc. and allocate the numpy
array for the result.
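
A minimal sketch of that pattern (the shared library name and the
smooth() routine are hypothetical; ndpointer and ctypes.CDLL are real):

    import ctypes
    import numpy as np
    from numpy.ctypeslib import ndpointer

    lib = ctypes.CDLL("./libmysim.so")  # hypothetical C library
    lib.smooth.restype = None
    lib.smooth.argtypes = [ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),
                           ndpointer(ctypes.c_double, flags="C_CONTIGUOUS"),
                           ctypes.c_size_t]

    def smooth(x):
        # Python wrapper: enforce dtype/contiguity, allocate the output,
        # and pass the size explicitly to the C routine.
        x = np.ascontiguousarray(x, dtype=np.float64)
        out = np.empty_like(x)
        lib.smooth(x, out, x.size)
        return out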

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Peter Creasey
>
> +1
>
> On Sat, Jun 4, 2016 at 10:22 AM, Charles R Harris
>  wrote:
>> Hi All,
>>
>> I've made a new post so that we can make an explicit decision. AFAICT, the
>> two proposals are
>>
>> Integers to negative integer powers raise an error.
>> Integers to integer powers always results in floats.
>>
>> My own sense is that 1. would be closest to current behavior and using a
>> float exponential when a float is wanted is an explicit way to indicate that
>> desire. OTOH, 2. would be the most convenient default for everyday numerical
>> computation, but I think would more likely break current code. I am going to
>> come down on the side of 1., which I don't think should cause too many
>> problems if we start with a {Future, Deprecation}Warning explaining the
>> workaround.
>>
>> Chuck
>>


+1 (grudgingly)

My thoughts on this are:
(i) Intuitive APIs are better, and power(a,b) suggests to a lot of
(most?) readers that you are going to invoke a function like the C
pow(double x, double y) on every element. Doing positive integer
powers with the same function name suggests a correspondence that is
in practice not that helpful. With a time machine I’d suggest a
separate function for positive integer powers, however...
(ii) I think that ship has sailed, and particularly with e.g. a**3 the
numpy conventions are backed up by quite a bit of code, probably too
much to change without a lot of problems. So I’d go with integer **
negative integer raising an error.

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Reflect array?

2016-03-29 Thread Peter Creasey
> >> On Tue, Mar 29, 2016 at 1:46 PM, Benjamin Root 
> >> wrote:
> >> > Is there a quick-n-easy way to reflect a NxM array that represents a
> >> > quadrant into a 2Nx2M array? Essentially, I am trying to reduce the size
> >> > of
> >> > an expensive calculation by taking advantage of the fact that the first
> >> > part
> >> > of the calculation is just computing gaussian weights, which is radially
> >> > symmetric.
> >> >
> >> > It doesn't seem like np.tile() could support this (yet?). Maybe we could
> >> > allow negative repetitions to mean "reflected"? But I was hoping there
> >> > was
> >> > some existing function or stride trick that could accomplish what I am
> >> > trying.
> >> >
> >> > x = np.linspace(-5, 5, 20)
> >> > y = np.linspace(-5, 5, 24)
> >> > z = np.hypot(x[None, :], y[:, None])
> >> > zz = np.hypot(x[None, :int(len(x)//2)], y[:int(len(y)//2), None])
> >> > zz = some_mirroring_trick(zz)
> >>
>
> You can avoid the allocation with preallocation:
>
> nx = len(x) // 2
> ny = len(y) // 2
> zz = np.zeros((len(y), len(x)))
> zz[:ny,-nx:] = np.hypot.outer(y[:ny], x[:nx])
> zz[:ny, :nx] = zz[:ny,:-nx-1:-1]
> zz[-ny:, :] = zz[ny::-1, :]
>
> if nx * 2 != len(x):
> zz[:ny, nx] = y[::-1]
> zz[-ny:, nx] = y
> if ny * 2 != len(y):
> zz[ny, :nx] = x[::-1]
> zz[ny, -nx:] = x
>
> All of the steps after the call to `hypot.outer` create views. This is
> untested, so you may need to tweak the indices a little.
>

A couple of months ago I wrote some C code with ctypes to do this sort
of mirroring trick on an (N,N,N) numpy array of fft weights (where you
can exploit the 48-fold symmetry of interchangeable axes), which was
pretty useful since I had N^3 >> 1e9 and the weight function was quite
expensive. Obviously the (N,M) case doesn't allow quite so much
optimization, but if it would be of interest then PM me.
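
For the 2-d case in this thread, a minimal numpy-only sketch of the
quadrant reflection (assuming the mirror planes fall between elements,
i.e. no shared centre row/column, unlike the code quoted above):

    import numpy as np

    def mirror_quadrant(q):
        # Reflect an (N, M) quadrant into a (2N, 2M) array with two
        # concatenations of reversed views.
        top = np.concatenate([q[:, ::-1], q], axis=1)
        return np.concatenate([top[::-1, :], top], axis=0)

    zz = mirror_quadrant(np.hypot.outer(np.linspace(5, 0.2, 12),
                                        np.linspace(5, 0.2, 10)))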

Best,
Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] proposal: new logspace without the log in the argument

2016-02-18 Thread Peter Creasey
>
> Some questions it'd be good to get feedback on:
>
> - any better ideas for naming it than "geomspace"? It's really too bad
> that the 'logspace' name is already taken.
>
> - I guess the alternative interface might be something like
>
> np.linspace(start, stop, steps, spacing="log")
>
> what do people think?
>
> -n
>
You’ve got to wonder how many people actually use logspace(start,
stop, num) in preference to 10.0**linspace(start, stop, num) - i.e. I
prefer the latter for clarity, and if I wanted performance I’d be
prepared to write something more ugly.
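
To spell out the comparison:

    np.logspace(0, 3, 4)          # 1, 10, 100, 1000
    10.0 ** np.linspace(0, 3, 4)  # same values; intent arguably clearer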

I don’t mind geomspace(), but if you are brainstorming
>>> linlogspace(start, end) # i.e. ‘linear in log-space’
is ok for me too.

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Behavior of np.random.uniform

2016-01-20 Thread Peter Creasey
>> I would also point out that requiring open vs closed intervals (in
>> doubles) is already an extremely specialised use case. In terms of
>> *sampling the reals*, there is no difference between the intervals
>> (a,b) and [a,b], because the endpoints have measure 0, and even with
>> double-precision arithmetic, you are going to have to make several
>> petabytes of random data before you hit an endpoint...
>>
> Petabytes ain't what they used to be ;) I remember testing some hardware
> which, due to grounding/timing issues would occasionally goof up a readable
> register. The hardware designers never saw it because they didn't test for
> hours and days at high data rates. But it was there, and it would show up
> in the data. Measure zero is about as real as real numbers...
>
> Chuck

Actually, your point is well taken and I am quite mistaken. If you
pick some values like uniform(low, low * (1+2**-52)) then you can hit
your endpoints pretty easily. I am out of practice making
pathological tests for double precision arithmetic.
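
E.g. a quick sketch of that pathological case (taking low = 1.0 so
that high is the very next representable double):

    import numpy as np

    low = 1.0
    high = low * (1 + 2**-52)       # next double above low
    draws = np.random.uniform(low, high, size=1000)
    (draws == low).any(), (draws == high).any()   # both endpoints turn up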

I guess my suggestion would be to add the deprecation warning and
change the docstring to warn that the interval is not guaranteed to be
right-open.

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Behavior of np.random.uniform

2016-01-20 Thread Peter Creasey
+1 for the deprecation warning for low > high. I think the cases where
that is called are more likely to be unintentional than someone
deliberately using uniform(closed_end, open_end), and it might help
users find bugs - i.e. the idioms of ‘explicit is better than
implicit’ and ‘fail early and fail loudly’ apply.

I would also point out that requiring open vs closed intervals (in
doubles) is already an extremely specialised use case. In terms of
*sampling the reals*, there is no difference between the intervals
(a,b) and [a,b], because the endpoints have measure 0, and even with
double-precision arithmetic, you are going to have to make several
petabytes of random data before you hit an endpoint...

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to find indices of values in an array (indirect in1d) ?

2015-12-30 Thread Peter Creasey
>
> In the end, I've only got the list comprehension to work as expected
>
> A = [0,0,1,3]
> B = np.arange(8)
> np.random.shuffle(B)
> I = [list(B).index(item) for item in A if item in B]
>
>
> But Mark's and Sebastian's methods do not seem to work...
>


The function you want is also in the open source astronomy package
iccpy ( https://github.com/Lowingbn/iccpy ), which essentially does a
variant of Sebastian’s code (which I also couldn’t quite get working),
and handles a few things like old numpy versions (pre 1.4) and allows
you to specify if B is already sorted.

>>> from iccpy.utils import match
>>> print match(A,B)
[ 1  2  0 -1]
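
For reference, the underlying idea is roughly the following pure-numpy
sketch (the iccpy version handles more corner cases, e.g. old numpy
releases and pre-sorted B):

    import numpy as np

    def match(A, B):
        # For each element of A, return its index in B, or -1 if absent.
        A, B = np.asarray(A), np.asarray(B)
        order = np.argsort(B)
        pos = np.clip(np.searchsorted(B[order], A), 0, len(B) - 1)
        idx = order[pos]
        return np.where(B[idx] == A, idx, -1)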

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

2015-12-22 Thread Peter Creasey
>>> The tests pass on my machine, but I see that the TravisCI builds are
>>> giving assertion fails (on my own test) with python 3.3 and 3.5 of the
>>> form:
>>> > assert_almost_equal
>>> > TypeError: Cannot cast array data from dtype('complex128') to
>>> dtype('float64') according to the rule 'safe'
>>>
>
> The problem then is probably here
> 
> .
>
> You may want to throw in a PyErr_Clear()
>  when the
> conversion of the fp array to NPY_DOUBLE fails before trying with
> NPY_CDOUBLE, and check if it goes away.
>

Thanks for your tip Jaime, you were exactly right. Unfortunately I
only saw your message after and addressed the problem in a different
way to your suggestion (passing in a flag instead). It'd be great to
have your input on the PR though (maybe github or pm me, to avoid
flooding the mailing list).

Best,
Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

2015-12-22 Thread Peter Creasey
>> > assert_almost_equal
>> > TypeError: Cannot cast array data from dtype('complex128') to
>> dtype('float64') according to the rule 'safe'
>>
>>
>
> Hi Peter, that error is unrelated to assert_almost_equal. What happens is
> that when you pass in a complex argument `fp` to your modified
> `compiled_interp`, you're somewhere doing a cast that's not safe and
> trigger the error at
> https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/ctors.c#L1930.


Thanks a lot Ralf! The build log I was looking at (
https://travis-ci.org/numpy/numpy/jobs/98198323 ) really confused me
by not mentioning the function call that wrote the error, but now I
think I understand and can recreate the failure in my setup.

Best,
Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

2015-12-21 Thread Peter Creasey
Hi all,
I submitted a PR (#6872) for using complex numbers in np.lib.interp.

The tests pass on my machine, but I see that the TravisCI builds are
giving assertion fails (on my own test) with python 3.3 and 3.5 of the
form:
> assert_almost_equal
> TypeError: Cannot cast array data from dtype('complex128') to 
> dtype('float64') according to the rule 'safe'

When I was writing the test I used np.testing.assert_almost_equal with
complex128 as it works in my python 2.7, however having checked the
docstring I cannot tell what the expected behaviour should be (complex
or no complex allowed). Should my test be changed or the
assert_almost_equal?

Best,
Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FeatureRequest: support for array

2015-12-12 Thread Peter Creasey
> >
> > from itertools import chain
> > def fromiter_awesome_edition(iterable):
> > elem = next(iterable)
> > dtype = whatever_numpy_does_to_infer_dtypes_from_lists(elem)
> > return np.fromiter(chain([elem], iterable), dtype=dtype)
> >
> > I think this would be a huge win for usability. Always getting tripped up by
> > the dtype requirement. I can submit a PR if people like this pattern.
>
> This isn't the semantics of np.array, though -- np.array will look at
> the whole input and try to find a common dtype, so this can't be the
> implementation for np.array(iter). E.g. try np.array([1, 1.0])
>
> I can see an argument for making the dtype= argument to fromiter
> optional, with a warning in the docs that it will guess based on the
> first element and that you should specify it if you don't want that.
> It seems potentially a bit error prone (in the sense that it might
> make it easier to end up with code that works great when you test it
> but then breaks later when something unexpected happens), but maybe
> the usability outweighs that. I don't use fromiter myself so I don't
> have a strong opinion.

I’m -1 on this, from an occasional user of np.fromiter, also for the
np.fromiter([1, 1.5, 2]) ambiguity reason. Pure python does a great
job of preventing users from hurting themselves with limited precision
arithmetic, however if their application makes them care enough about
speed (to be using numpy) and memory (to be using np.fromiter), then
it can almost always be assumed that the resulting dtype was important
enough to be specified.
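
A small illustration of the ambiguity being discussed:

    import numpy as np

    np.array([1, 1.5, 2]).dtype                 # float64: np.array inspects everything
    np.fromiter([1, 1.5, 2], dtype=np.float64)  # explicit dtype, no guessing
    # Inferring the dtype from the first element alone would give an int
    # array and silently truncate 1.5 -> 1.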

P
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Where is Jaime?

2015-12-07 Thread Peter Creasey
>> >
>> > Is the interp fix in the google pipeline or do we need a workaround?
>> >
>>
>> Oooh, if someone is looking at changing interp, is there any chance
>> that fp could be extended to take complex128 rather than just float
>> values? I.e. so that I could write:
>>
>> >>> y = interp(mu, theta, m)
>> rather than
>> >>> y = interp(mu, theta, m.real) + 1.0j*interp(mu, theta, m.imag)
>>
>> which *sounds* like it might be simple and more (Num)pythonic.
>
> That sounds like an excellent improvement and you should submit a PR
> implementing it :-).
>
> "The interp fix" in question though is a regression in 1.10 that's blocking
> 1.10.2, and needs a quick minimal fix asap.
>


Good answer - as soon as I hit 'send' I wondered how many bugs get
introduced by people trying to attach feature requests to bug fixes. I
will take a look at the code later and pm you if I get anywhere...

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Where is Jaime?

2015-12-06 Thread Peter Creasey
>
> Is the interp fix in the google pipeline or do we need a workaround?
>

Oooh, if someone is looking at changing interp, is there any chance
that fp could be extended to take complex128 rather than just float
values? I.e. so that I could write:

>>> y = interp(mu, theta, m)
rather than
>>> y = interp(mu, theta, m.real) + 1.0j*interp(mu, theta, m.imag)

which *sounds* like it might be simple and more (Num)pythonic.

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Misleading/erroneous TypeError message

2015-11-24 Thread Peter Creasey
> > I just upgraded my numpy and started to receive a TypeError from one of
> > my codes that relied on the old, less strict, casting behaviour. The error
> > message, however, left me scratching my head when trying to debug something
> > like this:
> >
> > >>> a = array([0],dtype=uint64)
> > >>> a += array([1],dtype=int64)
> > TypeError: Cannot cast ufunc add output from dtype('float64') to
> > dtype('uint64') with casting rule 'same_kind'
> >
> > Where does the 'float64' come from?!?!
> >
>
> The combination of uint64 and int64 leads to promotion to float64 as the
> best option for the combination of signed and unsigned. To fix things, you
> can either use `np.add` with an output argument and `casting='unsafe'` or
> just be careful about using unsigned types.

Thanks for the quick response. I understand there are reasons for the
promotion to float64 (although my expectation would usually be that
Numpy is going to follow C conventions), however I found the error
a little unhelpful. In particular Numpy is complaining about a dtype
(float64) that it silently promoted to, rather than the dtype that the
user provided, which generally seems like a bad idea. Could Numpy
somehow complain about the original dtypes in this case? Or at least
give a warning about the first promotion (e.g. loss of precision)?
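
For what it's worth, the workaround quoted above would look something
like this sketch:

    import numpy as np

    a = np.array([0], dtype=np.uint64)
    b = np.array([1], dtype=np.int64)
    # Keep the output dtype explicit and opt in to the cast, rather than
    # relying on in-place += promotion.
    np.add(a, b, out=a, casting='unsafe')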

Peter
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Misleading/erroneous TypeError message

2015-11-24 Thread Peter Creasey
Hi,

I just upgraded my numpy and started to receive a TypeError from one of my
codes that relied on the old, less strict, casting behaviour. The error
message, however, left me scratching my head when trying to debug something
like this:

>>> a = array([0],dtype=uint64)
>>> a += array([1],dtype=int64)
TypeError: Cannot cast ufunc add output from dtype('float64') to
dtype('uint64') with casting rule 'same_kind'

Where does the 'float64' come from?!?!

Peter

PS Thanks for all the great work guys, numpy is a fantastic tool and has
been a lot of help to me over the years!
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Installation info

2008-05-30 Thread Peter Creasey
2008/5/30 Peter Creasey <[EMAIL PROTECTED]>:
> Is numpy v1.1 going to come out in egg format?
>

Oops, I didn't mean to mail with an entire numpy digest in the body.

sorry,
Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Installation info

2008-05-30 Thread Peter Creasey
Is numpy v1.1 going to come out in egg format?

I ask because I only see the superpack installers on the sourceforge
page, and we have users who we are delivering to via egg requirements.

thanks,
Peter


Re: [Numpy-discussion] Multiple Boolean Operations

2008-05-23 Thread Peter Creasey
Hi Andrea,

2008/5/23  "Andrea Gavana" <[EMAIL PROTECTED]>:
> And so on. The probelm with this approach is that I lose the original
> indices for which I want all the inequality tests to succeed:

To have the original indices you just need to re-index your indices, as it were

idx = flatnonzero(xCent >= xMin)
idx = idx[flatnonzero(xCent[idx] <= xMax)]
idx = idx[flatnonzero(yCent[idx] >= yMin)]
idx = idx[flatnonzero(yCent[idx] <= yMax)]
...
(I haven't tested this code, apologies for bugs)

However, there is a performance penalty for doing all this re-indexing
(I once fell afoul of this), and if these conditions "mostly" evaluate
to True you can often be better off with one of the solutions already
suggested.
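
For comparison, the solutions already suggested amount to a single
combined boolean mask over the same arrays, e.g.

    mask = ((xCent >= xMin) & (xCent <= xMax) &
            (yCent >= yMin) & (yCent <= yMax))
    idx = flatnonzero(mask)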

Regards,
Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Generalised inner product

2008-05-19 Thread Peter Creasey
> > c[i,j,k,l,m] = sum (over x) of a[i,j,x] * b[k,x,l,m]
> >
>
> Try tensordot.
>
> Chuck


That was exactly what I needed. Thanks!
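
For anyone finding this in the archive, the call in question is:

    import numpy as np

    a = np.zeros((5, 6, 7))
    b = np.zeros((8, 7, 9, 10))
    # c[i,j,k,l,m] = sum over x of a[i,j,x] * b[k,x,l,m]
    c = np.tensordot(a, b, axes=([2], [1]))
    c.shape   # (5, 6, 8, 9, 10)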

Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Generalised inner product

2008-05-19 Thread Peter Creasey
Hi,

Does numpy have some sort of generalised inner product? For example I have
arrays

a.shape = (5,6,7)
b.shape = (8,7,9,10)

and I want to perform a product over the 3rd axis of a and the 2nd of b,
i.e.

c[i,j,k,l,m] = sum (over x) of a[i,j,x] * b[k,x,l,m]

I guess I could do it with swapaxes and numpy.dot or numpy.inner but I
wondered if there was a general function.

Thanks,
Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fast histogram

2008-04-17 Thread Peter Creasey
On 17/04/2008, Zachary Pincus wrote:
>  But even if indices = array, one still needs to do something like:
>  for index in indices: histogram[index] += 1
>
>  Which is slow in python and fast in C.
>

I haven't tried this, but if you want the sum in C you could do

for x in unique(indices):
histogram[x] = (indices==x).sum()

Of course, this just replaces an O(N log N) algorithm by an O(N * M)
(M is the number of bins), which is only going to be worth it for very
small M.

Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Win32 installer: please test it

2008-04-15 Thread Peter Creasey
Yes, I am one of those users with crashes in numpy 1.0.4. Your build
passes the tests for me. For reference my machine is Windows XP, Intel
Xeon 5140.

-
Numpy is installed in C:\Python25\lib\site-packages\numpy
Numpy version 1.0.5.dev5008
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Int
el)]
  Found 10/10 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 266/266 tests for numpy.core.multiarray
  Found 69/69 tests for numpy.core.numeric
  Found 31/31 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 20/20 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 48/48 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 4/4 tests for numpy.lib.index_tricks
  Found 7/7 tests for numpy.lib.io
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 59/59 tests for numpy.linalg
  Found 92/92 tests for numpy.ma.core
  Found 14/14 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 0/0 tests for numpy.testing.utils
  Found 0/0 tests for __main__
...
--
Ran 887 tests in 0.890s

OK


On 14/04/2008, Bill Baxter wrote:
>  Seems to work here now, too!
>
>  It doesn't tell you in an easy-to-see place what version of SSE it
>  decides to use.  Do you think that's ok?  (You can tell by looking at
>  the "details" at the end of installation, though.)
>
>  Is there some way to tell this info from inside NumPy itself?  If so
>  then maybe it doesn't matter.
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-10 Thread Peter Creasey
>  > Right now it looks like there is a mix of attitudes, about the
>  > financial
>  > functions.   They are a small enough addition, that I don't think it
>  > matters terribly much what we do with them.  So, it seems to me that
>  > keeping them in numpy.lib and following the rule for that namespace
>  > for
>  > 1.0.5 will be viewed as anywhere from tolerable to a good idea
>  > depending
>  > on your point of view.
>
>  Just to be sure, you are talking about functions to compute interest
>  rates, cumulative interests, asset depreciation, annuity periodic
>  payments, security yields, and stuff like this?
>
>  Joris

Actually, I was wondering about this; I wasn't sure if you might mean
option pricing, stochastic calculus and Black-Scholes analytic
formulae.

I use these things fairly heavily, calling NumPy and SciPy functions.

My instinct is that these are probably more appropriate for SciPy
since they are quite niche (in comparison to something like Fourier
transforms).

Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Correlate with small arrays

2008-03-20 Thread Peter Creasey
>  >  I'm trying to do a PDE style calculation with numpy arrays
>  >
>  >  y = a * x[:-2] + b * x[1:-1] + c * x[2:]
>  >
>  >  with a,b,c constants. I realise I could use correlate for this, i.e
>  >
>  >  y = numpy.correlate(x, array((a, b, c)))
>
>  The relative performance seems to vary depending on the size, but it
>  seems to me that correlate usually beats the manual implementation,
>  particularly if you don't measure the array() part, too. len(x)=1000
>  is the only size where the manual version seems to beat correlate on
>  my machine.

Thanks for the quick response! Unfortunately 1000 < len(x) < 2 are
just the cases I'm using (they seem to be 1-3 times slower on my
machine).

I'm just thinking that this is exactly the kind of problem that could
be done much faster in C, i.e. in the manual implementation the
processor goes through an array of length len(x) maybe 5 times (3
multiplications and 2 additions), yet in C I could put those constants
in registers and go through the array just once. Maybe this is flawed
logic, but if not I'm hoping someone has already done this?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Correlate with small arrays

2008-03-19 Thread Peter Creasey
Hi,

I'm trying to do a PDE style calculation with numpy arrays

y = a * x[:-2] + b * x[1:-1] + c * x[2:]

with a,b,c constants. I realise I could use correlate for this, i.e

y = numpy.correlate(x, array((a, b, c)))

however the performance doesn't seem as good (I suspect correlate is
optimised for both arguments being long arrays). Is the first thing I
wrote probably the best? Or is there a better numpy function for this
case?

Regards,
Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-13 Thread Peter Creasey
> 
> Yes, with the MSI I can always reproduce the problem with
> numpy.test(). It always hangs. With the egg it does not hang. Pointer
> problems are usually random, but not random if we are using the same
> binaries in EGG and MSI and variables are always initialized to
> certain value.
> 

I can consistently reproduce this problem with the EGG. 

It disappears, however, when I replace the lapack_lite.pyd with the version from
the 1.0.3.1 EGG (python 2.4). 

cheers,

Peter




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-12 Thread Peter Creasey
Hi all,

The following code calling numpy v1.0.4 fails to terminate on my machine,
which was not the case with v1.0.3.1.

from numpy import arange, float64
from numpy.linalg import eig
a = arange(13*13, dtype = float64)
a.shape = (13,13)
a = a%17
eig(a)


Regards,
Peter
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion