[Numpy-discussion] docs.scipy.org down

2017-03-13 Thread Ryan May
Is https://docs.scipy.org/ being down a known issue?

Ryan

-- 
Ryan May
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Arrays and format()

2017-02-28 Thread Ryan May
Gustav,

I had seen this discussion, but completely blanked when I posted my problem.

I looked over the proposal and nothing jumped out at me on a quick
read-through; it seemed straightforward and would meet my needs.

I'll try to carve out some time to think a bit more about it and let you
know if anything jumps out.

Ryan


On Tue, Feb 28, 2017 at 12:59 PM, Gustav Larsson <lars...@cs.uchicago.edu>
wrote:

> I am hoping to submit a PR for a __format__ numpy enhancement proposal
> this weekend. It will be a slightly revised version of my original draft
> posted here two weeks ago. Ryan, if you have any thoughts on the writeup
> <https://gist.github.com/gustavla/2783543be1204d2b5d368f6a1fb4d069> so
> far, I'd love to hear them.
>
>
> On Tue, Feb 28, 2017 at 9:38 AM, Nathan Goldbaum <nathan12...@gmail.com>
> wrote:
>
>> See this issue:
>>
>> https://github.com/numpy/numpy/issues/5543
>>
>> There was also a very thorough discussion of this recently on this
>> mailing list:
>>
>> http://numpy-discussion.10968.n7.nabble.com/Proposal-to-support-format-td43931.html
>>
>> On Tue, Feb 28, 2017 at 11:32 AM Ryan May <rma...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Can someone take a look at: https://github.com/numpy/numpy/issues/7978
>>>
>>> The crux of the issue is that this:
>>>
>>> # This works
>>> a = "%0.3g" % np.array(2)
>>> a
>>> '2'
>>>
>>> # This does not
>>> a = "{0:0.3g}".format(np.array(2))
>>> TypeError: non-empty format string passed to object.__format__
>>>
>>> I've now hit this in my code. If someone can even point me in the
>>> general direction of the code to dig into for this (please let it be
>>> python, please let it be python...), I'll dig in more.
>>>
>>> Ryan
>>>
>>> --
>>> Ryan May
>>>


-- 
Ryan May


[Numpy-discussion] Arrays and format()

2017-02-28 Thread Ryan May
Hi,

Can someone take a look at: https://github.com/numpy/numpy/issues/7978

The crux of the issue is that this:

# This works
a = "%0.3g" % np.array(2)
a
'2'

# This does not
a = "{0:0.3g}".format(np.array(2))
TypeError: non-empty format string passed to object.__format__

I've now hit this in my code. If someone can even point me in the general
direction of the code to dig into for this (please let it be python, please
let it be python...), I'll dig in more.
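Until ndarray grows real `__format__` support, one workaround (a sketch, not the eventual fix) is to unwrap the 0-d array to a Python scalar before formatting:

```python
import numpy as np

a = np.array(2)
# object.__format__ rejects non-empty format specs, so hand the
# format machinery the underlying Python scalar instead; 0-d
# arrays unwrap via .item().
print("{0:0.3g}".format(a.item()))  # prints: 2
```

The cleaner fix is the `__format__` support discussed in the proposal elsewhere in this thread.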

Ryan

-- 
Ryan May


Re: [Numpy-discussion] array comprehension

2016-11-04 Thread Ryan May
On Fri, Nov 4, 2016 at 9:04 AM, Stephan Hoyer <sho...@gmail.com> wrote:

> On Fri, Nov 4, 2016 at 7:12 AM, Francesc Alted <fal...@gmail.com> wrote:
>
>> Does this generalize to >1 dimensions?
>>>
>>
>> A reshape() is not enough?  What do you want to do exactly?
>>
>
> np.fromiter takes scalar input and only builds a 1D array. So it actually
> can't combine multiple values at once unless they are flattened out in
> Python. It could be nice to add support for non-scalar inputs, stacking
> them similarly to np.array. Likewise, it could be nice to add an axis
> argument, so it can work similarly to np.stack.
>

itertools.product, itertools.permutations, etc. with np.fromiter (and
reshape) are probably also useful here, though they don't solve the
non-scalar problem.
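A sketch of that combination (shapes and values are illustrative only): flatten with a generator expression, then reshape:

```python
import itertools
import numpy as np

# Build a 2-D array from a generator of scalars: np.fromiter only
# accepts scalars, so flatten first, then reshape to the target shape.
pairs = itertools.product(range(3), range(4))
flat = np.fromiter((i * j for i, j in pairs), dtype=np.int64, count=12)
arr = flat.reshape(3, 4)
print(arr.shape)  # (3, 4)
```

Passing `count` lets fromiter preallocate instead of growing the buffer.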

Ryan

-- 
Ryan May


Re: [Numpy-discussion] How to use user input as equation directly

2016-10-27 Thread Ryan May
On Thu, Oct 27, 2016 at 1:58 PM, djxvillain <djxvill...@gmail.com> wrote:

> Hello all,
>
> I am an electrical engineer and new to numpy.  I need the ability to take
> in
> user input, and use that input as a variable.  For example:
>
> t = input('enter t: ')
> x = input('enter x: ')
>
> I need the user to be able to enter something like x =
> 2*np.sin(2*np.pi*44100*t+np.pi/2) and it be the same as if they just typed
> it in the .py file.  There's no clean way to cast or evaluate it that I've
> found.
>

Are you aware of Python's eval function?
https://docs.python.org/3/library/functions.html#eval
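A minimal sketch of that approach (note that eval will happily run arbitrary code, so only expose it to trusted users, and pass an explicit namespace):

```python
import numpy as np

# Evaluate a user-supplied expression string against a restricted
# namespace: only the names we place in the dict are visible.
t = np.linspace(0, 1, 5)
expr = "2*np.sin(2*np.pi*44100*t + np.pi/2)"
x = eval(expr, {"np": np, "t": t})
print(x.shape)  # (5,)
```

For anything user-facing, a real expression parser (e.g. numexpr, or ast.literal_eval for plain literals) is the safer choice.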

Ryan

-- 
Ryan May


Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-10 Thread Ryan May
On Sun, Oct 9, 2016 at 12:59 PM, Stephan Hoyer  wrote:

> On Sun, Oct 9, 2016 at 6:25 AM, Sebastian Berg wrote:
>
>> For what its worth, I still feel it is probably the only real option to
>> go with error, changing to float may have weird effects. Which does not
>> mean it is impossible, I admit, though I would like some data on how
>> downstream would handle it. Also would we need an int power? The fpower
>> seems more straight forward/common pattern.
>> If errors turned out annoying in some cases, a seterr might be
>> plausible too (as well as a deprecation).
>>
>
> I agree with Sebastian and Nathaniel. I don't think we can deviate from
> the existing behavior (int ** int -> int) without breaking lots of existing
> code, and if we did, yes, we would need a new integer power function.
>
> I think it's better to preserve the existing behavior when it gives
> sensible results, and error when it doesn't. Adding another function
> float_power for the case that is currently broken seems like the right way
> to go.
>

+1
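The float_power being discussed works like this (a quick sketch; np.float_power landed in NumPy 1.12):

```python
import numpy as np

# Integer ** negative integer is the problem case; float_power
# always computes in float64, so negative exponents are fine.
print(np.float_power(2, -2))  # 0.25
print(np.float_power(np.arange(1, 4), -1))
```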

Ryan


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-14 Thread Ryan May
Sounds good.

On Thu, Jul 14, 2016 at 10:51 AM, Nathan Goldbaum <nathan12...@gmail.com>
wrote:

> Fine with me as well. Meet in the downstairs lobby after the lightning
> talks?
>
> On Thu, Jul 14, 2016 at 10:49 AM, Ryan May <rma...@gmail.com> wrote:
>
>> Fine with me.
>>
>> Ryan
>>
>> On Thu, Jul 14, 2016 at 12:48 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>>> I have something at lunch, so dinner would be good for me too.
>>> On Jul 13, 2016 7:46 PM, "Charles R Harris" <charlesr.har...@gmail.com>
>>> wrote:
>>>
>>>> Evening would work for me. Dinner?
>>>> On Jul 13, 2016 2:43 PM, "Ryan May" <rma...@gmail.com> wrote:
>>>>
>>>>> On Mon, Jul 11, 2016 at 12:39 PM, Chris Barker <chris.bar...@noaa.gov>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Jul 10, 2016 at 8:12 PM, Nathan Goldbaum <
>>>>>> nathan12...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>> Maybe this can be an informal BOF session?
>>>>>>>
>>>>>>
>>>>>> or  maybe a formal BoF? after all, how formal do they get?
>>>>>>
>>>>>> Anyway, it was my understanding that we really needed to do some
>>>>>> significant refactoring of how numpy deals with dtypes in order to do 
>>>>>> this
>>>>>> kind of thing cleanly -- so where has that gone since last year?
>>>>>>
>>>>>> Maybe this conversation should be about how to build a more flexible
>>>>>> dtype system generally, rather than specifically about unit support.
>>>>>> (though unit support is a great use-case to focus on)
>>>>>>
>>>>>>
>>>>> So Thursday's options seem to be in the standard BOF slot (up against
>>>>> the Numfocus BOF), or doing something that evening, which would overlap at
>>>>> least part of multiple happy hour events. I lean towards evening. 
>>>>> Thoughts?
>>>>>
>>>>> Ryan
>>>>>
>>>>> --
>>>>> Ryan May


-- 
Ryan May


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-14 Thread Ryan May
Fine with me.

Ryan

On Thu, Jul 14, 2016 at 12:48 AM, Nathaniel Smith <n...@pobox.com> wrote:

> I have something at lunch, so dinner would be good for me too.
> On Jul 13, 2016 7:46 PM, "Charles R Harris" <charlesr.har...@gmail.com>
> wrote:
>
>> Evening would work for me. Dinner?
>> On Jul 13, 2016 2:43 PM, "Ryan May" <rma...@gmail.com> wrote:
>>
>>> On Mon, Jul 11, 2016 at 12:39 PM, Chris Barker <chris.bar...@noaa.gov>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Jul 10, 2016 at 8:12 PM, Nathan Goldbaum <nathan12...@gmail.com
>>>> > wrote:
>>>>
>>>>>
>>>>> Maybe this can be an informal BOF session?
>>>>>
>>>>
>>>> or  maybe a formal BoF? after all, how formal do they get?
>>>>
>>>> Anyway, it was my understanding that we really needed to do some
>>>> significant refactoring of how numpy deals with dtypes in order to do this
>>>> kind of thing cleanly -- so where has that gone since last year?
>>>>
>>>> Maybe this conversation should be about how to build a more flexible
>>>> dtype system generally, rather than specifically about unit support.
>>>> (though unit support is a great use-case to focus on)
>>>>
>>>>
>>> So Thursday's options seem to be in the standard BOF slot (up against
>>> the Numfocus BOF), or doing something that evening, which would overlap at
>>> least part of multiple happy hour events. I lean towards evening. Thoughts?
>>>
>>> Ryan
>>>
>>> --
>>> Ryan May


-- 
Ryan May


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-13 Thread Ryan May
On Mon, Jul 11, 2016 at 12:39 PM, Chris Barker <chris.bar...@noaa.gov>
wrote:

>
>
> On Sun, Jul 10, 2016 at 8:12 PM, Nathan Goldbaum <nathan12...@gmail.com>
> wrote:
>
>>
>> Maybe this can be an informal BOF session?
>>
>
> or  maybe a formal BoF? after all, how formal do they get?
>
> Anyway, it was my understanding that we really needed to do some
> significant refactoring of how numpy deals with dtypes in order to do this
> kind of thing cleanly -- so where has that gone since last year?
>
> Maybe this conversation should be about how to build a more flexible dtype
> system generally, rather than specifically about unit support. (though unit
> support is a great use-case to focus on)
>
>
So Thursday's options seem to be in the standard BOF slot (up against the
Numfocus BOF), or doing something that evening, which would overlap at
least part of multiple happy hour events. I lean towards evening. Thoughts?

Ryan

-- 
Ryan May


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-11 Thread Ryan May
On Mon, Jul 11, 2016 at 12:39 PM, Chris Barker <chris.bar...@noaa.gov>
wrote:
>
> Maybe this conversation should be about how to build a more flexible dtype
> system generally, rather than specifically about unit support. (though unit
> support is a great use-case to focus on)
>


I agree that a more general solution is a good goal--just that units is my
"sine qua non". Also, I would have loved to hear that someone had solved
the unit + ndarray-like problem. :)

Ryan

-- 
Ryan May


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-10 Thread Ryan May
Sounds like an apt description of what this is intended to be.

Ryan

On Sun, Jul 10, 2016 at 10:12 PM, Nathan Goldbaum <nathan12...@gmail.com>
wrote:

> Hi Ryan,
>
> As a maintainer of a unit-aware ndarray subclass, I'm also interested in
> sitting in.
>
> Maybe this can be an informal BOF session?
>
> Nathan
>
>
> On Sunday, July 10, 2016, Ryan May <rma...@gmail.com> wrote:
>
>> Hi Nathaniel,
>>
>> Thursday works for me; anyone else interested is welcome to join.
>>
>> Ryan
>>
>> On Sun, Jul 10, 2016 at 12:20 AM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>>> Hi Ryan,
>>>
>>> I'll be at SciPy and I'd love to talk about this :-). Things are a
>>> bit hectic for me on Mon/Tue/Wed between the Python Compilers Workshop
>>> and my talk, but do you want to meet up Thursday maybe?
>>>
>>> -n
>>>
>>> On Sat, Jul 9, 2016 at 6:44 PM, Ryan May <rma...@gmail.com> wrote:
>>> > Greetings!
>>> >
>>> > I've been beating my head against a wall trying to work seamlessly with
>>> > pint's unit support and arrays from numpy and xarray; these same
>>> issues seem
>>> > to apply to other unit frameworks as well. Last time I dug into these
>>> > problems, things like custom dtypes were raised as a more extensible
>>> > solution that works within numpy (and other libraries) without needing
>>> a
>>> > bunch of custom support.
>>> >
>>> > Anyone around SciPy this week want to get together and talk about how
>>> we can
>>> > move ahead? (or acquaint me with another/better path forward?) I feel
>>> like I
>>> > need to get this figured out one way or another before I can move
>>> forward in
>>> > my corner of the world, and I have time I can dedicate to implementing
>>> a
>>> > solution.
>>> >
>>> > Ryan
>>> >
>>> > --
>>> > Ryan May
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Nathaniel J. Smith -- https://vorpus.org
>>>
>>
>>
>>
>> --
>> Ryan May
>>
>>
>
>


-- 
Ryan May


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-10 Thread Ryan May
Hi Nathaniel,

Thursday works for me; anyone else interested is welcome to join.

Ryan

On Sun, Jul 10, 2016 at 12:20 AM, Nathaniel Smith <n...@pobox.com> wrote:

> Hi Ryan,
>
> I'll be at SciPy and I'd love to talk about this :-). Things are a
> bit hectic for me on Mon/Tue/Wed between the Python Compilers Workshop
> and my talk, but do you want to meet up Thursday maybe?
>
> -n
>
> On Sat, Jul 9, 2016 at 6:44 PM, Ryan May <rma...@gmail.com> wrote:
> > Greetings!
> >
> > I've been beating my head against a wall trying to work seamlessly with
> > pint's unit support and arrays from numpy and xarray; these same issues
> seem
> > to apply to other unit frameworks as well. Last time I dug into these
> > problems, things like custom dtypes were raised as a more extensible
> > solution that works within numpy (and other libraries) without needing a
> > bunch of custom support.
> >
> > Anyone around SciPy this week want to get together and talk about how we
> can
> > move ahead? (or acquaint me with another/better path forward?) I feel
> like I
> > need to get this figured out one way or another before I can move
> forward in
> > my corner of the world, and I have time I can dedicate to implementing a
> > solution.
> >
> > Ryan
> >
> > --
> > Ryan May
> >
> >
>
>
>
> --
> Nathaniel J. Smith -- https://vorpus.org
>



-- 
Ryan May


[Numpy-discussion] Custom Dtype/Units discussion

2016-07-09 Thread Ryan May
Greetings!

I've been beating my head against a wall trying to work seamlessly with
pint's unit support and arrays from numpy and xarray; these same issues
seem to apply to other unit frameworks as well. Last time I dug into these
problems, things like custom dtypes were raised as a more extensible
solution that works within numpy (and other libraries) without needing a
bunch of custom support.

Anyone around SciPy this week want to get together and talk about how we
can move ahead? (or acquaint me with another/better path forward?) I feel
like I need to get this figured out one way or another before I can move
forward in my corner of the world, and I have time I can dedicate to
implementing a solution.

Ryan

-- 
Ryan May


Re: [Numpy-discussion] Scipy 2016 attending

2016-05-18 Thread Ryan May
Yup.

On Wed, May 18, 2016 at 5:04 PM, Steve Waterbury <water...@pangalactic.us>
wrote:

> Me 3!  ;)
>
> Steve
>
>
> On 05/18/2016 06:03 PM, Nathaniel Smith wrote:
>
> Me too.
>
> On Wed, May 18, 2016 at 3:02 PM, Chris Barker <chris.bar...@noaa.gov> wrote:
>
> I'll be there.
>
> -CHB
>
>
> On Wed, May 18, 2016 at 2:09 PM, Charles R Harris <charlesr.har...@gmail.com> wrote:
>
> Hi All,
>
> Out of curiosity, who all here intends to be at Scipy 2016?
>
> Chuck
>
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
> chris.bar...@noaa.gov
>
>
>
>
>
>
>


-- 
Ryan May


Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-10-02 Thread Ryan May
numpy.asanyarray() would be my preferred go-to, as it will leave subclasses
of ndarray untouched; asarray() and atleast_1d() force ndarray. It's nice
to do that whenever possible.
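A quick illustration of the difference, using a masked array as the subclass:

```python
import numpy as np

m = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
# asarray() drops the subclass (and with it the mask);
# asanyarray() passes the subclass through untouched.
print(type(np.asarray(m)))     # <class 'numpy.ndarray'>
print(type(np.asanyarray(m)))  # <class 'numpy.ma.core.MaskedArray'>
```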

Ryan

On Fri, Oct 2, 2015 at 6:52 AM, Slavin, Jonathan <jsla...@cfa.harvard.edu>
wrote:

> ​Personally I like atleast_1d, which will convert a scalar into a 1d array
> but will leave arrays untouched (i.e. won't change the dimensions.  Not
> sure what the advantages/disadvantages are relative to asarray.
>
> Jon​
>
>
> On Fri, Oct 2, 2015 at 7:05 AM, <numpy-discussion-requ...@scipy.org>
> wrote:
>
>> From: Juha Jeronen <juha.jero...@jyu.fi>
>> To: Discussion of Numerical Python <numpy-discussion@scipy.org>
>> Cc:
>> Date: Fri, 2 Oct 2015 13:31:47 +0300
>> Subject: Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic
>> polynomial solver
>>
>> On 02.10.2015 13:07, Daπid wrote:
>>
>>
>> On 2 October 2015 at 11:58, Juha Jeronen <juha.jero...@jyu.fi> wrote:
>>
>>>
>>>>
>>> First version done and uploaded:
>>>
>>>
>>> https://yousource.it.jyu.fi/jjrandom2/miniprojects/trees/master/misc/polysolve_for_numpy
>>>
>>
>> Small comment: now you are checking if the input is a scalar or a
>> ndarray, but it should also accept any array-like. If I pass a list, I
>> expect it to work, internally converting it into an array.
>>
>>
>> Good catch.
>>
>> Is there an official way to test for array-likes? Or should I always
>> convert with asarray()? Or something else?
>>
>>
>>  -J
>>
>
>
>
>
> --
> 
> Jonathan D. Slavin Harvard-Smithsonian CfA
> jsla...@cfa.harvard.edu   60 Garden Street, MS 83
> phone: (617) 496-7981   Cambridge, MA 02138-1516
> cell: (781) 363-0035 USA
> 
>
>
>
>


-- 
Ryan May


Re: [Numpy-discussion] Python needs goto

2015-09-25 Thread Ryan May
On Fri, Sep 25, 2015 at 3:02 PM, Nathaniel Smith  wrote:
>
> The coroutines in 3.5 are just syntactic sugar around features that were
> added in *2*.5 (yield expressions and yield from), so no need to wait :-).
> They fall far short of arbitrary continuations, though.
>

Correction: Python 3.3 gained "yield from" (PEP 380). Prior to that, it took
a lot of work to properly delegate from one generator to another.

But yes, async and await are just syntactic sugar (consistent with other
languages) for python 3.4's coroutine functionality.
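A minimal sketch of the delegation "yield from" replaces:

```python
# Before "yield from", delegating meant looping over the
# sub-generator and re-yielding; with it, the sub-generator
# plugs in directly (including send/throw forwarding).
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()

print(list(outer()))  # [0, 1, 2]
```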

Ryan


Re: [Numpy-discussion] Governance model request

2015-09-22 Thread Ryan May
This has to be one of the most bizarre threads I've ever read in my life.
Somehow companies are lurking around like the boogeyman and academics are
completely free of ulterior motives and conflicts of interest? This is just
asinine--we're all people and have various motivations. (Having just gotten
out of my university after 15 years, the idea that academics are somehow
immune to ulterior motives and conflicts of interest makes me laugh
hysterically.)

The sad part is that this worry is completely unnecessary. This is an open
source project, not international politics, and the end goal is to produce
software. Therefore, 99.9% of the time motives (ulterior, profit, or
otherwise) are completely orthogonal to the question of: IS IT A GOOD
TECHNICAL IDEA? It's really that simple: does the proposed change move the
project in a direction that we want to go?

Now, for the 0.01% of the time, where nobody can agree on that answer, or
the question is non-technical, and there is concern about the motives of
members of the "council" (or whatnot), it's again simple: RECUSAL. It's a
simple concept that I learned in the godawful ethics class NSF forced grad
students to take: if you have a conflict of interest, you don't vote. It's
how the grownups from the Supreme Court to the college football playoff
deal with the fact that people WILL have conflicts; potential conflicts
don't bar qualified individuals from being included in the group, just
from weighing in when their decisions could be clouded.

So how about we stop making up reasons to discourage participation by
(over-)qualified individuals, and actually take advantage of the fact that
people actually want to move numpy forward?

Ryan


Re: [Numpy-discussion] A proposed change to rollaxis() behavior for negative 'start' values

2010-09-23 Thread Ryan May
On Thu, Sep 23, 2010 at 10:32 AM, Anne Archibald
aarch...@physics.mcgill.ca wrote:
 Just a quick correction: len(a) gives a.shape[0], while what you want
 is actually len(a.shape). So:

 In [1]: a = np.zeros((2,3,4,5,6))

 In [2]: len(a)
 Out[2]: 2

 In [8]: np.rollaxis(a,0,len(a.shape)).shape
 Out[8]: (3, 4, 5, 6, 2)

 So it behaves just like insert. But len(a.shape) is rather
 cumbersome, especially if you haven't given a a name yet:

It's available as a.ndim
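With that attribute, the earlier example reads (a small sketch):

```python
import numpy as np

a = np.zeros((2, 3, 4, 5, 6))
# a.ndim is the tidy spelling of len(a.shape)
assert a.ndim == len(a.shape)
# Roll axis 0 to the end, as in Anne's example:
print(np.rollaxis(a, 0, a.ndim).shape)  # (3, 4, 5, 6, 2)
```

In later NumPy, np.moveaxis(a, 0, -1) expresses the same move more readably.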

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Question about masked arrays

2010-09-20 Thread Ryan May
On Mon, Sep 20, 2010 at 3:23 PM, Gökhan Sever gokhanse...@gmail.com wrote:
 On Mon, Sep 20, 2010 at 1:05 PM, Robert Kern robert.k...@gmail.com wrote:

 Are you asking about when masked arrays are casted to ndarrays (and
 thus losing the mask information)? Most times when a function uses
 asarray() or array() to explicitly cast the inputs to an ndarray. The
 reason that np.mean() gives the same result as np.ma.mean() is that it
 simply defers to the .mean() method on the object, so it works as
 expected on a masked array. Many other functions will not.

 --
 Robert Kern

 Right guess. It is important for me to able to preserve masked array
 properties of an array. Otherwise losing the mask information yields
 unexpected results in some of my calculations. I could see from np.mean??
 that mean function is indeed the object method. Also in /numpy/ma there is a
 conversion for np.zeros(). I guess in any case it is the user's
 responsibility to make sure that the operations are performed on a desired
 array type.

True, but in some cases the functions just blindly call asarray() or
array() without thinking about using asanyarray(). If you encounter a
basic numpy function that calls asarray() but would work fine with
masked arrays (or other subclasses), feel free to file/post as a bug.
It's good to get those cases fixed where possible. (I've done this in
the past.)
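The mask-preserving vs. mask-dropping behavior is easy to see side by side (a quick check):

```python
import numpy as np

m = np.ma.masked_array([1.0, 2.0, 30.0], mask=[False, False, True])
# np.mean defers to the object's .mean() method, so the mask is honored;
# casting through asarray() strips the mask and the 30.0 leaks in.
print(np.mean(m))            # 1.5
print(np.asarray(m).mean())  # 11.0
```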

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] conversion to 1D array of specified dtype

2010-08-31 Thread Ryan May
On Tue, Aug 31, 2010 at 3:57 AM, Mark Bakker mark...@gmail.com wrote:
 Hello list,

 What is the easiest way to convert a function argument to at least a 1D
 array of specified dtype?

 atleast_1d(3,dtype='d') doesn't work (numpy 1.3.0)

 array(atleast_1d(3),dtype='d') works but seems cumbersome

atleast_1d(d).astype('d')
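Wrapped up as a helper (illustrative name, not a numpy function):

```python
import numpy as np

def as_1d_double(x):
    # Promote a scalar or sequence to at least 1-D, then cast to float64.
    return np.atleast_1d(x).astype('d')

print(as_1d_double(3))       # [3.]
print(as_1d_double([1, 2]))  # [1. 2.]
```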

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] BOF notes: Fernando's proposal: NumPy?ndarray with named axes

2010-07-12 Thread Ryan May
On Mon, Jul 12, 2010 at 8:04 AM, Neil Crighton neilcrigh...@gmail.com wrote:
 Gael Varoquaux gael.varoquaux at normalesup.org writes:
 I do such manipulation all the time, and keeping track of which axis is
 what is fairly tedious and error prone. It would be much nicer to be able
 to write:

     data.ax_day.mean(axis=0)
     data.ax_hour.mean(axis=0)


 Thanks, that's a really nice description. Instead of

 data.ax_day.mean(axis=0)

 I think it would be clearer to do something like

 data.mean(axis='day')

IIRC somewhere, at least at the BOF, this exact syntax was intended to
be supported.
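Neither spelling exists in plain numpy; purely as an illustration, the name-to-number bookkeeping such a proposal would hide can be sketched with an ordinary dict:

```python
import numpy as np

# Hypothetical: map axis names to positions by hand, then reduce.
axes = {"day": 0, "hour": 1}
data = np.arange(24.0).reshape(2, 12)
print(data.mean(axis=axes["day"]).shape)  # (12,)
```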

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] question about creating numpy arrays

2010-05-20 Thread Ryan May
On Thu, May 20, 2010 at 9:44 AM, Benjamin Root ben.r...@ou.edu wrote:
 I gave two counterexamples of why.

 The examples you gave aren't counterexamples.  See below...

 On Wed, May 19, 2010 at 7:06 PM, Darren Dale dsdal...@gmail.com wrote:

 On Wed, May 19, 2010 at 4:19 PM,  josef.p...@gmail.com wrote:
  On Wed, May 19, 2010 at 4:08 PM, Darren Dale dsdal...@gmail.com wrote:
  I have a question about creation of numpy arrays from a list of
  objects, which bears on the Quantities project and also on masked
  arrays:
 
  import quantities as pq
  import numpy as np
  a, b = 2*pq.m,1*pq.s
  np.array([a, b])
  array([ 12.,   1.])
 
  Why doesn't that create an object array? Similarly:
 


 Consider the use case of a person creating a 1-D numpy array:
   np.array([12.0, 1.0])
 array([ 12.,  1.])

 How is python supposed to tell the difference between
   np.array([a, b])
 and
   np.array([12.0, 1.0])
 ?

 It can't, and there are plenty of times when one wants to explicitly
 initialize a small numpy array with a few discrete variables.

What do you mean it can't? 12.0 and 1.0 are floats, a and b are not.
While, yes, they can be coerced to floats, this is a *lossy*
transformation--it strips away information contained in the class, and
IMHO should not be the default behavior. If I want the objects, I can
force it:

In [7]: np.array([a,b],dtype=np.object)
Out[7]: array([2.0 m, 1.0 s], dtype=object)

This works fine, but feels ugly since I have to explicitly tell numpy
not to do something. It feels to me like it's violating the principle
of "in the face of ambiguity, resist the temptation to guess."

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Another masked array question

2010-05-08 Thread Ryan May
On Sat, May 8, 2010 at 7:52 PM, Gökhan Sever gokhanse...@gmail.com wrote:
 Hello,

 Consider my masked arrays:

 I[28]: type basic.data['Air_Temp']
 - type(basic.data['Air_Temp'])
 O[28]: numpy.ma.core.MaskedArray

 I[29]: basic.data['Air_Temp']
 O[29]:
 masked_array(data = [-- -- -- ..., -- -- --],
  mask = [ True  True  True ...,  True  True  True],
    fill_value = 99.)


 I[17]: basic.data['Air_Temp'].data = np.ones(len(basic.data['Air_Temp']))*30
 ---------------------------------------------------------------------------
 AttributeError                            Traceback (most recent call last)
 AttributeError: can't set attribute

 Why this assignment fails? I want to set each element in the original
 basic.data['Air_Temp'].data to another value. (Because the main instrument
 was forgotten to turn on for that day, and I am using a secondary
 measurement data for Air Temperature for my another calculation. However it
 fails. Although single assignment works:

 I[13]: basic.data['Air_Temp'].data[0] = 30

 Shouldn't this be working like the regular NumPy arrays do?

Based on the traceback, I'd say it's because you're trying to replace
the object pointed to by the .data attribute. Instead, try to just
change the bits contained in .data:

basic.data['Air_Temp'].data[:] = np.ones(len(basic.data['Air_Temp']))*30
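A self-contained sketch of the failure and the fix (illustrative names only):

```python
import numpy as np

air_temp = np.ma.masked_array(np.full(5, 99.0), mask=True)
try:
    # MaskedArray.data is a read-only property: rebinding it fails.
    air_temp.data = np.full(5, 30.0)
except AttributeError as err:
    print(err)
# Writing into the existing buffer with slice assignment works.
air_temp.data[:] = 30.0
print(air_temp.data[0])  # 30.0
```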

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Is this odd?

2010-04-02 Thread Ryan May
On Thu, Apr 1, 2010 at 10:07 PM, Shailendra shailendra.vi...@gmail.com wrote:
 Hi All,
 Below is some array behaviour which i think is odd
 a=arange(10)
 a
 array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
 b=nonzero(a<0)
 b
 (array([], dtype=int32),)
 if not b[0]:
 ...     print 'b[0] is false'
 ...
 b[0] is false

 Above case the b[0] is empty so it is fine it is considered false

 b=nonzero(a<1)
 b
 (array([0]),)
 if not b[0]:
 ...     print 'b[0] is false'
 ...
 b[0] is false

 Above case b[0] is a non-empty array. Why should this be considered false?

 b=nonzero(a>8)
 b
 (array([9]),)
 if not b[0]:
 ...     print 'b[0] is false'
 ...

 Above case b[0] is non-empty and should be considered true, which it is.

 I don't understand why non-empty array should not be considered true
 irrespective to what value they have.
 Also, please suggest the best way to differentiate between an empty
 array and non-empty array( irrespective to what is inside array).

But by using:

if not b[0]:

You're not considering the array as a whole, you're looking at the
first element, which is giving expected results.  As I'm sure you're
aware, however, you can't simply do:

if not b: # Raises exception

So what you need to do is:

if b.any():

or:

if b.all()

Now for determining empty or not, you'll need to look at len(b) or b.shape
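A minimal sketch of those options, assuming nothing beyond nonzero() itself:

```python
import numpy as np

a = np.arange(10)
b = np.nonzero(a > 8)[0]       # indices where the condition holds
empty = np.nonzero(a < 0)[0]   # no element matches: empty index array

# Explicit truth tests instead of a bare `if b:` (which is ambiguous
# for multi-element arrays):
assert b.any()          # at least one nonzero element
assert b.all()          # every element nonzero
assert not empty.any()

# Emptiness test, independent of the values inside:
assert len(b) == 1
assert empty.shape == (0,)
```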

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Is this odd?

2010-04-02 Thread Ryan May
On Fri, Apr 2, 2010 at 8:31 AM, Robert Kern robert.k...@gmail.com wrote:
 On Fri, Apr 2, 2010 at 08:28, Ryan May rma...@gmail.com wrote:
 On Thu, Apr 1, 2010 at 10:07 PM, Shailendra shailendra.vi...@gmail.com 
 wrote:
 Hi All,
 Below is some array behaviour which i think is odd
 a=arange(10)
 a
 array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
  b=nonzero(a<0)
 b
 (array([], dtype=int32),)
 if not b[0]:
 ...     print 'b[0] is false'
 ...
 b[0] is false

 Above case the b[0] is empty so it is fine it is considered false

  b=nonzero(a<1)
 b
 (array([0]),)
 if not b[0]:
 ...     print 'b[0] is false'
 ...
 b[0] is false

 Above case b[0] is a non-empty array. Why should this be consider false.

  b=nonzero(a>8)
 b
 (array([9]),)
 if not b[0]:
 ...     print 'b[0] is false'
 ...

 Above case b[0] is non-empty and should be consider true.Which it does.

 I don't understand why non-empty array should not be considered true
 irrespective to what value they have.
 Also, please suggest the best way to differentiate between an empty
 array and non-empty array( irrespective to what is inside array).

 But by using:

 if not b[0]:

 You're not considering the array as a whole, you're looking at the
 first element, which is giving expected results.

 No, b is a tuple containing the array. b[0] is the array itself.

Wow, that's what I get for trying to read code *before* coffee.  On
the plus side, I now know how nonzero() actually works.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Bug in logaddexp2.reduce

2010-03-31 Thread Ryan May
On Wed, Mar 31, 2010 at 5:37 PM, Warren Weckesser
warren.weckes...@enthought.com wrote:
 T J wrote:
 On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:

 Looks like roundoff error.



 So this is expected behavior?

 In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
 Out[1]: -1.5849625007211561

 In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
 Out[2]: nan


  Is anyone able to reproduce this?  I don't get 'nan' in either 1.4.0 or
 2.0.0.dev8313 (32 bit Mac OSX).  In an earlier email T J reported using
 1.5.0.dev8106.

No luck here on Gentoo Linux:

Python 2.6.4 (r264:75706, Mar 11 2010, 09:29:48)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.logaddexp2(-0.5849625007211563, -53.584962500721154)
-0.58496250072115619
>>> np.logaddexp2(-1.5849625007211563, -53.584962500721154)
-1.5849625007211561
>>> np.version.version
'2.0.0.dev8313'

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Making a 2to3 distutils command ?

2010-03-30 Thread Ryan May
On Tue, Mar 30, 2010 at 1:09 AM, Pauli Virtanen p...@iki.fi wrote:
 2010/3/30 David Cournapeau da...@silveregg.co.jp:
 Currently, when building numpy with python 3, the 2to3 conversion
 happens before calling any distutils command. Was there a reason for
 doing it as it is done now ?

 This allowed 2to3 to also port the various setup*.py files and
 numpy.distutils, and implementing it this way required the minimum
 amount of work and understanding of distutils -- you need to force it
 to proceed with the build using the set of output files from 2to3.

 I would like to make a proper numpy.distutils command for it, so that it
 can be more finely controlled (in particular, using the -j option). It
 would also avoid duplication in scipy.

 Are you sure you want to mix distutils in this? Wouldn't it only
 obscure how things work?

 If the aim is in making the 2to3 processing reusable, I'd rather
 simply move tools/py3tool.py under numpy.distutils (+ perhaps do some
 cleanups), and otherwise keep it completely separate from distutils.
 It could be nice to have the 2to3 conversion parallelizable, but there
 are probably simple ways to do it without mixing distutils in.

Out of curiosity, is there something wrong with the support for 2to3
that already exists within distutils? (Other than it just being
distutils)

http://bruynooghe.blogspot.com/2010/03/using-lib2to3-in-setuppy.html

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Set values of a matrix within a specified range to zero

2010-03-30 Thread Ryan May
On Tue, Mar 30, 2010 at 11:12 AM, Alan G Isaac ais...@american.edu wrote:
 On 3/30/2010 12:56 PM, Sean Mulcahy wrote:
 512x512 arrays.  I would like to set elements of the array whose value fall 
 within a specified range to zero (eg 23 < x < 45).

 x[(23<x)*(x<45)]=0

Or a version that seems a bit more obvious (doing a multiply between
boolean arrays to get an AND operator seems a tad odd):

x[(23<x) & (x<45)] = 0
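A quick self-contained check that the two spellings select exactly the same elements (random test data assumed, since the original arrays aren't shown):

```python
import numpy as np

x = np.random.uniform(0, 100, size=(512, 512))
y = x.copy()

# Boolean AND via &, and via multiplying the boolean masks: same result,
# since True/False multiply like 1/0.
x[(23 < x) & (x < 45)] = 0
y[(23 < y) * (y < 45)] = 0

assert np.array_equal(x, y)
assert not ((x > 23) & (x < 45)).any()   # nothing left in the open interval
```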

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Set values of a matrix within a specified range to zero

2010-03-30 Thread Ryan May
On Tue, Mar 30, 2010 at 3:16 PM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
 2010/3/30 Ryan May rma...@gmail.com:
 On Tue, Mar 30, 2010 at 11:12 AM, Alan G Isaac ais...@american.edu wrote:
 On 3/30/2010 12:56 PM, Sean Mulcahy wrote:
 512x512 arrays.  I would like to set elements of the array whose value 
  fall within a specified range to zero (eg 23 < x < 45).

 x[(23<x)*(x<45)]=0

 Or a version that seems a bit more obvious (doing a multiply between
 boolean arrays to get an AND operator seems a tad odd):

 x[(23<x) & (x<45)] = 0

 We recently found out that it executes faster using:

 x *= ((x <= 23) | (x >= 45))  .

Interesting. In an ideal world, I'd love to see why exactly that is,
because I don't think multiplication should be faster than a boolean
op.  If you need speed, then by all means go for it.  But if you don't
need speed I'd use the & since that will be more obvious to the person
who ends up reading your code later and has to spend time decoding
what that line does.
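For the curious, a rough way to compare the two approaches on your own machine (the timings are machine- and cache-dependent, so treat the numbers as indicative only):

```python
import timeit

setup = "import numpy as np; x = np.random.uniform(0, 100, 512 * 512)"

# Fancy-indexed assignment: builds a boolean mask, then scatters writes.
t_index = timeit.timeit("y = x.copy(); y[(23 < y) & (y < 45)] = 0",
                        setup=setup, number=50)

# Multiply by the complementary mask: one branch-free elementwise pass.
t_mult = timeit.timeit("y = x.copy(); y *= ((y <= 23) | (y >= 45))",
                       setup=setup, number=50)

print("indexing: %.4fs  multiply: %.4fs" % (t_index, t_mult))
```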

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Set values of a matrix within a specified range to zero

2010-03-30 Thread Ryan May
On Tue, Mar 30, 2010 at 3:40 PM, Robert Kern robert.k...@gmail.com wrote:
 On Tue, Mar 30, 2010 at 16:35, Ryan May rma...@gmail.com wrote:
 On Tue, Mar 30, 2010 at 3:16 PM, Friedrich Romstedt
 friedrichromst...@gmail.com wrote:

  x *= ((x <= 23) | (x >= 45))  .

 Interesting. In an ideal world, I'd love to see why exactly that is,
 because I don't think multiplication should be faster than a boolean
 op.

 Branch prediction failures are really costly in modern CPUs.

 http://en.wikipedia.org/wiki/Branch_prediction

That makes sense.

I still maintain that for 95% of code, easy to understand code is more
important than performance differences due to branch misprediction.
(And more importantly, we don't want to be teaching new users to code
like that from the beginning.)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-29 Thread Ryan May
On Mon, Mar 29, 2010 at 8:00 AM, Bruce Southey bsout...@gmail.com wrote:
 On 03/27/2010 01:31 PM, Ryan May wrote:
 Because of the call to asarray(), the mask is completely discarded and
 you end up with identical results to an unmasked array,
 which is not what I'd expect.  Worse, the actual numeric value of the
 positions that were masked affect the final answer. My patch allows
 this to work as expected too.

 Actually you should assume that unless it is explicitly addressed
 (either by code or via a test), any subclass of ndarray (matrix, masked,
 structured, record and even sparse) may not provide a 'valid' answer.
 There are probably many numpy functions that only really work with the
 standard ndarray. Most of the time people do not meet these with the
 subclasses or have workarounds so there has been little requirement to
 address this especially due to the added overhead needed for checking.

It's not that I'm surprised that masked arrays don't work. It's more
that the calls to np.asarray within trapz() have been held up as being
necessary for things like matrices and (at the time) masked arrays to
work properly; as if calling asarray() is supposed to make all
subclasses work, though at a base level by dropping to an ndarray. To
me, the current behavior with masked arrays is worse than if passing
in a matrix raised an exception.  One is a silently wrong answer, the
other is a big error that the programmer can see, test, and fix.

 Also, any patch that does not explicitly define the assumed behavior
 with points that are masked  has to be rejected. It is not even clear
 what the expected behavior is for masked arrays should be:
 Is it even valid for trapz to be integrating across the full range if
 there are missing points? That implies some assumption about the missing
 points.
 If is valid, then should you just ignore the masked values or try to
 predict the missing values first? Perhaps you may want to have the
 option to do both.

You're right, it doesn't actually work with MaskedArrays as it stand
right now, because it calls add.reduce() directly instead of using the
array.sum() method. Once fixed, by allowing MaskedArray to handle the
operation, you end up not integrating over the masked region. Any
operation involving masked points results in the contributions by those
points being ignored.  I guess it's as if you assumed the function was 0
over the masked region.  If you wanted to ignore the masked points,
but integrate over the region (making a really big trapezoid over that
region), you could just pass in the .compressed() versions of the
arrays.
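Concretely, the "one big trapezoid across the gap" behavior described above can be had today without any patch, e.g. (np.trapz was renamed np.trapezoid in NumPy 2.0, hence the getattr):

```python
import numpy as np

trapz = getattr(np, "trapz", None) or np.trapezoid  # renamed in NumPy 2.0

x = np.arange(10)
y = x * x
ym = np.ma.array(y, mask=(x > 4) & (x < 7))   # mask out points 5 and 6

# Drop the masked points entirely and integrate straight across the gap,
# i.e. one wide trapezoid from x=4 to x=7:
keep = ~ym.mask
result = trapz(ym.compressed(), x[keep])
print(result)   # 248.5
```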

 than implicit) It just seems absurd that if I make my own ndarray
 subclass that *just* adds some behavior to the array, but doesn't
 break *any* operations, I need to do one of the following:

 1) Have my own copy of trapz that works with my class
 2) Wrap every call to numpy's own trapz() to put the metadata back.

 Does it not seem backwards that the class that breaks conventions
 just works while those that don't break conventions, will work
 perfectly with the function as written, need help to be treated
 properly?

 You need your own version of trapz or whatever function because it has
 the behavior that you expect. But a patch should not break numpy so you
 need to at least to have a section that looks for masked array subtypes
 and performs the desired behavior(s).

I'm not trying to be difficult but it seems like there are conflicting
ideas here: we shouldn't break numpy, which in this case means making
matrices no longer work with trapz().  On the other hand, subclasses
can do a lot of things, so there's no real expectation that they
should ever work with numpy functions in general.  Am I missing
something here? I'm just trying to understand what I perceive to be
some inconsistencies in numpy's behavior and, more importantly,
convention with regard to subclasses.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-29 Thread Ryan May
Hi,

I decided that having actual code that does what I want and keeps
backwards compatibility (and adds tests) might be better than arguing
semantics.  I've updated my patch to:

* Uses the array.sum() method instead of add.reduce to make subclasses
fully work (this was still breaking masked arrays).
* Catches an exception on doing the actual multiply and sum of the
arrays and tries again after casting to ndarrays.  This allows any
subclasses that relied on being cast to still work.
* Adds tests that ensure matrices work (test passes before and after
changes to trapz()) and adds a test for masked arrays that checks that
masked points are treated as expected. In this case, expected is
defined to be the same as if you implemented the trapezoidal method by
hand using MaskedArray's basic arithmetic operations.

Attached here and at: http://projects.scipy.org/numpy/ticket/1438

I think this addresses the concerns that were raised about the changes
for subclasses in this case. Let me know if I've missed something (or
if there's no way in hell any such patch will ever be committed).

Thanks,

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-27 Thread Ryan May
On Mon, Mar 22, 2010 at 8:14 AM, Ryan May rma...@gmail.com wrote:
 On Sun, Mar 21, 2010 at 11:57 PM,  josef.p...@gmail.com wrote:
 On Mon, Mar 22, 2010 at 12:49 AM, Ryan May rma...@gmail.com wrote:
 Hi,

 I found that trapz() doesn't work with subclasses:

 http://projects.scipy.org/numpy/ticket/1438

 A simple patch (attached) to change asarray() to asanyarray() fixes
 the problem fine.

 Are you sure this function works with matrices and other subclasses?

 Looking only very briefly at it: the multiplication might be a problem.

 Correct, it probably *is* a problem in some cases with matrices.  In
 this case, I was using quantities (Darren Dale's unit-aware array
 package), and the result was that units were stripped off.

 The patch can't make trapz() work with all subclasses. However, right
 now, you have *no* hope of getting a subclass out of trapz().  With
 this change, subclasses that don't redefine operators can work fine.
 If you're passing a Matrix to trapz() and expecting it to work, IMHO
 you're doing it wrong.  You can still pass one in by using asarray()
 yourself.  Without this patch, I'm left with copying and maintaining a
 copy of the code elsewhere, just so I can loosen the function's input
 processing. That seems wrong, since there's really no need in my case
 to drop down to an ndarray. The input I'm giving it supports all the
 operations it needs, so it should just work with my original input.

Anyone else care to weigh in here?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-27 Thread Ryan May
On Sat, Mar 27, 2010 at 11:12 AM,  josef.p...@gmail.com wrote:
 On Sat, Mar 27, 2010 at 1:00 PM, Ryan May rma...@gmail.com wrote:
 On Mon, Mar 22, 2010 at 8:14 AM, Ryan May rma...@gmail.com wrote:
 On Sun, Mar 21, 2010 at 11:57 PM,  josef.p...@gmail.com wrote:
 On Mon, Mar 22, 2010 at 12:49 AM, Ryan May rma...@gmail.com wrote:
 Hi,

 I found that trapz() doesn't work with subclasses:

 http://projects.scipy.org/numpy/ticket/1438

 A simple patch (attached) to change asarray() to asanyarray() fixes
 the problem fine.

 Are you sure this function works with matrices and other subclasses?

 Looking only very briefly at it: the multiplication might be a problem.

 Correct, it probably *is* a problem in some cases with matrices.  In
 this case, I was using quantities (Darren Dale's unit-aware array
 package), and the result was that units were stripped off.

 The patch can't make trapz() work with all subclasses. However, right
 now, you have *no* hope of getting a subclass out of trapz().  With
 this change, subclasses that don't redefine operators can work fine.
 If you're passing a Matrix to trapz() and expecting it to work, IMHO
 you're doing it wrong.  You can still pass one in by using asarray()
 yourself.  Without this patch, I'm left with copying and maintaining a
 copy of the code elsewhere, just so I can loosen the function's input
 processing. That seems wrong, since there's really no need in my case
 to drop down to an ndarray. The input I'm giving it supports all the
 operations it needs, so it should just work with my original input.

 With asarray it gives correct results for matrices and all array_like
 and subclasses, it just doesn't preserve the type.
 Your patch would break matrices and possibly other types, masked_arrays?, ...

It would break matrices, yes.  I would argue that masked arrays are
already broken with trapz:

In [1]: x = np.arange(10)

In [2]: y = x * x

In [3]: np.trapz(y, x)
Out[3]: 244.5

In [4]: ym = np.ma.array(y, mask=(x>4)&(x<7))

In [5]: np.trapz(ym, x)
Out[5]: 244.5

In [6]: y[5:7] = 0

In [7]: ym = np.ma.array(y, mask=(x>4)&(x<7))

In [8]: np.trapz(ym, x)
Out[8]: 183.5

Because of the call to asarray(), the mask is completely discarded and
you end up with identical results to an unmasked array,
which is not what I'd expect.  Worse, the actual numeric value of the
positions that were masked affect the final answer. My patch allows
this to work as expected too.

 One solution would be using arraywrap as in numpy.linalg.

By arraywrap, I'm assuming you mean:

def _makearray(a):
    new = asarray(a)
    wrap = getattr(a, '__array_prepare__', new.__array_wrap__)
    return new, wrap

I'm not sure if that's identical to just letting the subclass handle
what's needed.  To my eyes, that doesn't look as though it'd be
equivalent, both for handling masked arrays and Quantities. For
quantities at least, the result of trapz will have different units
than either of the inputs.

 for related discussion:
 http://mail.scipy.org/pipermail/scipy-dev/2009-June/012061.html

Actually, that discussion kind of makes my point.  Matrices are a pain
to make work in a general sense because they *break* ndarray
conventions--to me it doesn't make sense to help along classes that
break convention at the expense of making well-behaved classes a pain
to use.  You should need an *explicit* cast of a matrix to an ndarray
instead of the function quietly doing it for you. (Explicit is better
than implicit) It just seems absurd that if I make my own ndarray
subclass that *just* adds some behavior to the array, but doesn't
break *any* operations, I need to do one of the following:

1) Have my own copy of trapz that works with my class
2) Wrap every call to numpy's own trapz() to put the metadata back.

Does it not seem backwards that the class that breaks conventions
just works, while those that don't break conventions, and would work
perfectly with the function as written, need help to be treated
properly?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-27 Thread Ryan May
On Sat, Mar 27, 2010 at 8:23 PM,  josef.p...@gmail.com wrote:
 Matrices have been part of numpy for a long time and your patch would
 break backwards compatibility in a pretty serious way.

Yeah, and I should admit that I realize that makes this particular
patch a no-go. However, that to me doesn't put the issue to bed for
any future code that gets written (see below).

 subclasses of ndarray, like masked_arrays and quantities, and classes
 that delegate to array calculations, like pandas, can redefine
 anything. So there is not much that can be relied on if any subclass
 is allowed to be used inside a function

 e.g. quantities redefines sin, cos,...
 http://packages.python.org/quantities/user/issues.html#umath-functions
 What happens if you call fft with a quantity array?

Probably ends up casting to an ndarray. But that's a complex operation
that I can live with not working. It's coded in C and can't be
implemented quickly using array methods. And in this

 Except for simple functions and ufuncs, it would be a lot of work and
 fragile to allow asanyarray. And, as we discussed in a recent thread
 on masked arrays (and histogram), it would push the work on the
 function writer instead of the ones that are creating new subclasses.

I disagree in this case.  I think the function writer should only be
burdened to try to use array methods rather than numpy functions, if
possible, and avoiding casts other than asanyarray() at all costs.  I
think we shouldn't be scared of getting an error when a subclass is
passed to a function, because that's an indication to the programmer
that it doesn't work with what you're passing in and you need to
*explicitly* cast it to an ndarray. Having the function do the cast
for you is: 1) magical and implicit 2) Forces an unnecessary cast on
those who would otherwise work fine. I get errors when I try to pass
structured arrays to math functions, but I don't see numpy casting
that away.

 Of course, the behavior in numpy and scipy can be improved, and trapz
 may be simple enough to change, but I don't think a patch that breaks
 backwards compatibility pretty seriously and is not accompanied by
 sufficient tests should go into numpy or scipy.

If sufficient tests is the only thing holding this back, let me know.
I'll get to coding.

But I can't argue with the backwards incompatibility. At this point, I
think I'm more trying to see if there's any agreement that: casting
*everyone* because some class breaks behavior is a bad idea.  The
programmer can always make it work by explicitly asking for the cast,
but there's no way for the programmer to ask the function *not* to
cast the data. Hell, I'd be happy if trapz took a flag just telling it
subok=True.

 (On the other hand, I'm very slowly getting used to the pattern that
 for a simple function, 10% is calculation and 90% is interface code.)

Yeah, it's kind of annoying, since the 10% is the cool part you want,
and that 90% is thorny to design and boring to code.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-22 Thread Ryan May
On Sun, Mar 21, 2010 at 11:57 PM,  josef.p...@gmail.com wrote:
 On Mon, Mar 22, 2010 at 12:49 AM, Ryan May rma...@gmail.com wrote:
 Hi,

 I found that trapz() doesn't work with subclasses:

 http://projects.scipy.org/numpy/ticket/1438

 A simple patch (attached) to change asarray() to asanyarray() fixes
 the problem fine.

 Are you sure this function works with matrices and other subclasses?

 Looking only very briefly at it: the multiplication might be a problem.

Correct, it probably *is* a problem in some cases with matrices.  In
this case, I was using quantities (Darren Dale's unit-aware array
package), and the result was that units were stripped off.

The patch can't make trapz() work with all subclasses. However, right
now, you have *no* hope of getting a subclass out of trapz().  With
this change, subclasses that don't redefine operators can work fine.
If you're passing a Matrix to trapz() and expecting it to work, IMHO
you're doing it wrong.  You can still pass one in by using asarray()
yourself.  Without this patch, I'm left with copying and maintaining a
copy of the code elsewhere, just so I can loosen the function's input
processing. That seems wrong, since there's really no need in my case
to drop down to an ndarray. The input I'm giving it supports all the
operations it needs, so it should just work with my original input.

Or am I just off base here?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Array bug with character (regression from 1.4.0)

2010-03-21 Thread Ryan May
On Sun, Mar 21, 2010 at 5:29 PM, Pauli Virtanen p...@iki.fi wrote:
 su, 2010-03-21 kello 16:13 -0600, Charles R Harris kirjoitti:
 I was wondering if this was related to Michael's fixes for
 character arrays? A little bisection might help localize the problem.

 It's a bug I introduced in r8144... I forgot one *can* assign strings to
 0-d arrays, and strings are indeed one sequence type.

 I'm going to back that changeset out, since it was only a cosmetic fix.
 That particular part of code needs some cleanup (it's a bit too hairy if
 things like this can slip), but I don't have the time at the moment to
 come up with a more complete fix.

That fixed it for me, thanks for getting it done quickly.

What's amusing is that I found it because pupynere was failing to
write files where a variable had an attribute that consisted of a
single letter.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] numpy.trapz() doesn't respect subclass

2010-03-21 Thread Ryan May
Hi,

I found that trapz() doesn't work with subclasses:

http://projects.scipy.org/numpy/ticket/1438

A simple patch (attached) to change asarray() to asanyarray() fixes
the problem fine.
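For anyone curious, the behavioral difference the one-line patch hinges on, shown in isolation (a masked array is used here as the example subclass):

```python
import numpy as np

y = np.ma.array(np.arange(5.0), mask=[0, 0, 1, 0, 0])

# asarray always drops to a plain ndarray, discarding the mask;
# asanyarray passes ndarray subclasses through untouched.
assert type(np.asarray(y)) is np.ndarray
assert isinstance(np.asanyarray(y), np.ma.MaskedArray)
assert np.asanyarray(y) is y   # no copy: the same object comes back
```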

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


fix_trapz_subclass.diff
Description: Binary data


[Numpy-discussion] Array bug with character (regression from 1.4.0)

2010-03-20 Thread Ryan May
The following code, which works with numpy 1.4.0, results in an error:

In [1]: import numpy as np
In [2]: v = 'm'
In [3]: dt = np.dtype('c')
In [4]: a = np.asarray(v, dt)

On 1.4.0:

In [5]: a
Out[5]:
array('m',
  dtype='|S1')
In [6]: np.__version__
Out[6]: '1.4.0'

On SVN trunk:

/home/rmay/.local/lib/python2.6/site-packages/numpy/core/numeric.pyc
in asarray(a, dtype, order)
    282
    283
--> 284     return array(a, dtype, copy=False, order=order)
    285
    286 def asanyarray(a, dtype=None, order=None):

ValueError: assignment to 0-d array

In [5]: np.__version__
Out[5]: '2.0.0.dev8297'

Thoughts?
(Filed at: http://projects.scipy.org/numpy/ticket/1436)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] numpy.gradient() does not return ndarray subclasses

2010-03-18 Thread Ryan May
Hi,

Can I get someone to look at: http://projects.scipy.org/numpy/ticket/1435

Basically, numpy.gradient() uses numpy.zeros() to create an output
array.  This breaks the use of any ndarray subclasses, like masked
arrays, since the function will only return
ndarrays.  I've attached a patch that fixes that problem and has a
simple test checking output types. With this patch, I can use gradient
on masked arrays and get appropriately masked output.
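The gist of the fix (this isn't the actual patch from the ticket, just the pattern it relies on: allocate the output with zeros_like/empty_like instead of zeros, so the input's subclass survives):

```python
import numpy as np

def make_output(f):
    # np.zeros(f.shape) always returns a plain ndarray and would strip
    # the subclass; zeros_like preserves it (subok=True is the default).
    return np.zeros_like(f, dtype=float)

fm = np.ma.array([1.0, 2.0, 4.0], mask=[0, 1, 0])
out = make_output(fm)
assert isinstance(out, np.ma.MaskedArray)        # subclass preserved
assert type(np.zeros(fm.shape)) is np.ndarray    # what the bug produced
```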

If we could, it'd be nice to get this in for 2.0 so that I (and my
coworkers who found the bug) don't have to use a custom patched
gradient until the next release after that.

Thanks,

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


fix_gradient_with_subclasses.diff
Description: Binary data


Re: [Numpy-discussion] Warnings in numpy.ma.test()

2010-03-18 Thread Ryan May
On Thu, Mar 18, 2010 at 2:46 PM, Christopher Barker
chris.bar...@noaa.gov wrote:
 Gael Varoquaux wrote:
 On Thu, Mar 18, 2010 at 12:12:10PM -0700, Christopher Barker wrote:
 sure -- that's kind of my point -- if EVERY numpy array were
 (potentially) masked, then folks would write code to deal with them
 appropriately.

 That's pretty much saying: I have a complicated problem and I want every
 one else to have to deal with the full complexity of it, even if they
 have a simple problem.

 Well -- I did say it was a fantasy...

 But I disagree -- having invalid data is a very common case. What we
 have now is a situation where we have two parallel systems, masked
 arrays and regular arrays. Each time someone does something new with
 masked arrays, they often find another missing feature, and have to
 solve that. Also, the fact that masked arrays are tacked on means that
 performance suffers.

Case in point, I just found a bug in np.gradient where it forces the
output to be an ndarray.
(http://projects.scipy.org/numpy/ticket/1435).  Easy fix that doesn't
actually require any special casing for masked arrays, just making
sure to use the proper function to create a new array of the same
subclass as the input.  However, now for any place that I can't patch
I have to use a custom function until a fixed numpy is released.

Maybe universal support for masked arrays (and masking invalid points)
is a pipe dream, but every function in numpy should IMO deal properly
with subclasses of ndarray.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Warnings in numpy.ma.test()

2010-03-17 Thread Ryan May
On Wed, Mar 17, 2010 at 7:19 AM, Darren Dale dsdal...@gmail.com wrote:
 Is this general enough for your use case? I haven't tried to think
 about how to change some global state at one point and change it back
 at another, that seems like a bad idea and difficult to support.

Sounds like the textbook use case for the python 2.5/2.6 context
manager.   Pity we can't use it yet... (and I'm not sure it'd be easy
to wrap around the calls here.)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Warnings in numpy.ma.test()

2010-03-17 Thread Ryan May
On Wed, Mar 17, 2010 at 9:20 AM, Darren Dale dsdal...@gmail.com wrote:
 On Wed, Mar 17, 2010 at 10:11 AM, Ryan May rma...@gmail.com wrote:
 On Wed, Mar 17, 2010 at 7:19 AM, Darren Dale dsdal...@gmail.com wrote:
 Is this general enough for your use case? I haven't tried to think
 about how to change some global state at one point and change it back
 at another, that seems like a bad idea and difficult to support.

 Sounds like the textbook use case for the python 2.5/2.6 context
 manager.   Pity we can't use it yet... (and I'm not sure it'd be easy
 to wrap around the calls here.)

 I don't think context managers would work. They would be implemented
 in one of the subclasses special methods and would thus go out of
 scope before the ufunc got around to performing the calculation that
 required the change in state.

Right, that's the part I was referring to in the last part of my post.
But the concept of modifying global state and ensuring that, no matter
what happens, that state is reset to its initial condition is the
textbook use case for context managers.

Problem is, I think that limitation applies to any approach that tries
to be exception-safe.  It seems like you basically need to wrap the
initial function call.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Some help on matlab to numpy translation

2010-03-13 Thread Ryan May
On Sat, Mar 13, 2010 at 4:45 AM, Fabrice Silva si...@lma.cnrs-mrs.fr wrote:
 Le samedi 13 mars 2010 à 10:20 +0100, Nicolas Rougier a écrit :
 Hello,
 I'm trying to translate a small matlab program for the simulation in a
 2D flow in a channel past a cylinder and since I do not have matlab
 access, I would like to know if someone can help me, especially on
 array indexing. The matlab source code is available at:
 http://www.lbmethod.org/openlb/lb.examples.html and below is what I've
 done so far in my translation effort.

 In the matlab code, there is a ux array of shape (1,lx,ly) and I do
 not understand syntax: ux(:,1,col) with col = [2:(ly-1)]. If
 someone knows, that would help me a lot...


 As ux's shape is (1,lx,ly), ux(:,1,col) is equal to ux(1,1,col), which
 is a vector with the elements [ux(1,1,2), ... ux(1,1,ly-1)].
 Using : just after the reshape seems a little bit silly...

Except that python uses 0-based indexing and does not include the last
number in a slice, while Matlab uses 1-based indexing and includes the
last number, so really:
ux(:,1,col)
becomes:
ux(0, 0, col) # or ux(:, 0, col)

And if col is
col = [2:(ly-1)]
This needs to be:
col = np.arange(1, ly - 1)
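Putting the two translation rules together (a small sketch with made-up dimensions, just to check shapes):

```python
import numpy as np

lx, ly = 4, 6
ux = np.zeros((1, lx, ly))

# MATLAB: col = [2:(ly-1)]   (1-based, endpoint included)
# NumPy:  0-based indexing, endpoint excluded:
col = np.arange(1, ly - 1)

# MATLAB: ux(:,1,col)  ->  NumPy: ux[:, 0, col] (or ux[0, 0, col])
sub = ux[0, 0, col]
assert sub.shape == (ly - 2,)
```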

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Removing datetime support for 1.4.x series ?

2010-02-08 Thread Ryan May
On Mon, Feb 8, 2010 at 4:09 PM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
 On Mon, Feb 08, 2010 at 05:08:17PM -0500, Darren Dale wrote:
 On Mon, Feb 8, 2010 at 5:05 PM, Darren Dale dsdal...@gmail.com wrote:
  On Mon, Feb 8, 2010 at 5:05 PM, Jarrod Millman mill...@berkeley.edu 
  wrote:
  On Mon, Feb 8, 2010 at 1:57 PM, Charles R Harris
  charlesr.har...@gmail.com wrote:
  Should the release containing the datetime/hasobject changes be called

  a) 1.5.0
  b) 2.0.0

  My vote goes to b.

  You don't matter. Nor do I.

 I definitely should have counted to 100 before sending that. It wasn't
 helpful and I apologize.

 Actually, Darren, I found you fairly entertaining.

 ;)

Agreed.  I found it actually helpful in hammering home something said
by Travis that was somewhat ignored.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] scipy-tickets restarted emailing on jan17 - how about numpy-tickets ?

2010-01-25 Thread Ryan May
On Mon, Jan 25, 2010 at 2:55 AM, Sebastian Haase seb.ha...@gmail.com wrote:
 Hi,
 long time ago I had subscript to get both scipy-tickets and
 numpy-tickets emailed.
 Now scipy-tickets apparently started emailing again on 17th of Januar.
 Will numpy-tickets also come back by itself - or should I resubscribe?

I'm seeing traffic on numpy-tickets since about the time scipy-tickets
came back. I'd try resubscribing.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-10 Thread Ryan May
On Thu, Dec 10, 2009 at 2:54 AM, Dag Sverre Seljebotn
da...@student.matnat.uio.no wrote:
 Anne Archibald wrote:
 2009/12/9 Dr. Phillip M. Feldman pfeld...@verizon.net:


 When I recently tried to validate a code, the answers were wrong, and it
 took two full days to track down the cause.  I am now forced to reconsider
 carefully whether Python/NumPy is a suitable platform for serious scientific
 computing.


 While I find the current numpy complex-real conversion annoying, I
 have to say, this kind of rhetoric does not benefit your cause. It
 sounds childish and manipulative, and makes even people who agree in
 principle want to tell you to go ahead and use MATLAB and stop
 pestering us. We are not here to sell you on numpy; if you hate it,
 don't use it. We are here because *we* use it, warts and all, and we
 want to discuss interesting topics related to numpy. That you would
 have implemented it differently is not very interesting if you are not
 even willing to understand why it is the way it is and what a change
 would cost, let alone propose a workable way to improve.

 At this point I want to remind us about Charles Harris' very workable
 proposal: Raise a warning. That should both keep backward compatability
 and prevent people from wasting days. (Hopefully, we can avoid wasting
 days discussing this issue too :-) ).

+1 Completely agree. And to be clear, I realize the need not to break
anything relying on this behavior.  I just don't want people passing
this off as a non-issue/'not a big deal'.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-09 Thread Ryan May
On Wed, Dec 9, 2009 at 3:51 AM, David Warde-Farley d...@cs.toronto.edu wrote:
 On 9-Dec-09, at 1:26 AM, Dr. Phillip M. Feldman wrote:
 Unfortunately, NumPy seems to be a sort of step-child of Python,
 tolerated,
 but not fully accepted. There are a number of people who continue to
 use Matlab,
 despite all of its deficiencies, because it can at least be counted
 on to
 produce correct answers most of the time.

 Except that you could never fully verify that it produces correct
 results, even if that was your desire.

 There are legitimate reasons for wanting to use Matlab (e.g.
 familiarity, because collaborators do, and for certain things it's
 still faster than the alternatives) but correctness of results isn't
 one of them. That said, people routinely let price tags influence
 their perceptions of worth.

While I'm not going to argue in favor of Matlab, and I think its
benefits are being overstated, let's call a spade a spade.  Silent
downcasting of complex types to float is a *wart*.  It's not sensible
behavior, it's an implementation detail that smacks new users in the
face.  It's completely insensible to consider converting from complex
to float in the same vein as a simple loss of precision from 64-bit to
32-bit.  The following doesn't work:

a = np.array(['bob', 'sarah'])
b = np.arange(2.)
b[:] = a
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/rmay/<ipython console> in <module>()

ValueError: invalid literal for float(): bob

Why doesn't that silently downcast the strings to 0.0 or something
silly?  Because that would be *stupid*.  So why doesn't trying to
stuff 3+4j into the array raise the same error?  3+4j is
definitely not a float value either.
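For what it's worth, the warning proposed elsewhere in this thread is what later NumPy releases adopted: explicitly casting complex to real emits a ComplexWarning while still discarding the imaginary part (a quick check, assuming a NumPy recent enough to have the warning):

```python
import warnings

import numpy as np

z = np.array([3 + 4j, 1 - 2j])

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    f = z.astype(np.float64)  # imaginary parts are discarded...

assert list(f) == [3.0, 1.0]
# ...but at least a ComplexWarning is emitted about it.
assert any(w.category.__name__ == "ComplexWarning" for w in caught)
```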

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Assigning complex values to a real array

2009-12-08 Thread Ryan May
 At a minimum, this inconsistency needs to be cleared up.  My
 preference
 would be that the programmer should have to explicitly downcast from
 complex to float, and that if he/she fails to do this, that an
 exception be
 triggered.

 That would most likely break a *lot* of deployed code that depends on
 the implicit downcast behaviour. A less harmful solution (if a
 solution is warranted, which is for the Council of the Elders to
 decide) would be to treat the Python complex type as a special case,
 so that the .real attribute is accessed instead of trying to cast to
 float.

Except that the exception raised on downcast is the behavior we really
want.  We don't need python complex types introducing subtle bugs as
well.

I understand why we have the silent downcast from complex to float,
but I consider it a wart, not a feature.  I've lost hours tracking
down bugs where putting complex data from some routine into a new
array (without specifying a dtype) ends up with the complex data
silently downcast to float64. The only reason you even notice it is because at
the end you have incorrect answers. I know to look for it now, but for
inexperienced users, it's a pain.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Vector interpolation on a 2D grid (with realistic results)

2009-11-07 Thread Ryan May
On Sat, Nov 7, 2009 at 5:38 PM, Pierre GM pgmdevl...@gmail.com wrote:
 Linear interpolation with the delaunay package doesn't work great for
 my data. I played with the radial basis functions, but I'm afraid
 they're leading me down the dark, dark path of parameter fiddling. In
 particular, I'm not sure how to prevent my interpolated values to be
 bounded by the min and max of my actual observations.
 Ralf' suggestion of smoothing the values afterwards is tempting, but
 sounds a bit too ad-hoc (btw, Ralf, those were relative differences of
 monthly average precipitation between El Niño and Neutral phases for
 November).

That was me, not Ralf. :)  And I agree, the interpolated field does
look a bit noisy for such data.  I've been doing the smoothing on top
of natural neighbor for doing some of my own meteorological analysis.
Using the Gaussian kernel isn't really *that* ad hoc considering the
prevalence of Barnes/Cressman weighting for spatial averaging
typically used in meteorology.  And if you have no idea what I'm
talking about, Google them, and you'll see. :)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] recommended way to run numpy on snow leopard

2009-10-23 Thread Ryan May
On Fri, Oct 23, 2009 at 4:02 AM, David Warde-Farley d...@cs.toronto.edu wrote:
 On 21-Oct-09, at 11:01 AM, Ryan May wrote:

 ~/.local was added to *be the standard* for easily installing python
 packages in your user account.  And it works perfectly on the other
 major OSes, no twiddling of paths anymore.

 I've had a lot of headaches with ~/.local on Ubuntu, actually.
 Apparently Ubuntu has some crazy 'dist-packages' thing going on in
 parallel to site-packages and /usr and /usr/local and its precedence
 is unclear. virtualenv also doesn't know jack about it (speaking of
 which, there's no way to control precedence of ~/.local with
 virtualenv, so I can't use virtualenv to override ~/.local if I want
 to treat ~/.local as the new site-packages).

Ok, so *some* linux distros also choose to break stuff.  I'm noticing
a theme here where OSes that strive for ease end up breaking something
basic.  I'm not saying they all need to drastically change; they just
need to insert their paths *after* ~/.local.  (Thankfully, Gentoo
doesn't get in my way.)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Objected-oriented SIMD API for Numpy

2009-10-21 Thread Ryan May
On Wed, Oct 21, 2009 at 11:38 AM, Gregor Thalhammer
gregor.thalham...@gmail.com wrote:
 I once wrote a module that replaces the built in transcendental
 functions of numpy by optimized versions from Intels vector math
 library. If someone is interested, I can publish it. In my experience it
 was of little use since real world problems are limited by memory
 bandwidth. Therefore extending numexpr with optimized transcendental
 functions was the better solution. Afterwards I discovered that I could
 have saved the effort of the first approach since gcc is able to use
 optimized functions from Intels vector math library or AMD's math core
 library, see the doc's of -mveclibabi. You just need to recompile numpy
 with proper compiler arguments.

Do you have a link to the documentation for -mveclibabi?  I can't find
this anywhere and I'm *very* interested.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Objected-oriented SIMD API for Numpy

2009-10-21 Thread Ryan May
On Wed, Oct 21, 2009 at 1:23 PM, Ryan May rma...@gmail.com wrote:
 On Wed, Oct 21, 2009 at 11:38 AM, Gregor Thalhammer
 gregor.thalham...@gmail.com wrote:
 I once wrote a module that replaces the built in transcendental
 functions of numpy by optimized versions from Intels vector math
 library. If someone is interested, I can publish it. In my experience it
 was of little use since real world problems are limited by memory
 bandwidth. Therefore extending numexpr with optimized transcendental
 functions was the better solution. Afterwards I discovered that I could
 have saved the effort of the first approach since gcc is able to use
 optimized functions from Intels vector math library or AMD's math core
 library, see the doc's of -mveclibabi. You just need to recompile numpy
 with proper compiler arguments.

 Do you have a link to the documentation for -mveclibabi?  I can't find
 this anywhere and I'm *very* interested.

Ah, there it is.  Google doesn't come up with much, but the PDF manual
does have it:
http://gcc.gnu.org/onlinedocs/gcc-4.4.2/gcc.pdf

(It helps when you don't mis-type your search in the PDF).

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] genfromtxt to structured array

2009-09-25 Thread Ryan May
On Fri, Sep 25, 2009 at 4:30 PM, Timmie timmichel...@gmx-topmail.de wrote:
 Hello,
 this may be a easier question.

 I want to load data into a structured array, getting the names from the
 column header (names=True).

 The data looks like:

    ;month;day;hour;value
    1995;1;1;01;0


 but loading only works only if changed to:

    year;month;day;hour;value
    1995;1;1;01;0


 How do I read in the original data?

There's an assumption that the number of names is the same as the
number of columns.  You can just specify the names and skip reading
the names from the file:

numpy.genfromtxt(filename, delimiter=';', skiprows=1, names=['year',
'month', 'day', 'hour', 'value'])
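A runnable version of that call (note: in later NumPy releases the `skiprows` argument was renamed `skip_header`, which is what this sketch uses):

```python
import numpy as np
from io import StringIO

# The data from the question, with a blank first column name.
data = ";month;day;hour;value\n1995;1;1;01;0\n"

# Skip the header row and supply the names explicitly instead.
arr = np.genfromtxt(StringIO(data), delimiter=';', skip_header=1,
                    names=['year', 'month', 'day', 'hour', 'value'])

assert float(arr['year']) == 1995.0
assert float(arr['value']) == 0.0
```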

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] masked arrays as array indices

2009-09-21 Thread Ryan May
2009/9/21 Ernest Adrogué eadro...@gmx.net

 Hello there,

 Given a masked array such as this one:

 In [19]: x = np.ma.masked_equal([-1, -1, 0, -1, 2], -1)

 In [20]: x
 Out[20]:
 masked_array(data = [-- -- 0 -- 2],
 mask = [ True  True False  True False],
   fill_value = 99)

 When you make an assignemnt in the vein of x[x == 0] = 25
 the result can be a bit puzzling:

 In [21]: x[x == 0] = 25

 In [22]: x
 Out[22]:
 masked_array(data = [25 25 25 25 2],
 mask = [False False False False False],
   fill_value = 99)

 Is this the correct result or have I found a bug?


I see the same here on 1.4.0.dev7400.  Seems pretty odd to me.  Then again,
it's a bit more complex using masked boolean arrays for indexing since you
have True, False, and masked values.  Anyone have thoughts on what *should*
happen here?  Or is this it?
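One way to get an unambiguous result, regardless of how masked values in the index are treated, is to fill the condition's masked entries with False first (a sketch of the workaround, not a claim about what np.ma *should* do):

```python
import numpy as np

x = np.ma.masked_equal([-1, -1, 0, -1, 2], -1)

# (x == 0) has masked entries wherever x is masked; filling them with
# False means we only assign where the comparison is a definite True,
# so the original mask survives the assignment.
x[(x == 0).filled(False)] = 25

assert x[2] == 25
assert bool(x.mask[0]) and bool(x.mask[1]) and bool(x.mask[3])
```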

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Error in numpy 1.4.0 dev 07384

2009-09-15 Thread Ryan May
Keep in mind that you can still have a problem with a conflict between your
SVN copy and system copy, if the SVN copy is visible by default (like, say,
installed to ~/.local under python 2.6)  In my case, there was a problem
where a gnome panel applet used a feature in pygtk which called into numpy.  I
was getting the same RuntimeError because Pygtk was built against the system
1.3 copy, but when I ran the applet, it would first find my 1.4 SVN numpy.

Just an FYI (to you and others) as I lost a chunk of time figuring that out.

Ryan

2009/9/15 Nadav Horesh nad...@visionsense.com

 That's it!

  Thanks,

Nadav

 -Original Message-
 From: numpy-discussion-boun...@scipy.org on behalf of Citi, Luca
 Sent: Tue 15-September-09 11:32
 To: Discussion of Numerical Python
 Subject: Re: [Numpy-discussion] Error in numpy 1.4.0 dev 07384

 I got the same problem when compiling a new svn revision with some
 intermediate files left from the build of a previous revision.
 Removing the content of the build folder before compiling the new version
 solved the issue.




-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Howto create a record array from arrays without copying their data

2009-08-12 Thread Ryan May
On Wed, Aug 12, 2009 at 10:22 AM, Ralph Heinkel ra...@dont-mind.de wrote:

 Hi,

 I'm creating (actually calculating) a set of very large 1-d arrays
 (vectors), which I would like to assemble into a record array so I can
 access the data row-wise.  Unfortunately it seems that all data of my
 original 1-d arrays are getting copied in memory during that process.
 Is there a way to get around that?


I don't think so, because fundamentally numpy assumes array elements are
packed together in memory.  If you know C, record arrays are pretty much
arrays of structures.  You could try just using a python dictionary to hold
the arrays, depending on your motives behind using a record array.
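A quick way to see the difference (and the dictionary alternative) in action:

```python
import numpy as np

a = np.arange(3.0)
b = np.arange(3.0, 6.0)

# Assembling a record array copies the data into one packed buffer...
rec = np.zeros(3, dtype=[('a', float), ('b', float)])
rec['a'] = a
rec['b'] = b
rec['a'][0] = 99.0
assert a[0] == 0.0          # the original 1-d array is untouched

# ...whereas a dict of arrays just holds references to the same buffers.
d = {'a': a, 'b': b}
d['a'][0] = 99.0
assert a[0] == 99.0         # same memory, no copy
```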

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] Python 3.n and Scipy Numpy

2009-07-30 Thread Ryan May
On Thu, Jul 30, 2009 at 2:13 PM, BBands bba...@gmail.com wrote:

 Could someone point me toward some information on Scipy/Numpy and
 Python 3.1? I'd like to upgrade, but can't seem to find the path.


Scipy/Numpy have not yet been ported to Python 3.x.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] Problem with correlate

2009-06-04 Thread Ryan May
On Thu, Jun 4, 2009 at 5:14 AM, David Cournapeau courn...@gmail.com wrote:

 On Tue, Jun 2, 2009 at 10:56 PM, Ryan May rma...@gmail.com wrote:
  On Tue, Jun 2, 2009 at 5:59 AM, David Cournapeau
  da...@ar.media.kyoto-u.ac.jp wrote:
 
  Robin wrote:
   On Tue, Jun 2, 2009 at 11:36 AM, David Cournapeau courn...@gmail.com
 
   wrote:
  
   Done in r7031 - correlate/PyArray_Correlate should be unchanged, and
   acorrelate/PyArray_Acorrelate implement the conventional definitions,
  
  
   I don't know if it's been discussed before but while people are
   thinking about/changing correlate I thought I'd like to request as a
   user a matlab style xcorr function (basically with the functionality
   of the matlab version).
  
   I don't know if this is a deliberate omission, but it is often one of
   the first things my colleagues try when I get them using Python, and
   as far as I know there isn't really a good answer. There is xcorr in
   pylab, but it isn't vectorised like xcorr from matlab...
  
 
  There is one in the talkbox scikit:
 
 
 
 http://github.com/cournape/talkbox/blob/202135a9d848931ebd036b97302f1e10d7488c63/scikits/talkbox/tools/correlations.py
 
  It uses the fft, and bonus point, the file is independent of the rest of
  toolbox. There is another version which uses direct implementation (this
  is faster if you need only a few lags, and it takes less memory too).
 
  I'd be +1 on including something like this (provided it expanded to
 include
  complex-valued data).  I think it's a real need, since everyone seems to
  keep rolling their own.  I had to write my own just so that I can
 calculate
  a few lags in a vectorized fashion.

 The code in talkbox is not good enough for scipy. I made an attempt
 for scipy.signal here:


 http://github.com/cournape/scipy3/blob/b004d17d824f1c03921d9663207ee40adadc5762/scipy/signal/correlations.py

 It is reasonably fast when only a few lags are needed, both double and
 complex double are supported, and it works on arbitrary axis and lags.
 Other precisions should be easy to add, but I think I need to extend
 the numpy code generators to support cython sources to avoid code
 duplication.

 Does that fill your need ?


It would fill mine.  Would it make sense to make y default to x, so that you
can use xcorr to do the autocorrelation as:

xcorr(x)

?
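For reference, here is roughly what an FFT-based xcorr with that default looks like (a sketch of the suggestion with a made-up signature; not the talkbox or scipy implementation):

```python
import numpy as np

def xcorr(x, y=None):
    # y defaults to x, so xcorr(x) gives the autocorrelation.
    if y is None:
        y = x
    n = len(x) + len(y) - 1
    r = np.fft.ifft(np.fft.fft(x, n) * np.conj(np.fft.fft(y, n)))
    # Reorder the circularly wrapped lags to -(len(y)-1) .. +(len(x)-1),
    # matching np.correlate(x, y, mode='full').
    neg = r[-(len(y) - 1):] if len(y) > 1 else r[:0]
    r = np.concatenate([neg, r[:len(x)]])
    return r.real if np.isrealobj(x) and np.isrealobj(y) else r

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(xcorr(x), np.correlate(x, x, mode='full'))
assert np.allclose(xcorr(x, [0.0, 1.0]),
                   np.correlate(x, [0.0, 1.0], mode='full'))
```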

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Problem with correlate

2009-06-02 Thread Ryan May
On Tue, Jun 2, 2009 at 5:59 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Robin wrote:
  On Tue, Jun 2, 2009 at 11:36 AM, David Cournapeau courn...@gmail.com
 wrote:
 
  Done in r7031 - correlate/PyArray_Correlate should be unchanged, and
  acorrelate/PyArray_Acorrelate implement the conventional definitions,
 
 
  I don't know if it's been discussed before but while people are
  thinking about/changing correlate I thought I'd like to request as a
  user a matlab style xcorr function (basically with the functionality
  of the matlab version).
 
  I don't know if this is a deliberate omission, but it is often one of
  the first things my colleagues try when I get them using Python, and
  as far as I know there isn't really a good answer. There is xcorr in
  pylab, but it isn't vectorised like xcorr from matlab...
 

 There is one in the talkbox scikit:


 http://github.com/cournape/talkbox/blob/202135a9d848931ebd036b97302f1e10d7488c63/scikits/talkbox/tools/correlations.py

 It uses the fft, and bonus point, the file is independent of the rest of
 toolbox. There is another version which uses direct implementation (this
 is faster if you need only a few lags, and it takes less memory too).


I'd be +1 on including something like this (provided it expanded to include
complex-valued data).  I think it's a real need, since everyone seems to
keep rolling their own.  I had to write my own just so that I can calculate
a few lags in a vectorized fashion.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] skiprows option in loadtxt

2009-05-20 Thread Ryan May
On Wed, May 20, 2009 at 10:04 AM, Nils Wagner
nwag...@iam.uni-stuttgart.dewrote:

 Hi all,

 Is the value of skiprows in loadtxt restricted to values
 in [0-10] ?

 It doesn't work for skiprows=11.


Works for me:

s = '\n'.join(map(str,range(20)))
from StringIO import StringIO
np.loadtxt(StringIO(s), skiprows=11)

The last line yields, as expected:
  array([ 11.,  12.,  13.,  14.,  15.,  16.,  17.,  18.,  19.])

This is with 1.4.0.dev6983.  Can we see code and data file?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] loadtxt example problem ?

2009-05-04 Thread Ryan May
On Mon, May 4, 2009 at 3:06 PM, bruno Piguet bruno.pig...@gmail.com wrote:

 Hello,

   I'm new to numpy, and considering using loadtxt() to read a data file.

   As a starter, I tried the example of the doc page (
 http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html) :


  from StringIO import StringIO   # StringIO behaves like a file object
  c = StringIO("0 1\n2 3")
  np.loadtxt(c)
 I didn't get the expected answer, but:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "C:\Python25\lib\site-packages\numpy\core\numeric.py", line 725, in loadtxt
 X = array(X, dtype)
 ValueError: setting an array element with a sequence.


 (I'm using version 1.0.4 of numpy).

 I got the same problem on a Ms-Windows and a Linux Machine.

 I could run the example by adding a \n at the end of c :
  c = StringIO("0 1\n2 3\n")


 Is it the normal and expected behaviour ?

 Bruno.


It's a bug that's been fixed.  Numpy 1.0.4 is quite a bit out of date, so
I'd recommend updating to the latest (1.3).

Ryan


-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] object arrays and ==

2009-05-04 Thread Ryan May
On Mon, May 4, 2009 at 3:55 PM, David Warde-Farley d...@cs.toronto.eduwrote:

 Hi,

 Is there a simple way to compare each element of an object array to a
 single object? objarray == None, for example, gives me a single
 False. I couldn't find any reference to it in the documentation, but
 I'll admit, I wasn't quite sure where to look.


I think it might depend on some factors:

In [1]: a = np.array(['a','b'], dtype=np.object)

In [2]: a=='a'
Out[2]: array([ True, False], dtype=bool)

In [3]: a==None
Out[3]: False

In [4]: a == []
Out[4]: False

In [5]: a == ''
Out[5]: array([False, False], dtype=bool)

In [6]: a == dict()
Out[6]: array([False, False], dtype=bool)

In [7]: numpy.__version__
Out[7]: '1.4.0.dev6885'

In [8]: a == 5
Out[8]: array([False, False], dtype=bool)

In [9]: a == 5.
Out[9]: array([False, False], dtype=bool)

But based on these results, I have no idea what the factors might be.  I
know this works with datetime objects, but I'm really not sure why None and
the empty list don't work.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] Porting strategy for py3k

2009-04-23 Thread Ryan May
On Thu, Apr 23, 2009 at 9:52 AM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Apr 23, 2009 at 11:20 PM, Pauli Virtanen p...@iki.fi wrote:
  Thu, 23 Apr 2009 22:38:21 +0900, David Cournapeau kirjoitti:
  [clip]
  I looked more in detail on what would be needed to port numpy to
  py3k. In particular, I was interested at the possible strategies to keep
  one single codebase for both python 2.x and python 3.x. The first step
  is to remove all py3k warnings reported by python 2.6. A couple of
  recurrent problems
  - reduce is removed in py3k
  - print is removed
 
  Print is not removed, just changed to a function. So,
 
 print(foo)

 Yes, as reduce, they are still available, but not as builtins anymore.
 Yes, as reduce, they are still available, but not as builtins anymore.
 But replacing print is not as easy as reduce. Things like
 print "yoyo", a do not work, for example.


I think the point is that you can just change it to print("yoyo"), which
will work in both Python 2.x and 3.x.  The parentheses are just extraneous in
Python 2.x. Now, the more complicated uses of print won't be as easy to
change, but I'm not sure how prevalent their use is.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] Porting strategy for py3k

2009-04-23 Thread Ryan May
On Thu, Apr 23, 2009 at 11:23 AM, Christopher Barker
chris.bar...@noaa.govwrote:

 Ryan May wrote:
  On Thu, Apr 23, 2009 at 9:52 AM, David Cournapeau courn...@gmail.com
  But replacing print is not as easy as reduce. Things like
  print "yoyo", a do not work, for example.
 
  I think the point is that you can just change it to print("yoyo") which
  will work in both python 2.x and 3.x.

 I think he meant:

 print "yoyo", a

 which cannot be translated by adding parens:

   >>> print "yoyo", a
 yoyo [ 1.  1.  1.]
   >>> print ("yoyo", a)
 ('yoyo', array([ 1.,  1.,  1.]))


 I suppose we can write something like:

 def new_print(*args):
     print (" ".join([str(s) for s in args]))

   >>> new_print("yoyo", a)
 yoyo [ 1.  1.  1.]


 Though I'm a bit surprised that that's not how the print function is
 written in the first place (maybe it is in py3k -- I'm testing on 2.5)

 -Chris


Good point.  We could just borrow the implementation from 2.6 and in fact
just import the print function from __future__ on 2.6.  Just a thought...
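[Editor's note: a minimal sketch of that approach, assuming Python 2.6+; the `__future__` import is a no-op on Python 3.]

```python
# On Python 2.6+, this import replaces the print statement with the
# 3.x print function, so the same call works on both major versions.
from __future__ import print_function

a = [1.0, 1.0, 1.0]
print("yoyo", a)  # prints: yoyo [1.0, 1.0, 1.0]
```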

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


Re: [Numpy-discussion] Using loadtxt() twice on same file freezes python

2009-03-26 Thread Ryan May
On Thu, Mar 26, 2009 at 3:24 PM, Sander de Kievit 
dekie...@strw.leidenuniv.nl wrote:

 David Cournapeau wrote:
  On Fri, Mar 27, 2009 at 1:14 AM, Sander de Kievit
  dekie...@strw.leidenuniv.nl wrote:
  Hi,
 
  On my PC the following code freezes python:
 
  [code]
  import numpy as np
  from StringIO import StringIO
 c = StringIO("0 1\n2 3")
  np.loadtxt(c)
  np.loadtxt(c)
  [/code]
 
  Is this intentional behaviour or should I report this as a bug?
 
  Which version of numpy are you using (numpy.version.version), on which OS
 ?
 The specifics for my platform:
 Fedora release 10 (Cambridge)
 kernel: 2.6.27.19-170.2.35.fc10.i686 #1 SMP
 python: 2.5.2
 numpy: 1.2.0

 Also, if I close the file in between the two calls it works without
 problem (if I use a real file, that is).

 
  That's most definitely not expected behavior (you should get an
  exception the second time because the stream is empty - which is
  exactly what happens on my installation, but your problem may be
  platform specific).
 
  cheers,
 
  David

 Thanks for the quick replies! I'll report the bug.


Before reporting the bug, can you upgrade to 1.2.1?  I seem to remember
something about this bug and my gut tells me it got fixed in between 1.2.0
and 1.2.1.
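[Editor's note: for readers hitting this today — the second call reads from wherever the stream was left, and the behavior of reading an exhausted stream has varied across numpy versions. Rewinding the stream between calls is a sketch of a workaround.]

```python
import numpy as np
from io import StringIO

c = StringIO("0 1\n2 3")
a = np.loadtxt(c)
c.seek(0)          # rewind; the first loadtxt consumed the stream
b = np.loadtxt(c)  # reads the same two rows again
```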

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from Norman, Oklahoma, United States


[Numpy-discussion] svn and tickets email status

2009-03-16 Thread Ryan May
Hi,

What's the status on SVN and ticket email notifications?  The only messages
I've seen since the switch are the occasional spam.  Should I try
re-subscribing?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 3:05 AM, Francesc Alted fal...@pytables.org wrote:

 A Wednesday 11 March 2009, Ryan May escrigué:
  Thanks.  That's actually pretty close to what I had.  I was actually
  thinking that you were using only blas_opt and lapack_opt, since
  supposedly the [mkl] style section is deprecated.  Thus far, I cannot
  get these to work with MKL.

 Well, my configuration was thought to link with the VML integrated in
 the MKL, but I'd say that it would be similar for blas and lapack.
 What's you configuration?  What's the error you are running into?


I can get it working now with either the [mkl] section like your config or
the following config:

[DEFAULT]
include_dirs = /opt/intel/mkl/10.0.2.018/include/
library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib

[blas]
libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5

[lapack]
libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5

It's just confusing I guess because if I change blas and lapack to blas_opt
and lapack_opt, I cannot get it to work.  The only reason I even care is
that site.cfg.example leads me to believe that the *_opt sections are the
way you're supposed to add them.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 8:30 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Ryan May wrote:
 
  [DEFAULT]
  include_dirs = /opt/intel/mkl/10.0.2.018/include/
  library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib
 
  [blas]
  libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5
 
  [lapack]
  libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5
 
  It's just confusing I guess because if I change blas and lapack to
  blas_opt and lapack_opt, I cannot get it to work.


 Yes, the whole thing is very confusing; trying to understand it when I
 try to be compatible with it in numscons drove me crazy (the changes
 with default section handling in python 2.6 did not help). IMHO, we
 should get rid of all this at some point, and use something much simpler
 (one file, no sections, just straight LIBPATH + LIBS + CPPATH options),
 because the current code has gone much beyond the madness point. But it
 will break some configurations for sure.


Glad to hear it's not just me.  I was beginning to think I was being
thick-headed.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 9:02 AM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Mar 12, 2009 at 5:25 AM, Ryan May rma...@gmail.com wrote:

  That's fine.  I just wanted to make sure I didn't do something weird
 while
  getting numpy built with MKL.

 It should be fixed in r6650


Fixed for me.  I get a segfault running scipy.test(), but that's probably
due to MKL.

Thanks, David.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Mar 12, 2009 at 11:23 PM, Ryan May rma...@gmail.com wrote:

 
  Fixed for me.  I get a segfault running scipy.test(), but that's probably
  due to MKL.

 Yes, it is. Scipy run the test suite fine for me.


While scipy builds, matplotlib's basemap toolkit spits this out:

running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler
options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler
options
running build_src
building extension mpl_toolkits.basemap._proj sources
error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or
directory

Any ideas?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 10:11 AM, Francesc Alted fal...@pytables.org wrote:

 A Thursday 12 March 2009, Ryan May escrigué:
  I can get it working now with either the [mkl] section like your
  config or the following config:
 
  [DEFAULT]
  include_dirs = /opt/intel/mkl/10.0.2.018/include/
  library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib
  ^
 I see that you are using a multi-directory path here.  My understanding
 was that this is not supported by numpy.distutils, but apparently it
 worked for you (?), or if you get rid of the ':/usr/lib' trailing part
 of library_dirs it works ok too?


Well, if by multi-directory you mean the colon-separated list, this is what
is documented in site.cfg.example and used by the gentoo ebuild on my
system.  I need the /usr/lib part so that it can pick up libblas.so and
liblapack.so.  Otherwise, it won't link in MKL.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 12:00 PM, David Cournapeau courn...@gmail.com wrote:

 On Fri, Mar 13, 2009 at 12:10 AM, Ryan May rma...@gmail.com wrote:
  On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau courn...@gmail.com
  wrote:
 
  On Thu, Mar 12, 2009 at 11:23 PM, Ryan May rma...@gmail.com wrote:
 
  
   Fixed for me.  I get a segfault running scipy.test(), but that's
   probably
   due to MKL.
 
  Yes, it is. Scipy run the test suite fine for me.
 
  While scipy builds, matplotlib's basemap toolkit spits this out:
 
  running install
  running build
  running config_cc
  unifing config_cc, config, build_clib, build_ext, build commands
 --compiler
  options
  running config_fc
  unifing config_fc, config, build_clib, build_ext, build commands
 --fcompiler
  options
  running build_src
  building extension mpl_toolkits.basemap._proj sources
  error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or
  directory

 Ok, I've just backed out the changes in 6653 - let's not break everything now
 :)


Thanks, that fixed it.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


[Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
Hi,

I noticed the following in numpy/distutils/system_info.py while trying to
get numpy to build against MKL:

if cpu.is_Itanium():
    plt = '64'
    #l = 'mkl_ipf'
elif cpu.is_Xeon():
    plt = 'em64t'
    #l = 'mkl_em64t'
else:
    plt = '32'
    #l = 'mkl_ia32'

So in the autodetection for MKL, the only way to get plt (platform) set to
'em64t' is to test true for a Xeon.  This function returns false on my Core2
Duo system, even though the platform is very much 'em64t'.  I think that
check should instead read:

elif cpu.is_Xeon() or cpu.is_Core2():

Thoughts?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
On Wed, Mar 11, 2009 at 1:41 PM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Mar 12, 2009 at 3:15 AM, Ryan May rma...@gmail.com wrote:
  Hi,
 
  I noticed the following in numpy/distutils/system_info.py while trying to
  get numpy to build against MKL:
 
  if cpu.is_Itanium():
      plt = '64'
      #l = 'mkl_ipf'
  elif cpu.is_Xeon():
      plt = 'em64t'
      #l = 'mkl_em64t'
  else:
      plt = '32'
      #l = 'mkl_ia32'
 
  So in the autodetection for MKL, the only way to get plt (platform) set
 to
  'em64t' is to test true for a Xeon.  This function returns false on my
 Core2
  Duo system, even though the platform is very much 'em64t'.  I think that
  check should instead read:
 
  elif cpu.is_Xeon() or cpu.is_Core2():
 
  Thoughts?

 I think this whole code is inherently fragile. A much better solution
 is to make the build process customization easier and more
 straightforward. Auto-detection will never work well.

 David


Fair enough.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
On Wed, Mar 11, 2009 at 1:34 PM, Francesc Alted fal...@pytables.org wrote:

 A Wednesday 11 March 2009, Ryan May escrigué:
  Hi,
 
  I noticed the following in numpy/distutils/system_info.py while
  trying to get numpy to build against MKL:
 
  if cpu.is_Itanium():
      plt = '64'
      #l = 'mkl_ipf'
  elif cpu.is_Xeon():
      plt = 'em64t'
      #l = 'mkl_em64t'
  else:
      plt = '32'
      #l = 'mkl_ia32'
 
  So in the autodetection for MKL, the only way to get plt (platform)
  set to 'em64t' is to test true for a Xeon.  This function returns
  false on my Core2 Duo system, even though the platform is very much
  'em64t'.  I think that check should instead read:
 
  elif cpu.is_Xeon() or cpu.is_Core2():
 
  Thoughts?

 This may help you to see the developer's view on this subject:

 http://projects.scipy.org/numpy/ticket/994

 Cheers,

 --
 Francesc Alted



You know, I knew this sounded familiar.  If you regularly build against MKL,
can you send me your site.cfg?  I've had a lot more success getting the
build to work using the autodetection than the blas_opt and lapack_opt
sections.  Since the autodetection doesn't seem like the accepted way, I'd
love to see how to get the accepted way to actually work. :)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
On Wed, Mar 11, 2009 at 2:20 PM, Francesc Alted fal...@pytables.org wrote:

 A Wednesday 11 March 2009, Ryan May escrigué:
  You know, I knew this sounded familiar.  If you regularly build
  against MKL, can you send me your site.cfg.  I've had a lot more
  success getting the build to work using the autodetection than the
  blas_opt and lapack_opt sections.   Since the autodetection doesn't
  seem like the accepted way, I'd love to see how to get the accepted
  way to actually work. :)

 Not that I'm an expert in that sort of black magic, but the next worked
 fine for me and numexpr:

 [mkl]

 # Example for using MKL 10.0
 #library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
 #include_dirs = /opt/intel/mkl/10.0.2.018/include

 # Example for the MKL included in Intel C 11.0 compiler
 library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
 include_dirs =  /opt/intel/Compiler/11.0/074/mkl/include/

 ##the following set of libraries is suited for compilation
 ##with the GNU C compiler (gcc). Refer to the MKL documentation
 ##if you use other compilers (e.g., Intel C compiler)
 mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core


Thanks.  That's actually pretty close to what I had.  I was actually
thinking that you were using only blas_opt and lapack_opt, since supposedly
the [mkl] style section is deprecated.  Thus far, I cannot get these to work
with MKL.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


[Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-11 Thread Ryan May
Hi,

This is what I'm getting when I try to build scipy HEAD:

building library superlu_src sources
building library arpack sources
building library sc_c_misc sources
building library sc_cephes sources
building library sc_mach sources
building library sc_toms sources
building library sc_amos sources
building library sc_cdf sources
building library sc_specfun sources
building library statlib sources
building extension scipy.cluster._vq sources
error:
/home/rmay/.local/lib64/python2.5/site-packages/numpy/distutils/command/../mingw/gfortran_vs2003_hack.c:
No such file or directory

This didn't happen until I updated to *numpy* SVN HEAD.  Numpy itself is
building without errors and no tests fail on my system.  Any ideas?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


[Numpy-discussion] numpy-svn mails

2009-03-06 Thread Ryan May
Hi,

Is anyone getting mails of the SVN commits?  I've gotten 1 spam message from
that list, but no commits.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Fancy indexing question:

2009-02-24 Thread Ryan May
On Tue, Feb 24, 2009 at 1:39 PM, Christopher Barker
chris.bar...@noaa.gov wrote:

 HI all,

 I'm having a bit of trouble getting fancy indexing to do what I want.

 Say I have a 2-d array:

   >>> a
 array([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]])

 I want to extract a sub-array:

 The 1st, 3rd, and 4th rows:
   >>> i
 [1, 3, 4]

 and the 1st and 3rd columns:
   >>> j
 [1, 3]

 so I should get a 3x2 array:

 [[ 5,  7],
  [13, 15],
  [17, 19]]

 The obvious (to me!) way to do this:

   >>> a[i,j]
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ValueError: shape mismatch: objects cannot be broadcast to a single shape


You need to listen more closely to the error message you get back. :)  It's
a broadcasting issue, so why not try this:

a = np.array([[ 0,  1,  2,  3],
              [ 4,  5,  6,  7],
              [ 8,  9, 10, 11],
              [12, 13, 14, 15],
              [16, 17, 18, 19],
              [20, 21, 22, 23]])
i = np.array([1, 3, 4]).reshape(-1, 1)
j = np.array([1, 3])

a[i,j]

You need to make i and j conformable to the numpy broadcasting rules by
manually appending a size-1 dimension to i.
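[Editor's note: as a sketch, the same sub-array can also be pulled out with `np.ix_`, which builds the broadcastable index arrays for you.]

```python
import numpy as np

a = np.arange(24).reshape(6, 4)
i = [1, 3, 4]
j = [1, 3]

# np.ix_ turns i into a column vector and j into a row vector, so
# broadcasting selects the cross product of the listed rows and columns.
sub = a[np.ix_(i, j)]
# sub is [[ 5,  7],
#         [13, 15],
#         [17, 19]]
```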

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Ryan May
On Wed, Feb 18, 2009 at 8:00 PM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Feb 19, 2009 at 10:50 AM, Sturla Molden stu...@molden.no wrote:
 
  I have a shiny new computer with 8 cores and numpy still takes forever
  to compile
 
  Yes, forever/8 = forever.

 Not if you are a physician: my impression in undergrad was that
 infinity / 8 could be anything from 0 to infinity in physics :)


Not to nitpick, but this is the second time I've seen this lately:

physician == medical doctor != physicist :)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Ryan May
On Wed, Feb 18, 2009 at 8:19 PM, David Cournapeau courn...@gmail.com wrote:

 On Thu, Feb 19, 2009 at 11:07 AM, Ryan May rma...@gmail.com wrote:

 
  Not to nitpick, but this is the second time I've seen this lately:
 
  physician == medical doctor != physicist :)

 You're right of course - the French word for physicist being
 physicien, it may be one more mistake perpetuated by the French :)


:)

Well, not nearly as bad as Jerry Lewis. :)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] genloadtxt: dtype=None and unpack=True

2009-02-12 Thread Ryan May
On Wed, Feb 11, 2009 at 10:47 PM, Pierre GM pgmdevl...@gmail.com wrote:


 On Feb 11, 2009, at 11:38 PM, Ryan May wrote:

  Pierre,
 
  I noticed that using dtype=None with a heterogeneous set of data,
  trying to use unpack=True to get the columns into separate arrays
  (instead of a structured array) doesn't work.  I've attached a patch
  that, in the case of dtype=None, unpacks the fields in the final
  array into a list of separate arrays.  Does this seem like a good
  idea to you?

 Nope, as it breaks consistency: depending on some input parameters,
 you either get an array or a list. I think it's better to leave it as
 it is, maybe adding an extra line in the doc precising that
 unpack=True doesn't do anything for structured arrays.


Ah, I hadn't thought of that.  I was only thinking in terms of the behavior
of unpacking on return, not in the actual returned object.  You're right,
it's a bad idea.
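[Editor's note: a sketch of the consistent alternative on a modern numpy — with `dtype=None` the result is a structured array, and columns come out by field name instead of unpacking. The `encoding` argument is an assumption; it exists only in newer releases.]

```python
import numpy as np
from io import StringIO

s = '2,1950-02-27,35.55\n2,1951-02-19,35.27\n'
data = np.genfromtxt(StringIO(s), delimiter=',', dtype=None,
                     names=['stid', 'date', 'val'], encoding='utf-8')

# Pull columns by field name from the structured array:
stids = data['stid']
vals = data['val']
```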

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Integer cast problems

2009-02-12 Thread Ryan May
On Thu, Feb 12, 2009 at 11:21 AM, Ralph Kube ralphk...@googlemail.com wrote:

 Hi there,
 I have a little problem here with array indexing, hope you see the problem.
 I use the following loop to calculate some integrals

 import numpy as N
 from scipy.integrate import quad
 T = 1
 dt = 0.005
 L = 3
 n = 2
 ints = N.zeros([T/dt])

 for t in N.arange(0, T, dt):
     a = quad(lambda x:
              -1*(1-4*(t**4))*N.exp(-t**4)*N.exp(-x**2)*N.cos(n*N.pi*(x-L)/(2*L)),
              -L, L)[0]
     ints[int(t/dt)] = a
     print t, N.int32(t/dt), t/dt, a, ints[int(t/dt)]

 The output from the print statement looks like:

 0.14 28 28.0 2.52124867251e-16 2.52124867251e-16
 0.145 28 29.0 2.03015199575e-16 2.03015199575e-16
 0.15 30 30.0 2.40857836418e-16 2.40857836418e-16
 0.155 31 31.0 2.52191011339e-16 2.52191011339e-16

 The same happens on the ipython prompt:

 0.145 * 0.005 = 28.996
 N.int32(0.145 * 0.005) = 28

 Any ideas how to deal with this?


I'm assuming you mean 0.145 / 0.005 = 28.996

When you cast to an integer, it *truncates* the fractional part, and life
with floating point says that what should be an exact result won't
necessarily be exact.  Try using N.around.
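[Editor's note: a small sketch of the difference — the quotient is fractionally below 29 in floating point, so truncation and rounding disagree.]

```python
import numpy as np

t, dt = 0.145, 0.005
q = t / dt                # 28.999999999999996, not exactly 29

print(int(q))             # truncation gives 28
print(int(np.around(q)))  # rounding first gives the intended 29
```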

Ryan


-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] genloadtxt: dtype=None and unpack=True

2009-02-11 Thread Ryan May
Pierre,

I noticed that using dtype=None with a heterogeneous set of data, trying to
use unpack=True to get the columns into separate arrays (instead of a
structured array) doesn't work.  I've attached a patch that, in the case of
dtype=None, unpacks the fields in the final array into a list of separate
arrays.  Does this seem like a good idea to you?

Here's a test case:

from cStringIO import StringIO
s = '2,1950-02-27,35.55\n2,1951-02-19,35.27\n'
a,b,c = np.genfromtxt(StringIO(s), delimiter=',', unpack=True, missing=' ',
dtype=None)

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


genloadtxt_unpack_fields.diff
Description: Binary data


Re: [Numpy-discussion] PEP: named axis

2009-02-06 Thread Ryan May
On Fri, Feb 6, 2009 at 3:30 PM, Robert Kern robert.k...@gmail.com wrote:

 On Fri, Feb 6, 2009 at 03:22, Stéfan van der Walt ste...@sun.ac.za
 wrote:
  Hi Robert
 
  2009/2/6 Robert Kern robert.k...@gmail.com:
  This could be implemented but would require adding information to the
  NumPy array.
 
  More than that, though. Every function and method that takes an axis
  or reduces an axis will need to be rewritten. For that reason, I'm -1
  on the proposal.
 
  Are you -1 on the array dictionary, or on using it to do axis mapping?

 I'm -1 on rewriting every axis= argument to accept strings. I'm +1 on
 a generic metadata dict that does not implicitly propagate.

   I would imagine that Gael would be happier even if he had to do
 
  axis = x.meta.axis['Lateral']
  some_func(x, axis)

 That's fine with me.

  I'm of the opinion that it should never guess. We have no idea what
  semantics are being placed on the dict. Even in the case where all of
  the inputs have the same dict, the operation may easily invalidate the
  metadata. For example, a reduction on one of these axis-decorated
  arrays would make the axis labels incorrect.
 
  That's a good point.  So what would be a sane way of propagating
  meta-data?  If we don't want to make any assumptions, it becomes the
  user's responsibility to do it manually.

 I don't think there is *any* sane way of numpy propagating the user's
 metadata. The user must be the one to do it.


I'm +1 on all of what Robert said.  I've considered writing a
subclass/wrapping just so I can make metadata available while passing around
recarrays.  It'd save me a bunch of work.  I don't think there's anything
wrong with making the user propagate the dictionary.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] Argsort

2009-02-05 Thread Ryan May
Hi,

Ok, what am I missing here:

x = np.array([[4,2],[5,3]])
x[x.argsort(1)]

array([[[5, 3],
[4, 2]],

   [[5, 3],
[4, 2]]])

I was expecting:

array([[2,4],[3,5]])

Certainly not a 3D array.  What am I doing wrong?
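[Editor's note: for reference, `x[x.argsort(1)]` treats the whole index array as row indices, which is why a 3D result comes back. The intended row-wise gather needs the row numbers paired with the sorted column indices; a sketch, with `take_along_axis` as the modern (numpy >= 1.15) spelling.]

```python
import numpy as np

x = np.array([[4, 2], [5, 3]])
idx = x.argsort(1)                     # [[1, 0], [1, 0]]
rows = np.arange(x.shape[0])[:, None]  # column vector [[0], [1]]

sorted_rows = x[rows, idx]             # [[2, 4], [3, 5]]

# Equivalent, on numpy >= 1.15:
same = np.take_along_axis(x, idx, axis=1)
```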

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] genloadtxt question

2009-02-03 Thread Ryan May
Pierre,

Should the following work?

import numpy as np
from StringIO import StringIO

converter = {'date':lambda s: datetime.strptime(s,'%Y-%m-%d %H:%M:%SZ')}
data = np.ndfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'), delimiter=',',
names=['date','stid'], dtype=None, converters=converter)

Right now, it's giving me the following:

Traceback (most recent call last):
  File "check_oban.py", line 15, in <module>
    converters=converter)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/lib/io.py", line
993, in ndfromtxt
    return genfromtxt(fname, **kwargs)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/lib/io.py", line
842, in genfromtxt
    locked=True)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/lib/_iotools.py",
line 472, in update
    self.type = self._getsubdtype(func('0'))
  File "check_oban.py", line 9, in <lambda>
    lambda s: datetime.strptime(s,'%Y-%m-%d %H:%M:%SZ').replace(tzinfo=UTC)}
  File "/usr/lib64/python2.5/_strptime.py", line 330, in strptime
    (data_string, format))
ValueError: time data did not match format:  data=0  fmt=%Y-%m-%d %H:%M:%SZ

Which comes from a part of the code in updating converters where it passes the
string '0' to the converter.  Are the converters expected to handle what amounts
to bad input even though the file itself has no such problems? Specifying the
dtype doesn't appear to help either.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] genloadtxt question

2009-02-03 Thread Ryan May
Pierre GM wrote:
 On Feb 3, 2009, at 11:24 AM, Ryan May wrote:
 
 Pierre,

 Should the following work?

 import numpy as np
 from StringIO import StringIO

 converter = {'date':lambda s: datetime.strptime(s,'%Y-%m-%d %H:%M: 
 %SZ')}
 data = np.ndfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'),  
 delimiter=',',
 names=['date','stid'], dtype=None, converters=converter)
 
 Well, yes, it should work. That's indeed a problem with the  
 getsubdtype method of the converter.
 The problem is that we need to estimate the datatype of the output of  
 the converter. In most cases, trying to convert '0' works properly,  
 not in yours however. In r6338, I force the type to object if  
 converting '0' does not work. That's a patch till the next corner  
 case...

Thanks for the quick patch!  And yeah, I can't think of any better behavior.
It's actually what I ended up doing in my conversion function, so, if nothing
else, it removes the user from having to write that kind of boilerplate code.
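[Editor's note: the boilerplate in question can be sketched as a converter that tolerates the dummy '0' dtype probe; the `None` fallback is an assumption — pick whatever sentinel suits the data.]

```python
from datetime import datetime

def parse_date(s):
    # genfromtxt may call the converter with the dummy string '0'
    # while inferring the output dtype, so fail soft on bad input.
    try:
        return datetime.strptime(s, '%Y-%m-%d %H:%M:%SZ')
    except ValueError:
        return None
```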

Thanks,

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Operations on masked items

2009-02-03 Thread Ryan May
Ryan May wrote:
 Pierre,
 
 I know you did some preliminary work on helping to make sure that doing
 operations on masked arrays doesn't change the underlying data.  I ran into 
 the
 following today.
 
 import numpy as np
 a = np.ma.array([1,2,3], mask=[False, True, False])
 b = a * 10
 c = 10 * a
 print b.data # Prints [10 2 30] Good!
 print c.data # Prints [10 10 30] Oops.
 
 I tracked it down to __call__ on the _MaskedBinaryOperation class.  If 
 there's a
 mask on the data, you use:
 
   result = np.where(m, da, self.f(da, db, *args, **kwargs))
 
 You can see that if a (and hence da) is a scalar, your masked values end up 
 with
 the value of the scalar.  If this is getting too hairy to handle not touching
 data, I understand.  I just thought I should point out the inconsistency here.

Well, I guess I hit send too soon.  Here's one easy solution (consistent with
what you did for __radd__), change the code for __rmul__ to do:

return multiply(self, other)

instead of:

return multiply(other, self)

That fixes it for me, and I don't see how it would break anything.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] Operations on masked items

2009-02-03 Thread Ryan May
Pierre,

I know you did some preliminary work on helping to make sure that doing
operations on masked arrays doesn't change the underlying data.  I ran into the
following today.

import numpy as np
a = np.ma.array([1,2,3], mask=[False, True, False])
b = a * 10
c = 10 * a
print b.data # Prints [10 2 30] Good!
print c.data # Prints [10 10 30] Oops.

I tracked it down to __call__ on the _MaskedBinaryOperation class.  If there's a
mask on the data, you use:

result = np.where(m, da, self.f(da, db, *args, **kwargs))

You can see that if a (and hence da) is a scalar, your masked values end up with
the value of the scalar.  If this is getting too hairy to handle not touching
data, I understand.  I just thought I should point out the inconsistency here.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Operations on masked items

2009-02-03 Thread Ryan May
Pierre GM wrote:
 On Feb 3, 2009, at 4:00 PM, Ryan May wrote:
 Well, I guess I hit send too soon.  Here's one easy solution  
 (consistent with
 what you did for __radd__), change the code for __rmul__ to do:

  return multiply(self, other)

 instead of:

  return multiply(other, self)

 That fixes it for me, and I don't see how it would break anything.
 
 Good call, but once again: Thou shalt not put trust in ye masked  
 values [1].
 
   a = np.ma.array([1,2,3],mask=[0,1,0])
   b = np.ma.array([10, 20, 30], mask=[0,1,0])
   (a*b).data
 array([10,  2, 90])
   (b*a).data
 array([10, 20, 90])
 
 So yes, __mul__ is not commutative when you deal w/ masked arrays (at  
 least, when you try to access the data under a mask). Nothing I can  
 do. Remember that preventing the underlying data to be modified is  
 NEVER guaranteed...

Fair enough.
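
A quick sketch of Pierre's point: the unmasked entries of a*b and b*a always agree, but the values left sitting under the mask are not guaranteed to (exact under-mask values vary by NumPy version, so only the compressed result is checked here):

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[0, 1, 0])
b = np.ma.array([10, 20, 30], mask=[0, 1, 0])
# The well-defined (unmasked) part of the product:
print((a * b).compressed())            # [10 90]
# The raw buffers may disagree under the mask -- don't rely on them.
print((a * b).data, (b * a).data)
```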

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Fortran binary files and numpy/scipy

2009-02-02 Thread Ryan May
Nils Wagner wrote:
 On Mon, 2 Feb 2009 13:39:32 +0100
  Matthieu Brucher matthieu.bruc...@gmail.com wrote:
 Hi,

 There was a discussion about this last week. You can find it int he
 archives ;)

 Matthieu
  
 Hi Matthieu,
 
 Sorry but I missed that.
 Anyway I have some trouble with my short example.
 
 g77 -c binary_fortran.f
 g77 -o io binary_fortran.o
 ./io
 
  11 254  254.
  12 253  126.
  13 252  84.
  14 251  62.
  15 250  50.
  16 249  41.
  17 248  35.
  18 247  30.
  19 246  27.
  20 245  24.
 
 python -i read_fortran.py
 
 a
 array([(16, 1090921693195, 254.0), (16, 16, 5.3686493512014268e-312),
(463856687870322, 16, 7.9050503334599447e-323),
(1082331758605, 4635611391447793664, 7.9050503334599447e-323),
(16, 1078036791310, 62.0), (16, 16, 5.3049894774872906e-312),
(4632233691727265792, 16, 7.9050503334599447e-323),
(1069446856720, 4630967054332067840, 7.9050503334599447e-323),
(16, 1065151889425, 35.0), (16, 16, 5.2413296037731544e-312),
(4629137466983448576, 16, 7.9050503334599447e-323),
(1056561954835, 4628293042053316608, 7.9050503334599447e-323),
(16, 1052266987540, 24.0)],
   dtype=[('irow', 'i8'), ('icol', 'i8'), ('value', 'f8')])
 
 How can I fix the problem ?
 

Every write statement in Fortran first writes out the number of bytes that will
follow, *then* the actual data.  So, for instance, the first write to file in
your program will write the bytes corresponding to these values:

16 X(1) Y(1) Z(1)

The 16 comes from the size of 2 ints and 1 double.  Since you're always writing
out the 3 values, and they're always the same size, try adding another integer
column as the first field in your array.
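
The whole round trip can be sketched end-to-end. This fakes the Fortran output from Python (assuming 4-byte record markers, as g77 writes on 32-bit platforms; the filename and field names are illustrative), then reads it back with the marker as an explicit field:

```python
import struct
import numpy as np

# One unformatted record: 4-byte length, payload (int+int+double = 16
# bytes), then the same 4-byte length again.
payload = struct.pack('<iid', 11, 254, 254.0)
with open('bin.dat', 'wb') as f:
    f.write(struct.pack('<i', len(payload)) + payload +
            struct.pack('<i', len(payload)))

# Read it back, keeping both record markers as explicit fields.
dt = np.dtype([('pre', '<i4'), ('irow', '<i4'), ('icol', '<i4'),
               ('value', '<f8'), ('post', '<i4')])
a = np.fromfile('bin.dat', dtype=dt)
print(a['irow'][0], a['icol'][0], a['value'][0])  # 11 254 254.0
```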

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Fortran binary files and numpy/scipy

2009-02-02 Thread Ryan May
Nils Wagner wrote:
 On Mon, 02 Feb 2009 10:17:13 -0600
   Ryan May rma...@gmail.com wrote:
 Every write statement in fortran first writes out the 
 number of bytes that will
 follow, *then* the actual data.  So, for instance, the 
 first write to file in
 your program will write the bytes corresponding to these 
 values:

 16 X(1) Y(1) Z(1)

 The 16 comes from the size of 2 ints and 1 double. 
 Since you're always writing
 out the 3 values, and they're always the same size, try 
 adding another integer
 column as the first field in your array.

 Ryan

 Hi Ryan,
 
 I have modified the python script.
 
 import numpy as np
 fname = open(bin.dat,'rb')
 dt = 
 np.dtype([('isize',int),('irow',int),('icol',int),('value',float)])
 a  = np.fromfile(fname,dtype=dt)
 
 
 a
 array([(16, 1090921693195, 4643140847074803712, 
 7.9050503334599447e-323),
 (16, 1086626725900, 463856687870322, 
 7.9050503334599447e-323),
 (16, 1082331758605, 4635611391447793664, 
 7.9050503334599447e-323),
 (16, 1078036791310, 4633922541587529728, 
 7.9050503334599447e-323),
 (16, 1073741824015, 4632233691727265792, 
 7.9050503334599447e-323),
 (16, 1069446856720, 4630967054332067840, 
 7.9050503334599447e-323),
 (16, 1065151889425, 4630122629401935872, 
 7.9050503334599447e-323),
 (16, 1060856922130, 4629137466983448576, 
 7.9050503334599447e-323),
 (16, 1056561954835, 4628293042053316608, 
 7.9050503334599447e-323),
 (16, 1052266987540, 4627448617123184640, 
 7.9050503334599447e-323)],
dtype=[('isize', 'i8'), ('irow', 'i8'), ('icol', 
 'i8'), ('value', 'f8')])
 
 Is this a 64-bit problem ?
 

I don't know if it's a 64-bit problem per se, so much as a disagreement between
Fortran and numpy.  Numpy is making the integer fields 8 bytes, while in
Fortran they're only 4 bytes.  When constructing your dtype, use np.int32 or
'i4' for your type for the integer fields, and see if that fixes it.
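
One quick way to see the mismatch is to compare record sizes for the two dtypes (np.int64 is used here to stand in for the platform-default int on the 64-bit machine in question):

```python
import numpy as np

# Default ints parse as 8 bytes each; Fortran wrote 4-byte ints.
dt_default = np.dtype([('irow', np.int64), ('icol', np.int64),
                       ('value', np.float64)])
dt_fortran = np.dtype([('irow', 'i4'), ('icol', 'i4'), ('value', 'f8')])
print(dt_default.itemsize, dt_fortran.itemsize)  # 24 16
```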

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Fortran binary files and numpy/scipy

2009-02-02 Thread Ryan May
Nils Wagner wrote:
 Is this a 64-bit problem ?

 I don't know if it's a 64-bit problem per-se, so much as 
 a disagreement between
 fortran and numpy.  Numpy is making the size of the 
 integer fields 8 bytes, while
 in Fortran, they're only 4 bytes.  When constructing 
 your dtype, use np.int32 or
 'i4' for your type for the integer fields, and see if 
 that fixes it.

 
 dt = 
 np.dtype([('isize','int32'),('irow','int32'),('icol','int32'),('value','float')])
 
 
 a
 array([(16, 0, 11, 1.2549267404367662e-321),
 (1081065472, 16, 0, 7.9050503334599447e-323),
 (12, 253, 0, 3.4485523805914514e-313),
 (0, 16, 0, 5.3474293932967148e-312),
 (0, 1079312384, 16, 3.3951932655444357e-313), (0, 
 14, 251, 62.0),
 (16, 0, 16, 3.1829936864479085e-313),
 (250, 0, 1078525952, 7.9050503334599447e-323),
 (16, 0, 16, 1.2302234581447039e-321),
 (1078231040, 16, 0, 7.9050503334599447e-323),
 (17, 248, 0, 3.4484552433329538e-313),
 (0, 16, 0, 5.2413296037731544e-312),
 (0, 1077805056, 16, 3.3951932655444357e-313), (0, 
 19, 246, 27.0),
 (16, 0, 16, 4.2439915819305446e-313),
 (245, 0, 1077411840, 7.9050503334599447e-323)],
dtype=[('isize', 'i4'), ('irow', 'i4'), ('icol', 
 'i4'), ('value', 'f8')])
 

Maybe on 64-bit machines the record length is written as 64 bits instead of 32
(note that the first 12 bytes of the file are 16, 0, 11).  Try:

dt =
np.dtype([('isize','int64'),('irow','int32'),('icol','int32'),('value','float')])

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Fortran binary files and numpy/scipy

2009-02-02 Thread Ryan May
Nils Wagner wrote:
 On Mon, 02 Feb 2009 14:07:35 -0600
   Ryan May rma...@gmail.com wrote:
 Nils Wagner wrote:
 Is this a 64-bit problem ?

 I don't know if it's a 64-bit problem per-se, so much as 
 a disagreement between
 fortran and numpy.  Numpy is making the size of the 
 integer fields 8 bytes, while
 in Fortran, they're only 4 bytes.  When constructing 
 your dtype, use np.int32 or
 'i4' for your type for the integer fields, and see if 
 that fixes it.

 dt = 
 np.dtype([('isize','int32'),('irow','int32'),('icol','int32'),('value','float')])


 a
 array([(16, 0, 11, 1.2549267404367662e-321),
 (1081065472, 16, 0, 7.9050503334599447e-323),
 (12, 253, 0, 3.4485523805914514e-313),
 (0, 16, 0, 5.3474293932967148e-312),
 (0, 1079312384, 16, 3.3951932655444357e-313), 
 (0, 
 14, 251, 62.0),
 (16, 0, 16, 3.1829936864479085e-313),
 (250, 0, 1078525952, 7.9050503334599447e-323),
 (16, 0, 16, 1.2302234581447039e-321),
 (1078231040, 16, 0, 7.9050503334599447e-323),
 (17, 248, 0, 3.4484552433329538e-313),
 (0, 16, 0, 5.2413296037731544e-312),
 (0, 1077805056, 16, 3.3951932655444357e-313), 
 (0, 
 19, 246, 27.0),
 (16, 0, 16, 4.2439915819305446e-313),
 (245, 0, 1077411840, 7.9050503334599447e-323)],
dtype=[('isize', 'i4'), ('irow', 'i4'), 
 ('icol', 
 'i4'), ('value', 'f8')])

 Maybe on 64-bit machines, the number of bytes is 64-bits 
 instead of 32 (see the
 fact that the first 12 bytes of the file are 16 0 11. 
 Try:

 dt =
 np.dtype([('isize','int64'),('irow','int32'),('icol','int32'),('value','float')])

 Ryan

 
 Strange
 
 a
 array([(16, 11, 254, 254.0), (16, 16, 0, 
 5.3686493512014268e-312),
 (463856687870322, 16, 0, 
 7.9050503334599447e-323),
 (1082331758605, 0, 1079312384, 
 7.9050503334599447e-323),
 (16, 14, 251, 62.0), (16, 16, 0, 
 5.3049894774872906e-312),
 (4632233691727265792, 16, 0, 
 7.9050503334599447e-323),
 (1069446856720, 0, 1078231040, 
 7.9050503334599447e-323),
 (16, 17, 248, 35.0), (16, 16, 0, 
 5.2413296037731544e-312),
 (4629137466983448576, 16, 0, 
 7.9050503334599447e-323),
 (1056561954835, 0, 1077608448, 
 7.9050503334599447e-323),
 (16, 20, 245, 24.0)],
dtype=[('isize', 'i8'), ('irow', 'i4'), ('icol', 
 'i4'), ('value', 'f8')])
   

Apparently I was slightly off on the details (it's been a while since I had to
deal with this nonsense).  The record length is written before *and* after the
actual data.  So the following should work:

dt = np.dtype([('isize', 'i8'), ('irow', 'i4'), ('icol', 'i4'),
               ('value', 'f8'), ('isize2', 'i8')])

Weird.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] minor improvment to ones

2009-01-30 Thread Ryan May
Sturla Molden wrote:
 On 1/30/2009 2:18 PM, Neal Becker wrote:
 A nit, but it would be nice if 'ones' could fill with a value other than 1.

 Maybe an optional val= keyword?

 
 I am -1 on this. Ones should fill with ones, zeros should fill with 
 zeros. Anything else is counter-intuitive. Calling numpy.ones to fill 
 with fives makes no sense to me. But I would be +1 on having a function 
 called numpy.values or numpy.fill that would create and fill an ndarray 
 with arbitrary values.

I completely agree here.
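
For the record, the fill-with-an-arbitrary-value pattern is already short without a dedicated helper:

```python
import numpy as np

# Two equivalent ways to build a constant-filled array:
a = np.empty(5)
a.fill(5.0)            # allocate, then fill in place
b = np.ones(5) * 5.0   # the ones-based spelling under discussion
print(a.tolist(), b.tolist())
```

(Later NumPy versions added np.full for exactly this use case.)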

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] minor improvment to ones

2009-01-30 Thread Ryan May
Christopher Barker wrote:
 On 1/30/2009 3:22 PM, Neal Becker wrote:

 Now what would be _really_ cool is a special array type that would 
 represent 
 a constant array without wasting memory.
 
 Can't you do that with scary stride tricks? I think I remember some 
 discussion of this a while back.

I think that's right, but at that point, what gain is that over using a regular
constant and relying on numpy's broadcasting?
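
The stride trick being alluded to is a sketch like this: store one element and view it with zero strides so every index aliases the same memory. It behaves like a full array in reductions, but writing through the view would change "every" element at once, and plain scalar broadcasting covers most of the same ground:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# One stored element presented as a 3x4 "constant array" via zero strides.
x = np.array([5.0])
const = as_strided(x, shape=(3, 4), strides=(0, 0))
print(const.shape, const.sum())  # (3, 4) 60.0
```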

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Ryan May
David Froger wrote:
 import numpy as np
 
 nx,ny = 2,5
 
 fourBytes = np.fromfile('uxuyp.bin', count=1, dtype=np.float32)
 ux = np.fromfile('uxuyp.bin', count=nx*ny,
 dtype=np.float32).reshape((nx,ny), order='F')
 
 print ux
 #===
 
 I get :
 
 [[  1.12103877e-43   1.1100e+02   1.1200e+02   1.1300e+02
 1.1400e+02]
  [  1.0100e+02   1.0200e+02   1.0300e+02   1.0400e+02
 1.0500e+02]]
 
 
 this function do the trick, but is it optimized?
 
 #===
 def lread(f, fourBeginning, fourEnd, *arrays):
     """Read a Fortran binary file in little-endian."""
     from struct import unpack
     if fourBeginning: f.seek(4, 1)
     for array in arrays:
         for elt in xrange(array.size):
             transpose(array).flat[elt] = \
                 unpack(array.dtype.char, f.read(array.itemsize))[0]
     if fourEnd: f.seek(4, 1)
 #===

I'm not sure whether or not it's optimized, but I can tell you that the
mystery 4 bytes are the number of bytes that Fortran wrote out, followed by that
number of bytes of data.
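
A vectorized alternative to the element-by-element unpack loop above (this sketch fakes the Fortran file from Python, assuming a single record with 4-byte markers; the filename and values are illustrative):

```python
import struct
import numpy as np

nx, ny = 2, 5
data = np.arange(101.0, 111.0, dtype=np.float32)

# Simulate the Fortran output: 4-byte length, payload, 4-byte length.
with open('uxuyp.bin', 'wb') as f:
    marker = struct.pack('<i', data.nbytes)
    f.write(marker + data.tobytes() + marker)

# Skip the leading record marker, then read the payload in one call.
with open('uxuyp.bin', 'rb') as f:
    f.seek(4)
    ux = np.fromfile(f, count=nx * ny,
                     dtype=np.float32).reshape((nx, ny), order='F')
print(ux[0, 0], ux[-1, -1])  # 101.0 110.0
```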

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

