Thanks to everyone supporting this. I wish I could attend this year,
and I will be making it a point to attend next year. I am very
grateful to be able to catch the talks at this year's conference.
Thanks!
Chris
On Wed, Aug 12, 2009 at 6:27 PM, Fernando Perez wrote:
> Hi all,
>
> as you may recall, there have recently been a number of requests for
> videotaping the conference.
> 2009/8/12 Robert Kern :
> On Sat, Aug 8, 2009 at 21:33, Tom Kuiper wrote:
>> There is something curious here. The second flush() fails. Can anyone
>> explain this?
>
> numpy.append() does not append values in-place. It is just a
> convenience wrapper for numpy.concatenate().
Meaning that a copy is made?
Thanks David, I'll look into it now.
Regarding the allocation/deallocation times I think that is not an issue for
me. The chunks are generated by a fortran routine that takes several minutes to
run (I am collecting a few thousand points before saving to disk). They are
approximately the same size.
On 12-Aug-09, at 7:11 PM, Juan Fiol wrote:
> Hi, I finally decided on the pytables approach because it will be
> easier to work with the data later. Now, I know this is not the right
> place, but maybe I can get some quick pointers. I've calculated a
> numpy array of about 20 columns and a few thousand rows at each time.
Hi, I finally decided on the pytables approach because it will be easier to
work with the data later. Now, I know this is not the right place, but maybe I
can get some quick pointers. I've calculated a numpy array of about 20 columns
and a few thousand rows at each time. I'd like to append all the rows
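Whatever the on-disk format ends up being, a numpy-only way to accumulate the per-step chunks (the array shapes below are made up for illustration) is to collect them in a list and concatenate once, rather than growing an array with np.append inside the loop:

```python
import numpy as np

# Hypothetical chunks: 20 columns, a thousand rows per step.
chunks = [np.zeros((1000, 20)) for _ in range(5)]

# Concatenating once at the end avoids the repeated full copies that
# growing an array with np.append on every iteration would incur.
data = np.concatenate(chunks, axis=0)  # shape (5000, 20)
```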
Hi all,
as you may recall, there have recently been a number of requests for
videotaping the conference.
I am very happy to announce that we will indeed have full video
coverage this year of both tutorial tracks as well as the main talks
(minus any specific talk where a speaker may object to being taped).
I suspect I am trying to do something similar... I would like to create a
mask where I have data. In essence, I need to return True where an (x, y)
pair matches a (lon, lat) pair.
I suppose a setmember solution may somehow be more elegant, but this is what
I've worked up for now... suggestions?
def genData
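For the record, the pair-membership mask can indeed be done with 1-d set membership (setmember1d in the numpy of the day; np.isin in later versions). The arrays below are made-up stand-ins for the poster's data; packing each (x, y) pair into one complex number reduces the 2-d problem to a 1-d one:

```python
import numpy as np

# Made-up grids standing in for the poster's x, y and lon, lat arrays.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 2.0, 3.0])
lon = np.array([1.0, 3.0])
lat = np.array([1.0, 3.0])

# Pack each coordinate pair into a single complex value so that pair
# membership becomes ordinary 1-d set membership.
xy = x + 1j * y
lonlat = lon + 1j * lat
mask = np.isin(xy, lonlat)  # True where (x, y) matches some (lon, lat)
```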
We should also talk to Ondrej about this at SciPy. Both sympy (through
mpmath) and mpmath have matplotlib-based function plotting. I don't think
it is adaptive, but I know mpmath can handle singularities. Also, Ondrej is
doing his graduate work with a group that does adaptive finite
element methods.
On Wed, Aug 12, 2009 at 4:28 AM, John Hunter wrote:
> We would like to add function plotting to mpl, but to do this right we
> need to be able to adaptively sample a function evaluated over an
> interval so that some tolerance condition is satisfied, perhaps with
> both a relative and absolute error tolerance condition.
Someone posts on offtopic.com
>
> (1) Extend the work of others (in this case Luca Citi and Robert Kern)
> (2) File a ticket
> (3) ???
> (4) Profit
On Wed, Aug 12, 2009 at 8:53 AM, Charles R Harris wrote:
>
> On Wed, Aug 12, 2009 at 9:29 AM, Scott Sinclair wrote:
>>
>> 2009/8/12 Keith Goodman :
>> > On Wed, Aug 12, 2009 at 7:24 AM, Keith Goodman wrote:
>> >> On Wed, Aug 12, 2009 at 1:31 AM, Lars Bittrich wrote:
>> >>>
>> >>> a colleague made me aware of a speed issue with numpy.identity.
Hi,
I am working on a *very* simple Python interface to ScaLAPACK using the
NumPy C-API. I am not
using f2py at all.
Simple question:
How can I copy a C-order NumPy array into a Fortran-order NumPy array within
the C-API?
(This is trivial in Python, it is simply A = A.copy("Fortran"))
I would li
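At the Python level the equivalent is np.asfortranarray (or A.copy(order='F')); at the C level, PyArray_NewCopy with NPY_FORTRANORDER should be the analogous call, though that's worth checking against the C-API docs. A quick Python-level sketch of what the copy does:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)      # C-contiguous by default
f = np.asfortranarray(a)            # Fortran-ordered copy of the data

# Same values, different memory layout:
same_values = bool((a == f).all())
layouts = (a.flags['C_CONTIGUOUS'], f.flags['F_CONTIGUOUS'])
```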
On Mon, Aug 10, 2009 at 14:19, Maria Liukis wrote:
> Hello everybody,
> I'm using following versions of Scipy and Numpy packages:
> >>> scipy.__version__
> '0.7.1'
> >>> np.__version__
> '1.3.0'
> My code uses boolean array to filter 2-dimensional array which sometimes
> happens to be an empty array.
On Fri, Aug 7, 2009 at 07:15, Nanime Puloski wrote:
> But if it were an unsigned int64, it should be able to hold 2**64 or at
> least 2**64-1.
> Am I correct?
There is no numpy.sin() implementation for uint64s, just the floating
point types.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
On Fri, Aug 7, 2009 at 13:05, Tom Kuiper wrote:
> If this appears twice, forgive me. I sent it previously (7:13 am PDT) via a
> browser interface to JPL's Office Outlook. I have doubts about this
> system. This time, from Iceweasel through our SMTP server.
>
> There are two things I'd like to do
On Fri, Aug 7, 2009 at 23:53, Dr. Phillip M. Feldman wrote:
>
> I'd like to be able to make a slice of a 3-dimensional array, doing something
> like the following:
>
> Y= X[A, B, C]
>
> where A, B, and C are lists of indices. This works, but has an unexpected
> side-effect. When A, B, or C is a len
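The message is cut off, but the usual surprise with Y = X[A, B, C] is that the index lists are matched up elementwise rather than forming all combinations; np.ix_ gives the outer-product behaviour. A small sketch with made-up data:

```python
import numpy as np

X = np.arange(24).reshape(2, 3, 4)   # made-up 3-d array
A, B, C = [0, 1], [1, 2], [0, 3]     # made-up index lists

# Elementwise ("zipped") fancy indexing: picks X[0,1,0] and X[1,2,3].
elementwise = X[A, B, C]             # shape (2,)

# Open-mesh indexing: every combination from A x B x C.
block = X[np.ix_(A, B, C)]           # shape (2, 2, 2)
```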
(copied from the lengthy unicode thread in scipy-dev, so it doesn't get lost)
this looks like a bug? or is it a known limitation that chararrays
cannot be 0-d?
>>> b0 = np.array(u'\xe9', '<U1').view(np.chararray)
>>> print b0.encode('cp1252')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    print b0.encode('cp1252')
On Sat, Aug 8, 2009 at 21:33, Tom Kuiper wrote:
> There is something curious here. The second flush() fails. Can anyone
> explain this?
numpy.append() does not append values in-place. It is just a
convenience wrapper for numpy.concatenate().
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
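A minimal illustration of Robert's point: np.append returns a freshly concatenated array and never modifies its argument, so the result has to be rebound.

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.append(a, 4)   # new array; `a` itself is untouched here

# To "grow" a, rebind the name to the returned array:
a = np.append(a, 4)
```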
On Mon, Aug 10, 2009 at 10:08, Rich E wrote:
> Dear all,
> I am having a few issues with indexing in numpy and wondered if you could help
> me out.
> If I define an array
> a = zeros((4))
> a
> array([ 0., 0., 0., 0.])
>
> Then I try to reference a point beyond the bounds of the array
>
> a[4]
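The snippet ends right at the interesting line; what follows is a guess at the point under discussion: scalar indexing past the end raises IndexError, while slicing past the end silently clips.

```python
import numpy as np

a = np.zeros(4)
try:
    a[4]                  # scalar index past the end: IndexError
    raised = False
except IndexError:
    raised = True

clipped = a[2:10]         # slice past the end: silently clipped to a[2:4]
```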
On Wed, Aug 12, 2009 at 11:28 AM, Ryan May wrote:
> On Wed, Aug 12, 2009 at 10:22 AM, Ralph Heinkel wrote:
>>
>> Hi,
>>
>> I'm creating (actually calculating) a set of very large 1-d arrays
>> (vectors), which I would like to assemble into a record array so I can
>> access the data row-wise. Unfortunately it seems that all data of my
>> original 1-d arrays are getting copied in memory during that process.
On Wed, Aug 12, 2009 at 9:29 AM, Scott Sinclair wrote:
> 2009/8/12 Keith Goodman :
> > On Wed, Aug 12, 2009 at 7:24 AM, Keith Goodman wrote:
> >> On Wed, Aug 12, 2009 at 1:31 AM, Lars Bittrich wrote:
> >>>
> >>> a colleague made me aware of a speed issue with numpy.identity. Since
> >>> he was using numpy.diag(numpy.ones(N)) before, he expected identity to
> >>> be at least as fast as diag.
On Fri, Aug 7, 2009 at 13:54, T J wrote:
> The reduce function of ufunc of a vectorized function doesn't seem to
> respect the dtype.
>
def a(x,y): return x+y
b = vectorize(a)
c = array([1,2])
b(c, c) # use once to populate b.ufunc
d = b.ufunc.reduce(c)
c.dtype, type(d)
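The ufunc that vectorize builds internally comes from np.frompyfunc, which always produces an object-dtype ufunc, so its reduce returns a plain Python object rather than a scalar of the input dtype. That behaviour can be reproduced directly (the b.ufunc attribute above is an implementation detail):

```python
import numpy as np

def a(x, y):
    return x + y

# frompyfunc wraps an arbitrary Python function as an object-dtype ufunc.
uf = np.frompyfunc(a, 2, 1)
c = np.array([1, 2])

d = uf.reduce(c)   # a plain Python int, not a numpy scalar of c.dtype
```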
2009/8/12 Keith Goodman :
> On Wed, Aug 12, 2009 at 7:24 AM, Keith Goodman wrote:
>> On Wed, Aug 12, 2009 at 1:31 AM, Lars Bittrich wrote:
>>>
>>> a colleague made me aware of a speed issue with numpy.identity. Since he was
>>> using numpy.diag(numpy.ones(N)) before, he expected identity to be at least
>>> as fast as diag.
On Wed, Aug 12, 2009 at 10:22 AM, Ralph Heinkel wrote:
> Hi,
>
> I'm creating (actually calculating) a set of very large 1-d arrays
> (vectors), which I would like to assemble into a record array so I can
> access the data row-wise. Unfortunately it seems that all data of my
> original 1-d arrays are getting copied in memory during that process.
Hi,
I'm creating (actually calculating) a set of very large 1-d arrays
(vectors), which I would like to assemble into a record array so I can
access the data row-wise. Unfortunately it seems that all data of my
original 1-d arrays are getting copied in memory during that process.
Is there a way around this?
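A record array stores its fields interleaved in one buffer, so assembling one from separate column vectors necessarily copies them. The sketch below (made-up column names and sizes) shows that the source vectors are left untouched by the assembly:

```python
import numpy as np

# Made-up stand-ins for the large computed vectors.
x = np.arange(5, dtype=np.float64)
y = 10.0 * np.arange(5, dtype=np.float64)

# fromarrays copies each column into the record array's own
# interleaved storage.
rec = np.rec.fromarrays([x, y], names='x,y')

# Writing through `rec` therefore does not touch the source vectors:
rec.x[0] = 99.0
```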
On Wed, Aug 12, 2009 at 7:24 AM, Keith Goodman wrote:
> On Wed, Aug 12, 2009 at 1:31 AM, Lars Bittrich wrote:
>> Hi,
>>
>> a colleague made me aware of a speed issue with numpy.identity. Since he was
>> using numpy.diag(numpy.ones(N)) before, he expected identity to be at least
>> as fast as diag.
On Wed, Aug 12, 2009 at 1:31 AM, Lars Bittrich wrote:
> Hi,
>
> a colleague made me aware of a speed issue with numpy.identity. Since he was
> using numpy.diag(numpy.ones(N)) before, he expected identity to be at least as
> fast as diag. But that is not the case.
>
> We found that there was a discussion on the list (July 20th; "My identity"
> by Keith Goodman).
On Wed, Aug 12, 2009 at 3:12 AM, Danny Handoko wrote:
> Dear all,
>
> We are trying to use numpy.histogram in combination with matplotlib. We are
> using numpy 1.3.0, but a somewhat older matplotlib version, 0.91.2.
> Matplotlib's axes.hist() function calls numpy.histogram, passing
> through the 'normed' parameter.
We would like to add function plotting to mpl, but to do this right we
need to be able to adaptively sample a function evaluated over an
interval so that some tolerance condition is satisfied, perhaps with
both a relative and absolute error tolerance condition. I am a bit
out of my area of competence here.
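One standard approach (a rough sketch only, not anything from mpl): recursively bisect each interval and keep subdividing wherever the function value at the midpoint differs from the chord's midpoint by more than the tolerance.

```python
import numpy as np

def adaptive_sample(f, a, b, tol=1e-3, max_depth=12):
    """Sample f on [a, b], refining where a straight chord is a poor fit."""
    def recurse(x0, x1, f0, f1, depth):
        xm = 0.5 * (x0 + x1)
        fm = f(xm)
        # Accept the segment when the chord midpoint is within tol of f(xm).
        if depth == 0 or abs(fm - 0.5 * (f0 + f1)) < tol:
            return [(x0, f0), (xm, fm)]
        return (recurse(x0, xm, f0, fm, depth - 1)
                + recurse(xm, x1, fm, f1, depth - 1))

    pts = recurse(a, b, f(a), f(b), max_depth) + [(b, f(b))]
    xs, ys = zip(*pts)
    return np.array(xs), np.array(ys)
```

A production version would also want the relative tolerance mentioned above and some guard against singularities, but the recursion skeleton is the same.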
Hi,
a colleague made me aware of a speed issue with numpy.identity. Since he was
using numpy.diag(numpy.ones(N)) before, he expected identity to be at least as
fast as diag. But that is not the case.
We found that there was a discussion on the list (July 20th; "My identity" by
Keith Goodman).
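For context, the three spellings build the same matrix; the thread is about which construction is fastest, not about correctness:

```python
import numpy as np

n = 4
a = np.identity(n)          # the function under discussion
b = np.diag(np.ones(n))     # the colleague's original spelling
c = np.eye(n)               # a third equivalent

all_equal = bool((a == b).all() and (a == c).all())
```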
Dear all,
We are trying to use numpy.histogram in combination with matplotlib. We are
using numpy 1.3.0, but a somewhat older matplotlib version, 0.91.2.
Matplotlib's axes.hist() function calls numpy.histogram, passing through
the 'normed' parameter. However, this version of matplotlib uses '0
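In later numpy releases the normed argument was replaced by density=True (worth checking against the exact versions in play here); with it, the histogram integrates to 1:

```python
import numpy as np

data = np.array([0.5, 1.5, 1.7, 2.5])
counts, edges = np.histogram(data, bins=2, density=True)

# With density=True the total area under the histogram is 1.
area = float((counts * np.diff(edges)).sum())
```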