On Thu, Jan 22, 2009 at 17:09, Wes McKinney wrote:
> Windows XP, Pentium D, Python 2.5.2
I can replicate the negative numbers on my Windows VM. I'll take a look at it.
Wrote profile results to foo.py.lprof
Timer unit: 4.17601e-010 s
File: foo.py
Function: f at line 1
Total time: -3.02963 s
Lin
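
Wes's "perhaps getting some overflow?" guess further down the thread is one way a negative total like the -3.02963 s above can appear. A minimal sketch (made-up values, not line_profiler internals) of how a wrapping counter turns into a negative elapsed time:

# Hypothetical illustration only: if raw timer readings wrap around a fixed
# width (here a 32-bit counter), end - start comes out negative even though
# real time passed.
WRAP = 2 ** 32

start = WRAP - 1000           # counter read shortly before it wraps
end = 500                     # counter read after it wrapped past zero

naive = end - start           # negative, like the total time above
fixed = (end - start) % WRAP  # modular arithmetic recovers the true tick count

print("naive delta: %d ticks, corrected delta: %d ticks" % (naive, fixed))
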
Windows XP, Pentium D, Python 2.5.2
On Thu, Jan 22, 2009 at 6:03 PM, Robert Kern wrote:
> On Thu, Jan 22, 2009 at 17:00, Wes McKinney wrote:
> > import cProfile
> >
> > def f():
> >     pass
> >
> > def g():
> >     for i in xrange(100):
> >         f()
> >
> > cProfile.run("g()")
> >
> >>t
On Thu, Jan 22, 2009 at 17:00, Wes McKinney wrote:
> import cProfile
>
> def f():
>     pass
>
> def g():
>     for i in xrange(100):
>         f()
>
> cProfile.run("g()")
>
>>test.py
> 103 function calls in 1.225 CPU seconds
>
>Ordered by: standard name
>
>ncalls tottime
import cProfile
def f():
    pass

def g():
    for i in xrange(100):
        f()
cProfile.run("g()")
>test.py
103 function calls in 1.225 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1    0.000    0.0
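
For what it's worth, 1.225 CPU seconds for 100 empty calls suggests the machine's high-resolution timer, not the profiler, is what is off. A hedged cross-check, not from the thread: on Windows, Python 2's time.clock() is built on QueryPerformanceCounter, the same kind of counter these profilers typically read there, so timing the loop with both clocks separates the counter from the profiler machinery.

# Cross-check sketch (Python 2 on Windows): time the same no-op loop with both
# clocks. time.clock() wraps QueryPerformanceCounter; time.time() is the wall
# clock. A large disagreement would point at the high-resolution counter itself.
import time

def g():
    for i in range(100):
        pass

t0, c0 = time.time(), time.clock()
g()
t1, c1 = time.time(), time.clock()

print("wall clock: %.6f s, clock(): %.6f s" % (t1 - t0, c1 - c0))
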
On Thu, Jan 22, 2009 at 01:46, Hanni Ali wrote:
> I have been using your profiler extensively and it has contributed to my
> achieving significant improvements in the application I work on largely due
> to the usefulness of the line by line breakdown enabling me to easily select
> the next part of
I have been using your profiler extensively and it has contributed to my
achieving significant improvements in the application I work on largely due
to the usefulness of the line by line breakdown enabling me to easily select
the next part of code to work on optimizing. So firstly many thanks for
w
On Wed, Jan 21, 2009 at 12:13, Wes McKinney wrote:
> Robert-- this is a great little piece of code, I already think it will be a
> part of my workflow. However, I seem to be getting negative % time taken on
> the more time consuming lines, perhaps getting some overflow?
That's odd. Can you send m
Neal Becker wrote:
> Ravi wrote:
>
>> On Wednesday 21 January 2009 13:55:49 Neal Becker wrote:
>>> I'm only interested in simple strided 1-d vectors. In that case, I
>>> think your code already works. If you have c++ code using the iterator
>>> interface, the iterator's dereference will use (*a
On Wednesday 21 January 2009 14:57:59 Neal Becker wrote:
> ublas::vector func (numpy::array_from_py::type const&)
>
> But not for a function that modifies its arg in-place (& instead of const&):
>
> void func (numpy::array_from_py::type &)
Use void func
Ravi wrote:
> On Wednesday 21 January 2009 13:55:49 Neal Becker wrote:
>> I'm only interested in simple strided 1-d vectors. In that case, I think
>> your code already works. If you have c++ code using the iterator
>> interface, the iterator's dereference will use (*array)[index]. This
>> will
On Wednesday 21 January 2009 13:55:49 Neal Becker wrote:
> I'm only interested in simple strided 1-d vectors. In that case, I think
> your code already works. If you have c++ code using the iterator
> interface, the iterator's dereference will use (*array)[index]. This will
> use operator[], wh
Ravi wrote:
> On Wednesday 21 January 2009 10:22:36 Neal Becker wrote:
>> > [http://mail.python.org/pipermail/cplusplus-sig/2008-October/013825.html
>>
>> Thanks for reminding me about this!
>>
>> Do you have a current version of the code? I grabbed the files from the
>> above message, but I see
Robert-- this is a great little piece of code, I already think it will be a
part of my workflow. However, I seem to be getting negative % time taken on
the more time consuming lines, perhaps getting some overflow?
Thanks a lot,
Wes
On Wed, Jan 21, 2009 at 3:23 AM, Robert Kern wrote:
> On Tue, J
Neal Becker wrote:
> I tried a little experiment, implementing some code in numpy
It sounds like you've found your core issue, but a couple comments:
> from numpy import *
I'm convinced that "import *" is a bad idea. I think the "standard"
syntax is now "import numpy as np"
> from math impo
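
A tiny illustration of the import convention being recommended here (assumed example, not Neal's code):

# Preferred: keep numpy behind an explicit namespace instead of
# "from numpy import *", which shadows builtins such as sum and abs.
import numpy as np

x = np.arange(10, dtype=np.float64)
y = np.sqrt(x) + np.sum(x)      # every numpy name is visibly np.<something>
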
On Wednesday 21 January 2009 10:22:36 Neal Becker wrote:
> > [http://mail.python.org/pipermail/cplusplus-sig/2008-October/013825.html
>
> Thanks for reminding me about this!
>
> Do you have a current version of the code? I grabbed the files from the
> above message, but I see some additional subse
Ravi wrote:
> Hi Neal,
>
> On Wednesday 21 January 2009 07:27:04 Neal Becker wrote:
>> It might if I had used this for all of my c++ code, but I have a big
>> library of c++ wrapped code that doesn't use pyublas. Pyublas takes
>> numpy objects from python and allows the use of c++ ublas on it (w
Hi Neal,
On Wednesday 21 January 2009 07:27:04 Neal Becker wrote:
> It might if I had used this for all of my c++ code, but I have a big
> library of c++ wrapped code that doesn't use pyublas. Pyublas takes numpy
> objects from python and allows the use of c++ ublas on it (without
> conversion).
On 1/21/2009 2:38 PM, Sturla Molden wrote:
> If you can get a pointer (as integer) to your C++ data, and the shape
> and dtype are known, you may use this (rather unsafe) 'fromaddress' hack:
And opposite, if you need to get the address referenced to by an
ndarray, you can do this:
def addressof
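
Both previews cut off before the code. A minimal sketch of the same two helpers using ctypes and the array interface; this is a reconstruction of the idea, not necessarily Sturla's original code, and it is exactly as unsafe as he warns, since nothing keeps the foreign buffer alive:

import ctypes
import numpy as np

def fromaddress(address, dtype, shape):
    # Wrap an existing memory block, given as an integer address, in an
    # ndarray without copying. The caller must guarantee the buffer stays
    # alive and holds at least itemsize * prod(shape) bytes.
    dtype = np.dtype(dtype)
    nbytes = dtype.itemsize * int(np.prod(shape))
    buf = (ctypes.c_char * nbytes).from_address(address)
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

def addressof(arr):
    # The opposite direction: the address of the first element of an ndarray.
    return arr.__array_interface__['data'][0]   # same value as arr.ctypes.data

# Round trip on memory numpy itself owns, to show the two helpers agree.
a = np.arange(12.0)
b = fromaddress(addressof(a), a.dtype, a.shape)
b[0] = 99.0
assert a[0] == 99.0
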
On 1/21/2009 1:27 PM, Neal Becker wrote:
> It might if I had used this for all of my c++ code, but I have a big library
> of c++ wrapped code that doesn't use pyublas. Pyublas takes numpy objects
> from python and allows the use of c++ ublas on it (without conversion).
If you can get a pointer
Robert Kern wrote:
> On Tue, Jan 20, 2009 at 20:57, Neal Becker wrote:
>
>> I see the problem. Thanks for the great profiler! You ought to make
>> this more widely known.
>
> I'll be making a release shortly.
>
>> It seems the big chunks of time are used in data conversion between numpy
>> a
T J wrote:
> On Tue, Jan 20, 2009 at 6:57 PM, Neal Becker wrote:
>> It seems the big chunks of time are used in data conversion between numpy
>> and my own vector classes. Mine are wrappers around boost::ublas. The
>> conversion must be falling back on a very inefficient method since there
>>
On Tue, Jan 20, 2009 at 20:57, Neal Becker wrote:
> I see the problem. Thanks for the great profiler! You ought to make this
> more widely known.
http://pypi.python.org/pypi/line_profiler
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is mad
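
Since the preview above only shows the link, here is a sketch of typical usage as documented for the package (the exact script name and flags may vary by version): decorate the functions you care about with @profile and run the script under kernprof with line-by-line timing enabled.

# foo.py -- the @profile decorator is injected into builtins by kernprof,
# so there is no import for it; running foo.py directly would fail.
@profile
def f(x):
    total = 0.0
    for i in xrange(len(x)):
        total += x[i] * x[i]
    return total

if __name__ == '__main__':
    f(range(100000))

Run it with something like "kernprof.py -l -v foo.py"; that writes foo.py.lprof (matching the "Wrote profile results to foo.py.lprof" line earlier in the thread) and prints the per-line table.
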
On Tue, Jan 20, 2009 at 6:57 PM, Neal Becker wrote:
> It seems the big chunks of time are used in data conversion between numpy
> and my own vector classes. Mine are wrappers around boost::ublas. The
> conversion must be falling back on a very inefficient method since there is no
> special code
On Tue, Jan 20, 2009 at 20:57, Neal Becker wrote:
> I see the problem. Thanks for the great profiler! You ought to make this
> more widely known.
I'll be making a release shortly.
> It seems the big chunks of time are used in data conversion between numpy
> and my own vector classes. Mine a
Robert Kern wrote:
> 2009/1/20 Neal Becker :
>> I tried a little experiment, implementing some code in numpy (usually I
>> build modules in c++ to interface to python). Since these operations are
>> all large vectors, I hoped it would be reasonably efficient.
>>
>> The code in question is simple.
On Tue, Jan 20, 2009 at 20:44, Neal Becker wrote:
> Robert Kern wrote:
>
>> 2009/1/20 Neal Becker :
>>> I tried a little experiment, implementing some code in numpy (usually I
>>> build modules in c++ to interface to python). Since these operations are
>>> all large vectors, I hoped it would be r
2009/1/20 Neal Becker :
> I tried a little experiment, implementing some code in numpy (usually I
> build modules in c++ to interface to python). Since these operations are
> all large vectors, I hoped it would be reasonably efficient.
>
> The code in question is simple. It is a model of an ampli
I tried a little experiment, implementing some code in numpy (usually I
build modules in c++ to interface to python). Since these operations are
all large vectors, I hoped it would be reasonably efficient.
The code in question is simple. It is a model of an amplifier, modeled by
its AM/AM an
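
The preview cuts off mid-sentence; assuming the usual memoryless AM/AM (and possibly AM/PM) characterization, a vectorized numpy sketch of that kind of amplifier model looks roughly like this. The curve data and names are invented for illustration, not Neal's actual code:

import numpy as np

# Invented characteristic curves indexed by input amplitude.
amp_in = np.linspace(0.0, 2.0, 64)       # input amplitude grid
am_am  = np.tanh(amp_in)                 # stand-in AM/AM curve (output amplitude)
am_pm  = 0.1 * amp_in ** 2               # stand-in AM/PM curve (phase shift, radians)

def amplifier(x):
    # Apply the AM/AM and AM/PM curves to a complex baseband signal x,
    # entirely with vectorized numpy operations (no Python-level loop).
    r = np.abs(x)
    gain = np.interp(r, amp_in, am_am) / np.maximum(r, 1e-12)
    phase = np.interp(r, amp_in, am_pm)
    return x * gain * np.exp(1j * phase)

x = (np.random.randn(100000) + 1j * np.random.randn(100000)) / np.sqrt(2)
y = amplifier(x)
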