On 23 September 2010 23:34, Charles R Harris wrote:
>
>
> On Thu, Sep 23, 2010 at 7:22 PM, Lisandro Dalcin wrote:
>>
>> On 23 September 2010 01:26, Charles R Harris
>> wrote:
>> >
>> >
>> > On Wed, Sep 22, 2010 at 10:00 PM, Charles R Harris
y"
if mode=="g3-numpy":
-sys.stderr.write("G3 f2py support is not implemented, yet.\n")
+sys.stderr.write("G3 f2py support is not implemented, yet.\\n")
sys.exit(1)
elif mode=="2e-numeric":
from f2py2e import main
@@ -72,7 +72,7 @@ el
On 22 September 2010 13:48, Charles R Harris wrote:
>
>
> On Wed, Sep 22, 2010 at 8:35 AM, Lisandro Dalcin wrote:
>>
>> It seems that lib2to3 does not process the main f2py bootstrap script
>> that gets autogenerated by f2py's setup.py. The trivial patch below
ain
else:
-print >> sys.stderr, "Unknown mode:",`mode`
+sys.stderr.write("Unknown mode: '%s'\n" % mode)
sys.exit(1)
main()
'''%(os.path.basename(sys.executable)))
--
Lisandro Dalcin
---
CIMEC (INTEC/C
unprotected usage of 'long double', so it seems that CPython requires
the C compiler to support 'long double'.
--
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
o be useful for non-distutils setups
> (e.g. pyximport and inline code).
>
> Try it out and let me know what you think in terms of features and API.
>
> - Robert
On 2 September 2010 09:27, Neal Becker wrote:
> Lisandro Dalcin wrote:
>
>> On 1 September 2010 19:24, Neal Becker wrote:
>>> It seems if I call kron with 2 C-contiguous arrays, it returns an F-
>>> contiguous array. Any reason for this (it's not what I want
On 1 September 2010 19:24, Neal Becker wrote:
> It seems if I call kron with 2 C-contiguous arrays, it returns an F-
> contiguous array. Any reason for this (it's not what I wanted)?
>
Try numpy.linalg.inv ...
--
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio
/__ufunc_api.h:182:
warning: ‘_import_umath’ defined but not used
Any chance these functions could be decorated with the NPY_UNUSED() macro?
--
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011
, 'd', 'Zf', 'Zd', etc.)
without calling Python code... Of course, complaints without patches
should not be taken too seriously ;-)
--
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-45
On 30 June 2010 15:08, Charles R Harris wrote:
>
>
> On Wed, Jun 30, 2010 at 10:57 AM, Lisandro Dalcin wrote:
>>
>> On 30 June 2010 02:48, Charles R Harris wrote:
>> >
>>
>> Oh! Sorry! Now I realize that!
>>
>
> Do I detect a touch of s
PyArray_Descr *typecode = NULL;
@@ -1873,7 +1873,9 @@
Py_XDECREF(typecode);
return NULL;
}
-return PyArray_ArangeObj(o_start, o_stop, o_step, typecode);
+range = PyArray_ArangeObj(o_start, o_stop, o_step, typecode);
+Py_XDECREF(typecode);
+ return range;
}
/*NUMPY_API
On 30 June 2010 02:48, Charles R Harris wrote:
>
>
> On Tue, Jun 29, 2010 at 8:21 PM, Lisandro Dalcin wrote:
>>
>> Do we really need this for NumPy 2? What about using the old PyCObject
>> for all Py 2.x versions? If this is not done, perhaps NumPy 2 on top
>>
vel/rev/8a58f1544bd8#l1.33 .
--
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169
e 'm', 'M' types also lack dictionary entries.
>
Map to 'timedelta' and 'datetime' ?
--
Lisandro Dalcin
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria
The monkeypatching below in your setup.py could work. This way, you
just have to use numpy.distutils, but you will not be able to pass
many options to Cython (like C++ code generation?)
# Make numpy.distutils' build_src call Cython where it expects Pyrex:
from numpy.distutils.command import build_src
import Cython
import Cython.Compiler.Main
build_src.Pyrex = Cython
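With that in place, a hypothetical setup.py might list the .pyx source
directly. This is only a sketch (the module and file names are made up, and
it assumes build_src recognizes .pyx sources in your numpy version):
from numpy.distutils.core import setup, Extension
# Hypothetical example module; build_src should now run Cython on the .pyx file.
setup(name='example',
      ext_modules=[Extension('example', sources=['example.pyx'])])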
Hi, folks. I'm using NumPy 1.3.0 from Fedora 11, though this issue likely
applies to the current trunk (I've not actually tested, just taken a look
at the sources).
As numpy.distutils.FCompiler inherits from
distutils.ccompiler.CCompiler, the method
"runtime_library_dir_option()" fails with NotImplementedError
Would two brand-new .cpp files abusing #include do the trick?
For example, for '_CCG_dmodule':
//file: CCG_dmodule.cxx
#define USE_DOUBLE 1
#include "CCG_caller.cpp"
#include "ccg_funct.cpp"
#include "ccg.cpp"
#include "CCG_d_wrap.cxx"
Then you can use:
ext_ccg_d = Extension('_CCG_dmodule',
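The Extension call above is cut off; a rough sketch of how it might continue
(the source list and include path are assumptions):
from distutils.core import Extension
ext_ccg_d = Extension('_CCG_dmodule',
                      # only the aggregating .cxx is listed; the other sources
                      # come in through the #include lines shown above
                      sources=['CCG_dmodule.cxx'],
                      include_dirs=['.'])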
On Fri, Jul 10, 2009 at 3:52 AM, Robert
Bradshaw wrote:
> Nevermind, I just found http://bugs.python.org/issue1675423 .
>
Nevermind? Perhaps NumPy should handle this gotcha for Python < 2.6 ?
-
> On Jul 9, 2009, at 1:41 AM, Robert Bradshaw wrote:
>
>> I know using __complex__ has been discussed
2009/7/7 Stéfan van der Walt :
> Hi Kenny
>
> 2009/7/7 Kenny Abernathy :
>> I can guarantee that all analysis will be finished before the Unit object is
>> destroyed and delete[] is called, so I don't think that's a problem.
>
> There is a neat trick to make sure things don't get deallocated out of
On Mon, Jun 22, 2009 at 11:53 PM, Kurt Smith wrote:
> Hello,
>
> Is there a way for numpy.distutils to compile a fortran source file
> into an executable?
If the whole point of building the executable is to run it in order to
parse the output, then you can start with this:
$ cat setup.py
from num
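The setup.py above is truncated; as an alternative sketch (not the original
recipe), the Fortran compiler machinery can be driven directly, assuming a
default compiler is found on the machine and 'hello.f90' stands in for the
real source:
from numpy.distutils.fcompiler import new_fcompiler

fc = new_fcompiler()                    # pick a default Fortran compiler
fc.customize()                          # load flags from config files/environment
objects = fc.compile(['hello.f90'])     # compile the (made-up) source file
fc.link_executable(objects, 'hello')    # build an executable you can run and parse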
On Sun, Jun 14, 2009 at 5:27 PM, Bryan Cole wrote:
>> In fact, I should have specified previously: I need to
>> deploy on MS-Win. On first glance, I can't see that mpi4py is
>> installable on Windows.
>
> My mistake. I see it's included in Enthon, which I'm using.
>
Hi, Bryan... I'm the author of
1) Calling both PyArray_XDECREF(array) and Py_DECREF(array) is likely wrong.
2) Py_DECREF(input) should never be done.
On Fri, Apr 17, 2009 at 12:25 PM, Dan S wrote:
> Hi -
>
> I have written a numpy extension which works fine but has a memory
> leak. It takes a single array argument and returns
In general, using complex extension modules like numpy between
matching pairs of Py_Initialize()/Py_Finalize() is tricky...
Extension modules have to be written VERY carefully to permit such a
usage pattern... It is too easy to forget the init/cleanup/finalize
steps... I was able to manage this i
When you do pixeldata[::-1,:,::-1], you just get a new array with
different strides, but it is now non-contiguous... So I believe you really
need a fresh copy of the data... tostring() copies, but could be
slow... Try using:
revpixels = pixeldata[::-1,:,::-1].copy()
...
rgbBMP = wx.BitmapFromBuffer(64
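That last call is cut off; a minimal self-contained sketch of the idea (the
64x64 size and the RGB byte order are assumptions; newer wxPython spells the
call wx.Bitmap.FromBuffer):
import numpy as np
import wx

app = wx.App(False)                            # a wx.App must exist before creating bitmaps
pixeldata = np.zeros((64, 64, 3), np.uint8)    # stand-in for the real image data
revpixels = pixeldata[::-1, :, ::-1].copy()    # contiguous, freshly owned RGB buffer
rgbBMP = wx.BitmapFromBuffer(64, 64, revpixels)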
From my experience working on my own projects and Cython:
* the C code making Python C-API calls could be made
version-agnostic by using preprocessor macros, and even some
compatibility header conditionally included. Perhaps the latter would
be the easiest for C-API calls (we have a lot already
On Tue, Sep 30, 2008 at 9:27 PM, Brian Blais <[EMAIL PROTECTED]>
> thanks for all of the help. My initial solution is to pickle my object,
> with the text-based version of pickle, and send it across rpc. I do this
> because the actual thing I am sending is a dictionary, with lots of arrays,
> and
Sebastien, numpy arrays are picklable; so no need to register them
with copy_reg. I believe the actual problem with xmlrpclib is that it
uses the marshal protocol (only supports core builtin types), and not
the pickle protocol.
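A minimal sketch of the workaround this implies (names are illustrative):
pickle the array yourself and ship the bytes as an XML-RPC Binary blob.
import pickle, xmlrpclib
import numpy

a = numpy.arange(10)
blob = xmlrpclib.Binary(pickle.dumps(a, protocol=2))  # what actually travels over XML-RPC
# ... on the other side of the proxy call:
b = pickle.loads(blob.data)                           # back to an ndarray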
On Tue, Sep 30, 2008 at 5:18 PM, Sebastien Binet
<[EMAIL PROTECTED]> w
Oops! Since I've started to use Cython, it seems I'm starting to forget
things about SWIG. My comments about typecheck typemaps were
nonsense (they have another purpose). Look at the SWIG docs; you need
to use something like the SWIG_arg_fail macro.
On Wed, Sep 24, 2008 at 6:48 PM, Lisandro Dalcin
I believe you should look at the SWIG docs and then write a typecheck
typemap. Checking for the type of an array and returning NULL is not
fair play for SWIG, nor for Python. Before returning NULL, an
exception should be set. For this, SWIG provides some 'SWIG_xxx_fail'
macros. Typemaps and frag
You know, floats are immutable objects, so 'float(f)' just
returns a new reference to 'f' if 'f' is (exactly) of type 'float':
In [1]: f = 1.234
In [2]: f is float(f)
Out[2]: True
I do not remember right now the implementations of comparisons in core
Python, but I believe the 'in' operator i
On Wed, Sep 10, 2008 at 4:55 AM, Stéfan van der Walt <[EMAIL PROTECTED]> wrote:
> 2008/9/10 Travis E. Oliphant <[EMAIL PROTECTED]>:
>> The post is
>>
>> http://blog.enthought.com/?p=62
>
> Very cool post, thank you! I agree that it would be great to have
> such a mechanism in NumPy.
>
And then bu
On Wed, Aug 27, 2008 at 12:01 PM, Claude Gouedard <[EMAIL PROTECTED]> wrote:
> Ok ,
> The same for asarray(1) ..
> The problem is that
> aa=asarray(1) is an numpy.array (right ? ) with a size 1 and a shape ( ) !
> No surprising ?
For me, this is not surprising at all :-) . Furthermore, if you try
David Cournapeau <[EMAIL PROTECTED]> wrote:
>>
>> On Tue, Aug 5, 2008 at 1:29 AM, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
>> > David, I second your approach. Furthermore, look how SWIG handles
>> > this, it is very similar to your proposal. The difference
David, I second your approach. Furthermore, look at how SWIG handles
this; it is very similar to your proposal. The difference is that SWIG
uses SWIGUNUSED for some autogenerated functions. Moreover, it
seems the SWIG developers protected the generated code taking into
account GCC versions ;-) and
Did you build Python from sources in such a way that the Python
library is a shared one?
I mean, do you have the file /usr/local/lib/libpython2.5.so?
On Thu, Jul 24, 2008 at 4:21 AM, G <[EMAIL PROTECTED]> wrote:
> Hello,
> I have installed the svn version of numpy. I have deleted all previou
On 6/20/08, Gael Varoquaux <[EMAIL PROTECTED]> wrote:
> I am trying to figure the right way of looping throught ndarrays using
> Cython, currently. Things seem to have evolved a bit compared to what
> some documents on the web claim (e.g. "for i from 0<=i<n" being faster than "for i in range(n)").
Regard
I believe in your current setup there is no better way. But you should
seriously consider changing the way you use the array data. Storing bare
pointers on the C side and not holding a reference to the object
providing the data is really error prone.
On 6/3/08, Jose Martin <[EMAIL PROT
On 5/1/08, David Cournapeau <[EMAIL PROTECTED]> wrote:
> On Wed, 2008-04-30 at 16:44 -0300, Lisandro Dalcin wrote:
> > David, in order to make clear what I was proposing to you in a previous
> > mail regarding implementing plugin systems for numpy, please take a
> > l
Sorry, I forgot to attach the code...
On 4/30/08, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
> David, in order to make clear what I was proposing to you in a previous
> mail regarding implementing plugin systems for numpy, please take a
> look at the attached tarball.
>
>
David, in order to make clear what I was proposing to you in a previous
mail regarding implementing plugin systems for numpy, please take a
look at the attached tarball.
The plugins are in charge of implementing the action of generic foo()
and bar() functions in C. The example actually implements
David, I briefly took a look at your code, and I have a very, very
important observation.
Your implementation makes use of low-level dlopening. So you are
going to have to manage all the oddities of runtime loading on the
different systems. In this case, 'libtool' could really help. I know,
it
I think you are wrong; here THERE ARE temporary arrays involved... numpy has
to copy the data if the indices are not contiguous or strided (in the sense of
actually using a slice):
In [1]: from numpy import *
In [2]: A = array([0,0,0])
In [3]: B = A[[0,1,2]]
In [4]: print B.base
None
In [5]: C = A[0:3]
In [6]: print C.base
[0 0 0]
If you just want to manage VTK files, then you should definitely try
pyvtk: http://cens.ioc.ee/projects/pyvtk/
I have a similar numpy-based but independent implementation, not fully
tested, targeted at only writing VTK files for big datasets (say,
more than 1 million nodes) in either ASCII or binary
Damian Eads wrote:
> One used -mfpmath=sse, and the other, -mfpmath=387.
> Keeping them both
> the same cleared the discrepancy.
Oh yes! I think you got it...
On 3/3/08, Christopher Barker <[EMAIL PROTECTED]> wrote:
>
> Was it really a "significant" difference, or just noticeable? I hope
> not,
On 3/3/08, Revaz Yves <[EMAIL PROTECTED]> wrote:
> I'm computing the cross product of positions and velocities of n points
> in a 3d space.
> Using the numpy function "cross", this can be written as :
> I compare the computation time needed with a C-api I wrote (dedicated to
> this operation).
Sorry for the stupid question, but my English knowledge just covers
reading and writing (the latter, not so well).
At the very beginning, http://scipy.org/ says
SciPy (pronounced "Sigh Pie") ...
Then, for the other one, there is this assertion:
NumPy (pronounced "Num Pie", "Num" as in "Number") ...
would
On 3/1/08, Charles R Harris <[EMAIL PROTECTED]> wrote:
> So they differ in the least significant bit. Not surprising, I expect the
> Fortran compiler might well perform operations in different order,
> accumulate in different places, etc. It might also accumulate in higher
> precision registers or
l try to do the complete application in pure python.
Regards,
On 3/1/08, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
>
>
> On Sat, Mar 1, 2008 at 12:43 PM, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
> > Dear all,
> >
> > I want to comment some extrange stu
On 3/1/08, Pauli Virtanen <[EMAIL PROTECTED]> wrote:
> A silly question: did you check directly that the pure-numpy code and
> the F90 code give the same results for the Jacobian-vector product
> J(z0) z for some randomly chosen vectors z0, z?
No, I did not do that. However, I've checked the out
Dear all,
I want to comment on some strange stuff I'm experiencing with numpy.
Please let me know if this is expected and known.
I'm trying to solve a model nonlinear PDE, the 2D Bratu problem (-Laplacian
u - alpha * exp(u), homogeneous boundary conditions), using simple
finite differences with a 5-p
On 2/27/08, Travis E. Oliphant <[EMAIL PROTECTED]> wrote:
> Did this discussion resolve with a fix that can go in before 1.0.5 is
> released?
I believe the answer is yes, but we have to choose:
1- Use the regexp-based solution of David.
2- Move to use 'index' instead of 'find' as proposed by Al
Well, after all that said, I'm also fine with either approach. Anyway,
I would say that my personal preference is for the one using
'str.index', as it is the simplest one with respect to the old code.
Like Christopher, I rarely (never?) use 'loadtxt'. But this issue
drove a coworker crazy (he is
Dear all,
I believe the current 'loadtxt' function is broken if the file does not
end in a newline. The problem is at the last line of this fragment:
for i, line in enumerate(fh):
    if i < skiprows: continue
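A tiny illustration of the failure mode, assuming (as the replies above
suggest) that the old code stripped the newline by slicing with str.find:
line = "1.0 2.0 3.0"            # last line of a file with no trailing '\n'
print line[:line.find('\n')]    # -> '1.0 2.0 3.'  find() returned -1 and ate the last char
# line[:line.index('\n')] would raise ValueError instead, which can be handled explicitly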
Travis, after reading all the posts in this thread, here are my comments.
First of all, I'm definitely +1 on your suggestion. Below is my rationale.
* I believe numpy scalars should provide all possible features needed
to smooth the difference between mutable, indexable 0-d arrays and
immutable, non-indexable
On 2/12/08, Pearu Peterson <[EMAIL PROTECTED]> wrote:
> according to which makes your goal unachievable because of how
> Python loads shared libraries *by default*, see below.
> Try to use sys.setdlopenflags(...) before importing f2py generated
> extension modules and then reset the state using sys
Unless you try to run it as root, it will not work. Your file
permissions are a mess.
Please do the following (as root or via sudo) and try again
$ chmod 755 /flib.so
On 2/6/08, Chris <[EMAIL PROTECTED]> wrote:
> Pearu Peterson cens.ioc.ee> writes:
> > > This works fine on Windows and Mac; the
On 2/1/08, Pearu Peterson <[EMAIL PROTECTED]> wrote:
> >> Sorry, I haven't been around there long time.
> >
> > Are you going to continue not reading the f2py list? If so, you should
> > point everyone there to this list and close the list.
>
> Anyway, I have subscribed to the f2py list again I'll
Sorry if I'm making noise, my knowledge of Fortran is rather limited,
but in your routine AllocateDummy you are first allocating and then
deallocating the arrays. Are you sure you can then access the contents
of your arrays after deallocating them?
How complicated is your binary format? For si
Mmm...
It looks as if 'mask' is being internally converted from
[True, False, False, False, True]
to
[1, 0, 0, 0, 1]
so you are finally getting
x[1], x[0], x[0], x[0], x[1]
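A small example of the distinction (with the NumPy of that era a plain Python
list of booleans took the integer route described above; an explicit boolean
array selects by mask):
import numpy as np

x = np.array([10, 20, 30, 40, 50])
print x[np.array([True, False, False, False, True])]  # boolean mask -> [10 50]
print x[np.array([1, 0, 0, 0, 1])]                    # the integer reading -> [20 10 10 10 20]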
On 11/5/07, John Hunter <[EMAIL PROTECTED]> wrote:
> A colleague of mine just asked for help with a pesky bug that turned
Pauli, I finally understood your idea. What a good hack!!!
You have to pass an integer array with enough space so that Fortran
can store there some extra metadata, not just the buffer pointer, but
also dimensions and some other runtime stuff.
Many, many, many thanks
On 10/24/07, Pauli Vi
On 10/24/07, Pauli Virtanen <[EMAIL PROTECTED]> wrote:
> Using a hack like this, it's also possible to pass derived type
> object pointers, "type(Typename), pointer", from the Python side to
> the Fortran side, as opaque handles.
Could you please send/point me an example of how I can actually pass
F
I'm trying to use f2py to wrap some Fortran functions which receive
PETSc objects as arguments. The usual way to implement this with
PETSc+Fortran is to do something like this:
subroutine matvec(A,x,y)
Mat A
Vec x
Vec y
end subroutine
The 'Mat' and 'Vec' types are actually integers of appropriate
On 10/20/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
And I
> hope that in the end, scons will be used for numpy (for 1.1 ?), once I
> finish the conversion.
>
> I don't see situations where adding 350 Kb in the tarball can be an
> issue, so could you tell me what the problem would be ?
Then if
On 10/19/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
> numpy.scons branch
>
> This is a much more massive change. Scons itself adds something like 350
> kb to a bzip tarball.
If the numpy build system will not depend on scons (is this right?),
then... is it strictly needed to distribute scons with
David, I'll try to show you what I do for a custom C++ class; of
course this does not solve the resizing issue (my class does not
actually support resizing, so this is fine for me).
My custom class is a templatized one called DTable (it is like a 2D
contiguous array), but currently I only instantiate
On 8/3/07, David Cournapeau <[EMAIL PROTECTED]> wrote:
> Here is what I can think of:
> - adding an API to know whether a given PyArrayObject has its data
> buffer 16 bytes aligned, and requesting a 16 bytes aligned
> PyArrayObject. Something like NPY_ALIGNED, basically.
> - forcing da
As PyArray_DescrConverter returns new references, I think there could
be many places where PyArray_Descr* objects get their reference counts
incremented without a matching decref.
Here, I send a patch correcting this for array() and arange(), but I am not
sure if this is the most general solution. BTW, please see my previous commen
This patch corrected the problem for me; the numpy tests pass...
On 8/2/07, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
> I think the problem is in _array_fromobject (seen as numpy.array in Python)
--
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en I
re big
chances of leaking references in the case of bad args to C functions.
Regards,
On 8/2/07, Timothy Hochberg <[EMAIL PROTECTED]> wrote:
>
>
>
> On 8/2/07, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
> > Using numpy 1.0.3, I believe there is a reference leak som
wrote:
>
>
>
> On 8/2/07, Lisandro Dalcin <[EMAIL PROTECTED]> wrote:
> > Using numpy 1.0.3, I believe there is a reference leak somewhere.
> > Using a debug build of Python 2.5.1 (--with-pydebug), I get the
> > following
> >
> > import sys, gc
>
Using numpy 1.0.3, I believe there is a reference leak somewhere.
Using a debug build of Python 2.5.1 (--with-pydebug), I get the
following:
import sys, gc
import numpy
def testleaks(func, args=(), kargs={}, repeats=5):
for i in xrange(repeats):
r1 = sys.gettotalrefcount()
fun
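For reference, a self-contained version of such a check; the tail of the
snippet above is cut off, so the reporting part here is a guess at the spirit
rather than the original code:
import sys, gc
import numpy

def testleaks(func, args=(), kargs={}, repeats=5):
    # sys.gettotalrefcount only exists in --with-pydebug builds
    for i in xrange(repeats):
        r1 = sys.gettotalrefcount()
        func(*args, **kargs)
        gc.collect()
        r2 = sys.gettotalrefcount()
        print 'refcount delta: %d' % (r2 - r1)

testleaks(numpy.array, ([1.0, 2.0, 3.0],))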
On 7/25/07, Amir Hirsch <[EMAIL PROTECTED]> wrote:
> The Python 2.3 installation I am using came with OpenOffice.org 2.2 and it
> must
> not have registered python with Windows. I require PyUNO and Numpy (and
> PyOpenGL and Ctypes) to work together for the application I am developing and
> PyUno
Is the following intended? Why is array(1) not Fortran-contiguous?
In [1]: from numpy import *
In [2]: __version__
Out[2]: '1.0.3'
In [3]: array(1).flags
Out[3]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
--
Lisandro
On 7/10/07, Ben ZX <[EMAIL PROTECTED]> wrote:
> I ran f2py. It seems to always generate NumPy modules.
>
> How do I tell f2py to generate Numeric modules?
If you do
$ f2py -h
you will see near the beginning the option '--2d-numeric'.
I never tested it (I moved to numpy from its very beginnings).
On 4/19/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Nick Fotopoulos wrote:
> > Devs, is there any possibility of moving/copying pylab.load to numpy?
> > I don't see anything in the source that requires the rest of
> > matplotlib. Among convenience functions, I think that this function
> > ran
On 3/29/07, Brad Malone <[EMAIL PROTECTED]> wrote:
> Hi, I use python for some fairly heavy scientific computations (at least to
> be running on a single processor) and would like to use it in parallel.
> I've seen some stuff online about Parallel Python and mpiPy, but I don't
> know much about the
On 1/31/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> To me this is so obvious that I don't understand the resistance in the
> Python community to the concept.
Indeed
Travis, I have not been following this for some time. Can you point me
to your last proposal? I remember reading about extending the
>
> It seems to me that the temporary file mechanism on Windows is a little
> odd.
>
Indeed, looking at the sources, the posix version uses the mkstemp/unlink
idiom... but on Windows it uses a bit of hackery. It seems open files
cannot be unlinked there:
if _os.name != 'posix' or _os.sys.platform == 'cygwin':