Re: [Numpy-discussion] tofile speed

2007-07-25 Thread Lars Friedrich
Hello,

I tried the following:


### start code

import numpy as N

a = N.random.rand(100)

myFile = file('test.bin', 'wb')

for i in range(100):
    a.tofile(myFile)

myFile.close()

### end code


And this gives roughly 50 MB/s on my office machine but only 6.5 MB/s on
the machine that I was reporting about.

Both computers use Python 2.4.3 with Enthought 1.0.0 and numpy 1.0.1.

So I think I will go and check the hard-disk drivers. array.tofile does
not seem to be the problem and actually seems to be very fast. Any other
recommendations?
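For reference, a timed variant of the snippet makes the MB/s numbers easy to reproduce (the array size and pass count here are arbitrary choices, not the original ones):

```python
import time
import numpy as N

a = N.random.rand(100000)      # 100k doubles = 800 kB per pass (arbitrary size)
n_passes = 10

t0 = time.time()
f = open('test.bin', 'wb')
for i in range(n_passes):
    a.tofile(f)
f.close()
elapsed = max(time.time() - t0, 1e-9)   # guard against a zero timer delta

mb_written = n_passes * a.nbytes / 1e6
print('%.1f MB written, %.1f MB/s' % (mb_written, mb_written / elapsed))
```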

Thanks
Lars
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread Ray Schumacher
Geoffrey Zhu wrote:
  Hi,
 
  I am about to write a C extension module. C functions in the module will
  take and return numpy arrays. I found a tutorial online, but I am not
  sure about the following:

I agree with others that ctypes might be your best path.
The codeGenerator is magic, if you ask me:
http://starship.python.net/crew/theller/ctypes/old/codegen.html

But, if the function is simple, why not weave.inline? What I have 
done is run the function once, hunt down the long-named library, copy 
it to the local directory, then include it explicitly and call its 
function. This eliminates some overhead time for the call. I use it 
to convert packed IEEE data from an ADC data read function, and it's 
faster than the manufacturer's own function version that returns 
scaled integers!

Ray



Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread Gael Varoquaux
On Wed, Jul 25, 2007 at 06:38:55AM -0700, Ray Schumacher wrote:
 The codeGenerator is magic, if you ask me:
 http://starship.python.net/crew/theller/ctypes/old/codegen.html

Can it wrap code passing around arrays? If so, it really does magic that
I don't understand.

Gaël


Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread Stefan van der Walt
On Wed, Jul 25, 2007 at 03:41:37PM +0200, Gael Varoquaux wrote:
 On Wed, Jul 25, 2007 at 06:38:55AM -0700, Ray Schumacher wrote:
  The codeGenerator is magic, if you ask me:
  http://starship.python.net/crew/theller/ctypes/old/codegen.html
 
 Can it wrap code passing around arrays? If so, it really does magic that
 I don't understand.

If your array is contiguous, it really is only a matter of passing
along a pointer and dimensions.

By writing your C-functions in the form

void func(double* data, int rows, int cols, double* out) { }

wrapping becomes trivial.
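On the Python side, numpy.ctypeslib can enforce exactly this calling convention. A sketch (the library and function names in the comments are hypothetical):

```python
import numpy as np
from numpy.ctypeslib import ndpointer

# Argument type for: void func(double* data, int rows, int cols, double* out)
arr2d = ndpointer(dtype=np.float64, ndim=2, flags='C_CONTIGUOUS')

# With a real shared library you would declare (names are hypothetical):
#   lib = np.ctypeslib.load_library('mylib', '.')
#   lib.func.argtypes = [arr2d, ctypes.c_int, ctypes.c_int, arr2d]
#   lib.func.restype = None

# Even without a library, the ndpointer type validates arrays for us:
data = np.arange(12.0).reshape(3, 4)
arr2d.from_param(data)              # accepted: C-contiguous float64
try:
    arr2d.from_param(data[:, ::2])  # rejected: not contiguous
    contiguity_checked = False
except TypeError:
    contiguity_checked = True
```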

Cheers
Stéfan


Re: [Numpy-discussion] tofile speed

2007-07-25 Thread Charles R Harris

On 7/25/07, Lars Friedrich [EMAIL PROTECTED] wrote:


Hello,

I tried the following:


### start code

a = N.random.rand(100)

myFile = file('test.bin', 'wb')

for i in range(100):
    a.tofile(myFile)

myFile.close()

### end code


And this gives roughly 50 MB/s on my office machine but only 6.5 MB/s on
the machine that I was reporting about.

Both computers use Python 2.4.3 with Enthought 1.0.0 and numpy 1.0.1.

So I think I will go and check the hard-disk drivers. array.tofile does
not seem to be the problem and actually seems to be very fast. Any other
recommendations?



You might check what disk controllers the disks are using. I got an almost
10x speedup moving some disks from a Dell PCI CERC board to the onboard SATA
and using software RAID. Sometimes DMA isn't enabled, but that is pretty
rare these days.

Chuck


Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread Gael Varoquaux
On Wed, Jul 25, 2007 at 04:44:08PM +0200, Stefan van der Walt wrote:
 On Wed, Jul 25, 2007 at 03:41:37PM +0200, Gael Varoquaux wrote:
  On Wed, Jul 25, 2007 at 06:38:55AM -0700, Ray Schumacher wrote:
   The codeGenerator is magic, if you ask me:
   http://starship.python.net/crew/theller/ctypes/old/codegen.html

  Can it wrap code passing around arrays? If so, it really does magic that
  I don't understand.

 If your array is contiguous, it really is only a matter of passing
 along a pointer and dimensions.

 By writing your C-functions in the form

 void func(double* data, int rows, int cols, double* out) { }

 wrapping becomes trivial.

Yes, I have done this many times. It is trivial and very convenient. I
was just wondering if the code generator could detect this pattern.

Gaël


[Numpy-discussion] Change to get_printoptions?

2007-07-25 Thread Zachary Pincus
Hello all,

I just recently updated to the SVN version of numpy to test my code  
against it, and found that a small change made to  
numpy.get_printoptions (it now returns a dictionary instead of a  
list) breaks my code.

Here's the changeset:
http://projects.scipy.org/scipy/numpy/changeset/3877

I'm not really looking forward to needing to detect numpy versions just
so I can do the right thing with get_printoptions, but I do agree that
the new version of the function is more sensible. My question is whether
there's any particular policy about backwards-incompatible Python API
changes, or if I need to be aware of their possibility at every point
release. (Either is fine -- I'm happy for numpy to be better at the cost
of incompatibility, but I'd like to know whether changes like these are
the rule or the exception.)

Also, in terms of compatibility checking, has anyone written a little
function to check whether numpy is within a particular version range?
Specifically, one that handles numpy builds from SVN as well as from
release tarballs.
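A minimal helper along those lines (the function name is mine, and the cutoff version in the usage line is only illustrative) could tolerate SVN/dev suffixes by stopping at the first non-numeric component:

```python
import numpy

def version_tuple(vstring):
    """Turn '1.0.1' or '1.0.4.dev3877' into a comparable tuple of ints.

    Any non-numeric component (an SVN/dev suffix) ends the tuple.
    """
    parts = []
    for piece in vstring.split('.'):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

# Usage against the running numpy (the cutoff shown is illustrative):
new_enough = version_tuple(numpy.__version__) >= (1, 0, 1)
```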


Thanks,

Zach Pincus

Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine



Re: [Numpy-discussion] output arguments

2007-07-25 Thread Thomas Breuel

 For example, it would be nice if outer
 supported:

 outer(b,c,output=a)
 outer(b,c,increment=a)
 outer(b,c,increment=a,scale=eps)

 or maybe one could specify an accumulation ufunc, with addition,
 multiplication, min, and max being fast, and with an optional scale
 parameter.

What would the increment and scale parameters do? Why should they be
part of the outer() interface? Have you looked at the .outer() method on
the add, multiply, minimum, and maximum ufuncs?



output=a would put the output of the operation into a;
increment=a would increment the values in a by the result of the operation;
increment=a, scale=eps would increment the values in a by the result
scaled by eps.

I think a good goal for a numerical scripting language should be to allow
people to express common numerical algorithms without extraneous array
allocations for intermediate results; in my experience, a few dozen such
primitives let you avoid a lot of native code.
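For comparison, the existing ufunc machinery already covers the output= case, and the increment variants can be written with one temporary for the product (a sketch, not a claim that outer() grows these keywords):

```python
import numpy as np

b = np.arange(3.0)
c = np.arange(4.0)

# output=a: write the outer product directly into a preallocated array
a = np.empty((3, 4))
np.multiply.outer(b, c, out=a)

# increment=a: add the outer product into a (the product is a temporary)
np.add(a, np.multiply.outer(b, c), out=a)

# increment=a, scale=eps: same, with the scale factor folded in
eps = 0.5
np.add(a, eps * np.multiply.outer(b, c), out=a)
```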


Another approach might be to provide, in addition to the convenient
 high-level NumPy operations, direct bindings for BLAS and/or similar
 libraries, with Fortran-like procedural interfaces, but I can't find any
 such libraries in NumPy or SciPy.  Am I missing something?

scipy.linalg.flapack, etc.



Thanks; I'll have to take a look. I don't remember whether LAPACK contains
these low-level ops itself.

Cheers,
Thomas.


Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread Chris Barker
Ray Schumacher wrote:
 I agree with others that ctypes might be your best path.

Pyrex is a good bet too:

http://www.scipy.org/Cookbook/Pyrex_and_NumPy

The advantage with Pyrex is that you don't have to write any C at all.

You will have to use a compiler that is compatible with your Python build.

I found MinGW very easy to use with the python.org Python 2.5 on
Windows -- distutils has built-in support for it.

http://boodebr.org/main/python/build-windows-extensions
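A minimal distutils sketch of that MinGW setup ('mymod' and 'mymod.c' are placeholder names):

```python
# setup.py -- minimal sketch; 'mymod' and 'mymod.c' are placeholder names
from distutils.core import setup, Extension

setup(
    name='mymod',
    ext_modules=[Extension('mymod', sources=['mymod.c'])],
)

# Build against a python.org Python with the MinGW compiler:
#   python setup.py build_ext --compiler=mingw32
```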

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE  (206) 526-6329   fax
Seattle, WA  98115      (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread David Cournapeau
Gael Varoquaux wrote:
 On Wed, Jul 25, 2007 at 04:44:08PM +0200, Stefan van der Walt wrote:
 On Wed, Jul 25, 2007 at 03:41:37PM +0200, Gael Varoquaux wrote:
 On Wed, Jul 25, 2007 at 06:38:55AM -0700, Ray Schumacher wrote:
 The codeGenerator is magic, if you ask me:
 http://starship.python.net/crew/theller/ctypes/old/codegen.html

 Can it wrap code passing around arrays? If so, it really does magic that
 I don't understand.

 If your array is contiguous, it really is only a matter of passing
 along a pointer and dimensions.

 By writing your C-functions in the form

 void func(double* data, int rows, int cols, double* out) { }

 wrapping becomes trivial.

 Yes, I have done this many times. It is trivial and very convenient. I
 was just wondering if the code generator could detect this pattern.
I don't see either how to magically generate those functions, since C
has no concept of arrays (I mean beyond a series of contiguous bytes):
if you see the declaration int f(double* in, int rows, int cols), the
compiler does not know that it means a double array of size rows * cols,
or that element (i, j) is given by in[i*cols + j]. Actually, you don't
know either without reading the source code or the code conventions :).

Now, if you always use the same convention, I think it is conceptually
possible to automatically generate the wrappers, a bit like swig does
with typemaps, for example (maybe f2py can do it for Fortran code? I
have never used f2py, but I think Fortran has a concept of arrays and
matrices?).
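The convention described here is numpy's default C (row-major) layout, where element (i, j) of a rows-by-cols array sits at flat offset i*cols + j:

```python
import numpy as np

rows, cols = 3, 4
a = np.arange(rows * cols, dtype=np.float64).reshape(rows, cols)  # C-contiguous

flat = a.ravel()      # view of the underlying buffer
i, j = 2, 3
assert flat[i * cols + j] == a[i, j]
```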

David


Re: [Numpy-discussion] Should I use numpy array?

2007-07-25 Thread Robert Kern
Geoffrey Zhu wrote:
 Hi,
 
 I am writing a function that would take a list of datetime objects and
 a list of single-letter characters (such as ['A', 'B', 'C']). The number
 of items tends to be big, and both the list and the numpy array have all
 the functionality I need.
 
 Do you think I should use numpy arrays or regular lists?

If lists have all the functionality that you need, then I would probably
recommend sticking with them if the contents are going to be objects rather than
numbers. numpy object arrays can be tricky.
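One concrete illustration of that trickiness (the dates are made up):

```python
import datetime
import numpy as np

dates = [datetime.date(2007, 7, 25), datetime.date(2007, 7, 26)]

# A list just holds the objects; an object array does too, but numpy
# operations on it fall back to per-element Python calls under the hood.
arr = np.array(dates, dtype=object)

# Elementwise comparison works...
mask = arr > datetime.date(2007, 7, 25)
# ...but there is no fast dtype-specific machinery behind it, so the
# array offers little speed advantage over the plain list.
```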

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


[Numpy-discussion] Should I use numpy array?

2007-07-25 Thread Geoffrey Zhu
Hi,

I am writing a function that would take a list of datetime objects and
a list of single-letter characters (such as ['A', 'B', 'C']). The number
of items tends to be big, and both the list and the numpy array have all
the functionality I need.

Do you think I should use numpy arrays or regular lists?

Thanks,
cg