[Numpy-discussion] Bug in loadtxt

2010-08-21 Thread Thomas Robitaille
Hi,

I am running into a precision issue with np.loadtxt. I have a data file with 
the following contents:

$ cat data.txt
-9.61922814E-01
-9.96192290E-01
-9.99619227E-01
-9.99961919E-01
-9.6192E-01
-9.9611E-01
-1.E+00

If I try to read this in using loadtxt, which should read numbers as 
(64-bit) floats by default, I get:

Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) 
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'2.0.0.dev8657'
>>> np.loadtxt('data.txt')
array([-1., -1., -1., -1., -1., -1., -1.])

If I now create a file called data2.txt with only the first line:

$ cat data2.txt
-9.61922814E-01

loadtxt works correctly:

Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) 
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'2.0.0.dev8657'
>>> np.loadtxt('data2.txt')
array(-0.961922814)

I have submitted a bug report:

http://projects.scipy.org/numpy/ticket/1589

Cheers,

Tom
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bug in loadtxt

2010-08-21 Thread josef.pktd
On Sat, Aug 21, 2010 at 1:43 PM, Thomas Robitaille
thomas.robitai...@gmail.com wrote:
 Hi,

 I am running into a precision issue with np.loadtxt. I have a data file with 
 the following contents:

 $ cat data.txt
 -9.61922814E-01
 -9.96192290E-01
 -9.99619227E-01
 -9.99961919E-01
 -9.6192E-01
 -9.9611E-01
 -1.E+00

 If I try and read this in using loadtxt, which should read numbers in using 
 (64-bit) float by default, I get:

 Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
 [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy as np
 >>> np.__version__
 '2.0.0.dev8657'
 >>> np.loadtxt('data.txt')
 array([-1., -1., -1., -1., -1., -1., -1.])

 If I now create a file called data2.txt with only the first line:

 $ cat data2.txt
 -9.61922814E-01

 loadtxt works correctly:

 Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
 [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy as np
 >>> np.__version__
 '2.0.0.dev8657'
 >>> np.loadtxt('data2.txt')
 array(-0.961922814)

 I have submitted a bug report:

 http://projects.scipy.org/numpy/ticket/1589


are you sure this is not just a print precision problem?

>>> a = np.array([-9.61922814E-01,
...               -9.96192290E-01,
...               -9.99619227E-01,
...               -9.99961919E-01,
...               -9.6192E-01,
...               -9.9611E-01,
...               -1.E+00])
>>> a
array([-1., -1., -1., -1., -1., -1., -1.])
>>> a[0]
-0.961922814
>>> a[:1]
array([-1.])
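The effect Josef demonstrates can be controlled with NumPy's print options; a minimal sketch (the values below are chosen for illustration, they are not the original data file):

```python
import numpy as np

# Values that differ from -1.0 only past the 8th significant digit
# (illustrative, not the original data).
a = np.array([-0.961922814, -0.999999999])

print(repr(a))  # default precision=8 rounds the second value to -1.

np.set_printoptions(precision=12)
print(repr(a))  # the fully stored values are still there
```

So the full precision is always stored; only the array display rounds.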

Josef


 Cheers,

 Tom


Re: [Numpy-discussion] Bug in loadtxt

2010-08-21 Thread Thomas Robitaille


josef.pktd wrote:
 
 are you sure this is not just a print precision problem?
 

Thanks for pointing this out -- it does seem to be just the print
precision. I didn't notice this before because, for the last few
elements of the array, print still gives just -1:

In [19]: for x in a:
   ....:     print x
   ....:
-0.9619
-0.9962
-0.9996
-1.0
-1.0
-1.0
-1.0

But I now realize that to get the full precision, I should have done

In [20]: for x in a:
   ....:     repr(x)
   ....:
Out[20]: '-0.961922814'
Out[20]: '-0.99619229'
Out[20]: '-0.999619227'
Out[20]: '-0.999961919'
Out[20]: '-0.96192'
Out[20]: '-0.99611'
Out[20]: '-1.0'

Cheers,

Tom






Re: [Numpy-discussion] [ANN] carray: an in-memory compressed data container

2010-08-21 Thread Sebastian Haase
Hi Francesc,

another exciting project ... congratulations!
Am I correct in thinking that memmapping a carray would also be a
great speed advantage over memmapped ndarrays? Let's say I have a
2 GB ndarray memmapped over an NFS network connection -- should the
speed increase simply scale with the compression factor?

Regards,
Sebastian


On Sat, Aug 21, 2010 at 1:31 AM, Francesc Alted fal...@pytables.org wrote:
 2010/8/20, Zbyszek Szmek zbys...@in.waw.pl:
 OK, I've got a case where carray really shines :|

 zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
 bench/concat.py numpy 80 1000 4 1
 problem size: (80) x 1000 = 10^8.90309
 time for concat: 4.806s
 size of the final container: 6103.516 MB
 zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
 bench/concat.py concat 80 1000 4 1
 problem size: (80) x 1000 = 10^8.90309
 time for concat: 3.475s
 size of the final container: 6103.516 MB
 zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
 bench/concat.py carray 80 1000 4 1
 problem size: (80) x 1000 = 10^8.90309
 time for concat: 1.434s
 size of the final container: 373.480 MB

 Size is set to NOT hit the swap. This is still the easily compressible
 arange... but still, the results are very nice.

 Wow, the results with your processor are much nicer than with my Atom
 indeed.  But yeah, I somewhat expected this because Blosc works much
 faster with recent processors, as can be seen in:

 http://blosc.pytables.org/trac/wiki/SyntheticBenchmarks

 BTW, the difference between memcpy and memmove times for this
 benchmark is almost 40% for your computer, which is really large :-/
 Hmm, something must go really wrong with memcpy in some glibc
 distributions...

 At any rate, for real data that is less compressible, the advantages
 of carray will be less apparent, but at least the proof of concept
 seems to work as intended, so I'm very happy with it.  I'm also
 expecting that the combination carray/numexpr will perform faster
 than plain computations programmed in C, especially with modern
 processors, but we'll see how much faster exactly.

 Of course when the swap is hit, the ratio between carray and a normal array
 can grow to infinity :)

 zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
 bench/concat.py numpy 100 1000 3 1
 problem size: (100) x 1000 = 10^9
 time for concat: 35.700s
 size of the final container: 7629.395 MB
 zbys...@escher:~/python/numpy/carray-0.1.dev$ PYTHONPATH=. python
 bench/concat.py carray 100 1000 3 1
 problem size: (100) x 1000 = 10^9
 time for concat: 1.751s
 size of the final container: 409.633 MB

 Exactly.  This is another scenario where the carray concept can be
 really useful.

 --
 Francesc Alted


[Numpy-discussion] mingw-w64 tutorial ?

2010-08-21 Thread Sebastian Haase
Hi,

this is somewhat OT for this list, but since I know that David and
many others here have lots of experience compiling C extensions, I
thought I could just ask:
Looking at
http://sourceforge.net/projects/mingw-w64/files/
I did not know (even after reading the FAQ) which file to download and
how things would eventually work.

I have 64-bit Windows 7 installed, and got many precompiled packages
for amd64 Python 2.7 from
http://www.lfd.uci.edu/~gohlke/pythonlibs/
(thanks to Christoph Gohlke for all the work).
But now I have some C++ extensions of my own, and I know how to build
them using Cygwin -- but that would only produce 32-bit modules, which
should be unusable.

So, the question is whether someone has or knows of a tutorial on how
to go about this, step by step. This info could maybe even go on the
scipy wiki.

Thanks,
Sebastian Haase


Re: [Numpy-discussion] mingw-w64 tutorial ?

2010-08-21 Thread Christoph Gohlke


On 8/21/2010 1:44 PM, Sebastian Haase wrote:
 Hi,

 this is somewhat OT for this list, but since I know that David and
 many others here have lot's of experience compiling C extensions I
 thought I could just ask:
 Looking at
 http://sourceforge.net/projects/mingw-w64/files/
 I did not know (even after reading the FAQ) which file to download and
 how things would eventually work.

 I have a 64bit windows 7 installed, and got many precompiled packages
 for amd64 Python 2.7 from
 http://www.lfd.uci.edu/~gohlke/pythonlibs/
 (thanks to  Christoph Gohlke for all the work)
 But now I have some C++ extensions on my own, and know how build them
 using cygwin -- but that would only produce 32bit modules and should
 be unusable.

 So, the question is if someone has or knows of some tutorial about how
 to go about this - step by step. This info could maybe even go the
 scipy wiki

 Thanks,
 Sebastian Haase


Hi Sebastian,

I am not aware of such a tutorial. There's some information at 
http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport

I did not have a good experience the last time (about a year ago) I 
tried mingw-w64: occasional crashes during compilation and at runtime. 
Probably that has changed by now. At a minimum, you have to create the 
missing libpython and libmsvcr90 import libraries from the DLLs and 
make msvcr90 the default CRT.

You probably know that the free Windows 7 Platform SDK can be used to 
build extensions written in C89 for Python >= 2.6. 
http://mattptr.net/2010/07/28/building-python-extensions-in-a-modern-windows-environment/
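For context, whichever toolchain you use is normally driven through a distutils setup script; a minimal sketch, with hypothetical module and file names:

```python
# Minimal distutils script for a C extension
# (module and source names are hypothetical).
from distutils.core import setup, Extension

setup(
    name="mymod",
    version="0.1",
    ext_modules=[Extension("mymod", sources=["mymod.c"])],
)
```

With the Platform SDK, you typically run `python setup.py build_ext` from an SDK command prompt after setting DISTUTILS_USE_SDK=1 and MSSdk=1, so distutils uses the SDK's compiler environment.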

--
Christoph





Re: [Numpy-discussion] mingw-w64 tutorial ?

2010-08-21 Thread Sebastian Haase
On Sat, Aug 21, 2010 at 11:29 PM, Christoph Gohlke cgoh...@uci.edu wrote:


 On 8/21/2010 1:44 PM, Sebastian Haase wrote:
 Hi,

 this is somewhat OT for this list, but since I know that David and
 many others here have lot's of experience compiling C extensions I
 thought I could just ask:
 Looking at
 http://sourceforge.net/projects/mingw-w64/files/
 I did not know (even after reading the FAQ) which file to download and
 how things would eventually work.

 I have a 64bit windows 7 installed, and got many precompiled packages
 for amd64 Python 2.7 from
 http://www.lfd.uci.edu/~gohlke/pythonlibs/
 (thanks to  Christoph Gohlke for all the work)
 But now I have some C++ extensions on my own, and know how build them
 using cygwin -- but that would only produce 32bit modules and should
 be unusable.

 So, the question is if someone has or knows of some tutorial about how
 to go about this - step by step. This info could maybe even go the
 scipy wiki

 Thanks,
 Sebastian Haase


 Hi Sebastian,

 I am not aware of such a tutorial. There's some information at
 http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport

 I did not have good experience last time (about a year ago) I tried
 mingw-w64. Occasional crashes during compilation and at runtime.
 Probably that has changed. At least you have to create the missing
 libpython and libmsvcr90 libraries from the dlls and make libmsvcr90 the
 default crt.

 You probably know that the free Windows 7 Platform SDK can be used to
 build Python =2.6 extensions written in C89.
 http://mattptr.net/2010/07/28/building-python-extensions-in-a-modern-windows-environment/

 --
Hi Christoph,

I did not exactly know this - thanks for the info. (I knew about
something called Visual Studio Express 2003, but that only
works/worked for Python 2.5, I think...)

Rephrasing my original question: is mingw-w64 at all easy to use by
now? How about cross-compiling to 64-bit Windows from a 32-bit Ubuntu
(which I could easily run in VirtualBox)?

(But I'm not opposed at all to the free Windows 7 Platform SDK, so
I'll look into that -- giant download!?)

Thanks,
Sebastian


Re: [Numpy-discussion] mingw-w64 tutorial ?

2010-08-21 Thread Christoph Gohlke


On 8/21/2010 2:37 PM, Sebastian Haase wrote:
 On Sat, Aug 21, 2010 at 11:29 PM, Christoph Gohlke cgoh...@uci.edu wrote:


 On 8/21/2010 1:44 PM, Sebastian Haase wrote:
 Hi,

 this is somewhat OT for this list, but since I know that David and
 many others here have lot's of experience compiling C extensions I
 thought I could just ask:
 Looking at
 http://sourceforge.net/projects/mingw-w64/files/
 I did not know (even after reading the FAQ) which file to download and
 how things would eventually work.

 I have a 64bit windows 7 installed, and got many precompiled packages
 for amd64 Python 2.7 from
 http://www.lfd.uci.edu/~gohlke/pythonlibs/
 (thanks to  Christoph Gohlke for all the work)
 But now I have some C++ extensions on my own, and know how build them
 using cygwin -- but that would only produce 32bit modules and should
 be unusable.

 So, the question is if someone has or knows of some tutorial about how
 to go about this - step by step. This info could maybe even go the
 scipy wiki

 Thanks,
 Sebastian Haase


 Hi Sebastian,

 I am not aware of such a tutorial. There's some information at
 http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport

 I did not have good experience last time (about a year ago) I tried
 mingw-w64. Occasional crashes during compilation and at runtime.
 Probably that has changed. At least you have to create the missing
 libpython and libmsvcr90 libraries from the dlls and make libmsvcr90 the
 default crt.

 You probably know that the free Windows 7 Platform SDK can be used to
 build Python=2.6 extensions written in C89.
 http://mattptr.net/2010/07/28/building-python-extensions-in-a-modern-windows-environment/

 --
 Hi Christoph,

 I did not exactly know this - thanks for the info (I knew about
 something called Visual Studio Express 2003- but that only
 works/worked for Python 2.5, I think...)

You can use Visual Studio Express 2008 for building 32-bit extensions 
for Python >= 2.6.


 Rephrasing my original question: Is the mingw-w64 at all easy by now

Don't know. David Cournapeau probably has the most experience.

http://bugs.python.org/issue4709
http://www.equation.com/servlet/equation.cmd?call=fortran

 ? How about cross compiling to 64bit Windows from a 32bit Ubuntu (that
 I could easily run on virtualbox) ?

I am also interested in cross-compiling on Ubuntu but have not found 
the time to get started. The IOCBio project cross-compiles their 32-bit 
extensions on Linux: 
http://code.google.com/p/iocbio/wiki/BuildWindowsInstallersOnLinux. 
But as you can see, they use Wine and MinGW...
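For the record, a 64-bit cross-build on Ubuntu would look roughly like this. This is only an untested sketch (package and path names are illustrative and vary by release), not a recipe known to work:

```shell
# Install the mingw-w64 cross toolchain (Ubuntu package name may vary).
sudo apt-get install mingw-w64

# Cross-compile a C extension for 64-bit Windows; the headers and
# import library must come from a Windows Python installation.
x86_64-w64-mingw32-gcc -shared -O2 \
    -I/path/to/win/Python27/include \
    mymod.c \
    -L/path/to/win/Python27/libs -lpython27 \
    -o mymod.pyd
```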


 (But I'm not apposed at all to the free Windows 7 Platform SDK, so
 I'll look into that -- giant download !?)

About one GB.


 Thanks,
 Sebastian

--
Christoph


Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 47, Issue 61

2010-08-21 Thread David Goldsmith
On Sat, Aug 21, 2010 at 10:00 AM, numpy-discussion-requ...@scipy.org wrote:

 Date: Fri, 20 Aug 2010 14:30:58 -0500
 From: Robert Kern robert.k...@gmail.com
 Subject: Re: [Numpy-discussion] Making MATLAB and Python play nice
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID: aanlktikrwzd0vtjisk+6xh2djbca1v1sxx_ln6g4g...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8

 On Fri, Aug 20, 2010 at 14:25, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  Hi!  Please forgive the re-post: I forgot to change the subject line
  and I haven't seen a response to this yet, so I'm assuming the former
  might be the cause of the latter.

 Or perhaps because the licenses are plainly visible at the links?


Ah, I see: if I can't be bothered to click on the links, no one else can be
bothered to tell me that that's all I need to do to get my question
answered.  Unfortunately, the solutions are useless to me if they're not
freely redistributable, so I have no incentive to click on the links--which
do not advertise that they answer the licensing question--'til that question
is answered - catch-22.

DG



 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
   -- Umberto Eco






-- 
Privacy is overrated; Identity isn't.


Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 47, Issue 61

2010-08-21 Thread Robert Kern
On Sat, Aug 21, 2010 at 17:58, David Goldsmith d.l.goldsm...@gmail.com wrote:
 On Sat, Aug 21, 2010 at 10:00 AM, numpy-discussion-requ...@scipy.org
 wrote:

 Date: Fri, 20 Aug 2010 14:30:58 -0500
 From: Robert Kern robert.k...@gmail.com
 Subject: Re: [Numpy-discussion] Making MATLAB and Python play nice
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
        aanlktikrwzd0vtjisk+6xh2djbca1v1sxx_ln6g4g...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8

 On Fri, Aug 20, 2010 at 14:25, David Goldsmith d.l.goldsm...@gmail.com
 wrote:
  Hi!  Please forgive the re-post: I forgot to change the subject line
  and I haven't seen a response to this yet, so I'm assuming the former
  might be the cause of the latter.

 Or perhaps because the licenses are plainly visible at the links?

 Ah, I see: if I can't be bothered to click on the links, no one else can be
 bothered to tell me that that's all I need to do to get my question
 answered.

Well, the real reason you didn't get a response is most likely the
usual: no one got around to it, or lost track of it. Though responding
to the digest is not helping you now.

 Unfortunately, the solutions are useless to me if they're not
 freely redistributable, so I have no incentive to click on the links--which
 do not advertise that they answer the licensing question--

It's pretty standard these days for home pages of software to contain
the license information. No one needs to advertise that any more.

 'til that question
 is answered - catch-22.

Do you really think that actually solving the problem you posed is
not enough of an incentive to do the minimal action of clicking a link
and reading a little? Hell, answering the question "Are they
redistributable?" is incentive to click the links. It's faster and
easier than posting to the list.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] [ANN] carray: an in-memory compressed data container

2010-08-21 Thread Francesc Alted
Hey Sebastian,

2010/8/21, Sebastian Haase seb.ha...@gmail.com:
 Hi Francesc,

 another exciting project ... congratulations !

Thanks!

 Am I correct in thinking that memmapping a carray would also be a
 great speed advantage over memmapped ndarrays ?  Let's say I have a
 2Gbyte ndarray memmaped over a NFS network connection, should the
 speed increase simply scale with the compression factor ?

Mmh, in principle yes.  However, carray is based on the concept of
independent chunks of data, and frankly, it does not make a lot of
sense to me to have to create many small memmapped files in order to
keep the chunks.

Instead, I'd use PyTables (what else? ;-) for this, because it is also
based on the same chunk concept as carray, but the chunks are saved in
a monolithic (HDF5) file, which is much easier to handle.  These chunks
can be compressed with Blosc too, so I/O is fast (although, due to the
HDF5 overhead, a compressed memmap approach would probably be faster
still, but much more difficult to manage). And last but not least, this
does not have the virtual-memory size limitation of memmapped
solutions, which I find quite uncomfortable.
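The independent-chunk idea can be illustrated with nothing but zlib. This is only a toy sketch of the concept, not carray's or PyTables' actual implementation, and the chunk size is an arbitrary choice:

```python
import zlib
import numpy as np

CHUNK = 4096  # elements per chunk (arbitrary for this sketch)

def compress_chunks(arr):
    """Store a 1-D array as a list of independently compressed chunks."""
    return [zlib.compress(arr[i:i + CHUNK].tobytes())
            for i in range(0, len(arr), CHUNK)]

def read_element(chunks, idx, dtype=np.float64):
    """Read one element, decompressing only the chunk that holds it."""
    block = np.frombuffer(zlib.decompress(chunks[idx // CHUNK]), dtype=dtype)
    return block[idx % CHUNK]

a = np.arange(1000000, dtype=np.float64)  # easily compressible, like the benchmark
chunks = compress_chunks(a)
assert read_element(chunks, 123456) == a[123456]
```

A memmapped variant would have to keep each chunk in its own file (or manage offsets within one file), which is exactly the bookkeeping that HDF5/PyTables does for you.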

-- 
Francesc Alted


Re: [Numpy-discussion] mingw-w64 tutorial ?

2010-08-21 Thread Sebastian Haase
On Sun, Aug 22, 2010 at 12:02 AM, Christoph Gohlke cgoh...@uci.edu wrote:


 On 8/21/2010 2:37 PM, Sebastian Haase wrote:
 On Sat, Aug 21, 2010 at 11:29 PM, Christoph Gohlke cgoh...@uci.edu wrote:


 On 8/21/2010 1:44 PM, Sebastian Haase wrote:
 Hi,

 this is somewhat OT for this list, but since I know that David and
 many others here have lot's of experience compiling C extensions I
 thought I could just ask:
 Looking at
 http://sourceforge.net/projects/mingw-w64/files/
 I did not know (even after reading the FAQ) which file to download and
 how things would eventually work.

 I have a 64bit windows 7 installed, and got many precompiled packages
 for amd64 Python 2.7 from
 http://www.lfd.uci.edu/~gohlke/pythonlibs/
 (thanks to  Christoph Gohlke for all the work)
 But now I have some C++ extensions on my own, and know how build them
 using cygwin -- but that would only produce 32bit modules and should
 be unusable.

 So, the question is if someone has or knows of some tutorial about how
 to go about this - step by step. This info could maybe even go the
 scipy wiki

 Thanks,
 Sebastian Haase


 Hi Sebastian,

 I am not aware of such a tutorial. There's some information at
 http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport

 I did not have good experience last time (about a year ago) I tried
 mingw-w64. Occasional crashes during compilation and at runtime.
 Probably that has changed. At least you have to create the missing
 libpython and libmsvcr90 libraries from the dlls and make libmsvcr90 the
 default crt.

 You probably know that the free Windows 7 Platform SDK can be used to
 build Python=2.6 extensions written in C89.
 http://mattptr.net/2010/07/28/building-python-extensions-in-a-modern-windows-environment/

 --
 Hi Christoph,

 I did not exactly know this - thanks for the info (I knew about
 something called Visual Studio Express 2003- but that only
 works/worked for Python 2.5, I think...)

 You can use Visual Studio Express 2008 for building 32 bit extensions
 for Python =2.6.


 Rephrasing my original question: Is the mingw-w64 at all easy by now

 Don't know. David Cournapeau probably has the most experience.

 http://bugs.python.org/issue4709
 http://www.equation.com/servlet/equation.cmd?call=fortran

 ? How about cross compiling to 64bit Windows from a 32bit Ubuntu (that
 I could easily run on virtualbox) ?

 I am also interested in cross compiling on Ubuntu but have not found the
 time to get started. The IOCBio project cross-compiles their 32 bit
 extensions on Linux
 http://code.google.com/p/iocbio/wiki/BuildWindowsInstallersOnLinux.
 But as you can see they use Wine and Mingw...


 (But I'm not apposed at all to the free Windows 7 Platform SDK, so
 I'll look into that -- giant download !?)

 About one GB.


Do you know if that contains a C++ compiler?  The first page, before
it starts the actual download, has "Visual C++ Compilers" grayed out
... !?

-Sebastian


Re: [Numpy-discussion] Making MATLAB and Python play nice

2010-08-21 Thread Ken Watford
On Fri, Aug 20, 2010 at 3:25 PM, David Goldsmith
d.l.goldsm...@gmail.com wrote:
 Hi!  Please forgive the re-post: I forgot to change the subject line
 and I haven't seen a response to this yet, so I'm assuming the former
 might be the cause of the latter.  My question follows the quoted
 posts.  Thanks!

 *snip*

 Thanks for the ideas; are any/all of these solutions freely/easily
 redistributable?

 DG

The one Robin suggested is mine. It uses the MIT license, so basically
you can do whatever you want with it.

I haven't really been maintaining it, as I've been busy with other
things, though I have been pondering a rewrite in Cython. If someone
else wants to take charge of it, though, feel free.

(It's amusing that both are called pymex. I spoke with the other guy
once, and apparently we picked names at around the same time. I guess
for what it is, that's the most obvious name one could pick.)