Re: [Numpy-discussion] N-D array interface page is out of date
Regarding http://numpy.scipy.org/array_interface.shtml : I just noticed that this out-of-date page is now featured in recent discussions about the future of Numpy in Ubuntu: https://bugs.launchpad.net/ubuntu/+source/python-numpy/+bug/309215 Can someone with appropriate permissions fix the page, or give me the appropriate permissions so I can do it? I think even deleting the page is better than keeping it as-is. -Andrew Andrew Straw wrote: Hi, I just noticed that the N-D array interface page is outdated and doesn't mention the buffer interface that is standard with Python 2.6 and Python 3.0: http://numpy.scipy.org/array_interface.shtml This page is linked to from http://numpy.scipy.org/ I suggest, at the minimum, modifying the page with really annoying blinking red letters at the top (or other suitable warning) that this is deprecated and that people should use http://www.python.org/dev/peps/pep-3118/ instead. ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
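For readers unfamiliar with the difference: the legacy `__array_interface__` protocol describes an array via a Python-level dict, while PEP 3118 exposes the same information through the C-level buffer protocol, which `memoryview` and `np.asarray` understand directly. A minimal sketch (the array shape and dtype are arbitrary illustration values):

```python
import numpy as np

a = np.arange(12, dtype=np.int32).reshape(3, 4)

# PEP 3118: any buffer-providing object can be wrapped zero-copy
m = memoryview(a)          # uses the buffer protocol, not __array_interface__
print(m.shape, m.ndim)     # (3, 4) 2

# round-trip back into numpy without copying the data
b = np.asarray(m)
print(np.shares_memory(a, b))  # True

# the legacy protocol is still there, as a plain dict
print('shape' in a.__array_interface__)  # True
```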
Re: [Numpy-discussion] SVD errors
Mon, 02 Feb 2009 18:27:05 -0600, Robert Kern wrote: On Mon, Feb 2, 2009 at 18:21, mtrum...@berkeley.edu wrote: Hello list.. I've run into two SVD errors over the last few days. Both errors are identical in numpy/scipy. I've submitted a ticket for the 1st problem (numpy ticket #990). Summary is: some builds of the lapack_lite module linking against system LAPACK (not the bundled dlapack_lite.o, etc) give a LinAlgError: SVD did not converge exception on my matrix. This error does occur using Mac's Accelerate framework LAPACK, and a coworker's Ubuntu LAPACK version. It does not seem to happen using ATLAS LAPACK (nor using Octave/Matlab on said Ubuntu). These are almost certainly issues with the particular implementations of LAPACK that you are using. I don't think there is anything we can do from numpy or scipy to change this. Yes, this is almost certainly a LAPACK problem. If in doubt, you can test it with the following F90 program (be sure to link it against the same LAPACK as Numpy). Save the matrices with np.savetxt('foo.txt', x.ravel()) and run './test foo.txt'.

program test
  implicit none
  integer, parameter :: N = 128
  double precision :: A(N*N), S(N), U(N,N), Vh(N,N)
  double precision, allocatable :: WORK(:)
  double precision :: tmp
  integer :: IWORK(8*N)
  integer :: INFO = 0, LWORK
  read(*,*) A
  A = reshape(transpose(reshape(A, (/N, N/))), (/ N*N /))
  call dgesdd('A', N, N, A, N, S, U, N, Vh, N, tmp, -1, IWORK, INFO)
  LWORK = tmp
  if (info .ne. 0) stop 'lwork query failed'
  write(*,*) 'lwork:', lwork
  allocate(WORK(LWORK))
  call dgesdd('A', N, N, A, N, S, U, N, Vh, N, WORK, LWORK, IWORK, INFO)
  write(*,*) 'info:', INFO
  write(*,*) 'min(S):', minval(S)
  if (INFO .ne. 0) then
    write(*,*) ' - SVD failed to converge'
  end if
  if (minval(S) .lt. 0) then
    write(*,*) ' - negative singular value'
  end if
end program

___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
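For a quick check from the Python side (before reaching for Fortran), the same convergence and sign conditions can be tested directly against `np.linalg.svd`; the random matrix below is only a stand-in for the problematic one:

```python
import numpy as np

rng = np.random.default_rng(0)        # stand-in for the failing matrix
A = rng.standard_normal((128, 128))

try:
    U, S, Vh = np.linalg.svd(A)
except np.linalg.LinAlgError:
    print("SVD did not converge")     # the error reported in ticket #990
else:
    print("min(S):", S.min())         # singular values must be non-negative
    # sanity check: the factorization actually reconstructs A
    print(np.allclose(U @ np.diag(S) @ Vh, A))
```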
Re: [Numpy-discussion] porting NumPy to Python 3
Hi James, On Thu, Jan 29, 2009 at 2:11 AM, James Watson watson@gmail.com wrote: Hi, I am interested in contributing to the port of NumPy to Python 3. Who should I coordinate this effort with? I have started at the Python end of the problem (as opposed to http://www.scipy.org/Python3k), e.g. I have several patches to get 2to3 to work on NumPy's Python source code. I am sorry that no one has replied to your email. Could you please upload your patches somewhere? Ondrej ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] example reading binary Fortran file
On Friday 30 January 2009, David Froger wrote: ok for f2py! Otherwise, you will have to figure out how your Fortran program writes the file, i.e. what padding, metainformation, etc. are used. If you switch Fortran compiler, or even compiler version from the same vendor, you must start over again. In my experience, I have never had this kind of problem. I just have to convert files between big/little endian with uswap (http://linux.die.net/man/1/uswap), but I have never seen a Fortran program write data differently depending on the compiler. For my own work, I just make sure NEVER to do any I/O in Fortran! It is asking for trouble. I leave the I/O to Python or C, where it belongs. That way I know what data are written and what data are read. Unfortunately, binary files are mandatory in the context I work in. I use a scientific code written in Fortran to compute fluid dynamics. Typically the simulation is run on a supercomputer and generates gigabytes and gigabytes of data, so we must use the binary format, which requires less storage. Then I like to post-process the data using Python and Gnuplot.py. That's why I'm looking for an efficient, easy and 'standard' way to read binary Fortran files. (I think many people have the same need.) If you need to compact your datafiles to a maximum, you may want to write your data with the HDF5 library [1], which, besides using a binary format, allows on-the-fly compression. HDF5 is a fast-growing standard in scientific computing and it has wrappers for the most important languages like C, Fortran, Java and, of course, Python ;-) In particular, the available HDF5 interfaces for Python allow reading/writing native HDF5 files very easily. Also, many computational environments, like Matlab, Octave or IDL, support HDF5 files. [1] http://www.hdfgroup.org/HDF5/ Cheers, -- Francesc Alted ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] from_function
I've been using something I wrote: coef_from_function(function, delta, size), which does (C++ code):

double center = double(size-1)/2;
for (int i = 0; i < size; ++i)
    coef[i] = call<value_t>(func, double(i - center) * delta);

I thought to translate this to np.fromfunction. It seems fromfunction is not as flexible; it uses only a fixed integer grid? ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
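np.fromfunction does only supply the integer index grid, but since it hands the whole index array to your callable, the shift-and-scale can be folded into the callable itself. A rough equivalent of the C++ helper (the function name and signature mirror the post; this is a sketch, not an existing numpy API):

```python
import numpy as np

def coef_from_function(func, delta, size):
    """Sample func on `size` points centered on 0 and spaced by `delta`."""
    center = (size - 1) / 2.0
    # fromfunction only supplies the integer grid i = 0, 1, ..., size-1,
    # so shift and scale it before handing it to func
    return np.fromfunction(lambda i: func((i - center) * delta), (size,))

coefs = coef_from_function(np.cos, 0.1, 5)
print(coefs)   # symmetric around the center sample; cos(0) == 1 in the middle
```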
[Numpy-discussion] Numpy 1.3 release date ?
All, When can we expect numpy 1.3 to be released? Sincerely, P. ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] f2py: VS version on Windows
Thanks to all for clearing this up. I have been bouncing this issue off the folks at Intel and they allege that Intel C++ should be able to do this independent of the version of VS used originally (I am skeptical). I am still getting some MKL-related missing symbol errors that we are clearing up and I will post anything useful that I discover. ~Mike C. -Original Message- From: numpy-discussion-boun...@scipy.org [mailto:numpy-discussion-boun...@scipy.org] On Behalf Of David Cournapeau Sent: Monday, February 02, 2009 7:26 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] f2py: VS version on Windows On Tue, Feb 3, 2009 at 10:55 AM, Sturla Molden stu...@molden.no wrote: Python extensions must be built using the same compiler as was used to build the Python binary. Python 2.5.4 was built using MSVC2003 and so extensions for it must be built using the same compiler. The exception to this rule is that extensions built using mingw32 (and msys) will work with most, if not all, windows Python binaries. This is NOT correct. Although it is technically true, that's relatively irrelevant for practical matters: it is not currently possible to build numpy with VS 2008 and 7.1 CRT, and you have to build numpy with the same compiler as the one used by python if you use MS compilers. David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] genloadtxt question
Pierre, Should the following work?

import numpy as np
from StringIO import StringIO
from datetime import datetime
converter = {'date': lambda s: datetime.strptime(s, '%Y-%m-%d %H:%M:%SZ')}
data = np.ndfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'), delimiter=',',
    names=['date','stid'], dtype=None, converters=converter)

Right now, it's giving me the following:

Traceback (most recent call last):
  File "check_oban.py", line 15, in <module>
    converters=converter)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/lib/io.py", line 993, in ndfromtxt
    return genfromtxt(fname, **kwargs)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/lib/io.py", line 842, in genfromtxt
    locked=True)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/lib/_iotools.py", line 472, in update
    self.type = self._getsubdtype(func('0'))
  File "check_oban.py", line 9, in <lambda>
    lambda s: datetime.strptime(s,'%Y-%m-%d %H:%M:%SZ').replace(tzinfo=UTC)}
  File "/usr/lib64/python2.5/_strptime.py", line 330, in strptime
    (data_string, format))
ValueError: time data did not match format:  data=0  fmt=%Y-%m-%d %H:%M:%SZ

Which comes from a part of the code in updating converters where it passes the string '0' to the converter. Are the converters expected to handle what amounts to bad input even though the file itself has no such problems? Specifying the dtype doesn't appear to help either. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] genloadtxt question
On Feb 3, 2009, at 11:24 AM, Ryan May wrote: Pierre, Should the following work? import numpy as np from StringIO import StringIO converter = {'date':lambda s: datetime.strptime(s,'%Y-%m-%d %H:%M: %SZ')} data = np.ndfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'), delimiter=',', names=['date','stid'], dtype=None, converters=converter) Well, yes, it should work. That's indeed a problem with the getsubdtype method of the converter. The problem is that we need to estimate the datatype of the output of the converter. In most cases, trying to convert '0' works properly, not in yours however. In r6338, I force the type to object if converting '0' does not work. That's a patch till the next corner case... ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] genloadtxt question
Pierre GM wrote: On Feb 3, 2009, at 11:24 AM, Ryan May wrote: Pierre, Should the following work? import numpy as np from StringIO import StringIO converter = {'date':lambda s: datetime.strptime(s,'%Y-%m-%d %H:%M: %SZ')} data = np.ndfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'), delimiter=',', names=['date','stid'], dtype=None, converters=converter) Well, yes, it should work. That's indeed a problem with the getsubdtype method of the converter. The problem is that we need to estimate the datatype of the output of the converter. In most cases, trying to convert '0' works properly, not in yours however. In r6338, I force the type to object if converting '0' does not work. That's a patch till the next corner case... Thanks for the quick patch! And yeah, I can't think of any better behavior. It's actually what I ended up doing in my conversion function, so, if nothing else, it removes the user from having to write that kind of boilerplate code. Thanks, Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
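The boilerplate Ryan mentions, a converter that tolerates the '0' probe value, can be sketched like this. The `parse_date` helper and the `encoding` argument are my additions, not from the thread, and plain `genfromtxt` stands in for the old `ndfromtxt` wrapper:

```python
import numpy as np
from io import StringIO
from datetime import datetime

def parse_date(s):
    """Converter that survives genfromtxt probing it with the string '0'."""
    try:
        return datetime.strptime(s, '%Y-%m-%d %H:%M:%SZ')
    except ValueError:
        return None   # probe/bad input: fall back so the dtype becomes object

data = np.genfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'),
                     delimiter=',', names=['date', 'stid'],
                     dtype=None, converters={'date': parse_date},
                     encoding='utf-8')
print(data['date'], data['stid'])
```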
Re: [Numpy-discussion] N-D array interface page is out of date
2009/2/3 Andrew Straw straw...@astraw.com: Can someone with appropriate permissions fix the page or give me the appropriate permissions so I can do it? I think even deleting the page is better than keeping it as-is. Who all has editing access to this page? Is it hosted on scipy.org? Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] example reading binary Fortran file
Thanks a lot Francesc and Neil, your messages really helped me. I'll look at these solutions attentively. Here is what I wrote recently, but I'm beginning to understand it's effectively not portable...

def fread(fileObject, *arrayAttributs):
    """Read one record from a binary (=unformatted) Fortran file.

    Let's call 'record' the list of arrays written with one write in the
    Fortran file. Call one fread per write.

    Parameters:
      * fileObject, e.g.: fileObject = open("data.bin", 'rb')
      * arrayAttributs = ((shape1, dtype1, readorskip1),
                          (shape2, dtype2, readorskip2), ...)
          * shape: e.g. (100, 200)
          * dtype: e.g. '<f4' (little endian, single precision),
                        '>f8' (big endian, double precision)
          * readorskip = 0 or 1:
              1: the array is read and returned
              0: the array is skipped over and isn't returned

    Example (with "write(iunit) ux, uy, p" in the Fortran code):
        f = open("uxuyp.bin", 'rb')
        nx, ny = 100, 200
        p = fread(f, ((nx, ny), '<f8', 0), ((nx, ny), '<f8', 0),
                     ((nx, ny), '<f8', 1))
        f.close()
    """
    import numpy, os

    # compute the record size to be read (8 bytes for the two markers)
    recordBytes = 8
    for (shape, dtype, read) in arrayAttributs:
        dtype = numpy.dtype(dtype)
        # number of elements to be read in this array
        count = 1
        for size in shape:
            count *= size
        # size of the record
        recordBytes += count * dtype.itemsize

    # the file size
    fileSize = os.stat(fileObject.name)[6]

    # compare record size and file size
    if recordBytes > fileSize:
        import logging
        logging.error('Too much data to be read in %r', fileObject.name)
        logging.error('File size: %r', fileSize)
        logging.error('To be read: %r', recordBytes)
        return [None for (shape, dtype, read) in arrayAttributs if read]

    # skip the four bytes at the beginning of the record
    fileObject.seek(4, 1)

    # read the arrays in the record
    arrays = []
    for (shape, dtype, read) in arrayAttributs:
        # number of elements in this array
        count = 1
        for size in shape:
            count *= size
        if read:
            array = numpy.fromfile(fileObject, count=count,
                                   dtype=dtype).reshape(shape, order='F')
            arrays.append(array)
        else:
            dtype = numpy.dtype(dtype)
            fileObject.seek(count * dtype.itemsize, 1)

    # skip the four bytes at the end of the record
    fileObject.seek(4, 1)

    return arrays

___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] example reading binary Fortran file
the last line was missing: return arrays ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Operations on masked items
Ryan May wrote: Pierre, I know you did some preliminary work on helping to make sure that doing operations on masked arrays doesn't change the underlying data. I ran into the following today.

import numpy as np
a = np.ma.array([1,2,3], mask=[False, True, False])
b = a * 10
c = 10 * a
print b.data  # Prints [10  2 30]  Good!
print c.data  # Prints [10 10 30]  Oops.

I tracked it down to __call__ on the _MaskedBinaryOperation class. If there's a mask on the data, you use:

result = np.where(m, da, self.f(da, db, *args, **kwargs))

You can see that if a (and hence da) is a scalar, your masked values end up with the value of the scalar. If this is getting too hairy to handle not touching data, I understand. I just thought I should point out the inconsistency here. Well, I guess I hit send too soon. Here's one easy solution (consistent with what you did for __radd__): change the code for __rmul__ to do

return multiply(self, other)

instead of

return multiply(other, self)

That fixes it for me, and I don't see how it would break anything. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
I am trying to use numscons to build a project and have run into a show stopper. I am using: OS X 10.5 The builtin Python 2.5.2 Here is what I see upon running python setup.py scons: scons: Reading SConscript files ... DistutilsPlatformError: $MACOSX_DEPLOYMENT_TARGET mismatch: now 10.3 but 10.5 during configure: File /Users/bgranger/Library/Python/2.5/src/numscons/tests/examples/checkers/SConstruct, line 2: GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') File /Users/bgranger/Library/Python/2.5/site-packages/numscons-0.9.4-py2.5.egg/numscons/core/numpyenv.py, line 108: [this goes on for a while] This bug is one that I am familiar with. Here is a sketch: * numpy.distutils sets MACOSX_DEPLOYMENT_TARGET=10.3 if MACOSX_DEPLOYMENT_TARGET is not set in the environment. * But, the built-in Python on OS X 10.5 has MACOSX_DEPLOYMENT_TARGET=10.5. When Python is built, it saves this info in a file. * When called distutils checks to make sure that the current value of MACOSX_DEPLOYMENT_TARGET matches the one that was used to build Python. Hence the mismatch. I am pretty sure that the offending code is in: numpy.distutils.fcompiler.gnu.get_flags_linker_so I think I know how to fix this and will get started on it, but I wanted to see if anyone else had any experience with this or knew another way around this. Cheers, Brian ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
On Tue, Feb 3, 2009 at 18:12, Brian Granger ellisonbg@gmail.com wrote: I am trying to use numscons to build a project and have run into a show stopper. I am using: OS X 10.5 The builtin Python 2.5.2 Here is what I see upon running python setup.py scons: scons: Reading SConscript files ... DistutilsPlatformError: $MACOSX_DEPLOYMENT_TARGET mismatch: now 10.3 but 10.5 during configure: File /Users/bgranger/Library/Python/2.5/src/numscons/tests/examples/checkers/SConstruct, line 2: GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') File /Users/bgranger/Library/Python/2.5/site-packages/numscons-0.9.4-py2.5.egg/numscons/core/numpyenv.py, line 108: [this goes on for a while] This bug is one that I am familiar with. Here is a sketch: * numpy.distutils sets MACOSX_DEPLOYMENT_TARGET=10.3 if MACOSX_DEPLOYMENT_TARGET is not set in the environment. * But, the built-in Python on OS X 10.5 has MACOSX_DEPLOYMENT_TARGET=10.5. When Python is built, it saves this info in a file. * When called distutils checks to make sure that the current value of MACOSX_DEPLOYMENT_TARGET matches the one that was used to build Python. Hence the mismatch. I am pretty sure that the offending code is in: numpy.distutils.fcompiler.gnu.get_flags_linker_so I think I know how to fix this and will get started on it, but I wanted to see if anyone else had any experience with this or knew another way around this. Well, the workaround is to set MACOSX_DEPLOYMENT_TARGET=10.5 in your environment. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
On Tue, Feb 3, 2009 at 18:20, Brian Granger ellisonbg@gmail.com wrote: Robert, Thanks. Yes, I just saw that this will work. When I fixed this in Cython a while back this workaround wouldn't work. Would you still consider this a bug? The logic to fix it is fairly simply. What is the fix you are thinking of? -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Operations on masked items
On Feb 3, 2009, at 4:00 PM, Ryan May wrote: Well, I guess I hit send too soon. Here's one easy solution (consistent with what you did for __radd__), change the code for __rmul__ to do: return multiply(self, other) instead of: return multiply(other, self) That fixes it for me, and I don't see how it would break anything. Good call, but once again: Thou shalt not put trust in ye masked values [1].

>>> a = np.ma.array([1,2,3], mask=[0,1,0])
>>> b = np.ma.array([10, 20, 30], mask=[0,1,0])
>>> (a*b).data
array([10,  2, 90])
>>> (b*a).data
array([10, 20, 90])

So yes, __mul__ is not commutative when you deal w/ masked arrays (at least, when you try to access the data under a mask). Nothing I can do. Remember that preventing the underlying data from being modified is NEVER guaranteed... [1] Epistle of Paul (Dubois). ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Operations on masked items
Pierre, I know you did some preliminary work on helping to make sure that doing operations on masked arrays doesn't change the underlying data. I ran into the following today.

import numpy as np
a = np.ma.array([1,2,3], mask=[False, True, False])
b = a * 10
c = 10 * a
print b.data  # Prints [10  2 30]  Good!
print c.data  # Prints [10 10 30]  Oops.

I tracked it down to __call__ on the _MaskedBinaryOperation class. If there's a mask on the data, you use:

result = np.where(m, da, self.f(da, db, *args, **kwargs))

You can see that if a (and hence da) is a scalar, your masked values end up with the value of the scalar. If this is getting too hairy to handle not touching data, I understand. I just thought I should point out the inconsistency here. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Few minor issues with numscons
David, I am trying to use numscons to build a project and am running into some problems: two smaller issues and one show stopper. First, the smaller ones: * The web presence of numscons is currently very confusing. There are a couple of locations with info about it, but the most prominent ones appear to be quite outdated: http://projects.scipy.org/scipy/numpy/wiki/NumScons ...refers to... https://code.launchpad.net/numpy.scons.support which is no longer being used and doesn't have any links to the most recent versions. I had to hunt for a while to find the development repo, which is on github, and the most recent release, which is now at pypi. It is probably a good idea to update these locations so as not to confuse new folks (or old folks too). * The scons/scons-local subdir is not installed when running python setup.py install. I had to use python setupegg.py to get this to install in the right place. I will send a different email in a second about the show stopper as it is a bigger topic related to a numpy bug. Cheers, Brian ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] reading binary Fortran
I've noticed a lot of discussion on how to read binary files written out from Fortran, and nobody seems to have mentioned how to modify your Fortran code so it writes out a file that can be read with numpy.fromfile() in a single line. For example, to write out a NLINE x NSMP array of floats in Fortran in a line by line fashion, the code below works just fine.

open(iunit,file=flname,form='unformatted',access='direct', &
     recl=nsmp*4,status='replace',action='write')
do iline=1,nline
  write(iunit,rec=iline) (data(iline,ismp),ismp=1,nsmp)
end do
close(iunit)

It's more code than simply write(iunit) data, but it has the advantage of being easily imported into python, matlab and other software packages that can read flat binary data. To import a file written out in this fashion into numpy, a single call to numpy.fromfile(flname) works with no need to define datatypes or the like. Apologies if I'm missing some reason why the above doesn't work or is not preferable to the write(iunit) data. Catherine ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
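On the numpy side, reading back a file laid out this way really is one call plus a reshape. The sketch below round-trips through a temporary file; the sizes and dtype are illustrative, and `tofile` stands in for the Fortran writer, since direct-access records carry no record-length markers:

```python
import os
import tempfile

import numpy as np

nline, nsmp = 4, 5
data = np.arange(nline * nsmp, dtype=np.float32).reshape(nline, nsmp)

# stand-in for the Fortran direct-access writer: raw records, no headers,
# one record of nsmp 4-byte floats per line of the array
with tempfile.NamedTemporaryFile(suffix='.bin', delete=False) as f:
    data.tofile(f)
    flname = f.name

# the single-call read the post describes
back = np.fromfile(flname, dtype=np.float32).reshape(nline, nsmp)
print(np.array_equal(back, data))   # True
os.remove(flname)
```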
Re: [Numpy-discussion] Operations on masked items
Pierre GM wrote: On Feb 3, 2009, at 4:00 PM, Ryan May wrote: Well, I guess I hit send too soon. Here's one easy solution (consistent with what you did for __radd__), change the code for __rmul__ to do: return multiply(self, other) instead of: return multiply(other, self) That fixes it for me, and I don't see how it would break anything. Good call, but once again: Thou shalt not put trust in ye masked values [1]. a = np.ma.array([1,2,3],mask=[0,1,0]) b = np.ma.array([10, 20, 30], mask=[0,1,0]) (a*b).data array([10, 2, 90]) (b*a).data array([10, 20, 90]) So yes, __mul__ is not commutative when you deal w/ masked arrays (at least, when you try to access the data under a mask). Nothing I can do. Remember that preventing the underlying data to be modified is NEVER guaranteed... Fair enough. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
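Since the thread's conclusion is that data under the mask is never guaranteed, the portable way to get concrete values out is `filled()` rather than `.data`; a small sketch (the fill value 0 is an arbitrary choice):

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, True, False])
b = np.ma.array([10, 20, 30], mask=[False, True, False])

# .data under the mask is an implementation detail and may differ between
# a*b and b*a; filled() pins the masked entries down explicitly
print((a * b).filled(0))   # [10  0 90]
print((b * a).filled(0))   # [10  0 90] -- identical, regardless of operand order
```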
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
What is the fix you are thinking of? This is how Cython currently handles this logic. It would have to be modified to include the additional case of a user setting MACOSX_DEPLOYMENT_TARGET in their environment, but that logic is already in numpy.distutils.fcompiler.gnu.get_flags_linker_so. This is really just a special case for 1) OS X 10.5 and 2) the built-in Python.

import os
import sys
import distutils.sysconfig as sc

# MACOSX_DEPLOYMENT_TARGET can be set to 10.3 in most cases.
# But for the built-in Python 2.5.1 on Leopard, it needs to be set to 10.5.
# This looks like a bug that will be fixed in 2.5.2. If Apple updates their
# Python to 2.5.2, this fix should be OK.
python_prefix = sc.get_config_var('prefix')
leopard_python_prefix = '/System/Library/Frameworks/Python.framework/Versions/2.5'
full_version = '%s.%s.%s' % sys.version_info[:3]
if python_prefix == leopard_python_prefix and full_version == '2.5.1':
    os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.5'
else:
    os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.3'

___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
On Tue, Feb 3, 2009 at 18:34, Brian Granger ellisonbg@gmail.com wrote: What is the fix you are thinking of? This is how Cython currently handles this logic. This would have to be modified to include the additional case of a user setting MACOSX_DEPLOYMENT_TARGET in their environment, but that logic is already in numpy.distutils.fcompiler.gnu.get_flags_linker_so. This is really just a special case for 1) OS X 10.5 and 2) the built-in Python.

# MACOSX_DEPLOYMENT_TARGET can be set to 10.3 in most cases.
# But for the built-in Python 2.5.1 on Leopard, it needs to be set to 10.5.
# This looks like a bug that will be fixed in 2.5.2. If Apple updates their
# Python to 2.5.2, this fix should be OK.
import distutils.sysconfig as sc
python_prefix = sc.get_config_var('prefix')
leopard_python_prefix = '/System/Library/Frameworks/Python.framework/Versions/2.5'
full_version = '%s.%s.%s' % sys.version_info[:3]
if python_prefix == leopard_python_prefix and full_version == '2.5.1':
    os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.5'
else:
    os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.3'

Hmm, that's still going to break for any custom build that decides to build Python with a specific MACOSX_DEPLOYMENT_TARGET. If you're going to fix it at all, it should default to the value in the Makefile that sysconfig is going to check against. The relevant code to copy is in sysconfig._init_posix(). -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
Hmm, that's still going to break for any custom build that decides to build Python with a specific MACOSX_DEPLOYMENT_TARGET. If you're going to fix it at all, it should default to the value in the Makefile that sysconfig is going to check against. The relevant code to copy is in sysconfig._init_posix(). Yes, I agree that sysconfig._init_posix() has the proper logic for this. This logic should probably also be applied to Cython. Would you say that the proper fix, then, is to inspect the Makefile and set MACOSX_DEPLOYMENT_TARGET to the value used to build Python itself? Or should we still try to set it to 10.3 in some cases (like the current numpy.distutils does) or look at the environment as well? Cheers, Brian ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
On Tue, Feb 3, 2009 at 18:53, Brian Granger ellisonbg@gmail.com wrote: Hmm, that's still going to break for any custom build that decides to build Python with a specific MACOSX_DEPLOYMENT_TARGET. If you're going to fix it at all, it should default to the value in the Makefile that sysconfig is going to check against. The relevant code to copy is in sysconfig._init_posix(). Yes, I agree that sysconfig._init_posix() has the proper logic for this. This logic should also be applied to Cython as well probably. Would you say that the proper fix then is to inspect the Makefile and set MACOSX_DEPLOYMENT_TARGET to the valued used to build Python itself. Or should we still try to set it to 10.3 in some cases (like the current numpy.distutils does) or look at the environment as well? 1) Trust the environment variable if given and let distutils raise its error message (why not raise it ourselves? distutils' error message and explanation is already out in THE GOOGLE.) 2) Otherwise, use the value in the Makefile if it's there. 3) If it's not even in the Makefile for whatever reason, go with 10.3. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
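Robert's three-step rule is easy to express as a small helper. This is only a sketch of the decision logic; the function name and explicit arguments are mine, and real code would obtain the Makefile value via `sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET')`:

```python
def choose_deployment_target(env, makefile_vars, default='10.3'):
    """Pick MACOSX_DEPLOYMENT_TARGET per the 3-step rule from the thread:

    1) trust the environment variable if given (and let distutils raise
       its well-known error message on a mismatch),
    2) otherwise use the value Python itself was built with,
    3) if it's not even in the Makefile, fall back to 10.3.
    """
    if env.get('MACOSX_DEPLOYMENT_TARGET'):
        return env['MACOSX_DEPLOYMENT_TARGET']
    if makefile_vars.get('MACOSX_DEPLOYMENT_TARGET'):
        return makefile_vars['MACOSX_DEPLOYMENT_TARGET']
    return default

# user override wins...
print(choose_deployment_target({'MACOSX_DEPLOYMENT_TARGET': '10.4'},
                               {'MACOSX_DEPLOYMENT_TARGET': '10.5'}))  # 10.4
# ...then the build-time value...
print(choose_deployment_target({}, {'MACOSX_DEPLOYMENT_TARGET': '10.5'}))  # 10.5
# ...then the default
print(choose_deployment_target({}, {}))  # 10.3
```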
Re: [Numpy-discussion] array vector elementwise multiplication
On Tue, Feb 3, 2009 at 10:19 AM, Gideon Simpson simp...@math.toronto.edu wrote: I have an M x N matrix A and two vectors, an M dimensional vector x and an N dimensional vector y. I would like to be able to do two things. 1. Multiply, elementwise, every column of A by x 2. Multiply, elementwise, every row of A by y. What's the quick way to do this in numpy?

In [1]: M = ones((3,3))
In [2]: x = arange(3)
In [3]: M*x
Out[3]:
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.],
       [ 0.,  1.,  2.]])
In [4]: x[:,newaxis]*M
Out[4]:
array([[ 0.,  0.,  0.],
       [ 1.,  1.,  1.],
       [ 2.,  2.,  2.]])

Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
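The same broadcasting idea for a genuinely rectangular A (M != N) makes it clearer which vector goes where; the shapes below are just for illustration:

```python
import numpy as np

M, N = 3, 4
A = np.ones((M, N))
x = np.arange(M)          # length-M: multiplies each column elementwise
y = np.arange(N)          # length-N: multiplies each row elementwise

col_scaled = x[:, np.newaxis] * A   # row i of the result is A[i] * x[i]
row_scaled = A * y                  # column j of the result is A[:, j] * y[j]

print(col_scaled.shape, row_scaled.shape)   # (3, 4) (3, 4)
```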
Re: [Numpy-discussion] Few minor issues with numscons
Hi Brian, On Wed, Feb 4, 2009 at 9:00 AM, Brian Granger ellisonbg@gmail.com wrote: David, I am trying to use numscons to build a project and am running into some problems: two smaller issues and one show stopper. First, the smaller ones: * The web presence of numscons is currently very confusing. There are a couple of locations with info about it, but the most prominent ones appear to be quite outdated: http://projects.scipy.org/scipy/numpy/wiki/NumScons ...refers to... https://code.launchpad.net/numpy.scons.support which is no longer being used and doesn't have any links to the most recent versions. I had to hunt for a while to find the development repo, which is on github, and the most recent release, which is now on PyPI. The releases have been on PyPI for quite some time. I converted the repo to git and put it on github, but I have not really worked on numscons for several months now for lack of time (and because numscons is mostly done, and the main limitations of numscons are not fixable without fixing some fairly major scons limitations). Basically, the repo on github is only the conversion from bzr, without any new features. It is probably a good idea to update these locations so as not to confuse new folks (or old folks too). * The scons/scons-local subdir is not installed when running python setup.py install. I had to use python setupegg.py to get this to install in the right place. That's strange; you are not the first one to mention this bug, but I have never had this problem myself. I never use setupegg except when creating eggs for PyPI. cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Few minor issues with numscons
The releases have been on PyPI for quite some time. I converted the repo to git and put it on github, but I have not really worked on numscons for several months now for lack of time (and because numscons is mostly done, and the main limitations of numscons are not fixable without fixing some fairly major scons limitations). Basically, the repo on github is only the conversion from bzr, without any new features. Do you have plans to continue to maintain it, though? One other thing I forgot to mention: I first tried the head of your git repo and numpy complained that the version of numscons (0.10) was too *old*. It wanted the version to be greater than something like 0.9.1, and clearly it was, so it looks like there is a bug in the numscons version parsing in numpy.distutils. * The scons/scons-local subdir is not installed when running python setup.py install. I had to use python setupegg.py to get this to install in the right place. That's strange; you are not the first one to mention this bug, but I have never had this problem myself. I never use setupegg except when creating eggs for PyPI. OK, I will have a look at this further to see what the cause is. Thanks for a great package that solves a really painful set of issues! Brian cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
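The symptom Brian describes (0.10 rejected as older than 0.9.1) is consistent with comparing version numbers as plain strings. This sketch illustrates the suspected pitfall and the usual fix; it is a hypothetical illustration, not the actual numpy.distutils parsing code:

```python
# Plain string comparison mis-orders versions once a component reaches
# two digits: lexicographically, '1' < '9', so "0.10" sorts before
# "0.9.1" even though 0.10 is the newer release.
string_says_older = "0.10" < "0.9.1"   # True, which is wrong


def version_tuple(v):
    """Split a dotted version string into a tuple of integers,
    which compares component by component as intended."""
    return tuple(int(part) for part in v.split("."))


tuple_says_newer = version_tuple("0.10") > version_tuple("0.9.1")  # True
```

Comparing integer tuples instead of raw strings gives the intended ordering, which is why most version-checking code parses the components before comparing.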
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
1) Trust the environment variable if given and let distutils raise its error message (why not raise it ourselves? distutils' error message and explanation is already out in THE GOOGLE.) 2) Otherwise, use the value in the Makefile if it's there. 3) If it's not even in the Makefile for whatever reason, go with 10.3. Sounds good, do you want me to work up a patch? Brian -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numscons/numpy.distutils bug related to MACOSX_DEPLOYMENT_TARGET
On Tue, Feb 3, 2009 at 23:22, Brian Granger ellisonbg@gmail.com wrote: 1) Trust the environment variable if given and let distutils raise its error message (why not raise it ourselves? distutils' error message and explanation is already out in THE GOOGLE.) 2) Otherwise, use the value in the Makefile if it's there. 3) If it's not even in the Makefile for whatever reason, go with 10.3. Sounds good, do you want me to work up a patch? Yes, please. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion