Re: [Numpy-discussion] Status Report for NumPy 1.1.0
Hello, On Wed, May 7, 2008 at 8:12 AM, Jarrod Millman [EMAIL PROTECTED] wrote: I have just created the 1.1.x branch: http://projects.scipy.org/scipy/numpy/changeset/5134 In about 24 hours I will tag the 1.1.0 release from the branch. At this point only critical bug fixes should be applied to the branch. The trunk is now open for 1.2 development. Might be a bit late now, but are you (or is someone) still Valgrinding NumPy on a semi-regular basis, or at least before a release? Cheers, Albert ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] access ndarray in C++
Hello, On Tue, Apr 22, 2008 at 11:38 PM, Thomas Hrabe [EMAIL PROTECTED] wrote: I am currently developing a python module in C/C++ which is supposed to access nd arrays as defined by the following command in python You might also be interested in: http://mathema.tician.de/software/pyublas Regards, Albert
Re: [Numpy-discussion] ticket #655
This code can probably be incorporated into NumPy in the 1.1 timeframe. I don't think anyone is going to miss it before then. On Wed, Apr 9, 2008 at 2:04 PM, Jarrod Millman [EMAIL PROTECTED] wrote: This ticket has a patch: http://projects.scipy.org/scipy/numpy/ticket/655 -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] Closing tickets
Hello, Here are two tickets without a milestone that could probably do with some attention (setting a milestone would be a start): numpy.scons branch: setuptools' develop mode broken http://scipy.org/scipy/numpy/ticket/596 If float('123.45') works, so should numpy.float32('123.45') http://scipy.org/scipy/numpy/ticket/630 Cheers, Albert
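For reference, the behaviour ticket #630 asks for can be checked directly. A quick sketch (in current NumPy releases the scalar constructors do accept strings, so both calls below succeed):

```python
import numpy as np

# Plain Python floats have always parsed strings.
x = float('123.45')

# Ticket #630 asked for the same from NumPy's scalar types;
# current NumPy releases accept this.
y = np.float32('123.45')

print(type(y).__name__)
assert abs(float(y) - x) < 1e-4  # float32 rounding, so compare with tolerance
```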
Re: [Numpy-discussion] numpy 1:1.0.4: numpy.average() returns the wrong result with weights
Hello, - Original Message - From: Robert Kern [EMAIL PROTECTED] To: Discussion of Numerical Python numpy-discussion@scipy.org Sent: Saturday, February 23, 2008 3:30 AM Subject: Re: [Numpy-discussion] numpy 1:1.0.4: numpy.average() returns the wrong result with weights Ondrej Certik wrote: I'll add it. I registered on the trac, as required, but I am still denied, when filling my username and password when logging in. How can I create an account? That should have done it. When you say you are denied, exactly what happens? I've run into times when I've logged in and I get the unaltered front page again. Logging in again usually works. There is something strange going on. Logging in on projects.scipy.org/scipy/numpy usually redirects you to scipy.org/scipy/numpy, at which point you need to log in again. Cheers, Albert
Re: [Numpy-discussion] [ANN] numscons 0.4.1. Building numpy with MS compiler + g77 works !
Hello, On Feb 19, 2008 11:34 AM, David Cournapeau [EMAIL PROTECTED] wrote: Matthieu Brucher wrote: Now that you provide an installer for Atlas, it may become the same problem as MKL, can't it ? It is exactly the same problem, yes. Right now, my installer does not modify the environment at all (like MKL or ACML, actually), and you have to do it manually (add PATH, or put in system32). Have you tried installing the DLLs to C:\Python2x or to the same directory as the numpy .pyd? As far as I know, this should work. Obviously there are some issues: multiple modules might be linked to MKL, in which case you would be duplicating a lot of files, but hey, so it goes. Ideally, all the modules should be using one module to interact with native BLAS and LAPACK. In my opinion, modifying the PATH or installing to System32 are not ways to properly deploy DLLs on Windows. Cheers, Albert
Re: [Numpy-discussion] [ANN] numscons 0.4.1. Building numpy with MS compiler + g77 works !
Hello, On Feb 18, 2008 9:30 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, I've just finished a new release of numscons. This version is the first one to be able to compile numpy using MS compilers and g77 compiler on windows, which was the last big platform not supported by numscons. Good stuff. I noticed that the Launchpad page says: I decided not to support dynamic linking against 3rd party dll. Because of intrinsics windows limitations, it is impossible to do it in a reliable way without putting too much burden on the maintainer. I might note that I had problems with NumPy and other C code crashing randomly when statically linked against MKL 9.1 on Win32 (probably when using multiple cores). Dynamically linking fixed the problem. I haven't tested with MKL 10 yet. Cheers, Albert
Re: [Numpy-discussion] best way for C code wrapping
Hello, On Feb 16, 2008 9:14 PM, dmitrey [EMAIL PROTECTED] wrote: hi all, I intend to connect some C code to Python for some my purposes. What is the best software for the aim? Is it numpy.ctypes or swig or something else? IIRC ctypes are present in Python since v2.5, so it's ok to use just ctypes, not numpy.ctypes, or some difference is present? I would definitely recommend ctypes. numpy.ctypes is just some extra code to make it easier to use NumPy arrays with ctypes. It's not a standalone thing. There are some examples of what you can do with the ctypes support in NumPy here: http://www.scipy.org/Cookbook/Ctypes Look for ndpointer. Another one question: if I'll translate a fortran code to C via f2c, which % of speed down should I expect (in average, using gfortran and gcc)? Afaik it contains operations with sparse matrices. Depending on what you want to do exactly, you might consider wrapping the Fortran code using f2py instead of translating it to C. You could also build the Fortran code as a shared library and wrap it using ctypes. Regards, Albert
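The ndpointer helper mentioned above lets you declare NumPy-aware argtypes for a ctypes function. A minimal sketch of the validation it performs (the library and function names in the comments are hypothetical, since a real shared library is needed to go further):

```python
import numpy as np
from numpy.ctypeslib import ndpointer

# An argtype that only accepts 1-D, C-contiguous float64 arrays.
double_1d = ndpointer(dtype=np.float64, ndim=1, flags='C_CONTIGUOUS')

# With a real shared library you would write something like:
#   lib = ctypes.CDLL('./libmycode.so')          # hypothetical library
#   lib.process.argtypes = [double_1d, ctypes.c_size_t]

a = np.array([1.0, 2.0, 3.0])
double_1d.from_param(a)  # ctypes calls this to validate each argument

try:
    double_1d.from_param(np.array([1, 2, 3], dtype=np.int32))
except TypeError:
    print('rejected: wrong dtype')
```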
Re: [Numpy-discussion] C Extensions, CTypes and external code librarie
Hello, On Feb 12, 2008 6:19 PM, Lou Pecora [EMAIL PROTECTED] wrote: Albert, Yes, I think you got the idea right. I want to call my own C code using CTypes interface, then from within my C code call GSL C code, i.e. a C function calling another C function directly. I do *not* want to go back out through the Python interface. So you are right, I do not want to wrap GSL. It sounds like I can just add something like -lnameofGSLdylib (where I put in the real name of the GSL library after the -l) in my gcc command to make my shared lib. Is that right? Sounds about right. I don't know the Mac that well as far as the various types of dynamic libraries go, so just check that you're working with the right type of libraries, but you've got the right idea. Regards, Albert
Re: [Numpy-discussion] Intel MKL - was: parallel numpy - any info?
Hello On Jan 10, 2008 6:56 AM, David Cournapeau [EMAIL PROTECTED] wrote: The one thing which I am not sure about is: say one MKL binary does not work, and say I (or anyone outside your company) build numpy with the MKL ro debug it, can I redistribute a new binary, even if it is just for testing purpose ? Let's say Ray's company buys one copy of Intel MKL. This gives them the Intel MKL DLL and link libraries (.libs). Now they compile NumPy and link it against Intel MKL. They can then (as I understand the license agreement, IANAL, etc.) distribute this binary and the Intel MKL DLLs. They may *not* distribute the link libraries. This means that there is no easy way for anyone else to build a new executable that is linked against MKL. Cheers, Albert
Re: [Numpy-discussion] parallel numpy - any info?
Hello On Jan 8, 2008 5:31 PM, Ray Schumacher [EMAIL PROTECTED] wrote: At 04:27 AM 1/8/2008, you wrote: 4. Re: parallel numpy (by Brian Granger) - any info? (Matthieu Brucher) From: Matthieu Brucher [EMAIL PROTECTED] MKL does the multithreading on its own for level 3 BLAS instructions (OpenMP). There was brief debate yesterday among the Pythonians in the lab as to whether the MKL operates outside of the GIL, but it was general understanding that it does. It is still unclear to me whether Python/numpy compiled with MKL would be freely re-distributable, as the MSVC version is. Read the License Agreement on Intel's site. My interpretation is that it would be redistributable. http://www.intel.com/cd/software/products/asmo-na/eng/266854.htm In a related matter, Intel MKL on a MP Xeon with OMP_NUM_THREADS=8 flies on large matrices. Cheers, Albert
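OMP_NUM_THREADS is read by MKL's OpenMP runtime when the BLAS is first loaded, so it has to be set before NumPy is imported. A small sketch (the speedup itself of course depends on NumPy actually being linked against a threaded BLAS such as MKL; with a reference BLAS this runs single-threaded):

```python
import os
# Must be set before NumPy (and hence the BLAS) is loaded.
os.environ.setdefault('OMP_NUM_THREADS', '8')

import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

# Matrix-matrix multiply is a level-3 BLAS call (dgemm); a threaded
# MKL parallelizes it internally, outside the GIL.
c = np.dot(a, b)
print(c.shape)
```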
Re: [Numpy-discussion] planet.scipy.org
Hello I also seem to be experiencing this problem. I have been able to successfully browse Planet SciPy for the past two days, but right now the DNS isn't even resolving. www.scipy.org still works though. Cheers, Albert - Original Message - From: Ondrej Certik [EMAIL PROTECTED] To: Discussion of Numerical Python numpy-discussion@scipy.org Sent: Wednesday, January 02, 2008 2:53 PM Subject: Re: [Numpy-discussion] planet.scipy.org On Jan 2, 2008 12:45 PM, Adam Mercer [EMAIL PROTECTED] wrote: On Dec 31, 2007 10:43 PM, Jarrod Millman [EMAIL PROTECTED] wrote: Hey, I just wanted to announce that we now have a NumPy/SciPy blog aggregator thanks to Gaƫl Varoquaux: http://planet.scipy.org/ When I try to load http://planet.scipy.org I get an error saying that the page doesn't exist, however the scipy home page, i.e. http://www.scipy.org, now appears to be the planet aggregator is this just a temporary DNS issue? It works for me. Ondrej
Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?
Hello I think this idea is the way to go (maybe along with an ACML build, but my limited testing seemed to indicate that MKL works on AMD CPUs). In fact, I apparently proposed it about a year ago: https://svn.enthought.com/enthought/ticket/899 No takers so far... Cheers, Albert P.S. NumPy on Windows and Linux built with MKL works like a charm for me. - Original Message - From: Christopher Barker [EMAIL PROTECTED] To: Discussion of Numerical Python numpy-discussion@scipy.org Sent: Tuesday, December 11, 2007 7:28 AM Subject: Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ? Andrew Straw wrote: A function could be called at numpy import time that specifically checks for the instruction set on the CPU running Even better would be a run-time selection of the best version. I've often fantasized about an ATLAS that could do this. I think the Intel MKL has this feature (though maybe only for Intel processors). The MKL runtime is re-distributable, but somehow I doubt that we could have one person buy one copy and distribute binaries to the entire numpy-using world --- but does anyone know? http://www.intel.com/cd/software/products/asmo-na/eng/346084.htm and http://www.intel.com/cd/software/products/asmo-na/eng/266854.htm#copies -Chris
Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?
Hello all, I'm not sure the licensing really makes it possible though. Numpy isn't exactly an application, but rather a development tool, so I'm not sure how Intel would feel about it being distributed. Also, it looks like they require each developer to have license, rather than only the person building the final binary -- so having the one person building the final distro may not be kosher. IANAL. It comes down to who is allowed to have the link libraries and who isn't. I doubt whether Intel's license agreement distinguishes between normal programs and development tools. If you're a developer, you need the link libraries (.lib files) to link your program against Intel MKL. According to Intel's redist.txt, you are not allowed to redistribute these files. Without these files, you can't link a new program against the Intel MKL DLLs (generally speaking). You are allowed to redistribute the DLLs (as listed in redist.txt), without having to pay any further royalties. This means that you give any user the files they need to run a program you have linked against Intel MKL. So as I see it, one NumPy developer would need to pay for Intel MKL. Cheers, Albert
Re: [Numpy-discussion] Numpy book and PyArray_CastToType
Hello, Have you considered looking at the source for PyArray_CastToType in core/src/arrayobject.c? Cheers, Albert On Mon, 12 Nov 2007, Matthieu Brucher wrote: Nobody can answer this ? Matthieu 2007/11/4, Matthieu Brucher [EMAIL PROTECTED]: Hi, I'm trying to use PyArray_CastToType(), but according to the book, there are two arguments, a PyArrayObject* and a PyArrayDescr*, but according to the header file, there are three arguments, an additional int. Does someone know its use ? Matthieu
Re: [Numpy-discussion] Scons and numpy, second milestone: all numpy built with scons.
Hello Firstly, great work. I always thought SCons was the way to go for NumPy and SciPy, and you've pulled it off. So basically, I believe most of the things planned in http://projects.scipy.org/scipy/numpy/wiki/DistutilsRevamp are now available because they are available in scons, if numpy developers decide to follow the scons route. Before being usable, I need to finish fortran support and blas/lapack/atlas detection: once this is done, numpy should be able to support some platforms not supported yet (intel compiler on windows, sun compiler with performance libraries, etc...), which was the reason I started this work in the first place. I don't think you should make the autodetection of BLAS and LAPACK too automatic. A .cfg file like mpi4py's mpi.cfg (almost like NumPy's site.cfg, but simpler) would be great. This way, you can provide a few default build configurations for common platforms, but still make it easy for people to configure their build exactly the way they want it, if they so choose. Cheers, Albert
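Such a file could look much like NumPy's existing site.cfg. A minimal sketch (the section names follow site.cfg conventions, but the paths and library lists are assumptions for illustration):

```ini
[blas]
library_dirs = /usr/local/atlas/lib
libraries = f77blas, cblas, atlas

[lapack]
library_dirs = /usr/local/atlas/lib
libraries = lapack, f77blas, cblas, atlas
```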
Re: [Numpy-discussion] Request for advice: project to get NumPy working in IronPython
Hello On Mon, 15 Oct 2007, Giles Thomas wrote: * Would it be better to try for some kind of binary compatibility, where we'd write some kind of glue that sat between the existing C extension .pyd files and the IronPython engine? Our gut feeling is that this would be much more work, but we might be missing something. Along these lines, whatever happened to Sanghyeon Seo's CPython reflector? http://lists.ironpython.com/pipermail/users-ironpython.com/2006-June/002503.html I think it might be an avenue worth exploring, since this would also make other native modules available as a by-product. Last I heard, the JRuby folks were also trying something along these lines for native MRI modules, using JNA (essentially, a Java equivalent to ctypes). Good luck with your project. I hope it succeeds. Cheers, Albert
Re: [Numpy-discussion] Intel MKL 9.1 on Windows (was: Re: VMWare Virtual Appliance...)
Hello all

Turns out there's a third option:

[mkl]
include_dirs = C:\Program Files\Intel\MKL\9.1\include
library_dirs = C:\Program Files\Intel\MKL\9.1\ia32\lib
mkl_libs = mkl_c_dll, libguide40
lapack_libs = mkl_lapack

Note mkl_c_dll, not mkl_c. From what I understand from the MKL release notes and user guide, one should be able to mix mkl_c (a static library) and libguide40 (a dynamic library), but this seems to crash in practice. Anyway, with the site.cfg as above, NumPy passes its tests with MKL 9.1 on 32-bit Windows. Whoot!

Regards, Albert

On Tue, 12 Jun 2007, Albert Strasheim wrote:

Cancel that. It seems the problems with these two tests are being caused by Intel MKL 9.1 on Windows. However, 9.0 works fine. You basically have two options when linking against MKL on Windows as far as the mkl_libs go:

[mkl]
include_dirs = C:\Program Files\Intel\MKL\9.1\include
library_dirs = C:\Program Files\Intel\MKL\9.1\ia32\lib
mkl_libs = mkl_c, libguide40
lapack_libs = mkl_lapack

or

mkl_libs = mkl_c, libguide

I think libguide is the library that contains various thread and OpenMP related bits and pieces. If you link against libguide, you get the following error when running the NumPy tests:

OMP abort: Initializing libguide.lib, but found libguide.lib already initialized. This may cause performance degradation and correctness issues. Set environment variable KMP_DUPLICATE_LIB_OK=TRUE to ignore this problem and force the program to continue anyway. Please note that the use of KMP_DUPLICATE_LIB_OK is unsupported and using it may cause undefined behavior. For more information, please contact Intel(R) Premier Support.

I think this happens because multiple submodules inside NumPy are linked against this libguide library, which causes some initialization code to be executed multiple times inside the same process, which shouldn't happen.

If one sets KMP_DUPLICATE_LIB_OK=TRUE, the tests actually work with Intel MKL 9.0, but no matter what you do with Intel MKL 9.1, i.e.,

- link against libguide40 or
- link against libguide and don't set KMP_... or
- link against libguide and set KMP_...

the following tests always segfault:

numpy.core.tests.test_defmatrix.test_casting.check_basic
numpy.core.tests.test_numeric.test_dot.check_matmat

Cheers, Albert

On Tue, 12 Jun 2007, Albert Strasheim wrote:

I've set up a 32-bit Windows XP guest inside VMWare Server 1.0.3 on a 64-bit Linux machine and two of the NumPy tests are segfaulting for some strange reason. They are:

numpy.core.tests.test_defmatrix.test_casting.check_basic
numpy.core.tests.test_numeric.test_dot.check_matmat

Do these pass for you? I'm inclined to blame VMWare at this point...
Re: [Numpy-discussion] build on windows 64-bit platform
Hello all

On Sat, 28 Jul 2007, Stefan van der Walt wrote: On Sat, Jul 28, 2007 at 12:54:52AM +0200, Pearu Peterson wrote: Ok, I have now enabled DISTUTILS_USE_SDK for AMD64 Windows platform and it seems to be working. Fantastic, thanks! However, the build still fails, but now the reason seems to be related to numpy ticket 164: http://projects.scipy.org/scipy/numpy/ticket/164 I'll ask Albert whether he would have a look at it again.

Let's see. Using this build log:

http://buildbot.scipy.org/Windows%20XP%20x86_64%20MSVC/builds/31/step-shell/0

numpy\core\src\umathmodule.c.src(73) : warning C4273: 'logf' : inconsistent dll linkage
numpy\core\src\umathmodule.c.src(74) : warning C4273: 'sqrtf' : inconsistent dll linkage

Judging from the math.h on my 32-bit system, these declarations should look like this:

float __cdecl logf(float);
float __cdecl sqrtf(float);

but they're missing the __cdecl in the NumPy code. Somewhere a macro needs to be defined to __cdecl on Windows (and left empty on other platforms) and included in the NumPy declarations.

numpy\core\src\umathmodule.c.src(604) : warning C4013: 'fabsf' undefined; assuming extern returning int
numpy\core\src\umathmodule.c.src(604) : warning C4013: 'hypotf' undefined; assuming extern returning int

Judging from the patch attached to ticket #164, these functions aren't available for some reason. Maybe check the header to see if there's a way to turn them on using some preprocessor magic. If not, do what the patch does.

numpy\core\src\umathmodule.c.src(604) : warning C4244: 'function' : conversion from 'int' to 'float', possible loss of data

A cast should suppress this warning.

numpy\core\src\umathmodule.c.src(625) : warning C4013: 'rintf' undefined; assuming extern returning int

Add this function like the patch does.

numpy\core\src\umathmodule.c.src(625) : warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(626) : warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(632) : warning C4244: 'initializing' : conversion from 'int' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(641) : warning C4244: 'initializing' : conversion from 'int' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(1107) : warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(1107) : warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(1107) : warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(1107) : warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(1349) : warning C4244: '=' : conversion from 'npy_longlong' to 'double', possible loss of data
numpy\core\src\umathmodule.c.src(1350) : warning C4244: '=' : conversion from 'npy_longlong' to 'double', possible loss of data
numpy\core\src\umathmodule.c.src(1349) : warning C4244: '=' : conversion from 'npy_ulonglong' to 'double', possible loss of data
numpy\core\src\umathmodule.c.src(1350) : warning C4244: '=' : conversion from 'npy_ulonglong' to 'double', possible loss of data

More casts probably.

numpy\core\src\umathmodule.c.src(1583) : warning C4146: unary minus operator applied to unsigned type, result still unsigned
numpy\core\src\umathmodule.c.src(1583) : warning C4146: unary minus operator applied to unsigned type, result still unsigned
numpy\core\src\umathmodule.c.src(1583) : warning C4146: unary minus operator applied to unsigned type, result still unsigned

Potential bugs. Look closely at these.

numpy\core\src\umathmodule.c.src(1625) : warning C4244: '=' : conversion from 'int' to 'float', possible loss of data

Cast.

numpy\core\src\umathmodule.c.src(2013) : warning C4013: 'frexpf' undefined; assuming extern returning int

Add this function.

numpy\core\src\umathmodule.c.src(2013) : warning C4244: '=' : conversion from 'int' to 'float', possible loss of data

Cast probably.

numpy\core\src\umathmodule.c.src(2030) : warning C4013: 'ldexpf' undefined; assuming extern returning int

Add this function.

numpy\core\src\umathmodule.c.src(2030) : warning C4244: '=' : conversion from 'int' to 'float', possible loss of data

Cast probably.

build\src.win32-2.5\numpy\core\__umath_generated.c(15) : error C2099: initializer is not a constant
build\src.win32-2.5\numpy\core\__umath_generated.c(21) : error C2099: initializer is not a constant
build\src.win32-2.5\numpy\core\__umath_generated.c(27) : error C2099: initializer is not a constant
build\src.win32-2.5\numpy\core\__umath_generated.c(30) : error C2099: initializer is not a constant
build\src.win32-2.5\numpy\core\__umath_generated.c(45) : error C2099: initializer is not a constant
build\src.win32-2.5\numpy\core\__umath_generated.c(45) :
Re: [Numpy-discussion] build on windows 64-bit platform
Hello

On Sat, 28 Jul 2007, Albert Strasheim wrote:

float __cdecl logf(float);
float __cdecl sqrtf(float);

but they're missing the __cdecl in the NumPy code. Somewhere a macro needs to be defined to __cdecl on Windows (and left empty on other platforms) and included in the NumPy declarations.

numpy\core\src\umathmodule.c.src(632) : warning C4244: 'initializing' : conversion from 'int' to 'float', possible loss of data
numpy\core\src\umathmodule.c.src(641) : warning C4244: 'initializing' : conversion from 'int' to 'float', possible loss of data

More casts probably.

Looks like initializing these values with a float value (e.g., 0.0f and not 0) will fix these. If it's hard to modify the code generator to do this, a cast should be fine.

Cheers, Albert
Re: [Numpy-discussion] Buildbot for numpy
Hello On Mon, 02 Jul 2007, Barry Wark wrote: I have the potential to add OS X Server Intel (64-bit) and OS X Intel (32-bit) to the list, if I can convince my boss that the security risk Sounds good. We could definitely use these platforms. (including DOS from compile times) is minimal. I've compiled both Currently we don't allow builds to be forced from the web page, but this might change in future. numpy and scipy many times, so I'm not worried about resources for a single compile/test, but can any of the regular developers tell me about how many commits there are per day that will trigger a compile/test? We currently only build NumPy. SciPy should probably be added at some point, once we figure out how we want to configure the Buildbot to do this. NumPy averages close to 0 commits per day at this point. SciPy is more active. Between the two, on a busy day, you could expect more than 10 and less than 100 builds. About the more general security risk of running a buildbot slave, from my reading of the buildbot manual (not the source, yet), it looks like the slave is a Twisted server that runs as a normal user process. Is there any sort of sandboxing built into the buildbot slave or is that the responsibility of the OS (an issue I'll have to discuss with our IT)? Through the buildbot master configuration, we tell your buildslave what to check out and which commands to execute. We have set it up to do the build in terms of a Makefile, so the master will tell the slave to run make build followed by make test. Here you can make your own machine do anything that hopefully involves running python setup.py, etc. However, the configuration on the master can be changed to make your slave execute any command. 
In short, any NumPy/SciPy committer or anyone who controls the build master configuration (i.e., me, Stefan, our admin person, a few other people who have root access on that machine and anybody who successfully breaks into it) can make your build machine execute arbitrary code as the build slave user. The chance of this happening is small, but it's not impossible, so if this risk is unacceptable to you/your IT people, running a build slave might not be for you. ;-) Cheers, Albert
Re: [Numpy-discussion] Incompatability of svn 3868 distutils with v10.0 Intel compilers and MKL9.1
Hello all On Thu, 14 Jun 2007, Matthieu Brucher wrote: cc_exe = 'icc -g -fomit-frame-pointer -xT -fast' Just some comments on that : - in release mode, you should not use '-g', it slows down the execution of your program Do you have a reference that explains this in more detail? I thought -g just added debug information without changing the generated code? Regards, Albert
Re: [Numpy-discussion] build problem on RHE3 machine
Hello all - Original Message - From: David M. Cooke [EMAIL PROTECTED] To: Discussion of Numerical Python numpy-discussion@scipy.org Sent: Friday, May 25, 2007 7:50 PM Subject: Re: [Numpy-discussion] build problem on RHE3 machine On Fri, May 25, 2007 at 12:45:32PM -0500, Robert Kern wrote: David M. Cooke wrote: On Fri, May 25, 2007 at 07:25:15PM +0200, Albert Strasheim wrote: I'm still having problems on Windows with r3828. Build command: python setup.py -v config --compiler=msvc build_clib --compiler=msvc build_ext --compiler=msvc bdist_wininst Can you send me the output of python setup.py -v config_fc --help-fcompiler And what fortran compiler are you trying to use? If he's trying to build numpy, he shouldn't be using *any* Fortran compiler. Ah true. Still, config_fc will say it can't find one (and that should be fine). I think the bug has to do with how it searches for a compiler. I see there's been more work on numpy.distutils, but I still can't build r3841 on a normal Windows system with Visual Studio .NET 2003 installed. Is there any info I can provide to get this issue fixed? Thanks. Cheers, Albert
Re: [Numpy-discussion] build problem on RHE3 machine
Hello all - Original Message - From: David M. Cooke [EMAIL PROTECTED] To: Discussion of Numerical Python numpy-discussion@scipy.org Cc: Albert Strasheim [EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 6:58 PM Subject: Re: [Numpy-discussion] build problem on RHE3 machine Is there any info I can provide to get this issue fixed? Anything you've got :) The output of these are hopefully useful to me (after removing build/): $ python setup.py -v build $ python setup.py -v config_fc --help-fcompiler Attached as build1.log and build2.log. Cheers, Albert
Re: [Numpy-discussion] build problem on RHE3 machine
I'm still having problems on Windows with r3828. Build command:

python setup.py -v config --compiler=msvc build_clib --compiler=msvc build_ext --compiler=msvc bdist_wininst

Output:

F2PY Version 2_3828
blas_opt_info:
blas_mkl_info:
( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs )
( include_dirs = C:\Program Files\Intel\MKL\9.0\include:C:\Python24\include )
(paths: )
(paths: )
(paths: )
(paths: )
(paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\mkl_c.lib)
(paths: )
(paths: )
(paths: )
(paths: )
(paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\libguide40.lib)
( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs )
FOUND:
libraries = ['mkl_c', 'libguide40']
library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include']
( library_dirs = C:\Python24\lib:C:\:C:\Python24\libs )
FOUND:
libraries = ['mkl_c', 'libguide40']
library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include']
lapack_opt_info:
lapack_mkl_info:
mkl_info:
( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs )
( include_dirs = C:\Program Files\Intel\MKL\9.0\include:C:\Python24\include )
(paths: )
(paths: )
(paths: )
(paths: )
(paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\mkl_c.lib)
(paths: )
(paths: )
(paths: )
(paths: )
(paths: C:\Program Files\Intel\MKL\9.0\ia32\lib\libguide40.lib)
( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs )
FOUND:
libraries = ['mkl_c', 'libguide40']
library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include']
( library_dirs = C:\Program Files\Intel\MKL\9.0\ia32\lib:C:\Python24\lib:C:\:C:\Python24\libs )
FOUND:
libraries = ['mkl_lapack', 'mkl_c', 'libguide40']
library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include']
( library_dirs = C:\Python24\lib:C:\:C:\Python24\libs )
FOUND:
libraries = ['mkl_lapack', 'mkl_c', 'libguide40']
library_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\ia32\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files\\Intel\\MKL\\9.0\\include', 'C:\\Python24\\include']
running config
running build_clib
running build_ext
running build_src
building py_modules sources
building extension numpy.core.multiarray sources
Generating build\src.win32-2.4\numpy\core\config.h
No module named msvccompiler in numpy.distutils; trying from distutils
new_compiler returns distutils.msvccompiler.MSVCCompiler 0 1
customize GnuFCompiler
find_executable('g77')
Could not locate executable g77
find_executable('f77')
Could not locate executable f77
_find_existing_fcompiler: compiler_type='gnu' not found
customize IntelVisualFCompiler
find_executable('ifl')
Could not locate executable ifl
_find_existing_fcompiler: compiler_type='intelv' not found
customize AbsoftFCompiler
find_executable('f90')
Could not locate executable f90
find_executable('f77')
Could not locate executable f77
_find_existing_fcompiler: compiler_type='absoft' not found
customize CompaqVisualFCompiler
find_executable('DF')
Could not locate executable DF
_find_existing_fcompiler: compiler_type='compaqv' not found
customize IntelItaniumVisualFCompiler
find_executable('efl')
Could not locate executable efl
_find_existing_fcompiler: compiler_type='intelev' not found
customize Gnu95FCompiler
find_executable('gfortran')
Could not locate executable gfortran
find_executable('f95')
Could not locate executable f95
_find_existing_fcompiler: compiler_type='gnu95' not found
customize G95FCompiler
find_executable('g95')
Could not locate executable g95
_find_existing_fcompiler: compiler_type='g95' not found
removed c:\docume~1\albert\locals~1\temp\tmp1gax32__dummy.f
removed c:\docume~1\albert\locals~1\temp\tmp_lgu9f__dummy.f
removed c:\docume~1\albert\locals~1\temp\tmp4vpnwa__dummy.f
removed c:\docume~1\albert\locals~1\temp\tmp8xx1ll__dummy.f
removed c:\docume~1\albert\locals~1\temp\tmp4veorf__dummy.f
removed c:\docume~1\albert\locals~1\temp\tmpwjdbiy__dummy.f
Running from numpy source directory.
error: don't know how to compile Fortran code on platform 'nt'

- Original Message - From: Christopher Hanley [EMAIL PROTECTED] To: Discussion of Numerical Python numpy-discussion@scipy.org Sent: Friday, May 25, 2007 6:52 PM Subject: Re: [Numpy-discussion] build problem on RHE3 machine Sorry I didn't respond sooner. It seems to have taken
[Numpy-discussion] Another flags question
Hello all

Me vs the flags again. I found another case where the flags aren't what I would expect:

In [118]: x = N.array(N.arange(24.0).reshape(6,4), order='F')

In [119]: x
Out[119]:
array([[  0.,   1.,   2.,   3.],
       [  4.,   5.,   6.,   7.],
       [  8.,   9.,  10.,  11.],
       [ 12.,  13.,  14.,  15.],
       [ 16.,  17.,  18.,  19.],
       [ 20.,  21.,  22.,  23.]])

In [120]: x[:,0:1]
Out[120]:
array([[  0.],
       [  4.],
       [  8.],
       [ 12.],
       [ 16.],
       [ 20.]])

# start=0, stop=1, step=2
In [121]: x[:,0:1:2]
Out[121]:
array([[  0.],
       [  4.],
       [  8.],
       [ 12.],
       [ 16.],
       [ 20.]])

In [122]: x[:,0:1].flags
Out[122]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [123]: x[:,0:1:2].flags
Out[123]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : False
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [124]: x[:,0:1].strides
Out[124]: (8, 48)

In [125]: x[:,0:1:2].strides
Out[125]: (8, 96)

The views are slightly different (as can be seen from at least the strides), but I'd expect F_CONTIGUOUS to be True in both cases. I'm guessing that somewhere this special case isn't being checked for, which translates into a missed opportunity for marking the view as contiguous. Probably not a bug per se, but I thought I'd mention it here.

Cheers, Albert
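The session above can be reproduced with a short standalone script. Note that the flag behaviour differs across NumPy versions (releases with relaxed stride checking also mark the stepped single-column slice F_CONTIGUOUS), so this sketch only asserts the strides, which are stable:

```python
import numpy as np

# Fortran-ordered 6x4 array of doubles, as in the session above.
x = np.array(np.arange(24.0).reshape(6, 4), order='F')

a = x[:, 0:1]    # plain slice: one column
b = x[:, 0:1:2]  # stepped slice selecting the same single column

# Both views contain exactly the same data...
assert (a == b).all()

# ...but the stepped slice doubles the stride of the sliced axis,
# which is why older NumPy did not mark it F_CONTIGUOUS.
print(a.strides)  # (8, 48)
print(b.strides)  # (8, 96)
print(a.flags['F_CONTIGUOUS'], b.flags['F_CONTIGUOUS'])
```

Whether the second flag prints True depends on the NumPy version, which is exactly the special case discussed here: the stride of a length-1 axis never matters for traversal, so it can safely be ignored when computing contiguity.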
[Numpy-discussion] Question about flags of fancy indexed array
Hello all

Consider the following example:

In [43]: x = N.zeros((3,2))

In [44]: x.flags
Out[44]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [45]: x[:,[1,0]].flags
Out[45]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy indexed x? I'm running NumPy 1.0.3.dev3792 here.

Cheers, Albert
Re: [Numpy-discussion] Question about flags of fancy indexed array
Hello all

On Wed, 23 May 2007, Anne Archibald wrote:

> On 23/05/07, Albert Strasheim [EMAIL PROTECTED] wrote:
>> Consider the following example:
>
> First a comment: almost nobody needs to care how the data is stored
> internally. Try to avoid looking at the flags unless you're interfacing
> with a C library. The nice feature of numpy is that it hides all that
> junk - strides, contiguous storage, iteration, what have you - so that
> you don't have to deal with it.

As luck would have it, I am interfacing with a C library.

>> Is it correct that the F_CONTIGUOUS flag is set in the case of the
>> fancy indexed x? I'm running NumPy 1.0.3.dev3792 here.
>
> Numpy arrays are always stored in contiguous blocks of memory with
> uniform strides. The CONTIGUOUS flag actually means something totally
> different, which is unfortunate, but in any case, fancy indexing can't
> be done as a simple reindexing operation. It must make a copy of the
> array. So what you're seeing is the flags of a fresh new array, created
> from scratch (and numpy always creates arrays in C order internally,
> though that is an implementation detail you should not rely on).

If you are correct that this is in fact a fresh new array, I really don't understand where the values of these flags come from. To recap:

In [19]: x = N.zeros((3,2))

In [20]: x.flags
Out[20]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

In [21]: x[:,[1,0]].flags
Out[21]:
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

So since x and x[:,[1,0]] are both new arrays, shouldn't their flags be identical? I'd expect at least C_CONTIGUOUS and OWNDATA to be True.

Thanks.

Cheers, Albert
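The copy semantics of fancy indexing can be verified directly, and when handing data to a C library the layout can be requested explicitly instead of being inferred from the flags of whatever indexing returned. A minimal sketch (np.ascontiguousarray and np.asfortranarray are the standard tools; the exact flag values of the fancy-indexed result vary by NumPy version, so none are asserted here):

```python
import numpy as np

x = np.zeros((3, 2))
y = x[:, [1, 0]]        # fancy index: always a fresh copy, never a view

y[0, 0] = 99.0          # mutating the result...
assert x[0, 1] == 0.0   # ...does not touch the original, confirming the copy

# When a C library needs a specific layout, force it explicitly:
c_ready = np.ascontiguousarray(y)  # guarantees C_CONTIGUOUS (copies only if needed)
f_ready = np.asfortranarray(y)     # guarantees F_CONTIGUOUS (copies only if needed)
assert c_ready.flags['C_CONTIGUOUS'] and f_ready.flags['F_CONTIGUOUS']
```

This sidesteps the question of which flags a particular indexing operation happens to set: the conversion functions are no-ops when the array already has the requested layout.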
Re: [Numpy-discussion] numpy array sharing between processes? (and ctypes)
Agreed. Could someone with wiki admin access please delete the Ctypes page and rename the Ctypes2 page to Ctypes? As far as I know, Ctypes2 is really what you want to look at (at least it was, last time I worked on it). Thanks.

Cheers, Albert

On Mon, 14 May 2007, Stefan van der Walt wrote:

> On Mon, May 14, 2007 at 11:44:11AM -0700, Ray S wrote:
>> While investigating ctypes and numpy for sharing, I saw that the example on
>> http://www.scipy.org/Cookbook/Ctypes#head-7def99d882618b52956c6334e08e085e297cb0c6
>> does not quite work. However, with numpy.version.version=='1.0b1',
>> ActivePython 2.4.3 Build 12:
>
> That page should probably be replaced by
> http://www.scipy.org/Cookbook/Ctypes2
Re: [Numpy-discussion] NumPy 1.0.3 release next week
Hello all

On Sat, 12 May 2007, Charles R Harris wrote:

> On 5/12/07, Albert Strasheim [EMAIL PROTECTED] wrote:
>> I've more or less finished my quick triage effort.
>
> Thanks, Albert. The tickets look much better organized now.

My pleasure. Stefan van der Walt has also gotten in on the act and we're now down to 19 open tickets with 1.0.3 as the milestone.

http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release

Regards, Albert
Re: [Numpy-discussion] NumPy 1.0.3 release next week
On Fri, 11 May 2007, Travis Oliphant wrote:

> Thanks for the ticket reviews, Albert. That is really helpful.

My pleasure. Found two more issues that look like they could be addressed:

http://projects.scipy.org/scipy/numpy/ticket/422
http://projects.scipy.org/scipy/numpy/ticket/450

Cheers, Albert
Re: [Numpy-discussion] NumPy 1.0.3 release next week
Hello all

On Fri, 11 May 2007, David M. Cooke wrote:

> I've added a 1.0.3 milestone and set these to them (or to 1.1,
> according to Travis's comments).

I've reviewed some more tickets and filed everything that looks like it can be resolved for this release under 1.0.3. To see which tickets are still outstanding (some need fixes, some can just be closed if appropriate), take a look at the list under 1.0.3 Release on this page:

http://projects.scipy.org/scipy/numpy/report/3

Tickets marked with the 1.0.3 milestone that aren't going to be fixed for this release, but should be fixed eventually, should have their milestone changed to 1.1 (and maybe we should add a 1.0.4 milestone too).

Regards, Albert
Re: [Numpy-discussion] NumPy 1.0.3 release next week
I've more or less finished my quick triage effort. Issues remaining to be resolved for the 1.0.3 release:

http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.3+Release

If they can't be fixed for this release, we should move them over to 1.1, or to 1.0.4 (when it is created) if they can be fixed soon but not now.

There are a few tickets that don't have a milestone yet:

http://projects.scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=

The roadmap also gives a better picture of what's going on:

http://projects.scipy.org/scipy/numpy/roadmap?show=all

Cheers, Albert

On Thu, 10 May 2007, Travis Oliphant wrote:

> Hi all, I'd like to release NumPy 1.0.3 next week (on Tuesday) along
> with a new release of SciPy. Please let me know of changes that you are
> planning on making before then. Best, -Travis
Re: [Numpy-discussion] NumPy 1.0.3 release next week
Here are a few tickets that might warrant some attention from someone who is intimately familiar with NumPy's internals ;-)

http://projects.scipy.org/scipy/numpy/ticket/390
http://projects.scipy.org/scipy/numpy/ticket/405
http://projects.scipy.org/scipy/numpy/ticket/466
http://projects.scipy.org/scipy/numpy/ticket/469

On Thu, 10 May 2007, Travis Oliphant wrote:

> Hi all, I'd like to release NumPy 1.0.3 next week (on Tuesday) along
> with a new release of SciPy. Please let me know of changes that you are
> planning on making before then. Best, -Travis
Re: [Numpy-discussion] Building numpy - setting the run path
Hello

----- Original Message -----
From: Peter C. Norton [EMAIL PROTECTED]
To: numpy-discussion@scipy.org
Sent: Saturday, April 28, 2007 12:27 AM
Subject: [Numpy-discussion] Building numpy - setting the run path

> Building numpy for my company's Solaris distribution involves requiring
> a run_path for the lapack+blas libraries we're using (libsunperf, though
> I'm considering swapping out for gsl since we may use that). The
> situation we're in is that we need gcc to get the -R for paths like
> /usr/local/gnu/lib, /usr/local/python2.5.1/lib, etc., but the default
> python distutils raise a not implemented exception on this.

On Linux I've set the LD_RUN_PATH environment variable prior to compiling my programs. It seemed to do the trick. I don't know if this works on Solaris, though. A quick Google seems to indicate that LD_RUN_PATH is a GCC thing.

Cheers, Albert
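The Linux workaround can be scripted around the build. A hypothetical sketch (the paths are the site-specific ones from the message above, and the build call is left commented out since whether the linker honours LD_RUN_PATH on Solaris is exactly the open question):

```python
import os
import subprocess

# Hypothetical site-specific library paths, taken from the message above.
env = dict(os.environ)
env['LD_RUN_PATH'] = '/usr/local/gnu/lib:/usr/local/python2.5.1/lib'

# Run the build with the run path in the environment; on Linux the GNU
# linker embeds LD_RUN_PATH as the binary's run path when no -rpath/-R
# flags are given on the command line.
# subprocess.check_call(['python', 'setup.py', 'build'], env=env)
print(env['LD_RUN_PATH'])
```

After a successful build, something like `readelf -d` on the resulting extension modules can confirm whether a RPATH entry was actually embedded.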
Re: [Numpy-discussion] NumPy benchmark
Hello

----- Original Message -----
From: Ray Schumacher [EMAIL PROTECTED]
To: numpy-discussion@scipy.org
Sent: Tuesday, April 17, 2007 4:56 PM
Subject: Re: [Numpy-discussion] NumPy benchmark

> I'm still curious about the licensing aspects of using Intel's compiler
> and libs. Is the compiled Python/numpy result distributable, like any
> other compiled program?

I'm not a lawyer, but I think the answer is yes. The license agreements for the compiler and MKL:

http://www.intel.com/cd/software/products/asmo-na/eng/compilers/219625.htm
http://www.intel.com/cd/software/products/asmo-na/eng/219845.htm

and the FAQ for MKL:

http://www.intel.com/cd/software/products/asmo-na/eng/266854.htm

I put in a feature request with the Enthought folks to do exactly this:

https://svn.enthought.com/enthought/ticket/899

but it got postponed again recently.

Regards, Albert
[Numpy-discussion] Possible bug in PyArray_RemoveSmallest?
Hello all

I was poking around in the NumPy internals and I came across the following code in PyArray_RemoveSmallest in arrayobject.c:

intp sumstrides[NPY_MAXDIMS];
...
for (i = 0; i < multi->nd; i++) {
    sumstrides[i] = 0;
    for (j = 0; j < multi->numiter; j++) {
        sumstrides[i] = multi->iters[j]->strides[i];
    }
}

This might be a red herring, but from the name of the variable (sumstrides) and the code (iterating over a bunch of strides) I'm guessing the author might have intended to write:

sumstrides[i] += multi->iters[j]->strides[i];

Cheers, Albert
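The suspected bug is easy to illustrate outside of C: plain assignment in the inner loop keeps only the strides of the last iterator, while accumulation sums across all of them. A small Python sketch of the same loop structure (the per-iterator stride values are made up for illustration):

```python
# Hypothetical strides of 3 broadcast iterators over a 2-d result,
# mirroring multi->iters[j]->strides[i] in the C code above.
strides = [(8, 48), (16, 96), (8, 24)]
nd = 2                 # number of dimensions (multi->nd)
numiter = len(strides) # number of iterators (multi->numiter)

buggy = [0] * nd
fixed = [0] * nd
for i in range(nd):
    for j in range(numiter):
        buggy[i] = strides[j][i]   # '=' : only the last iterator survives
        fixed[i] += strides[j][i]  # '+=': the sum the variable name suggests

print(buggy)  # [8, 24]   -- just the strides of the last iterator
print(fixed)  # [32, 168] -- the actual per-dimension sum
```

With the plain `=`, "sumstrides" is merely the stride vector of iterator numiter-1, so any later logic that picks the dimension with the smallest summed stride would be making its choice from the wrong numbers.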