Re: [Numpy-discussion] Tutorial topics for SciPy'09 Conference
Hi all,

In order to proceed with contacting speakers, we'd now like to get some feedback from you. This Doodle poll should take no more than a couple of minutes to fill out (no password or registration required): http://doodle.com/hb5bea6fivm3b5bk

So please let us know which topics you are most interested in, and we'll do our best to accommodate everyone. Keep in mind that speaker availability and balancing out the topics mean that the actual tutorials offered probably won't be exactly the list of the top 8 voted topics, but the feedback will certainly help us steer the decision process.

Thanks for your time,

Dave Peterson and Fernando Perez

On Mon, Jun 1, 2009 at 10:21 PM, Fernando Perez wrote:
> Hi all,
>
> The time for the Scipy'09 conference is rapidly approaching, and we
> would like to both announce the plan for tutorials and solicit
> feedback from everyone on topics of interest.
>
> Broadly speaking, the plan is something along the lines of what we
> had last year: one continuous 2-day tutorial aimed at introductory
> users, starting from the very basics, and in parallel a set of
> 'advanced' tutorials, consisting of a series of 2-hour sessions on
> specific topics.
>
> We will request that the presenters for the advanced tutorials keep
> the 'tutorial' word very much in mind, so that the sessions really
> contain hands-on learning work and not simply a 2-hour long slide
> presentation. We will thus require that all the tutorials be
> based on tools that the attendees can install at least 2 weeks in
> advance on all platforms (no "I released it last night" software).
>
> With that in mind, we'd like feedback from all of you on possible
> topics for the advanced tutorials. We have space for 8 slots total,
> and here, in no particular order, are some possible topics.
> At this point there are no guarantees yet that we can get
> presentations for these, but we'd like to establish a first list of
> preferred topics to try and secure the presentations as soon as
> possible.
>
> This is simply a list of candidate topics that various people have
> informally suggested so far:
>
> - Mayavi/TVTK
> - Advanced topics in matplotlib
> - Statistics with Scipy
> - The TimeSeries scikit
> - Designing scientific interfaces with Traits
> - Advanced numpy
> - Sparse Linear Algebra with Scipy
> - Structured and record arrays in numpy
> - Cython
> - Sage - general tutorial
> - Sage - specific topics, suggestions welcome
> - Using GPUs with PyCUDA
> - Testing strategies for scientific codes
> - Parallel processing and mpi4py
> - Graph theory with Networkx
> - Design patterns for efficient iterator-based scientific codes
> - Symbolic computing with sympy
>
> We'd like to hear any ideas on other possible topics of interest,
> and we'll then run a Doodle poll to gather quantitative feedback on
> the final list of candidates.
>
> Many thanks,
>
> f

___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] More on doc-ing new functions
Sat, 13 Jun 2009 12:58:42 -0700, David Goldsmith wrote:
> Are new functions automatically added to the Numpy Doc Wiki? In
> particular: 0) is the documentation itself (assuming there is some)
> added in such a way that it can be edited by Wiki users;

Yes, new functions appear in the wiki, but,

> and 1) is the name of the function automatically added to a
> "best guess" category in the Milestones?

they do not automatically appear on the Milestones page.

More importantly, new functions must also be added (via the wiki) to the proper .rst file, e.g., http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.set.rst/ in order to be included in the final documentation.

-- Pauli Virtanen
Re: [Numpy-discussion] Ready for review: PyArrayNeighIterObject, an iterator to iterate over a neighborhood in arbitrary arrays
Sat, 13 Jun 2009 12:00:53 -0600, Charles R Harris wrote:
> 3) Documentation is needed. In particular, I think it worth mentioning
> that the number of bounds is taken from the PyArrayIterObject, which
> isn't the most transparent thing.

For reference, the docs should probably go here: http://docs.scipy.org/numpy/docs/numpy-docs/reference/c-api.array.rst/#array-iterators

Probably as a new subsection.

-- Pauli Virtanen
[Numpy-discussion] Interleaved Arrays and
Hi,

So I'm trying to get a certain sort of 3D terrain working in PyOpenGL. The idea is to get vertex buffer objects to draw a simple 2D plane comprised of many flat polygons, and use a vertex shader to deform that with a heightmap and map that onto a sphere. I've managed to do this with a grid (simple points), making the vertex buffer object:

    threedimensionalgrid = dstack(mgrid[0:size, 0:size, 0:1]) / float(size - 1)
    twodimensionalgrid = threedimensionalgrid.reshape(self.size_squared, 3)
    floattwodimensionalgrid = array(twodimensionalgrid, "f")
    self.vertex_vbo = vbo.VBO(floattwodimensionalgrid)

However, landscapes tend to be, um, solid :D So the landscape needs to be drawn as quads or triangles. Strips of triangles will be most effective, and the data must be specified to vbo.VBO() in a certain way:

    n = # blah
    testlist = []
    for x in xrange(n):
        for y in xrange(n):
            testlist.append([x, y])
            testlist.append([x + 1, y])

If "testlist" is an array (i.e., I could go "array(testlist)"), it works nicely. However, my Python method is certainly improvable with numpy. I suspect the best way is interleaving the arrays [x,y->yn] and [x+1,y->yn] n times, but I couldn't figure out how to do that... Help?

Thanks,
Ian
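One way to do the interleaving being asked about, sketched with basic numpy slicing (this is an illustrative answer, not from the thread; `n` and the variable names are made up): allocate a `(2n, 2)` strip per row and fill the even and odd row slices separately.

```python
import numpy as np

n = 4
ys = np.arange(n)

strips = []
for x in range(n):
    # the rows (x, y) and (x+1, y) for all y
    row0 = np.column_stack((np.full(n, x), ys))
    row1 = np.column_stack((np.full(n, x + 1), ys))
    # interleave by assigning into alternating slices
    strip = np.empty((2 * n, 2), dtype=int)
    strip[0::2] = row0
    strip[1::2] = row1
    strips.append(strip)

verts = np.concatenate(strips)
```

This produces the same vertex order as the double loop above ([x,0], [x+1,0], [x,1], [x+1,1], ...), but builds each strip with vectorized assignments instead of per-point appends.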
Re: [Numpy-discussion] improving arraysetops
Neil Crighton wrote:
> Robert Cimrman writes:
>> Hi,
>>
>> I am starting a new thread, so that it reaches the interested people.
>> Let us discuss improvements to arraysetops (array set operations) at [1]
>> (allowing non-unique arrays as function arguments, better naming
>> conventions and documentation).
>>
>> r.
>>
>> [1] http://projects.scipy.org/numpy/ticket/1133
>
> Hi,
>
> These changes look good to me. For point (1) I think we should fold the
> unique and _nu code into a single function. For point (3) I like in1d - it's
> shorter than isin1d but is still clear.

Yes, the _nu functions will be useless then; their bodies can be moved into the generic functions.

> What about merging unique and unique1d? They're essentially identical for an
> array input, but unique uses the builtin set() for non-array inputs and so is
> around 2x faster in this case - see below. Is it worth accepting a speed
> regression for unique to get rid of the function duplication? (Or can they
> be combined?)

unique1d can return the indices - can this be achieved by using set(), too? The implementation for arrays is the same already, IMHO, so I would prefer adding return_index, return_inverse to unique (automatically converting input to array, if necessary), and deprecating unique1d. We can view it also as adding the set() approach to unique1d when the return_index, return_inverse arguments are not set, and renaming unique1d -> unique.
> Neil
>
> In [24]: l = list(np.random.randint(100, size=1))
> In [25]: %timeit np.unique1d(l)
> 1000 loops, best of 3: 1.9 ms per loop
> In [26]: %timeit np.unique(l)
> 1000 loops, best of 3: 793 µs per loop
> In [27]: l = list(np.random.randint(100, size=100))
> In [28]: %timeit np.unique(l)
> 10 loops, best of 3: 78 ms per loop
> In [29]: %timeit np.unique1d(l)
> 10 loops, best of 3: 233 ms per loop

I have found a strange bug in unique():

    In [24]: l = list(np.random.randint(100, size=1000))
    In [25]: %timeit np.unique(l)
    ---
    UnicodeEncodeError                        Traceback (most recent call last)
    /usr/lib64/python2.5/site-packages/IPython/iplib.py in ipmagic(self, arg_s)
        951         else:
        952             magic_args = self.var_expand(magic_args, 1)
    --> 953             return fn(magic_args)
        954
        955     def ipalias(self, arg_s):
    /usr/lib64/python2.5/site-packages/IPython/Magic.py in magic_timeit(self, parameter_s)
       1829                           precision,
       1830                           best * scaling[order],
    -> 1831                           units[order])
       1832         if tc > tc_min:
       1833             print "Compiler time: %.2f s" % tc
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xb5' in
    position 28: ordinal not in range(128)

It disappears after increasing the array size, or the integer size.

    In [39]: np.__version__
    Out[39]: '1.4.0.dev7047'

r.
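The return_index/return_inverse behavior under discussion can be sketched with np.unique as it exists today (the merge proposed in this thread did eventually land, with unique1d deprecated; the example data is mine):

```python
import numpy as np

a = np.array([3, 1, 2, 1, 3, 3])
u, idx, inv = np.unique(a, return_index=True, return_inverse=True)
# u   : sorted unique values, here [1, 2, 3]
# idx : index in a of the first occurrence of each unique value
# inv : indices into u such that u[inv] reconstructs a
```

Here `u[inv]` gives back the original array, which is the round-trip property the set()-based unique could not provide on its own.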
[Numpy-discussion] Scipy 0.6.0 to 0.7.0, sparse matrix change
I'm trying to track down a numerical discrepancy in our project. We noticed that a certain set of results are different after upgrading from scipy 0.6.0 to 0.7.0. The following item from the Scipy change-log is our current number-one suspect. Could anybody who knows suggest what was actually involved in the change which I have highlighted with stars below?

Thanks

Sparse Matrices
---

[...] The handling of diagonals in the ``spdiags`` function has been changed. It now agrees with the MATLAB(TM) function of the same name.

*** Numerous efficiency improvements to format conversions and sparse matrix arithmetic have been made. Finally, this release contains numerous bugfixes. ***
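For reference, a minimal sketch of the MATLAB-style spdiags convention mentioned in the change-log (assuming scipy >= 0.7; the data values are mine): a diagonal of offset k reads its values by *column* index of the data row, so super- and sub-diagonals take different slices of the same row.

```python
import numpy as np
from scipy.sparse import spdiags

data = np.array([[1.0, 2.0, 3.0, 4.0]])

# superdiagonal (k=1) of a 4x4 matrix: entries (i, i+1) take
# data[0, i+1], i.e. the values [2, 3, 4]
upper = spdiags(data, [1], 4, 4).toarray()

# subdiagonal (k=-1): entries (i+1, i) take data[0, i],
# i.e. the values [1, 2, 3]
lower = spdiags(data, [-1], 4, 4).toarray()
```

If results changed between 0.6.0 and 0.7.0, comparing the placement of off-diagonal values like this against the old behavior is one quick way to rule spdiags in or out as the culprit.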
Re: [Numpy-discussion] passing arrays between processes
On Mon, Jun 15, 2009 at 01:22, Bryan Cole wrote:
> On Sun, 2009-06-14 at 15:50 -0500, Robert Kern wrote:
>> On Sun, Jun 14, 2009 at 14:31, Bryan Cole wrote:
>>> I'm starting work on an application involving cpu-intensive data
>>> processing using a quad-core PC. I've not worked with multi-core systems
>>> previously and I'm wondering what is the best way to utilise the
>>> hardware when working with numpy arrays. I think I'm going to use the
>>> multiprocessing package, but what's the best way to pass arrays between
>>> processes?
>>>
>>> I'm unsure of the relative merits of pipes vs shared mem. Unfortunately,
>>> I don't have access to the quad-core machine to benchmark stuff right
>>> now. Any advice would be appreciated.
>>
>> You can see a previous discussion on scipy-user in February titled
>> "shared memory machines" about using arrays backed by shared memory
>> with multiprocessing. Particularly this message:
>>
>> http://mail.scipy.org/pipermail/scipy-user/2009-February/019935.html
>
> Thanks.
>
> Does Sturla's extension have any advantages over using a
> multiprocessing.sharedctypes.RawArray accessed as a numpy view?

It will be easier to write code that correctly holds and releases the shared memory with Sturla's extension.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
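The RawArray-as-numpy-view approach Bryan mentions can be sketched as follows (a minimal single-process illustration; in real use `raw` would be created before forking so child processes inherit the same shared buffer):

```python
import multiprocessing.sharedctypes as sct
import numpy as np

# 16 doubles in shared memory, inheritable by child processes
raw = sct.RawArray('d', 16)

# zero-copy numpy view onto the same buffer
arr = np.frombuffer(raw, dtype=np.float64).reshape(4, 4)
arr[1, 2] = 3.5

# writes through the view are visible via the ctypes array
# (and vice versa), because both objects share one buffer
```

The trade-off raised in the thread is lifetime management: with raw sharedctypes, nothing tracks when the memory can safely be released across processes, which is the part a dedicated extension handles for you.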
Re: [Numpy-discussion] More on doc-ing new functions
Thanks, Pauli. Obvious follow-up:

--- On Mon, 6/15/09, Pauli Virtanen wrote:
> David Goldsmith wrote:
> > Are new functions automatically added to the Numpy Doc Wiki? In
> > particular: 0) is the documentation itself (assuming there is some)
> > added in such a way that it can be edited by Wiki users;
>
> Yes, new functions appear in the wiki, but,
>
> > and 1) is the name of the function automatically added to a
> > "best guess" category in the Milestones?
>
> they do not automatically appear on the Milestones page.
>
> More importantly, new functions must also be added (via the wiki) to the
> proper .rst file, e.g.,
>
> http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.set.rst/
>
> in order to be included in the final documentation.
>
> Pauli Virtanen

Is there a protocol for making sure these things get done? (Just don't want to reinvent the wheel.)

DG
Re: [Numpy-discussion] Ready for review: PyArrayNeighIterObject, an iterator to iterate over a neighborhood in arbitrary arrays
Thanks, Pauli!

DG

--- On Mon, 6/15/09, Pauli Virtanen wrote:
> From: Pauli Virtanen
> Subject: Re: [Numpy-discussion] Ready for review: PyArrayNeighIterObject, an
> iterator to iterate over a neighborhood in arbitrary arrays
> To: numpy-discussion@scipy.org
> Date: Monday, June 15, 2009, 1:31 AM
>
> Sat, 13 Jun 2009 12:00:53 -0600, Charles R Harris wrote:
> > 3) Documentation is needed. In particular, I think it worth mentioning
> > that the number of bounds is taken from the PyArrayIterObject, which
> > isn't the most transparent thing.
>
> For reference, the docs should probably go here:
>
> http://docs.scipy.org/numpy/docs/numpy-docs/reference/c-api.array.rst/#array-iterators
>
> Probably as a new subsection.
>
> -- Pauli Virtanen
Re: [Numpy-discussion] passing arrays between processes
On Sun, Jun 14, 2009 at 5:27 PM, Bryan Cole wrote:
>> In fact, I should have specified previously: I need to
>> deploy on MS-Win. On first glance, I can't see that mpi4py is
>> installable on Windows.
>
> My mistake. I see it's included in Enthon, which I'm using.

Hi, Bryan... I'm the author of mpi4py... If you are going to run your code on a single multicore machine, then you should likely use Sturla's extension... As you noticed, MPI is a bit "complicated". Moreover, you will have two dependencies: the core MPI implementation, and mpi4py.

These "complications" and extra dependencies do, however, make sense in the case of DISTRIBUTED computing, i.e., when you want to take advantage of many machines to perform your computations. In such cases, MPI is the "smart" approach, and mpi4py the best wrapper out there...

-- Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
[Numpy-discussion] Join us for the 2nd Scientific Computing with Python Webinar
Hello all Python users:

I am pleased to announce the second installment of a free webinar series that discusses using Python for scientific computing. Enthought hosts this free series, which takes place once a month for about 60-90 minutes. The schedule and length may change based on participation feedback, but for now it is scheduled for the third Friday of every month. This free webinar should not be confused with the EPD webinar on the first Friday of each month, which is open only to subscribers to the Enthought Python Distribution at the Basic level or above.

This session's speakers will be me (Travis Oliphant) and Peter Wang. I will show off a bit of EPDLab, which is an interactive Python environment built using IPython, Traits, and Envisage. Peter Wang will present a demo of Chaco and provide some examples of interactive visualizations that can be easily constructed using its classes. If there is time after the Chaco demo, I will continue the discussion about Mayavi, but I suspect this will have to wait until the next session. All of the tools we will show are open-source, freely-available tools from multiple sources. They can all be conveniently installed using the Enthought Python Distribution.

This event will take place on Friday, June 19th at 1:00pm CDT and will last 60 to 90 minutes depending on the questions asked. If you would like to participate, please register by clicking on the link below or going to https://www1.gotomeeting.com/register/303689873. There will be a 15-minute technical help session prior to the on-line meeting, which you should plan to use if you have never participated in a GoToWebinar previously. During this time you can test your connection and audio equipment as well as familiarize yourself with the GoTo Meeting software (which currently only works with Mac and Windows systems).

I am looking forward to interacting with many of you again this Friday.

Best regards,

Travis Oliphant
Enthought, Inc.
Enthought is the company that sponsored the creation of SciPy and the Enthought Tool Suite. It continues to sponsor the SciPy community by hosting the SciPy mailing list and website and by participating in the development of SciPy and NumPy. Enthought creates custom scientific and technical software applications and provides training on using Python for technical computing. Enthought also provides the Enthought Python Distribution. Learn more at http://www.enthought.com

Bios for Travis Oliphant and Peter Wang can be read at http://www.enthought.com/company/executive-team.php

--
Travis Oliphant
Enthought Inc.
1-512-536-1057
http://www.enthought.com
oliph...@enthought.com
[Numpy-discussion] npfile deprecation warning
Hi

I'm using npfile, which is giving me a deprecation warning. For the time being I want to continue using it, but I would like to suppress the warning messages. Is it possible to trap the deprecation warning but still have the npfile go ahead?

Thanks

Brennan
Re: [Numpy-discussion] npfile deprecation warning
On Mon, Jun 15, 2009 at 17:27, Brennan Williams wrote:
> Hi
>
> I'm using npfile which is giving me a deprecation warning. For the time
> being I want to continue using it but I would like to suppress
> the warning messages. Is it possible to trap the deprecation warning but
> still have the npfile go ahead?

http://docs.python.org/library/warnings

-- Robert Kern
Re: [Numpy-discussion] npfile deprecation warning
Robert Kern wrote:
> http://docs.python.org/library/warnings

Thanks. OK I've put the following in my code...

    import warnings

    def fxn():
        warnings.warn("deprecated", DeprecationWarning)

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        fxn()

but I'm getting an invalid syntax error...

    with warnings.catch_warnings():
                                  ^
    SyntaxError: invalid syntax

I haven't used "with" before. Is this supposed to go in the function def where I use npfile? I've put it near the top of my .py file after my imports and before my class definitions.

btw I'm using Python 2.5.4

Brennan
Re: [Numpy-discussion] npfile deprecation warning
On Mon, Jun 15, 2009 at 18:48, Brennan Williams wrote:
> Thanks.
> OK I've put the following in my code...
>
>     import warnings
>
>     def fxn():
>         warnings.warn("deprecated", DeprecationWarning)
>
>     with warnings.catch_warnings():
>         warnings.simplefilter("ignore")
>         fxn()

catch_warnings() was added in Python 2.6, as stated in the documentation. I recommend setting up the simplefilter in your main() function, and only for DeprecationWarnings.

> but I'm getting an invalid syntax error...
>
>     with warnings.catch_warnings():
>                                   ^
>     SyntaxError: invalid syntax
>
> I haven't used "with" before. Is this supposed to go in the function def
> where I use npfile? I've put it near the top of my .py file after my
> imports and before my class definitions.

You would use the with statement only around code that calls the function.

> btw I'm using Python 2.5.4

In Python 2.5, you need this at the top of your file (after docstrings but before any other code):

    from __future__ import with_statement

-- Robert Kern
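The pattern Robert describes, written out for Python 2.6+ (where catch_warnings is available without the __future__ import); `legacy()` here is a made-up stand-in for a deprecated call such as npfile:

```python
import warnings

def legacy():
    # stand-in for a deprecated call such as scipy.io.npfile
    warnings.warn("npfile is deprecated", DeprecationWarning)
    return "data"

with warnings.catch_warnings():
    # ignore only DeprecationWarning; other warning types still show
    warnings.simplefilter("ignore", DeprecationWarning)
    result = legacy()
```

Passing the warning class to simplefilter keeps the suppression narrow, and catch_warnings restores the previous filter state when the block exits, so the rest of the program still sees deprecation warnings.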
Re: [Numpy-discussion] npfile deprecation warning
Robert Kern wrote:
> catch_warnings() was added in Python 2.6, as stated in the
> documentation.

My mistake. I saw the "new in 2.1" at the top of the page but didn't read all the way to the bottom where catch_warnings is documented (with "new in 2.6").

> I recommend setting up the simplefilter in your main()
> function, and only for DeprecationWarnings.

Done, and it works. Thanks.