Re: Need some IPC pointers
I'm surprised no one has mentioned ZeroMQ as a transport yet. It scales from in-process (between threads) to inter-process and remote machines in a fairly transparent way. It's obviously not in the Python stdlib and, as with any system, there are downsides too.

Regards,
Floris
--
http://mail.python.org/mailman/listinfo/python-list
Re: Condition.wait(timeout) oddities
On Monday, 23 May 2011 17:32:19 UTC, Chris Torek wrote:
> In article <94d1d127-b423-4bd4...@glegroupsg2000goo.googlegroups.com>
> Floris Bruynooghe wrote:
> >I'm a little confused about the corner cases of Condition.wait() with a
> >timeout parameter in the threading module.
> >
> >When looking at the code the first thing that I don't quite get is that
> >the timeout should never work as far as I understand it. .wait() always
> >needs to return while holding the lock, therefore it does an .acquire()
> >on the lock in a finally clause. Thus pretty much ignoring the timeout
> >value.
>
> It does not do a straight acquire, it uses self._acquire_restore(),
> which for a condition variable, does instead:
>
>     self.__block.acquire()
>     self.__count = count
>     self.__owner = owner
>
> (assuming that you did not override the lock argument or passed
> in a threading.RLock() object as the lock), due to this bit of
> code in _Condition.__init__():
>
>     # If the lock defines _release_save() and/or _acquire_restore(),
>     # these override the default implementations (which just call
>     # release() and acquire() on the lock). Ditto for _is_owned().
>     [snippage]
>     try:
>         self._acquire_restore = lock._acquire_restore
>     except AttributeError:
>         pass

Ah, I missed this bit in __init__() and the fact that RLock provides the _acquire_restore() and _release_save(). I was wondering why they jumped around via self._acquire_restore() and self._release_save(); it seemed rather a lot of undocumented effort for custom locks.

> That is, the lock it holds is the one on the "blocking lock" (the
> __block of the underlying RLock), which is the same one you had
> to hold in the first place to call the .wait() function.
>
> To put it another way, the lock that .wait() waits for is
> a new lock allocated for the duration of the .wait() operation:

That makes more sense now. I knew that really, just never quite realised it until you wrote it here so clearly. Thanks.
So essentially the condition's lock is only meant to protect the internal state of the condition and is not meant to be held for long periods outside of that, as .wait() calls would not be able to return. My confusion started from looking at queue.Queue, which replaces the lock with a regular lock and uses it to lock the Queue's resource. I guess the Queue's mutex satisfies the requirement of never being held for long.

> >The second issue is that while looking around for this I found two bug
> >reports: http://bugs.python.org/issue1175933 and
> >http://bugs.python.org/issue10218. Both are proposing to add a return
> >value indicating whether the .wait() timed out or not similar to the
> >other .wait() methods in threading. However the first was rejected
> >after some (seemingly inconclusive) discussion.
>
> Tim Peters' reply seemed pretty conclusive to me. :-)

Which is why I'm surprised that it now does.

Cheers
Floris
Condition.wait(timeout) oddities
Hi all

I'm a little confused about the corner cases of Condition.wait() with a timeout parameter in the threading module.

When looking at the code, the first thing that I don't quite get is that the timeout should never work as far as I understand it. .wait() always needs to return while holding the lock, therefore it does an .acquire() on the lock in a finally clause, thus pretty much ignoring the timeout value.

The second issue is that while looking around for this I found two bug reports: http://bugs.python.org/issue1175933 and http://bugs.python.org/issue10218. Both propose to add a return value indicating whether the .wait() timed out or not, similar to the other .wait() methods in threading. However the first was rejected after some (seemingly inconclusive) discussion, while the latter had minimal discussion and was accepted without reference to the earlier attempt. Not sure if this was a process oversight or what, but it does leave the situation confusing.

But regardless, I don't understand how the return value can be used currently: yes, you did time out, but you're still promised to hold the lock thanks to the .acquire() call on the lock in the finally block. In my small brain I just can't figure out how Condition.wait() can both respect a timeout parameter and keep the promise of holding the lock on return. It seems to me that the only way to handle the timeout is to raise an exception rather than return a value, because when you get an exception you can break the promise of holding the lock. But maybe I'm missing something important or obvious, so I'd be happy to be enlightened!

Regards
Floris
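For reference, later Python versions resolved this exactly the way the bug reports propose: .wait(timeout) re-acquires the lock before returning and reports the timeout through its return value, so both promises hold at once. A minimal sketch (Python 3 semantics, where wait() returns False on timeout):

```python
import threading

cond = threading.Condition()

with cond:
    # Nothing will ever notify us, so this times out after 50ms.
    # wait() still re-acquires the condition's lock before returning;
    # the boolean return value is the only way to tell timeout apart
    # from a real notification.
    notified = cond.wait(timeout=0.05)

print(notified)  # → False
```

The lock is held again when wait() returns whether or not it timed out, so the caller's invariant is preserved either way.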
Re: Twisted and txJSON-RPC
On Sunday, April 11, 2010 5:04:49 PM UTC+1, writeson wrote:
> I get an error message: error: docs/PRELUDE.txt: No such file or
> directory

The setup.py code is trying to be too clever and the released package is missing files it requires. The easiest way to fix it is to simply get the latest code from the VCS, which contains all the required files. That's what I did anyway:

    brz branch lp:txjsonrpc

Then just do your usual favourite incantation of "python setup.py install --magic-options-to-make-setuptools-sane-for-you".

Regards
Floris
Re: How to read source code of python?
On Jun 10, 8:55 am, Thomas Jollans wrote:
> On 06/10/2010 07:25 AM, Qijing Li wrote:
> > Thanks for your reply.
> > I'm trying to understand python language deeply and use it efficiently.
> > For example: How the operator "in" works on list? the running time is
> > be O(n)? if my list is sorted, what the running time would be?

Taking this example, you know you want the "in" operator, which you somehow need to know is implemented by the "__contains__" protocol (you can find this in the "Expressions" section of the Language Reference). Now you can either know what objects look like in C (follow the "Extending and Embedding" tutorial, specifically the "Defining New Types" section) and therefore know you need to look at the sq_contains slot of the PySequenceMethods structure. Or you could just locate the list object in Objects/listobject.c (which you can easily find by looking at the source tree) and search for "contains". Both ways will lead you pretty quickly to the list_contains() function in Objects/listobject.c. And now you just need to know the C API (again in the docs) to be able to read it (even if you don't, that's a pretty straightforward function to read).

Hope that helps
Floris
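The same dispatch can be seen from pure Python: the "in" operator ends up calling __contains__, which for a built-in list is the linear scan in list_contains(). A small sketch (the class name is made up):

```python
class Box:
    """Illustrative container: 'in' dispatches to __contains__."""

    def __init__(self, items):
        self._items = list(items)

    def __contains__(self, item):
        # For a built-in list this dispatch ends up in list_contains()
        # in Objects/listobject.c, an O(n) scan regardless of ordering.
        return item in self._items


print(2 in Box([1, 2, 3]))  # → True
print(9 in Box([1, 2, 3]))  # → False
```

A sorted list gains nothing here: nothing tells list_contains() the data is sorted, so the scan stays linear (the bisect module is the tool for sorted data).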
Re: converting a timezone-less datetime to seconds since the epoch
On Apr 7, 9:57 am, Chris Withers wrote:
> Chris Rebert wrote:
> > To convert from struct_time in ***UTC***
> > to seconds since the epoch
> > use calendar.timegm()
>
> ...and really, wtf is timegm doing in calendar rather than in time? ;-)

You're not alone in finding this strange: http://bugs.python.org/issue6280 (the short apologetic reason is that timegm is written in Python rather than C).

Regards
Floris
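A quick sanity check of calendar.timegm(), which treats its struct_time argument as UTC:

```python
import calendar
import time

# 1970-01-02 00:00:00 UTC is exactly one day past the epoch.
one_day = calendar.timegm((1970, 1, 2, 0, 0, 0, 0, 0, 0))
print(one_day)  # → 86400

# time.gmtime() and calendar.timegm() are inverses of each other.
roundtrip = calendar.timegm(time.gmtime(1234567890))
print(roundtrip)  # → 1234567890
```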
Re: Queue peek?
On Mar 2, 6:18 pm, Raymond Hettinger wrote:
> On Mar 2, 8:29 am, Veloz wrote:
> > Hi all
> > I'm looking for a queue that I can use with multiprocessing, which has
> > a peek method.
> >
> > I've seen some discussion about queue.peek but don't see anything in
> > the docs about it.
> >
> > Does python have a queue class with peek semantics?
>
> Am curious about your use case? Why peek at something
> that could be gone by the time you want to use it.
>
>     val = q.peek()
>     if something_i_want(val):
>         v2 = q.get() # this could be different than val
>
> Wouldn't it be better to just get() the value and return if you don't
> need it?
>
>     val = q.peek()
>     if not something_i_want(val):
>         q.put(val)

What I have found myself wanting when thinking of this pattern is a "q.put_at_front_of_queue(val)" method. I've never actually used this, because no such method exists. Not that it's that much of an issue, as I've never been completely stuck and usually found a way to solve whatever I was trying to do without peeking, which could be argued to be a better design in the first place. I was just wondering if other people ever missed the "q.put_at_front_of_queue()" method or if it is just me.

Regards
Floris

PS: assuming "val = q.get()" on the first line
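For the in-process queue.Queue such a put-at-front method can be sketched by subclassing; note this pokes at CPython's Queue internals (the mutex, not_empty and underlying deque attributes), the method name is invented, and none of it carries over to multiprocessing queues:

```python
import queue


class PutBackQueue(queue.Queue):
    """queue.Queue with a hypothetical put_front() method."""

    def put_front(self, item):
        # Queue keeps its items in a collections.deque in self.queue,
        # guarded by self.mutex; not_empty wakes any blocked get().
        with self.mutex:
            self.queue.appendleft(item)
            self.not_empty.notify()


q = PutBackQueue()
q.put('second')
q.put_front('first')
print(q.get())  # → first
print(q.get())  # → second
```

This ignores maxsize accounting and join()/task_done() bookkeeping, so it is a sketch of the pattern rather than a drop-in class.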
Re: Ad hoc lists vs ad hoc tuples
On Jan 27, 10:15 pm, Terry Reedy wrote:
> On 1/27/2010 12:32 PM, Antoine Pitrou wrote:
> > Le Wed, 27 Jan 2010 02:20:53 -0800, Floris Bruynooghe a écrit :
> >> Is a list or tuple better or more efficient in these situations?
> >
> > Tuples are faster to allocate (they are allocated in one single step)
> > and quite a bit smaller too.

Thanks for all the answers! This is what I was expecting but it's nice to see it confirmed.

Regards
Floris
Ad hoc lists vs ad hoc tuples
One thing I often wonder is which is better when you just need a throwaway sequence: a list or a tuple? E.g.:

    if foo in ['some', 'random', 'strings']:
        ...

    if [bool1, bool2, bool3].count(True) != 1:
        ...

(The last one only works with tuples since Python 2.6.)

Is a list or tuple better or more efficient in these situations?

Regards
Floris

PS: This is inspired by some of the space-efficiency comments from the list.pop(0) discussion.
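A rough illustration of the difference, assuming CPython (exact sizes vary by version and platform, so none are shown):

```python
import sys

# Membership tests behave identically on either sequence type:
print('random' in ('some', 'random', 'strings'))  # → True
print('random' in ['some', 'random', 'strings'])  # → True

# But the tuple is allocated in one step and is smaller in memory:
tuple_size = sys.getsizeof(('some', 'random', 'strings'))
list_size = sys.getsizeof(['some', 'random', 'strings'])
print(tuple_size < list_size)  # → True
```

A literal tuple of constants can also be built once at compile time and reused, which a list literal cannot, since lists are mutable.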
Re: question about subprocess and shells
On Dec 4, 9:38 pm, Ross Boylan wrote:
> If one uses subprocess.Popen(args, ..., shell=True, ...)
>
> When args finishes execution, does the shell terminate? Either way
> seems problematic.

Essentially this is executing "/bin/sh -c args", so if you're unsure as to the behaviour just try it on your command line. Basically, once the pipeline in "args" has finished the shell has nothing more to do and will exit itself (and the return code of the shell depends on the return code of the pipeline executed, which is normally the return code of the last process executed). Of course when I say "pipeline" it could also be a single command, a list, or any valid shell input.

Regards
Floris
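This is easy to observe from Python on a POSIX system: the shell exits as soon as the command string finishes, and Popen reports the shell's exit status, which is that of the last command run:

```python
import subprocess

# shell=True runs the string through /bin/sh -c on POSIX systems.
# The shell terminates once the command list is done, and its exit
# status is that of the last command, here "exit 3".
proc = subprocess.Popen('true; exit 3', shell=True)
proc.wait()
print(proc.returncode)  # → 3
```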
Re: Install script under a different name
On Dec 5, 1:52 am, Lie Ryan wrote:
> on linux/unix, you need to add the proper #! line to the top of any
> executable scripts and of course set the executable bit permission
> (chmod +x scriptname). In linux/unix there is no need to have the .py
> extension for a file to be recognized as python script (i.e. just remove
> it).

The #! line will even get rewritten to point at the interpreter used during installation, so you can safely write "#!/usr/bin/env python" in your development copy and get "#!/usr/bin/python" when users install it.
Re: Bored.
On Nov 30, 11:52 pm, Stef Mientki wrote:
> Well I thought that after 2 years you would know every detail of a
> language ;-)

Ouch, I must be especially stupid then! ;-)

Floris
Re: how to create a pip package
On Nov 10, 2:30 pm, Phlip wrote:
> On Nov 10, 1:54 am, Wolodja Wentland wrote:
> >http://docs.python.org/library/distutils.html#module-distutils
> >http://packages.python.org/distribute/
>
> ktx... now some utterly retarded questions to prevent false starts.
>
> the distutils page starts with "from distutils.core import setup".
>
> but a sample project on github, presumably a pippable project, starts
> with:
>
> from setuptools import setup, find_packages
>
> (and it also has ez_setup())
>
> I don't foresee my project growing larger than one* file any time
> soon. 'pip freeze' prob'ly won't write a setup.py. What is the
> absolute simplest setup.py to stick my project on the PYTHONPATH, and
> be done with it?

Do what the distutils page says. setuptools tries to extend distutils with many fancy things, but if you don't need those (and that's what it sounds like) then sticking to distutils is better as you only require the Python stdlib.

Regards
Floris
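For completeness, the absolute simplest distutils setup.py for a one-file project might look like this (the project and module names here are made up); py_modules is all that's needed to get a single module installed onto the path:

```python
# setup.py -- minimal packaging config for a single-module project
from distutils.core import setup

setup(
    name='myproject',          # hypothetical project name
    version='0.1',
    py_modules=['myproject'],  # installs myproject.py onto sys.path
)
```

Run with "python setup.py install" (or sdist to build a tarball).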
Re: Best Way to Handle All Exceptions
On Jul 13, 2:26 pm, seldan24 wrote:
> The first example:
>
> from ftplib import FTP
> try:
>     ftp = FTP(ftp_host)
>     ftp.login(ftp_user, ftp_pass)
> except Exception, err:
>     print err

*If* you really do want to catch *all* exceptions (as mentioned already, it is usually better to catch specific exceptions) this is the way to do it. To see why, look at the class hierarchy on http://docs.python.org/library/exceptions.html. The reason is that you almost never want to be catching SystemExit, KeyboardInterrupt etc.; catching them will give you trouble at some point (unless you really know what you're doing, but then I would suggest you list them explicitly instead of using a bare except statement).

While it is true that you could raise an object that is not a subclass of Exception, it is very bad practice and you should never do that. I haven't seen an external module in the wild that does that in years, and the stdlib will always play nice.

Regards
Floris
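The hierarchy point is easy to demonstrate: "except Exception" lets BaseException subclasses such as KeyboardInterrupt and SystemExit fly past the handler (modern except-as syntax used below):

```python
caught_by = None
try:
    try:
        raise KeyboardInterrupt()
    except Exception:           # does NOT catch KeyboardInterrupt,
        caught_by = 'Exception'  # which derives from BaseException
except BaseException:
    caught_by = 'BaseException'

print(caught_by)  # → BaseException
```

So "except Exception" is the broad-but-safe catch-all, while a bare "except:" would also swallow the interpreter-control exceptions.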
Re: Where does setuptools live?
On Jul 4, 4:50 pm, David Wilson wrote:
> I'm trying to create a patch for a diabolical issue I keep running
> into, but I can't seem to find the setuptools repository. Is it this
> one?
>
> http://svn.python.org/view/sandbox/trunk/setuptools/

It is, see http://mail.python.org/pipermail/distutils-sig/2009-July/012374.html

> It's seen no changes in 9 months.

It's setuptools... I'm sure you can find many flamefests on distutils-sig about this.

Regards
Floris
Re: multi-thread python interpreters and c++ program
On Jun 9, 6:50 am, "myopc" wrote:
> I am running a c++ program (boost python), which creates many python
> interpreters, and each runs a python script that uses multiple threads
> (threading).
> when the c++ main program exits, I want to shut down the python
> interpreters, but it crashed.

Your threads are daemonic; you could be seeing http://bugs.python.org/issue1856. You'll have to check your stack in a debugger to know. But as said, this can be avoided by making the threads finish themselves and joining them.

Regards
Floris
Re: Python C API String Memory Consumption
On Apr 7, 2:10 pm, John Machin wrote:
> On Apr 7, 9:19 pm, MRAB wrote:
> > k3xji wrote:
> > > Interestingly, I changed the malloc()/free() usage to the PyMem_* APIs
> > > and the problem resolved. However, I really cannot understand why the
> > > first version does not work. Here is the latest code that has no
> > > problems at all:
> > >
> > > static PyObject *
> > > penc(PyObject *self, PyObject *args)
> > > {
> > >     PyObject *result = NULL;
> > >     unsigned char *s = NULL;
> > >     unsigned char *buf = NULL;
> > >     unsigned int v, len, i = 0;
> > >
> > >     if (!PyArg_ParseTuple(args, "s#", &s, &len))
> > >         return NULL;
> > >
> > >     buf = (unsigned char *) PyMem_Malloc(len);
> > >     if (buf == NULL) {
> > >         PyErr_NoMemory();
> > >         return NULL;
> > >     }
> > >
> > >     /* string manipulation. */
> > >
> > >     result = PyString_FromStringAndSize((char *)buf, len);
> > >     PyMem_Free(buf);
> > >     return result;
> > > }

I assume you're doing a memcpy() somewhere in there... This is also safer than your first version, since the Python string can contain an embedded \0 and the strdup() of the first version would not copy past that. But maybe you're sure your input doesn't have NULs in it, so it might be fine.

> > In general I'd say don't mix your memory allocators. I don't know
> > whether CPython implements PyMem_Malloc using malloc,
>
> The fantastic manual (http://docs.python.org/c-api/memory.html#overview)
> says: """the C allocator and the Python memory manager ... implement
> different algorithms and operate on different heaps""".
>
> > but it's better to stick with CPython's memory allocators when
> > writing for CPython.
>
> for the reasons given in the last paragraph of the above reference.

That document explicitly says you're allowed to use malloc() and free() in extensions. There is nothing wrong with allocating things on different heaps; I've done and seen it many times and never had trouble. Why the original problem occurred I don't understand either, though.

Regards
Floris
Re: PEP 3143: Standard daemon process library
On Mar 21, 11:06 pm, Ben Finney wrote:
> Floris Bruynooghe writes:
> > Had a quick look at the PEP and it looks very nice IMHO.
>
> Thank you. I hope you can try the implementation and report feedback
> on that too.
>
> > One of the things that might be interesting is keeping file
> > descriptors from the logging module open by default.
>
> Hmm. I see that this would be a good idea, but it raises the question
> of how to manage the set of file handles that should not be closed on
> becoming a daemon.
>
> So far, the logic of closing the file descriptors is a little complex:
>
> * Close all open file descriptors. This excludes those listed in
>   the `files_preserve` attribute, and those that correspond to the
>   `stdin`, `stdout`, or `stderr` attributes.
>
> Extending that by saying “… and also any file descriptors for
> ``logging.FileHandler`` objects” starts to make the description too
> complex. I have a strong instinct that if the description is complex,
> the design might be bad.
>
> Can you suggest an alternative API that will ensure that all file
> descriptors get closed *except* those that should not be closed?

Not an answer yet, but I'll try to find time in the next few days to play with this and tell you what I think. logging.FileHandler would be too narrow in any case, I think.

Regards
Floris
Re: PEP 3143: Standard daemon process library (was: Writing a well-behaved daemon)
On Mar 20, 9:58 am, Ben Finney wrote:
> Ben Finney writes:
> > Writing a Python program to become a Unix daemon is relatively
> > well-documented: there's a recipe for detaching the process and
> > running in its own process group. However, there's much more to a
> > Unix daemon than simply detaching.
> […]
> > My searches for such functionality haven't borne much fruit though.
> > Apart from scattered recipes, none of which cover all the essentials
> > (let alone the optional features) of 'daemon', I can't find anything
> > that could be relied upon. This is surprising, since I'd expect this
> > in Python's standard library.
>
> I've submitted PEP 3143 <http://www.python.org/dev/peps/pep-3143/>
> to meet this need, and have re-worked an existing library into a new
> ‘python-daemon’ <http://pypi.python.org/pypi/python-daemon/>
> library, the reference implementation.
>
> Now I need wider testing and scrutiny of the implementation and
> specification.

Had a quick look at the PEP and it looks very nice IMHO. One of the things that might be interesting is keeping file descriptors from the logging module open by default, so that you can set up your loggers before you daemonise --I do this so that I can complain on stdout if that gives trouble-- and still be able to use them once you've daemonised. I haven't looked at how feasible this is yet, so it might be difficult, but it would be useful anyway.

Regards
Floris
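One possible shape for this, sketched rather than taken from the PEP: collect every logging handler stream that wraps a real file descriptor and hand the result to files_preserve yourself. The helper name and the discovery logic here are assumptions, not part of any API:

```python
import logging


def open_logging_files(logger=None):
    """Collect the stream objects of handlers that wrap a real fd.

    A sketch: walks the handlers of the given (or root) logger and
    keeps any 'stream' attribute that exposes fileno().
    """
    logger = logger or logging.getLogger()
    files = []
    for handler in logger.handlers:
        stream = getattr(handler, 'stream', None)
        if stream is not None and hasattr(stream, 'fileno'):
            files.append(stream)
    return files
```

These could then be passed along when daemonising, e.g. something like DaemonContext(files_preserve=open_logging_files()), without the daemon library needing to know anything about logging.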
Re: distutils compiler flags for extension modules
On Mar 20, 9:48 am, Christian Meesters wrote:
> as I got no answers with the previous question (subject: disabling
> compiler flags in distutils), I thought I should ask the question in a
> different way: Is there an option to set the compiler flags for a C/C++
> extension in distutils? There is the extra_compile_args option in the
> Extension class, yet this offers only to give additional flags, but I'd
> like to have 'total' control over the compile args. Any hint?

You can subclass the build_ext class and override .finalize_options() to do something like:

    def finalize_options(self):
        build_ext.finalize_options(self)
        for ext in self.extensions:
            # fiddle with ext.extra_compile_args
            ...

And if that isn't enough you can modify the compiler (with some flags) by overriding .build_extension() and modifying self.compiler using its .set_executables() method (documented in distutils.ccompiler) before calling build_ext.build_extension().

Regards
Floris
Re: Pythonic way to determine if a string is a number
On Feb 16, 7:09 am, Python Nutter wrote:
> silly me, forgot to mention
>
> build a set from digits + '.' and use that for testing.

`.' is locale dependent: some locales use `,' instead, and maybe there are even more variants out there that I don't know of. So developing this yourself from scratch seems dangerous; let it bubble down to libc, which should handle it correctly.

Regards
Floris
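A sketch of the let-the-runtime-decide approach; note that float() itself only accepts the C-locale `.' form, while locale.atof() would be the tool for locale-specific input:

```python
def is_number(s):
    """True if float() accepts s.  A sketch: not locale-aware;
    for locale-specific separators use locale.atof() instead."""
    try:
        float(s)
    except (TypeError, ValueError):
        return False
    return True


print(is_number('3.14'))  # → True
print(is_number('1e-3'))  # → True
print(is_number('1,5'))   # → False (locale-specific form)
print(is_number('abc'))   # → False
```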
Re: Pythonic way to determine if a string is a number
On Feb 16, 12:05 am, Mel wrote:
> Christian Heimes wrote:
> > Roy Smith wrote:
> >> They make sense when you need to recover from any error that may occur,
> >> possibly as the last resort after catching and dealing with more specific
> >> exceptions. In an unattended embedded system (think Mars Rover), the
> >> top-level code might well be:
> >>
> >> while 1:
> >>     try:
> >>         main()
> >>     except:
> >>         reset()
> >
> > Do you really want to except SystemExit, KeyboardInterrupt, MemoryError
> > and SyntaxError?
>
> Exactly. A normal program should never do anything more comprehensive than
>
> try:
>     some_function ()
> except StandardError:
>     some_handling ()

Hmm, most places advocate or even outright recommend deriving your own exceptions from Exception and not from StandardError. So maybe your catch-all should be Exception? In that case you would be catching warnings though; no idea what influence that has on the warning system.

Regards
Floris

PS: Does anybody know why StopIteration derives from Exception while GeneratorExit derives from BaseException? This could be as annoying/confusing as Warning.
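The asymmetry in the PS is easy to check against the hierarchy (Python 3 names below, where StandardError no longer exists; GeneratorExit sits under BaseException precisely so that broad "except Exception" handlers don't interfere with generator cleanup):

```python
# Warning handlers: Warning derives from Exception, so a broad
# "except Exception" would indeed catch raised warnings.
print(issubclass(Warning, Exception))            # → True

# StopIteration is an ordinary Exception...
print(issubclass(StopIteration, Exception))      # → True

# ...but GeneratorExit deliberately is not:
print(issubclass(GeneratorExit, Exception))      # → False
print(issubclass(GeneratorExit, BaseException))  # → True
```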
Re: memory recycling/garbage collecting problem
On Feb 17, 5:31 am, Chris Rebert wrote:
> My understanding is that for efficiency purposes Python hangs on to
> the extra memory even after the object has been GC-ed and doesn't give
> it back to the OS right away.

Even if Python would free() the space no longer used by its own memory allocator (PyMem_Malloc(), PyMem_Free() & Co), the OS usually doesn't return this space to the global free memory pool but instead leaves it assigned to the process, again for performance reasons. Only when the OS is running out of memory will it go and reclaim the free()ed memory of processes. There might be a way to force your OS to do so earlier manually if you really want, but I'm not sure how you'd do that.

Regards
Floris
Re: x64 speed
On Feb 4, 10:14 am, Robin Becker wrote:
> > [rpt...@localhost tests]$ time python25 runAll.py
> > .
> > .
> >
> > --
> > Ran 193 tests in 27.841s
> >
> > OK
> >
> > real 0m28.150s
> > user 0m26.606s
> > sys 0m0.917s
> > [rpt...@localhost tests]$
>
> magical how the total python time is less than the real time.

Not really. Python was still running at the time it printed the duration of the tests, so it's only natural that the wall time Python prints for just the tests is going to be smaller than the wall time `time` prints for the entire Python process. Same for when it starts: some stuff is done in Python before it starts its timer.

Regards
Floris
Overriding base class methods in the C API
Hello

I've been trying to figure out how to override methods of a class in the C API. For Python code you can just redefine the method in your subclass, but setting tp_methods on the type object does not seem to have any influence. Does anyone know of a trick I am missing?

Cheers
Floris
Re: Using exceptions defined in an extension module inside another extension module
Christian Heimes wrote:
> Floris Bruynooghe schrieb:
> > What I can't work out however is how to then be able to raise this
> > exception in another extension module. Just defining it as "extern"
> > doesn't work, even if I make sure the first module -that creates the
> > exception- gets loaded first. Because the symbol is defined in the
> > first extension module the dynamic linker can't find it as it only
> > seems to look in the main python executable for symbols used in
> > dlopened .so files.
> >
> > Does anyone have an idea of how you can do this?
>
> The answer is so obvious that you are going to bang your head against
> the next wall. You have to do exactly the same as you'd do with a pure
> Python module: import it. :)

Well, I hope the wall hurts as much as my head... Great tip. At first I wasn't looking forward to importing the module in every function where I wanted the exceptions, but then I realised they are global variables anyway, so I could have them as such again and just assign them in the module init function.

Thanks
Floris
Using exceptions defined in an extension module inside another extension module
Hello

If I have an extension module and want to use an exception, I can do so by declaring the exception as "extern PyObject *PyExc_FooError" in the object files, if I then link those together inside a module where the module has them declared the same (but without the "extern") and then initialises them in the PyMODINIT_FUNC using PyErr_NewException.

What I can't work out however is how to then be able to raise this exception in another extension module. Just defining it as "extern" doesn't work, even if I make sure the first module -the one that creates the exception- gets loaded first. Because the symbol is defined in the first extension module the dynamic linker can't find it, as it only seems to look in the main python executable for symbols used in dlopened .so files.

Does anyone have an idea of how you can do this?

Thanks
Floris
Re: C API and memory allocation
On Dec 18, 6:43 am, Stefan Behnel wrote:
> Floris Bruynooghe wrote:
> > I'm slightly confused about some memory allocations in the C API.
>
> If you want to reduce the number of things you have to get your head
> around, learn Cython instead of the raw C-API. It's basically Python, does
> all the reference counting for you and also reduces the amount of memory
> handling you have to care about.
>
> http://cython.org/

Sure, that is a good choice in some cases, but not in my case currently: it would mean another build dependency on all our build hosts, and I'm just (trying to) stop an existing extension module from leaking memory; no way I'm going to re-write it from scratch. Interesting discussion though, thanks!

Floris
Re: C API and memory allocation
Hello again

On Dec 17, 11:06 pm, Floris Bruynooghe wrote:
> So I'm assuming PyArg_ParseTuple()
> must allocate new memory for the returned string. However there is
> nothing in the API that provides for freeing that allocated memory
> again.

I've dug a little deeper into this and found that PyArg_ParseTuple() (and friends) ends up using PyString_AS_STRING() (Python/getargs.c:793), which according to the documentation returns a pointer to the internal buffer of the string and not a copy, and that because of this you should not attempt to free this buffer.

But how can Python then know how long to keep that buffer object in memory? When the reference count of the string object goes to zero the object can be deallocated, I thought, and then your pointer will point to something different all of a sudden. Does this mean you always have to keep a reference to the original objects when you've extracted information from them with the PyArg_Parse*() functions? (At least while you want to hang on to that information.)

Regards
Floris
C API and memory allocation
Hi

I'm slightly confused about some memory allocations in the C API. Take the first example in the documentation:

    static PyObject *
    spam_system(PyObject *self, PyObject *args)
    {
        const char *command;
        int sts;

        if (!PyArg_ParseTuple(args, "s", &command))
            return NULL;
        sts = system(command);
        return Py_BuildValue("i", sts);
    }

What I'm confused about is the memory usage of "command". As far as I understand, the compiler provides space for the size of the pointer, as sizeof(command) would indicate. So I'm assuming PyArg_ParseTuple() must allocate new memory for the returned string. However there is nothing in the API that provides for freeing that allocated memory again. So does this application leak memory then? Or am I misunderstanding something fundamental?

Regards
Floris
Re: C Module question
On Nov 10, 1:18 pm, Floris Bruynooghe <[EMAIL PROTECTED]> wrote:
> On Nov 10, 11:11 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> > 1. How can I pass a file-like object into the C part? The PyArg_*
> > functions can convert objects to all sort of types, but not FILE*.
>
> Parse it as a generic PyObject object (format string of "O" in
> PyArg_*), check the type and cast it. Or use "O!" as format string
> and the typechecking can be done for you, only thing left is casting
> it.
>
> See http://docs.python.org/c-api/arg.html and
> http://docs.python.org/c-api/file.html for exact details.

Sorry, I probably should have mentioned that you want to cast the object to PyFileObject and then use the PyFile_AsFile() function to get the FILE* handle.

Floris
Re: C Module question
Hi

On Nov 10, 11:11 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> 1. How can I pass a file-like object into the C part? The PyArg_*
> functions can convert objects to all sort of types, but not FILE*.

Parse it as a generic PyObject (format string of "O" in PyArg_*), check the type and cast it. Or use "O!" as the format string and the type checking is done for you; the only thing left is casting it.

See http://docs.python.org/c-api/arg.html and http://docs.python.org/c-api/file.html for exact details.

(2 is answered already...)

Regards
Floris
Re: Logging thread with Queue and multiple threads to log messages
Hi

On Nov 9, 8:28 pm, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> I am trying to put up a queue (through a logging thread) so that all
> worker threads can ask it to log messages.

There is no need to do anything like this: the logging module is thread safe and you can happily just create loggers in a thread and use them; you can even use loggers that were created outside of the thread. We use logging from threads all the time and it works flawlessly.

As mentioned by Vinay in your other thread, the problem you have is that your main thread exits before the worker threads, which makes atexit run the exit handlers, and logging registers the logging.shutdown() function, which flushes all logging buffers and closes all handlers (i.e. closes the logfile). So, as said before, by simply calling .join() on all the worker threads inside the main thread your problem will be solved. You might get away with making your threads daemonic, but I can't guarantee you won't run into race conditions in that case. If you want to be really evil you could get into muddling with atexit...

Regards
Floris
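The pattern described above -- loggers used freely from worker threads plus an explicit .join() before the main thread exits -- in a runnable sketch (the list-collecting handler is only there so the result can be inspected; logger and handler names are made up):

```python
import logging
import threading

records = []


class ListHandler(logging.Handler):
    """Collects formatted log messages in a list for inspection."""
    def emit(self, record):
        records.append(record.getMessage())


log = logging.getLogger('workers')
log.addHandler(ListHandler())
log.setLevel(logging.INFO)


def worker(n):
    # Loggers are thread safe: no queue or dedicated logging thread needed.
    log.info('worker %d done', n)


threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # main thread exits (and atexit runs logging.shutdown) last

print(sorted(records))  # → ['worker 0 done', 'worker 1 done', 'worker 2 done']
```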
Re: Module clarification
On Jul 28, 9:54 am, Hussein B <[EMAIL PROTECTED]> wrote: > Hi. > I'm a Java guy and I'm playing around Python these days... > In Java, we organize our classes into packages and then jarring the > packages into JAR files. > What are modules in Python? An importable or runnable (i.e. script) collection of classes, functions, variables etc... > What is the equivalent of modules in Java? Don't know. Not even sure if it exists, but my Java is old and never been great. > Please correct me if I'm wrong: > I saved my Python code under the file Wow.py > Wow.py is now a module and I can use it in other Python code: > import Wow Indeed, you can now access things defined in Wow as Wow.foo Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
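That round trip can be shown end to end; this sketch writes a hypothetical Wow.py to a temporary directory and imports it — the file contents and the foo function are invented for illustration:

```python
import os
import sys
import tempfile

# Write a throwaway module, mirroring "I saved my Python code under Wow.py".
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "Wow.py"), "w") as f:
    f.write("def foo():\n    return 'wow'\n")

# Any directory on sys.path can provide modules.
sys.path.insert(0, moddir)

import Wow  # the file Wow.py is now a module

print(Wow.foo())  # access things defined in Wow as Wow.<name>
```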
lxml validation and xpath id function
Hi

I'm trying to use the .xpath('id("foo")') function on an lxml tree but
can't get it to work.  Given the following XML:

<root><child id="foo"/></root>

And its XML Schema:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           elementFormDefault="qualified">
  <xs:element name="root">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="child">
          <xs:complexType>
            <xs:attribute name="id" type="xs:ID" use="required"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

Or in more readable, compact RelaxNG, form:

element root {
    element child {
        attribute id { xsd:ID }
    }
}

Now I'm trying to parse the XML and use the .xpath() method to find
the element using the id XPath function:

from lxml import etree

schema_root = etree.parse(file('schema.xsd'))
schema = etree.XMLSchema(schema_root)
parser = etree.XMLParser(schema=schema)
root = etree.fromstring('<root><child id="foo"/></root>', parser)
root.xpath('id("foo")')
--> []

I was expecting to get the <child> element with that last statement
(well, inside a list that is), but instead I just get an empty list.
Is there anything obvious I'm doing wrong?  As far as I can see the
lxml documentation says this should work.

Cheers
Floris
--
http://mail.python.org/mailman/listinfo/python-list
Context manager for files vs garbage collection
Hi

I was wondering when it is worthwhile to use context managers for
files.  Consider this example:

def foo():
    t = False
    for line in file('/tmp/foo'):
        if line.startswith('bar'):
            t = True
            break
    return t

What would the benefit of using a context manager be here, if any?
E.g.:

def foo():
    t = False
    with file('/tmp/foo') as f:
        for line in f:
            if line.startswith('bar'):
                t = True
                break
    return t

Personally I can't really see why the second case would be much
better; I've got used to just relying on the garbage collector... :-)
But what is the use case of a file as a context manager then?  Is it
only useful if your file object is large and will stay in scope for a
long time?

Regards
Floris
--
http://mail.python.org/mailman/listinfo/python-list
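The difference between the two versions is when the file gets closed: the with block closes it deterministically as soon as the block exits, while the first version leaves that to the garbage collector (prompt under CPython's refcounting, but not guaranteed on other implementations). A small sketch of that guarantee, using open() and a temporary file in place of file('/tmp/foo'):

```python
import tempfile

# Create a throwaway file to stand in for /tmp/foo.
tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False)
tmp.write("bar one\nbaz two\n")
tmp.close()

def foo(path):
    t = False
    with open(path) as f:   # closed as soon as the block exits,
        for line in f:      # even on break or an exception
            if line.startswith("bar"):
                t = True
                break
    return t, f             # f survives the block, but is closed

found, f = foo(tmp.name)
print(found, f.closed)      # True True
```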
Re: Running commands on cisco routers using python
On May 19, 4:18 pm, SPJ <[EMAIL PROTECTED]> wrote:
> Is it possible to run specific commands on cisco router using Python?
> I have to run command "show access-list" on few hundred cisco routers
> and get the dump into a file. Please let me know if it is feasible and
> the best way to achieve this.

There's no way I'd think about doing this in python.  The best tool
for the task is just shell IMHO:

[EMAIL PROTECTED]:~$ ssh mercury show access-lists
Welcome to mercury
[EMAIL PROTECTED]'s password:
Standard IP access list 1
    10 permit any (265350 matches)
Standard IP access list 23
    10 permit 192.168.2.0, wildcard bits 0.0.0.255 (2 matches)
Extended IP access list 100
    10 deny ip any 192.168.0.0 0.0.255.255 log-input (8576 matches)
    20 permit ip any any (743438 matches)
Connection to mercury closed by remote host.
[EMAIL PROTECTED]:~$

You could plug in expect to solve the password thing.  Search for "ssh
expect" for that (and ignore suggestions about public keys, I haven't
found yet how to use those on cisco).
--
http://mail.python.org/mailman/listinfo/python-list
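If you did want to drive this from Python after all, the usual shape is subprocess; the sketch below substitutes echo for ssh so it runs without a router, and the host and command are just the ones from the transcript above:

```python
import subprocess

def run_remote(host, command, transport=("ssh",)):
    # With the default transport this runs: ssh <host> <command>
    # and captures whatever the remote end prints.
    args = list(transport) + [host, command]
    proc = subprocess.Popen(args, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return proc.returncode, out.decode()

# Substitute echo for ssh so the sketch is runnable locally;
# it just prints the arguments back instead of contacting a router.
rc, out = run_remote("mercury", "show access-lists", transport=("echo",))
print(rc, out.strip())
```

With real ssh the password prompt would still need handling (expect, as noted above), since ssh reads it from the terminal rather than stdin.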
Re: How to kill Python interpreter from the command line?
On May 9, 11:19 am, [EMAIL PROTECTED] wrote:
> Thanks for the replies.
>
> On May 8, 5:50 pm, Jean-Paul Calderone <[EMAIL PROTECTED]> wrote:
>
> > Ctrl+C often works with Python, but as with any language, it's possible
> > to write a program which will not respond to it.  You can use Ctrl+\
> > instead (Ctrl+C sends SIGINT which can be masked or otherwise ignored,
> > Ctrl+\ sends SIGQUIT which typically isn't)
>
> Yes, thank you, this seems to work. :)
>
> I did some more testing and found out that the problem seems to be
> thread-related.  If I have a single-threaded program, then Ctrl+C
> usually works, but if I have threads, it is usually ignored.  For
> instance, the below program does not respond to Ctrl+C (but it does
> die when issued Ctrl+\):
>
> import threading
>
> def loop():
>     while True:
>         pass
>
> threading.Thread(target=loop, args=()).start()

Your thread needs to be daemonised for this to work.  Otherwise your
main thread will be waiting for your created thread to finish.  E.g.:

thread = threading.Thread(target=loop)
thread.setDaemon(True)
thread.start()

But now it will exit immediately as soon as your main thread has
nothing to do anymore (which is right after .start() in this case), so
plug in another infinite loop at the end of this:

while True:
    time.sleep(10)

And now your threaded app will stop when using C-c.
--
http://mail.python.org/mailman/listinfo/python-list
Re: @x.setter property implementation
Oh, that was a good hint!  See inline.

On Apr 11, 12:02 pm, Arnaud Delobelle <[EMAIL PROTECTED]> wrote:
> On Apr 11, 11:19 am, Floris Bruynooghe <[EMAIL PROTECTED]> wrote:
> [...]
>
> > > Unfortunately both this one and the one I posted before work when I try
> > > them out on the commandline but both fail when I try to use them in a
> > > module.  And I just can't figure out why.
>
> > This in more detail: Imagine mod.py:
>
> > import sys
>
> > _property = property
>
> > class property(property):
> >     """Python 2.6/3.0 style property"""
> >     def setter(self, fset):
> >         cls_ns = sys._getframe(1).f_locals
> >         for k, v in cls_ns.iteritems():
> >             if v == self:
> >                 propname = k
> >                 break
> >         cls_ns[propname] = property(self.fget, fset,
> >                                     self.fdel, self.__doc__)
> >         return fset

        return cls_ns[propname]

And then it works as I tried originally!

> > class Foo(object):
> >     @property
> >     def x(self):
> >         return self._x
>
> >     @x.setter
> >     def x(self, v):
>          ^
> Don't call this 'x', it will override the property, change it to
> 'setx' and everything will work.  The same probably goes for your own
> 'propset' decorator function.
>
> >         self._x = v + 1
>
> > Now enter the interpreter:
>
> > >>> import mod
> > >>> f = mod.Foo()
> > >>> f.x = 4
> > >>> f.x
> > 4
>
> > I don't feel like giving up on this now, so close...
> --
> Arnaud
--
http://mail.python.org/mailman/listinfo/python-list
Re: @x.setter property implementation
On Apr 11, 10:16 am, Floris Bruynooghe <[EMAIL PROTECTED]> wrote:
> On Apr 10, 5:09 pm, Arnaud Delobelle <[EMAIL PROTECTED]> wrote:
> > On Apr 10, 3:37 pm, Floris Bruynooghe <[EMAIL PROTECTED]> wrote:
> > > On Apr 7, 2:19 pm, "Andrii V. Mishkovskyi" <[EMAIL PROTECTED]> wrote:
> > > > 2008/4/7, Floris Bruynooghe <[EMAIL PROTECTED]>:
> > > > > Have been grepping all over the place and failed to find it.  I found
> > > > > the test module for them, but that doesn't get me very far...
> > > >
> > > > I think you should take a look at 'descrobject.c' file in 'Objects'
> > > > directory.
> > >
> > > Thanks, I found it!  So after some looking around here was my
> > > implementation:
> > >
> > > class myproperty(property):
> > >     def setter(self, func):
> > >         self.fset = func
> > >
> > > But that doesn't work since fset is a read only attribute (and all of
> > > this is implemented in C).
> > >
> > > So I've settled with the (nearly) original proposal from Guido on
> > > python-dev:
> > >
> > > def propset(prop):
> > >     assert isinstance(prop, property)
> > >     @functools.wraps
> > >     def helper(func):
> > >         return property(prop.fget, func, prop.fdel, prop.__doc__)
> > >     return helper
> > >
> > > The downside of this is that upgrade from 2.5 to 2.6 will require code
> > > changes, I was trying to minimise those to just removing an import
> > > statement.
> > >
> > > Regards
> > > Floris
> >
> > Here's an implementation of prop.setter in pure python < 2.6, but
> > using sys._getframe, and the only test performed is the one below :)
> >
> > import sys
> >
> > def find_key(mapping, searchval):
> >     for key, val in mapping.iteritems():
> >         if val == searchval:
> >             return key
> >
> > _property = property
> >
> > class property(property):
> >     def setter(self, fset):
> >         cls_ns = sys._getframe(1).f_locals
> >         propname = find_key(cls_ns, self)
> >         # if not propname: there's a problem!
> >         cls_ns[propname] = property(self.fget, fset,
> >                                     self.fdel, self.__doc__)
> >         return fset
> >     # getter and deleter can be defined the same way!
> >
> > # Example ---
> >
> > class Foo(object):
> >     @property
> >     def bar(self):
> >         return self._bar
> >     @bar.setter
> >     def setbar(self, x):
> >         self._bar = '<%s>' % x
> >
> > # Interactive test -
> >
> > >>> foo = Foo()
> > >>> foo.bar = 3
> > >>> foo.bar
> > '<3>'
> > >>> foo.bar = 'oeufs'
> > >>> foo.bar
> > '<oeufs>'
> >
> > Having fun'ly yours,
>
> Neat!
>
> Unfortunately both this one and the one I posted before work when I
> try them out on the commandline but both fail when I try to use them
> in a module.  And I just can't figure out why.

This in more detail: Imagine mod.py:

import sys

_property = property

class property(property):
    """Python 2.6/3.0 style property"""
    def setter(self, fset):
        cls_ns = sys._getframe(1).f_locals
        for k, v in cls_ns.iteritems():
            if v == self:
                propname = k
                break
        cls_ns[propname] = property(self.fget, fset,
                                    self.fdel, self.__doc__)
        return fset

class Foo(object):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, v):
        self._x = v + 1

Now enter the interpreter:

>>> import mod
>>> f = mod.Foo()
>>> f.x = 4
>>> f.x
4

I don't feel like giving up on this now, so close...
--
http://mail.python.org/mailman/listinfo/python-list
Re: @x.setter property implementation
On Apr 10, 5:09 pm, Arnaud Delobelle <[EMAIL PROTECTED]> wrote:
> On Apr 10, 3:37 pm, Floris Bruynooghe <[EMAIL PROTECTED]> wrote:
> > On Apr 7, 2:19 pm, "Andrii V. Mishkovskyi" <[EMAIL PROTECTED]> wrote:
> > > 2008/4/7, Floris Bruynooghe <[EMAIL PROTECTED]>:
> > > > Have been grepping all over the place and failed to find it.  I found
> > > > the test module for them, but that doesn't get me very far...
> > >
> > > I think you should take a look at 'descrobject.c' file in 'Objects'
> > > directory.
> >
> > Thanks, I found it!  So after some looking around here was my
> > implementation:
> >
> > class myproperty(property):
> >     def setter(self, func):
> >         self.fset = func
> >
> > But that doesn't work since fset is a read only attribute (and all of
> > this is implemented in C).
> >
> > So I've settled with the (nearly) original proposal from Guido on
> > python-dev:
> >
> > def propset(prop):
> >     assert isinstance(prop, property)
> >     @functools.wraps
> >     def helper(func):
> >         return property(prop.fget, func, prop.fdel, prop.__doc__)
> >     return helper
> >
> > The downside of this is that upgrade from 2.5 to 2.6 will require code
> > changes, I was trying to minimise those to just removing an import
> > statement.
> >
> > Regards
> > Floris
>
> Here's an implementation of prop.setter in pure python < 2.6, but
> using sys._getframe, and the only test performed is the one below :)
>
> import sys
>
> def find_key(mapping, searchval):
>     for key, val in mapping.iteritems():
>         if val == searchval:
>             return key
>
> _property = property
>
> class property(property):
>     def setter(self, fset):
>         cls_ns = sys._getframe(1).f_locals
>         propname = find_key(cls_ns, self)
>         # if not propname: there's a problem!
>         cls_ns[propname] = property(self.fget, fset,
>                                     self.fdel, self.__doc__)
>         return fset
>     # getter and deleter can be defined the same way!
>
> # Example ---
>
> class Foo(object):
>     @property
>     def bar(self):
>         return self._bar
>     @bar.setter
>     def setbar(self, x):
>         self._bar = '<%s>' % x
>
> # Interactive test -
>
> >>> foo = Foo()
> >>> foo.bar = 3
> >>> foo.bar
> '<3>'
> >>> foo.bar = 'oeufs'
> >>> foo.bar
> '<oeufs>'
>
> Having fun'ly yours,

Neat!

Unfortunately both this one and the one I posted before work when I
try them out on the commandline but both fail when I try to use them
in a module.  And I just can't figure out why.

Floris
--
http://mail.python.org/mailman/listinfo/python-list
Re: @x.setter property implementation
On Apr 7, 2:19 pm, "Andrii V. Mishkovskyi" <[EMAIL PROTECTED]> wrote:
> 2008/4/7, Floris Bruynooghe <[EMAIL PROTECTED]>:
> > Have been grepping all over the place and failed to find it.  I found
> > the test module for them, but that doesn't get me very far...
>
> I think you should take a look at 'descrobject.c' file in 'Objects' directory.

Thanks, I found it!  So after some looking around here was my
implementation:

class myproperty(property):
    def setter(self, func):
        self.fset = func

But that doesn't work since fset is a read only attribute (and all of
this is implemented in C).

So I've settled with the (nearly) original proposal from Guido on
python-dev:

def propset(prop):
    assert isinstance(prop, property)
    @functools.wraps
    def helper(func):
        return property(prop.fget, func, prop.fdel, prop.__doc__)
    return helper

The downside of this is that upgrading from 2.5 to 2.6 will require
code changes, I was trying to minimise those to just removing an
import statement.

Regards
Floris
--
http://mail.python.org/mailman/listinfo/python-list
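As posted, the bare @functools.wraps line is suspect: wraps() takes the function whose metadata should be copied as an argument, so applying it bare makes the decorator return the setter function instead of a property. A corrected sketch of the same propset idea (runs on modern Python, where prop.setter of course exists natively; names mirror the post):

```python
def propset(prop):
    """Backport-style replacement for prop.setter on Python < 2.6."""
    assert isinstance(prop, property)
    def helper(func):
        # Rebuild the property with the new setter slotted in.
        return property(prop.fget, func, prop.fdel, prop.__doc__)
    return helper

class Foo(object):
    @property
    def x(self):
        return self._x

    @propset(x)
    def x(self, value):
        self._x = value + 1

f = Foo()
f.x = 4
print(f.x)  # 5
```

Note that rebinding the same name x avoids the problem discussed later in the thread, where giving the setter a different name overrides the property.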
Re: @x.setter property implementation
On Apr 6, 6:41 pm, "Daniel Fetchinson" <[EMAIL PROTECTED]> wrote: > > I found out about the new methods on properties, .setter() > > and .deleter(), in python 2.6. Obviously that's a very tempting > > syntax and I don't want to wait for 2.6... > > > It would seem this can be implemented entirely in python code, and I > > have seen hints in this directrion. So before I go and try to invent > > this myself does anyone know if there is an "official" implementation > > of this somewhere that we can steal until we move to 2.6? > > The 2.6 source? Have been grepping all over the place and failed to find it. I found the test module for them, but that doesn't get me very far... -- http://mail.python.org/mailman/listinfo/python-list
@x.setter property implementation
Hello I found out about the new methods on properties, .setter() and .deleter(), in python 2.6. Obviously that's a very tempting syntax and I don't want to wait for 2.6... It would seem this can be implemented entirely in python code, and I have seen hints in this direction. So before I go and try to invent this myself does anyone know if there is an "official" implementation of this somewhere that we can steal until we move to 2.6? Cheers Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: Any fancy grep utility replacements out there?
On Mar 19, 2:44 am, Peter Wang <[EMAIL PROTECTED]> wrote: > On Mar 18, 5:16 pm, Robert Kern <[EMAIL PROTECTED]> wrote: > > > > > [EMAIL PROTECTED] wrote: > > > So I need to recursively grep a bunch of gzipped files. This can't be > > > easily done with grep, rgrep or zgrep. (I'm sure given the right > > > pipeline including using the find command it could be done but > > > seems like a hassle). > > > > So I figured I'd find a fancy next generation grep tool. Thirty > > > minutes of searching later I find a bunch in Perl, and even one in > > > Ruby. But I can't find anything that interesting or up to date for > > > Python. Does anyone know of something? > > > I have a grep-like utility I call "grin". I wrote it mostly to recursively > > grep > > SVN source trees while ignoring the garbage under the .svn/ directories and > > more > > or less do exactly what I need most frequently without configuration. It > > could > > easily be extended to open gzip files with GzipFile. > > >https://svn.enthought.com/svn/sandbox/grin/trunk/ > > > Let me know if you have any requests. > > And don't forget: Colorized output! :) I tried to find something similar a while ago and found ack[1]. I do realise it's written in perl but it does the job nicely. Never needed to search in zipfiles though, just unzipping them in /tmp would always work... I'll check out grin this afternoon! Floris [1] http://petdance.com/ack/ -- http://mail.python.org/mailman/listinfo/python-list
Ignoring windows registry PythonPath subkeys
Hi We basically want the same as the OP in [1], i.e. when python starts up we don't want to load *any* sys.path entries from the registry, including subkeys of the PythonPath key. The result of that thread seems to be to edit PC/getpathp.c[2] and recompile. This isn't that much of a problem since we're compiling python anyway, but is that really still the only way? Surely this isn't such an outlandish requirement? Regards Floris [1] http://groups.google.com/group/comp.lang.python/browse_frm/thread/4df87ffb23ac0c78/1b47f905eb3f990a?lnk=gst&q=sys.path+registry#1b47f905eb3f990a [2] By looking at getpathp.c it seems just commenting out the two calls to getpythonregpath(), for machinepath and userpath should work in most cases. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to send a var to stdin of an external software
On Mar 14, 11:37 am, Benjamin Watine <[EMAIL PROTECTED]> wrote: > Bryan Olson a écrit : > > > I wrote: > >> [...] Pipe loops are tricky business. > > >> Popular solutions are to make either the input or output stream > >> a disk file, or to create another thread (or process) to be an > >> active reader or writer. > > > Or asynchronous I/O. On Unix-like systems, you can select() on > > the underlying file descriptors. (MS-Windows async mechanisms are > > not as well exposed by the Python standard library.) > > Hi Bryan > > Thank you so much for your advice. You're right, I just made a test with > a 10 MB input stream, and it hangs exactly like you said (on > cat.stdin.write(myStdin))... > > I don't want to use disk files. In reality, this script was previously > done in bash using disk files, but I had problems with that solution > (the files wasn't always cleared, and sometimes, I've found a part of > previous input at the end of the next input.) > > That's why I want to use python, just to not use disk files. > > Could you give me more information / examples about the two solutions > you've proposed (thread or asynchronous I/O) ? The source code of the subprocess module shows how to do it with select IIRC. Look at the implementation of the communicate() method. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
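For reference, the high-level way to avoid the pipe deadlock being discussed is exactly what communicate() implements on top of select() (or threads on Windows): write stdin and drain stdout concurrently instead of one after the other. A sketch on a Unix-like system, assuming a cat executable on PATH to stand in for the child process:

```python
import subprocess

# Enough data to overflow the OS pipe buffers, which is exactly the
# situation where a naive write-everything-then-read deadlocks.
data = b"x" * (10 * 1024 * 1024)  # 10 MB

proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
# communicate() interleaves feeding stdin and draining stdout for us.
out, _ = proc.communicate(data)
print(len(out) == len(data))  # True
```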
Re: Altering imported modules
On Mar 1, 11:56 pm, Tro <[EMAIL PROTECTED]> wrote: > I'd like to know if it's possible to make tlslite load *my* asyncore module > without changing any of the tlslite code. the pkgutil module might be helpful, not sure though as I've never used it myself. http://blog.doughellmann.com/2008/02/pymotw-pkgutil.html is a fairly detailed look at what pkgutil can do. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: MSI read support in msilib?
On Jan 16, 7:03 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > The introduction from the msilib documentation in python 2.5 claims it
> > supports reading an msi.  However on the Record class there is only a
> > GetFieldCount() method and some Set*() methods.  I was expecting to
> > see GetString() and GetInteger() methods to be able to read the
> > values.
>
> > Maybe I'm missing something?
>
> I think you are right - there is indeed stuff missing; few people have
> noticed so far.

Ok, below is the patch I made and tested.  It is still missing a
wrapper for MsiRecordReadStream() but I didn't need that right now
;-).  It is a patch against Python 2.5.1 (ignore revision numbers as
they're not from svn.python.org).

Let me know if this is any good, comments etc.  Would it be good to
include in python?  Maybe I should create the diff against the trunk
and file a patch report?

Regards
Floris

Index: _msi.c
===================================================================
--- _msi.c	(revision 2547)
+++ _msi.c	(working copy)
@@ -339,6 +339,53 @@
 }
 
 static PyObject*
+record_getinteger(msiobj* record, PyObject* args)
+{
+    unsigned int field;
+    int status;
+
+    if (!PyArg_ParseTuple(args, "I:GetInteger", &field))
+        return NULL;
+
+    status = MsiRecordGetInteger(record->h, field);
+    if (status == MSI_NULL_INTEGER){
+        PyErr_SetString(MSIError, "could not convert record field to integer");
+        return NULL;
+    }
+    return PyInt_FromLong((long) status);
+}
+
+static PyObject*
+record_getstring(msiobj* record, PyObject* args)
+{
+    unsigned int field;
+    unsigned int status;
+    char buf[2000];
+    char *res = buf;
+    int malloc_flag = 0;
+    DWORD size = sizeof(buf);
+    PyObject* string;
+
+    if (!PyArg_ParseTuple(args, "I:GetString", &field))
+        return NULL;
+
+    status = MsiRecordGetString(record->h, field, res, &size);
+    if (status == ERROR_MORE_DATA) {
+        res = (char*) malloc(size + 1);
+        if (res == NULL)
+            return PyErr_NoMemory();
+        status = MsiRecordGetString(record->h, field, res, &size);
+    }
+
+    if (status != ERROR_SUCCESS)
+        return msierror((int) status);
+    string = PyString_FromString(res);
+    if (buf != res)
+        free(res);
+    return string;
+}
+
+static PyObject*
 record_cleardata(msiobj* record, PyObject *args)
 {
     int status = MsiRecordClearData(record->h);
@@ -405,6 +452,10 @@
 static PyMethodDef record_methods[] = {
     { "GetFieldCount", (PyCFunction)record_getfieldcount, METH_NOARGS,
       PyDoc_STR("GetFieldCount() -> int\nWraps MsiRecordGetFieldCount")},
+    { "GetInteger", (PyCFunction)record_getinteger, METH_VARARGS,
+      PyDoc_STR("GetInteger(field) -> int\nWraps MsiRecordGetInteger")},
+    { "GetString", (PyCFunction)record_getstring, METH_VARARGS,
+      PyDoc_STR("GetString(field) -> string\nWraps MsiRecordGetString")},
     { "SetString", (PyCFunction)record_setstring, METH_VARARGS,
       PyDoc_STR("SetString(field,str) -> None\nWraps MsiRecordSetString")},
     { "SetStream", (PyCFunction)record_setstream, METH_VARARGS,
       PyDoc_STR("SetStream(field,stream) -> None\nWraps MsiRecordSetStream")},
--
http://mail.python.org/mailman/listinfo/python-list
MSI read support in msilib?
Hi The introduction from the msilib documentation in python 2.5 claims it supports reading an msi. However on the Record class there is only a GetFieldCount() method and some Set*() methods. I was expecting to see GetString() and GetInteger() methods to be able to read the values. Maybe I'm missing something? Is there another way to read the data of a record? Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Re: Creating the windows MSI of python
On Nov 28, 5:26 pm, Christian Heimes <[EMAIL PROTECTED]> wrote: > Floris Bruynooghe wrote: > > It would be great if someone knows how Python builds it's MSI. > > The Tools/ directory contains a script in Tools/msi/msi.py. Martin von > Löwis is using the script to generate the official MSI bundles. You need > to run it from a development shell. Good luck! Thanks! Floris -- http://mail.python.org/mailman/listinfo/python-list
Creating the windows MSI of python
Hello I've managed to build python2.4 and python2.5 in windows with MSVC++ 7.1 fine following the instructions in the PCbuild directory. However now I am wondering how to create the MSI from this[1], but can't find any instructions. All I'm looking for is the equivalent of "make install" (or "make install DESTDIR=/alternative/root"). I could just look at what files and registry settings etc the installer creates and collect all the files manually from the build directory, and then pour them into an MSI. But that seems a bit dangerous for missing something and it will become a maintenance headache on upgrades too. Isn't there some script shipped with python that allows you to build a completely compatible distribution? It would be great if someone knows how Python builds its MSI. Thanks Floris [1] Ok, not really. What I really want is a merge module or .msm, but never mind that part. -- http://mail.python.org/mailman/listinfo/python-list
Re: python in academics?
On Oct 30, 3:39 am, sandipm <[EMAIL PROTECTED]> wrote: > seeing posts from students on group. I am curious to know, Do they > teach python in academic courses in universities? In Southampton Uni (UK) they do teach (some) Python to Engineering undergrads (aero, mech, ship, maybe more) thanks to one lecturer pushing it afaik. Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
Module level descriptors or properties
Hi When in a new-style class you can easily transform attributes into descriptors using the property() builtin. However there seems to be no way to achieve something similar on the module level, i.e. if there's a "version" attribute on the module, the only way to change that to some computation later is by using a getter from the start as your public API. This seems ugly to me. Does anyone know of a better way to handle this? Regards Floris -- http://mail.python.org/mailman/listinfo/python-list
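One workaround that exists is to replace the module's entry in sys.modules with an instance of a ModuleType subclass, which can then carry ordinary properties; the mymod name and version property below are invented for illustration:

```python
import sys
import types

class VersionedModule(types.ModuleType):
    @property
    def version(self):
        # Computed on every access instead of stored as a plain attribute.
        return ".".join(str(p) for p in (1, 0, 3))

# Register the instance; normally a module would do this to itself
# via sys.modules[__name__] at import time.
sys.modules["mymod"] = VersionedModule("mymod")

import mymod
print(mymod.version)  # 1.0.3
```

Since import consults sys.modules first, the subsequent import binds the substituted instance, and attribute access goes through the property like on any other new-style instance.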
native hotshot stats module
As part of my Google Summer of Code project I have developed a hstats module. It reads a file with profile data saved by hotshot and displays statistics of it after sorting. The current interface is very basic in the philosophy of You Arent Gonna Need It[1]. So my question here is: please test it and comment on missing features and other problems you'd like to see addressed. You can just grab the hprof/hstats.py file from CVS[2] and use it right away. It only depends on the standard library, so just make sure you can import it. It is (hopefully) sufficiently documented in the docstrings. From my experience with pystones it is 35% faster in loading the hotshot data than the hotshot.stats.load() method. So there is actually some motivation to use this module. I'm eager to hear your comments. Floris [1] http://c2.com/cgi/wiki?YouArentGonnaNeedIt [2] http://savannah.nongnu.org/cvs/?group=pyprof -- Debian GNU/Linux -- The power of freedom www.debain.org | www.gnu.org | www.kernel.org -- http://mail.python.org/mailman/listinfo/python-list