ANN: Louie-1.0b2 - Signal dispatching mechanism
Louie 1.0b2 is available: http://cheeseshop.python.org/pypi/Louie

Louie provides Python programmers with a straightforward way to dispatch signals between objects in a wide variety of contexts. It is based on PyDispatcher_, which in turn was based on a highly-rated recipe_ in the Python Cookbook.

.. _PyDispatcher: http://pydispatcher.sf.net/
.. _recipe: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/87056

For more information, visit the Louie project site: http://louie.berlios.de/

Patrick K. O'Brien and contributors [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html
Python bug day on Sunday Dec. 4
Let's have a Python bug day this Sunday. One goal might be to assess bugs and patches, and make a list of ones we can work on at the Python core sprint at PyCon: http://wiki.python.org/moin/PyCon2006/Sprints/PythonCore

Meeting on IRC: #python-dev on irc.freenode.net
Date: Sunday, December 4th
Time: roughly 9AM to 3PM Eastern (2PM to 8PM UTC). People on the US West Coast may want to show up from 9AM to 3PM Pacific time (12PM to 6PM Eastern), because it'll be more convenient.

--amk -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html
Re: General question about Python design goals
Mike Meyer wrote: I get all that, I really do. I would phrase it as a tuple is a set of attributes that happen to be named by integers. count doesn't make sense on the attributes of an object - so it doesn't make sense on a tuple. index doesn't make sense on the attributes of an object - so it doesn't make sense on a tuple. A loop over the attributes of an object doesn't make sense - so it doesn't make sense on a tuple. So why the $*@ (please excuse my Perl) does for x in 1, 2, 3 work? Seriously. Why doesn't this have to be phrased as for x in list((1, 2, 3)), just like you have to write list((1, 2, 3)).count(1), etc.? I don't know what you use tuples for, but I do need to loop over the attributes of an object (be it tuple or dict or even set), though I have had no need for count() or index(), not even on lists, so far. I think the reason people may ask for .count() and .index() on tuples is that if I am writing a generic function that receives such a thing, I don't need to do a type check or try/except block, should the caller try to be smart and use a tuple instead of a list (say, for optimization reasons). -- http://mail.python.org/mailman/listinfo/python-list
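The workaround the poster alludes to can be sketched as follows (illustrative only):

```python
# Illustrative sketch: a generic function that accepts a list or a tuple.
# Converting to list sidesteps the (then-missing) tuple count()/index()
# methods without any isinstance checks or try/except blocks.
def count_occurrences(seq, value):
    return list(seq).count(value)

assert count_occurrences([1, 2, 1], 1) == 2
assert count_occurrences((1, 2, 1), 1) == 2   # works on tuples too
```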
Re: One module cannot be found by the interpreter
Anthony Liu wrote: I downloaded and built the python/c++ maxent package ( http://homepages.inf.ed.ac.uk/s0450736/maxent_toolkit.html ). I don't know what happened; the interpreter cannot find the cmaxent module, whereas cmaxent.py is right under the current directory. from maxent import * cmaxent module not found, fall back to python implementation. Could you please kindly advise? that's a message from the maxent package, and a two-second look at the sources indicates that this message means that the cmaxent module cannot be imported. that's not necessarily the same thing as not being found. what happens if you import cmaxent by hand? have you built the C++ extensions? what does the maxent test script say about your installation? /F -- http://mail.python.org/mailman/listinfo/python-list
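The fallback message quoted above comes from a very common pattern: try the compiled module first, then fall back to a pure-Python implementation. A sketch, using the stdlib cPickle/pickle pair as a stand-in for cmaxent and its Python counterpart:

```python
# Sketch of the try-the-C-module-first pattern the maxent message suggests;
# cPickle/pickle stand in here for cmaxent and the pure-Python fallback.
try:
    import cPickle as pickle_impl   # C implementation (Python 2)
except ImportError:
    import pickle as pickle_impl    # pure-Python / Python 3 fallback

# Either implementation offers the same interface.
data = pickle_impl.loads(pickle_impl.dumps([1, 2, 3]))
assert data == [1, 2, 3]
```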
Re: Python CGI
jbrewer [EMAIL PROTECTED] wrote: I need to update a CGI script I have been working on to perform validation of input files. The basic idea is this: 1.) HTML page is served 2.) User posts file and some other info 3.) Check file for necessary data * If data is missing, then post a 2nd page requesting needed data * If data is present, continue to 4 4.) Process input file and display results (processing path depends on whether data from 3 was present in file or had to be input by user) I'm unsure of the best way to implement step 3. Is it even possible to put all of this into one script? Or should I have one script check the file for the data, then pass the info on to a 2nd script? The cgi.FieldStorage() function should only be called once, so it seems like it's not possible to read a form and then repost another one and read it. If this is true and I need 2 scripts, how do I pass on the information in step 3 automatically if the file has the proper data and no user input is needed? Each web transaction stands alone. User sends request, you send reply, your CGI process ends. Every new request is another run. So, you can either use one script per operation, or you can use one script and detect which step it is on using, for example, a hidden field. If you need common file processing in both scripts, I'm sure you already know you can put that code in a separate Python file and import it into the two CGI scripts. -- - Tim Roberts, [EMAIL PROTECTED] Providenza Boekelheide, Inc. -- http://mail.python.org/mailman/listinfo/python-list
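Tim's one-script-with-a-hidden-field suggestion can be sketched roughly like this (the 'step' field name and the return values are invented here for illustration; a real script would read the fields from cgi.FieldStorage()):

```python
# Hypothetical dispatcher for the one-script approach: a hidden 'step'
# field (name invented for this sketch) tells the script which stage of
# the upload/validate/process workflow a request belongs to.
def handle_request(form):
    step = form.get('step', 'upload')
    if step == 'upload':
        # first visit: serve the upload form
        return 'upload_page'
    elif step == 'fixup':
        # file was missing data; the user has now supplied it
        return 'process_with_user_data'
    # file had all the needed data on first submission
    return 'process_file'

assert handle_request({}) == 'upload_page'
assert handle_request({'step': 'fixup'}) == 'process_with_user_data'
```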
Re: General question about Python design goals
Mike Meyer wrote: So why the $*@ (please excuse my Perl) does for x in 1, 2, 3 work? because the syntax says so: http://docs.python.org/ref/for.html Seriously. Why doesn't this have to be phrased as for x in list((1, 2, 3)), just like you have to write list((1, 2, 3)).count(1), etc.? because anything that supports [] can be iterated over. /F -- http://mail.python.org/mailman/listinfo/python-list
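The iteration rule Fredrik cites is the old sequence protocol: a for-loop falls back to calling __getitem__ with 0, 1, 2, ... until IndexError is raised. A minimal demonstration:

```python
# Minimal demonstration of the old iteration protocol: any object that
# supports [] with integer indices can be iterated, because for-loops
# call __getitem__ with 0, 1, 2, ... until IndexError.
class Squares:
    def __getitem__(self, index):
        if index >= 4:
            raise IndexError(index)
        return index * index

assert list(Squares()) == [0, 1, 4, 9]
```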
Re: python speed
Krystian wrote: I would also like to see Half Life 2 in pure Python. or even quake1, do you think it could have any chances to run smoothly? If http://www.abrahamjoffe.com.au/ben/canvascape/ can run at a reasonable speed, yes. -- http://mail.python.org/mailman/listinfo/python-list
Embedding Python in Borland Builder C++ (6.0)
Hi, there are almost no pages on how to embed Python in Borland Builder C++ (6.0). Any hints in this matter are very welcome... Thanks, Marco -- http://mail.python.org/mailman/listinfo/python-list
RE: How to get a network resource on a windows server
[Frank.Lin] I am working in a windows environment and I want to copy some files from a remote share folder. Now I want to let my script establish a connection between a NULL device name and a shared resource, then return a list of files matching the given pattern and copy them to the local host. It's not entirely clear exactly what you're trying to do. I'll present some (hopefully) useful tips based on a bit of guesswork. I'm assuming you're familiar with the various ways of copying things around (copy, xcopy, robocopy, Python's shutil module) but feel free to come back and ask if that's where the stumbling block is. WARNING: Code snippets are untested.

Guess 1) You're trying to copy, e.g., \\server\share\*.txt to somewhere local, e.g. c:\temp. You could simply let the OS do the work for you:

import os
os.system (r"copy \\server\share\*.txt c:\temp")

Guess 2) You're trying to do something a bit more fancy which requires you to make a local connection to the remote share. NB You *don't* need to do this just to copy files, but let's imagine you have some other purpose in mind. You can use the win32wnet module from the pywin32 extensions to map the drive, and then use os or any other technique to copy the files.

import win32wnet
import win32netcon
win32wnet.WNetAddConnection2 (
  win32netcon.RESOURCETYPE_DISK,
  "Z:",
  r"\\server\share",
  None, None, None, 0
)
# then os.system (r"copy z:\*.txt c:\temp") or whatever

Guess 3) You need to create an authenticated connection using the NULL device. (Re-reading, this is the most likely interpretation of your question). Use the win32wnet module again, this time passing in no device, but adding a username/password:

import win32wnet
import win32netcon
win32wnet.WNetAddConnection2 (
  win32netcon.RESOURCETYPE_DISK,
  None,
  r"\\server\share",
  None, "username", "password", 0
)
# then os.system (r"copy \\server\share\*.txt c:\temp") or whatever
-- http://mail.python.org/mailman/listinfo/python-list
Re: HTML parsing/scraping python
The standard library module for fetching HTML is urllib2. The best module for scraping the HTML is BeautifulSoup. There is a project called mechanize, built by John Lee on top of urllib2 and other standard modules. It will emulate a browser's behaviour - including history, cookies, basic authentication, etc. There are several modules for automated form filling - FormEncode being one. All the best, Fuzzyman http://www.voidspace.org.uk/python/index.shtml -- http://mail.python.org/mailman/listinfo/python-list
Re: Python CGI
That doesn't sound too difficult to do in a single script. As Tim has mentioned, you can always separate the code in two modules and just import the one needed for the action being performed. Your script just needs to output different HTML depending on the result - with a different form depending on what information is required. All the best, Fuzzyman http://www.voidspace.org.uk/python/index.shtml -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
On 29/11/05, Christoph Zwerschke [EMAIL PROTECTED] wrote: Fredrik Lundh wrote: on the other hand, it's also possible that there are perfectly usable ways to keep bikes and bike seats dry where Christoph works, but that he prefers not to use them because they're violating some design rule. Depends on how you understand perfectly usable. My colleague always carries his expensive racing bike to our office in the 3rd floor out of fear it may get wet or stolen. But I think this is not very convenient and I want to avoid discussions with our boss about skid marks on the carpet and things like that. Probably that colleague would not complain as well if he had to cast tuples to lists for counting items - you see, people are different ;-) If you're leaving skid marks on the floor, maybe you need better underwear? http://www.google.co.uk/search?q=skid+marks+slang Sorry to lower the tone. Ed -- http://mail.python.org/mailman/listinfo/python-list
Re: Death to tuples!
On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote: Antoon Pardon wrote: The left one is equivalent to: __anon = [] def Foo(l): ... Foo(__anon) Foo(__anon) So, why shouldn't: res = [] for i in range(10): res.append(i*i) be equivalent to: __anon = list() ... res = __anon for i in range(10): res.append(i*i) Because the empty list expression '[]' is evaluated when the expression containing it is executed. This doesn't follow. It is not because this is how it is now, that that is the way it should be. I think one could argue that since '[]' is normally evaluated when the expression containing it is executed, it should also be executed when a function is called, where '[]' is contained in the expression determining the default value. The left has one list created outside the body of the function, the right one has two lists created outside the body of the function. Why on earth should these be the same? Why on earth should it be the same list, when a function is called and is provided with a list as a default argument? Because the empty list expression '[]' is evaluated when the expression containing it is executed. Again you are just stating the specific choice Python has made, not why they made this choice. I see no reason why your and my question should be answered differently. We are agreed on that, the answers should be the same, and indeed they are. In each case the list is created when the expression (an assignment or a function definition) is executed. The behaviour, as it currently is, is entirely self-consistent. I think perhaps you are confusing the execution of the function body with the execution of the function definition. They are quite distinct: the function definition evaluates any default arguments and creates a new function object binding the code with the default arguments and any scoped variables the function may have. I know what happens, I would like to know why they made this choice.
One could argue that the expression for the default argument belongs to the code for the function and thus should be executed at call time, not at definition time, just as other expressions in the function are not evaluated at definition time. So given that this kind of expression is evaluated at definition time, I don't see what would be so problematic if other expressions were evaluated at definition time too. If the system tried to delay the evaluation until the function was called, you would get surprising results as variables referenced in the default argument expressions could have changed their values. This would be no more surprising than a variable referenced in a normal expression having changed values between two evaluations. -- Antoon Pardon -- http://mail.python.org/mailman/listinfo/python-list
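The behaviour the two posters are debating is easy to observe directly; a short sketch:

```python
# The behaviour under discussion: a default list is created once, when the
# 'def' statement executes, and is then shared by every call that omits
# the argument.
def append_to(item, acc=[]):
    acc.append(item)
    return acc

assert append_to(1) == [1]
assert append_to(2) == [1, 2]      # same list as the first call
assert append_to(3, []) == [3]     # an explicitly passed list is independent
```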
Re: General question about Python design goals
Donn Cave wrote: As I'm sure everyone still reading has already heard, the natural usage of a tuple is as a heterogeneous sequence. I would like to explain this using the concept of an application type, by which I mean the set of values that would be valid when applied to a particular context. For example, os.spawnv() takes as one of its arguments a list of command arguments, time.mktime() takes a tuple of time values. A homogeneous sequence is one where a and a[x:y] (where x:y is not 0:-1) have the same application type. A list of command arguments is clearly homogeneous in this sense - any sequence of strings is a valid input, so any slice of this sequence must also be valid. (Valid in the type sense; obviously the value and thus the result must change.) A tuple of time values, though, must have exactly 9 elements, so it's heterogeneous in this sense, even though all the values are integers. I understand what you want to say, but I would not use the terms homogeneous or heterogeneous, since their more obvious meaning is that all elements of a collection have the same type. What you are calling an application type is a range of values, and the characteristic you are describing is that the range of values is not left when you slice (or extend) an object. So what you are describing is simply a slicable/extendable application type. It is obvious that you would use lists for this purpose, and not tuples; I completely agree with you here. But this is just a consequence of the immutability of tuples, which is their more fundamental characteristic. Let me give an example: Take all nxn matrices as your application type. That application type is clearly not slicable/extendable, because slicing would change the dimension, thus it is heterogeneous in your definition. So would you use tuples (of tuples) or lists (of lists) here? Usually you will use lists, because you want to be able to operate on the matrices and transform them in place.
So you see, the more fundamental characteristic and reason for preferring lists over tuples is mutability. Let us assume you want to calculate the mathematical rank of such a matrix. You would bring it into upper echelon shape (here, you are operating on the rows, thus you would use lists) and then you would count the all-zero rows. Ok, this is not an example for using count() on tuples, but it is an example for using count() on a heterogeneous collection in your definition. I completely agree that you will need count() and index() much less frequently on tuples because of their immutability. This is obvious. (Tuples themselves are already used less frequently than lists for this reason.) But I still cannot see why you would *never* use them or why it would be bad style. And I don't understand why those who smile at my insistence on design principles of consistency - propagating practicality instead - are insisting themselves on some very philosophical and non-obvious design principles or characteristics of tuples. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
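Christoph's rank example can be sketched with count() on a list of rows (the matrix values here are invented for illustration):

```python
# Sketch of the rank argument: after reduction to upper echelon form, the
# rank is the number of rows that are not all zero -- a use of count() on
# a collection that is "heterogeneous" in Donn's application-type sense.
echelon = [[1, 2, 3],
           [0, 1, 4],
           [0, 0, 0]]
rank = len(echelon) - echelon.count([0, 0, 0])
assert rank == 2
```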
Re: Death to tuples!
On 2005-11-30, Christophe [EMAIL PROTECTED] wrote: Antoon Pardon wrote: On 2005-11-30, Duncan Booth [EMAIL PROTECTED] wrote: Antoon Pardon wrote: But let's just consider. Your above code could simply be rewritten as follows. res = list() for i in range(10): res.append(i*i) I don't understand your point here? You want list() to create a new list and [] to return the same (initially empty) list throughout the run of the program? No, but I think that each occurrence returning the same (initially empty) list throughout the run of the program would be consistent with how default arguments are treated. What about this: def f(a): res = [a] return res How can you return the same list that way? Do you propose to make such a construct illegal? I don't propose anything. This is AFAIC just a philosophical exploration about the pros and cons of certain Python decisions. To answer your question: the [a] is not a constant list, so maybe it should be illegal. The way Python works now, each list is implicitly constructed. So maybe it would have been better if Python required such a construction to be made explicit. If people had been required to write: a = list() b = list() instead of being able to write a = [] b = [] it would have been clearer that a and b are not the same list. -- Antoon Pardon -- http://mail.python.org/mailman/listinfo/python-list
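The rule as it actually stands can be checked directly: every evaluation of [] (or of list()) yields a fresh, distinct object:

```python
# What the current rules give us: every evaluation of the display []
# (or the call list()) produces a new list object.
a = []
b = []
assert a is not b          # two displays, two lists

def fresh():
    return []              # evaluated anew on every call

assert fresh() is not fresh()
assert fresh() == []       # equal in value, distinct in identity
```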
Re: mmm-mode, python-mode and doctest-mode?
John J Lee wrote: Is it possible to get doctest-mode to work with mmm-mode and python-mode nicely so that docstrings containing doctests are editable in doctest-mode? I don't know. (snip) Any tips appreciated! Seems like comp.emacs could be a good place for this question -- bruno desthuilliers python -c print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for p in '[EMAIL PROTECTED]'.split('@')]) -- http://mail.python.org/mailman/listinfo/python-list
Re: an intriguing wifi http server mystery...please help
[EMAIL PROTECTED] wrote: Hi again Istvan, Good suggestion. I have tried another server and it works flawlessly, regardless of the computers being wireless or wired. Excellent. However, i am still intrigued as to why the server is fast when both computers are wireless and the desktop is the server (while the laptop is the client), but slow when both computers are wireless and the desktop is the client (while the laptop is the server). I guess i am just curious as to what possible thing (most likely in software, as we have discovered) could cause this asymmetry. Thanks for any ideas, jojoba Just a guess: could it be that your server is doing reverse-DNS lookups? (i.e. it does socket.gethostbyaddr to get names by IP addresses, perhaps for logging or whatnot) This call is expensive. Sometimes this call takes ages to complete, if you have a broken DNS config. --Irmen -- http://mail.python.org/mailman/listinfo/python-list
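Irmen's guess suggests an easy workaround: make the reverse lookup optional. A hedged sketch (the helper name is invented here):

```python
import socket

# Sketch of the workaround for Irmen's guess: socket.gethostbyaddr can
# block for seconds on a broken DNS config, so a server that only needs
# the address for logging can skip the reverse-DNS lookup entirely.
def client_name(ip, resolve=False):
    if not resolve:
        return ip                        # fast path: log the raw address
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.error:
        return ip                        # lookup failed; fall back to the IP

assert client_name('127.0.0.1') == '127.0.0.1'
```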
Re: Why are there no ordered dictionaries?
Christoph Zwerschke wrote: Hello Christoph, I think re-ordering will be a very rare use case anyway and slicing even more. As a use case, I think of something like mixing different configuration files and default configuration parameters, while trying to keep a certain order of parameters and parameters blocks. In actual fact - being able to *reorder* the dictionary is the main way I use this dictionary. All the best, Fuzzyman http://www.voidspace.org.uk/python/index.shtml -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
FW: Python Query: How do i share a variable between two processes without IPC.
Hi everyone, Question: how do i share a variable between two processes without IPC. Context: i have a process which is running and i want to update one of the variables used by this process with ANOTHER process. I understand there are various IPC mechanisms like shared memory, pipes, signals, etc., but i would like to use something simple. A similar thing is available in Perl (check the mail attached). Would appreciate it if anyone of you can help me out on this. Thanks for your time. warm regards, Mukesh Rijhwani

---BeginMessage---

Chapter 16, Process Management and Communication. 16.12. Sharing Variables in Different Processes

Problem: You want to share variables across forks or between unrelated processes.

Solution: Use SysV IPC, if your operating system supports it.

Discussion: While SysV IPC (shared memory, semaphores, etc.) isn't as widely used as pipes, named pipes, and sockets for interprocess communication, it still has some interesting properties. Normally, however, you can't expect to use shared memory via shmget or the mmap(2) system call to share a variable among several processes. That's because Perl would reallocate your string when you weren't wanting it to. The CPAN module IPC::Shareable takes care of that. Using a clever tie module, SysV shared memory, and the Storable module from CPAN allows data structures of arbitrary complexity to be shared among cooperating processes on the same machine. These processes don't even have to be related to each other. Example 16.11 is a simple demonstration of the module.
Example 16.11: sharetest

#!/usr/bin/perl
# sharetest - test shared variables across forks
use IPC::Shareable;
$handle = tie $buffer, 'IPC::Shareable', undef, { destroy => 1 };
$SIG{INT} = sub { die "$$ dying\n" };
for (1 .. 10) {
    unless ($child = fork) {        # i'm the child
        die "cannot fork: $!" unless defined $child;
        squabble();
        exit;
    }
    push @kids, $child;             # in case we care about their pids
}
while (1) {
    print "Buffer is $buffer\n";
    sleep 1;
}
die "Not reached";

sub squabble {
    my $i = 0;
    while (1) {
        next if $buffer =~ /^$$\b/o;
        $handle->shlock();
        $i++;
        $buffer = "$$ $i";
        $handle->shunlock();
    }
}

The starting process creates the shared variable, forks off 10 children, and then sits back and prints out the value of the buffer every second or so, forever, or until you hit Ctrl-C. Because the SIGINT handler was set before any forking, it got inherited by the squabbling children as well, so they'll also bite the dust when the process group is interrupted. Keyboard interrupts send signals to the whole process group, not just one process. What do the kids squabble over? They're bickering over who gets to update that shared variable. Each one looks to see whether someone else was here or not. So long as the buffer starts with their own signature (their PID), they leave it alone. As soon as someone else has changed it, they lock the shared variable using a special method call on the handle returned from the tie, update it, and release the lock. The program runs much faster by commenting out the line that starts with next where each process is checking that they were the last one to touch the buffer. The /^$$\b/o may look suspicious, since /o tells Perl to compile the pattern once only, but the program then went and changed the variable's value by forking. Fortunately, the value isn't locked at program compile time, but only the first time the pattern is itself compiled in each process, during whose own lifetime $$ does not alter.
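As an aside for the Python side of the original question, a rough stdlib analogue of the shared-buffer idea uses mmap over a temporary file: unrelated processes that map the same file see each other's writes. This is a sketch only (no locking, and both "processes" are simulated within one script):

```python
import mmap
import os
import struct
import tempfile

# Rough stdlib analogue of the Perl IPC::Shareable example: a shared
# 8-byte integer in a memory-mapped file. Any process that maps the same
# file sees updates made by the others. No locking is shown here.
fd, path = tempfile.mkstemp()
os.write(fd, b'\x00' * 8)          # reserve room for one 64-bit integer
shared = mmap.mmap(fd, 8)

shared[0:8] = struct.pack('q', 42)               # the "writer" process
value = struct.unpack('q', shared[0:8])[0]       # the "reader" process
assert value == 42

shared.close()
os.close(fd)
os.remove(path)
```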
The IPC::Shareable module also supports sharing variables among unrelated processes on the same machine. See its documentation for details. See Also: The semctl, semget, semop, shmctl, shmget, shmread, and shmwrite functions in Chapter 3 of Programming Perl and in perlfunc(1); the documentation for the IPC::Shareable module from CPAN. ---End Message--- -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Christoph Zwerschke wrote: I understand what you want to say, but I would not use the terms homogenuous or heterogenous since their more obvious meaning is that all elements of a collection have the same type. http://www.google.com/search?q=duck+typing /F -- http://mail.python.org/mailman/listinfo/python-list
Re: python speed
Mike Meyer wrote: If you wire everything down, you can always hand-code assembler that will be faster than HLL code but that doesn't mean that your hand-coded assembler will always be faster than an HLL implementation that addresses the same problem: http://mail.python.org/pipermail/python-announce-list/2005-November/004519.html Pure Python division is 16x slower than GMP but can actually be faster in some instances; for example, dividing a 2,000,000-digit number by an 800,000-digit number. /F -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
On 2005-12-01, Donn Cave [EMAIL PROTECTED] wrote: Quoth [EMAIL PROTECTED]: | Christoph Zwerschke wrote: ... | Sorry, but I still do not get it. Why is it a feature if I cannot count | or find items in tuples? Why is it bad program style if I do this? So | far I haven't got any reasonable explanation and I think there is none. | | I have no idea, I can understand their view, not necessarily agree. And | reasonable explanation is not something I usually find on this group, | for issues like this. It's hard to tell from this how well you do understand it, and of course it's hard to believe another explanation is going to make any difference to those who are basically committed to the opposing point of view. But what the hell. Tuples and lists really are intended to serve two fundamentally different purposes. We might guess that just from the fact that both are included in Python, in fact we hear it from Guido van Rossum, and one might add that other languages also make this distinction (more clearly than Python.) As I'm sure everyone still reading has already heard, the natural usage of a tuple is as a heterogeneous sequence. I would like to explain this using the concept of an application type, by which I mean the set of values that would be valid when applied to a particular context. For example, os.spawnv() takes as one of its arguments a list of command arguments, time.mktime() takes a tuple of time values. A homogeneous sequence is one where a and a[x:y] (where x:y is not 0:-1) have the same application type. A list of command arguments is clearly homogeneous in this sense - any sequence of strings is a valid input, so any slice of this sequence must also be valid. (Valid in the type sense, obviously the value and thus the result must change.) A tuple of time values, though, must have exactly 9 elements, so it's heterogeneous in this sense, even though all the values are integers.
One doesn't count elements in this kind of a tuple, because it's presumed to have a natural predefined number of elements. One doesn't search for values in this kind of a tuple, because the occurrence of a value has meaning only in conjunction with its location, e.g., t[4] is how many minutes past the hour, but t[5] is how many seconds, etc. I don't agree with this. Something can be a heterogeneous sequence, but the order can be arbitrary, so that any order of the elements can work as long as it is well defined beforehand. Points on a 2D lattice are by convention notated as (x,y), but (y,x) works just as well. When working in such a lattice it is possible to be interested in those points that lie on one of the axes. Since the X-axis and the Y-axis play a symmetrical role, it is possible that it doesn't matter which axis the point is on. So counting how many of the coordinates are zero is a natural way to check if a point is on an axis. Doing a find is a natural way to check if a point is on an axis and at the same time find out which one. So that a sequence is heterogeneous in this sense doesn't imply that count, find and other methods of that kind don't make sense. -- Antoon Pardon -- http://mail.python.org/mailman/listinfo/python-list
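Antoon's axis test, sketched in code (the list() conversion works around the tuple methods that were missing at the time of this thread):

```python
# Antoon's lattice example: count() answers "does this point lie on an
# axis?", and index() would say which one. list() conversion stands in
# for the tuple count() method missing at the time of this thread.
def on_axis(point):
    return list(point).count(0) > 0

assert on_axis((0, 5))        # on the y-axis
assert on_axis((7, 0))        # on the x-axis
assert not on_axis((2, 3))    # on neither axis
```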
Re: General question about Python design goals
On 2005-12-01, Fredrik Lundh [EMAIL PROTECTED] wrote: Mike Meyer wrote: So why the $*@ (please excuse my Perl) does for x in 1, 2, 3 work? because the syntax says so: http://docs.python.org/ref/for.html Seriously. Why doesn't this have to be phrased as for x in list((1, 2, 3)), just like you have to write list((1, 2, 3)).count(1), etc.? because anything that supports [] can be iterated over. This just begs the question. If tuples are supposed to be such heterogeneous sequences, one could indeed question why they support []. And even if good arguments are given why tuples should support [], the fact that the intentions of tuples and lists are so different casts doubt on the argument that supporting [] is enough reason to support iteration. One could equally well argue that since iteration is at the heart of methods like index, find and count, supporting iteration is sufficient reason to support these methods. -- Antoon Pardon -- http://mail.python.org/mailman/listinfo/python-list
Re: Death to tuples!
Antoon Pardon wrote: I know what happens, I would like to know, why they made this choice. One could argue that the expression for the default argument belongs to the code for the function and thus should be executed at call time. Not at definion time. Just as other expressions in the function are not evaluated at definition time. Yes you could argue for that, but I think it would lead to a more complex and confusing language. The 'why' is probably at least partly historical. When default arguments were added to Python there were no bound variables, so the option of delaying the evaluation simply wasn't there. However, I'm sure that if default arguments were being added today, and there was a choice between using closures or evaluating the defaults at function definition time, the choice would still come down on the side of simplicity and clarity. (Actually, I think it isn't true that Python today could support evaluating default arguments inside the function without further changes to how it works: currently class variables aren't in scope inside methods so you would need to add support for that at the very least.) If you want the expressions to use closures then you can do that by putting expressions inside the function. If you changed default arguments to make them work in the same way, then you would have to play a lot more games with factory functions. Most of the tricks you can play of the x=x default argument variety are just tricks, but sometimes they can be very useful tricks. -- http://mail.python.org/mailman/listinfo/python-list
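The "x=x" trick Duncan mentions, shown side by side with the closure behaviour it works around:

```python
# The "x=x" default-argument trick: a default snapshots a value at
# definition time, where a plain closure sees the variable's final value.
closures = [lambda: i for i in range(3)]
snapshots = [lambda i=i: i for i in range(3)]

assert [f() for f in closures] == [2, 2, 2]    # all share the last i
assert [f() for f in snapshots] == [0, 1, 2]   # each captured its own i
```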
Re: mmm-mode, python-mode and doctest-mode?
bruno at modulix [EMAIL PROTECTED] writes: John J Lee wrote: Is it possible to get doctest-mode to work with mmm-mode and python-mode nicely so that docstrings containing doctests are editable in doctest-mode? I don't know. (snip) Any tips appreciated! Seems like comp.emacs could be a good place for this question I've only posted to gnu.emacs.help previously (the message you reply to was cross-posted there), since the name seemed to suggest friendliness to people like me who use emacs a lot but are clueless with elisp :-). But thinking again I guess comp.emacs is just the general emacs group, while gnu.emacs.help is the GNU-specific one? John -- http://mail.python.org/mailman/listinfo/python-list
Re: Python as Guido Intended
On 2005-11-30, Dave Hansen [EMAIL PROTECTED] wrote: On 30 Nov 2005 10:57:04 GMT in comp.lang.python, Antoon Pardon [EMAIL PROTECTED] wrote: On 2005-11-29, Mike Meyer [EMAIL PROTECTED] wrote: Antoon Pardon [EMAIL PROTECTED] writes: You see, you can make languages more powerful by *removing* things from them. You cast this in way too general terms. The logical conclusion from this statement is that the most powerful language is the empty language. The only way you reach that conclusion is if you read the statement as saying that removing things *always* makes a language more powerful. That's not what I said. I would say it is the common interpretation for such a sentence. I hope no one ever tells you that you'd be healthier if you ate less and exercised more. (Perhaps it's not true in your case, but it certainly is in mine.) But that is IMO not a good analogy. A better analogy would be: You can make persons healthier by making them eat less and exercise more. -- Antoon Pardon -- http://mail.python.org/mailman/listinfo/python-list
Re: Problem compiling M2Crypto
Dear friend Heikki Toivonen, thank you for replying to my quest... C:\Program Files\Plone 2\Python\lib\distutils\extension.py:128: UserWarning: Unknown Extension options: 'swig_opts' Hmm, I don't remember seeing that. But then again, I haven't compiled with the same setup as you either. I am in a quest to find where this comes from. But it is difficult for me as I am a newbie to Python and totally new to SWIG... anyway. Then I open the SWIG\_lib.i file and change all %name to %rename. You don't need to change those; the name vs. rename is just a warning. Apart from this I cannot find Python.h anywhere on my PC (WinXP). I think that's your real problem. You need the Python header(s) etc. to build M2Crypto. It seems your Python distribution does not contain them. -- Heikki Toivonen After changing all %name to %rename the message about Python.h stopped appearing. I also took the steps described in the INSTALL file of the m2crypto package: Preparation: Read Sebastien Sauvage's webpage: http://sebsauvage.net/python/mingw.html; but this is about using mingw32, isn't it? What I get now is: 1) using python setup.py build
C:\Program Files\Plone 2\Python\lib\distutils\extension.py:128: UserWarning: Unknown Extension options: 'swig_opts'
warnings.warn(msg)
running build
running build_py
creating build
creating build\lib.win32-2.3
creating build\lib.win32-2.3\M2Crypto
copying M2Crypto\ASN1.py -> build\lib.win32-2.3\M2Crypto
copying M2Crypto\AuthCookie.py -> build\lib.win32-2.3\M2Crypto
. . .
copying M2Crypto\_version.py -> build\lib.win32-2.3\M2Crypto
copying M2Crypto\__init__.py -> build\lib.win32-2.3\M2Crypto
creating build\lib.win32-2.3\M2Crypto\SSL
copying M2Crypto\SSL\cb.py -> build\lib.win32-2.3\M2Crypto\SSL
. . .
copying M2Crypto\SSL\__init__.py -> build\lib.win32-2.3\M2Crypto\SSL
creating build\lib.win32-2.3\M2Crypto\PGP
copying M2Crypto\PGP\constants.py -> build\lib.win32-2.3\M2Crypto\PGP
. . .
copying M2Crypto\PGP\__init__.py -> build\lib.win32-2.3\M2Crypto\PGP
running build_ext
building '__m2crypto' extension
C:\SWIG\swig.exe -python -ISWIG -Ic:\openssl\include -o SWIG/_m2crypto.c SWIG/_m2crypto.i
SWIG\_lib.i(527): Error: Syntax error in input(1).
error: command 'swig.exe' failed with exit status 1
2) using python setup.py build -cmingw32
C:\Program Files\Plone 2\Python\lib\distutils\extension.py:128: UserWarning: Unknown Extension options: 'swig_opts'
warnings.warn(msg)
running build
running build_py
creating build
creating build\lib.win32-2.3
creating build\lib.win32-2.3\M2Crypto
copying M2Crypto\ASN1.py -> build\lib.win32-2.3\M2Crypto
copying M2Crypto\AuthCookie.py -> build\lib.win32-2.3\M2Crypto
. . .
copying M2Crypto\PGP\RSA.py -> build\lib.win32-2.3\M2Crypto\PGP
copying M2Crypto\PGP\__init__.py -> build\lib.win32-2.3\M2Crypto\PGP
running build_ext
warning: Python's pyconfig.h doesn't seem to support your compiler. Reason: couldn't read 'C:\Program Files\Plone 2\Python\include\pyconfig.h': No such file or directory. Compiling may fail because of undefined preprocessor macros.
building '__m2crypto' extension
C:\SWIG\swig.exe -python -ISWIG -Ic:\openssl\include -o SWIG/_m2crypto.c SWIG/_m2crypto.i
SWIG\_lib.i(527): Error: Syntax error in input(1).
error: command 'swig.exe' failed with exit status 1
Any help would be nice. I would like to buy the 0.13, but the matter for me is to LEARN how to build those packages, read and write Python, use Plone etc. Thanks to all of you people who have given so much of your time for a FREE world... Thomas G. Apostolou -- http://mail.python.org/mailman/listinfo/python-list
Re: Problem compiling M2Crypto
Thomas G. Apostolou wrote: C:\Program Files\Plone 2\Python\lib\distutils\extension.py:128: UserWarning: Unknown Extension options: 'swig_opts' Hmm, I don't remember seeing that. But then again, I haven't compiled with the same setup as you either. I am in a quest to find where this comes from. google can provide a hint, at least: http://www.mail-archive.com/distutils-sig@python.org/msg00084.html /F -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
I think this all boils down to the following: * In their most frequent use case where tuples are used as lightweight data structures keeping together heterogeneous values (values with different types or meanings), index() and count() do not make much sense. I completely agree that this is the most frequent case. Still, there are cases where tuples are used to keep homogeneous values together (for instance, RGB values, points in space, rows of a matrix). In these cases it would be useful in principle to have index() and count() methods. But: * Very frequently you will use only 2- or 3-tuples, where direct queries may be faster than index() and count(). (That's probably why Antoon's RGB example was rejected as a use case, though it was in principle a good one.) * Very frequently you want to perform operations on these objects and change their elements, so you would use lists instead of tuples anyway. See my use case where you would determine whether a vector is zero by count()ing its zero entries, or the rank of a matrix by count()ing zero rows. * You will use index() and count() in situations where you are dealing with a small discrete range of values in your collection. Often you will use strings instead of tuples in these cases, if you don't need to sum() the items, for instance. So, indeed, very few use cases will remain if you filter through the above. But this does not mean that they do not exist. And special cases aren't special enough to break the rules. It should be easy to imagine use cases now. Take for example a chess game. You are storing the pieces in a 64-tuple, where every piece has an integer value corresponding to its value in the game (white positive, black negative). You can approximate the value of a position by building the sum(). You want to use the tuple as a key for a dictionary of stored board constellations (e.g. an opening dictionary), therefore you don't use a list. Now you want to find the field where the king is standing.
Very easy with the index() method. Or you want to find the number of pawns on the board. Here you could use the count() method. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
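Tuples did in fact eventually grow these methods (index() and count() were added to tuples in Python 2.6), so the chess use case above can be written down directly. The piece encoding below is hypothetical, chosen only for illustration:

```python
# Hypothetical piece encoding, chosen only for illustration:
# 0 = empty square, positive = white pieces, negative = black pieces.
EMPTY, W_PAWN, W_KING, B_PAWN, B_KING = 0, 1, 6, -1, -6

# A toy 64-square position as a tuple: immutable, hence hashable,
# hence usable as a dictionary key.
board = tuple([W_KING] + [W_PAWN] * 8 + [EMPTY] * 46 + [B_PAWN] * 8 + [B_KING])

print(len(board))           # 64
print(sum(board))           # 0 -- material is balanced in this toy position
print(board.index(W_KING))  # 0 -- the square where the white king stands
print(board.count(W_PAWN))  # 8 -- number of white pawns

# Hashability is what lets the position key an opening dictionary:
openings = {board: "toy starting position"}
print(openings[board])      # toy starting position
```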
Re: Python CGI
Here's a snippet of code I use in a CGI I use... I check to see if the params has any data; if it does, then return that data and some other data that comes from the params. If params is empty, then draw a different page that says give me some data.

if len(params):
    return params, inc_fields
else:
    generate_form()
    sys.exit(1)

def generate_form():
    html_stuff.print_headers('Error', 'FF')
    print "Need some information for which to search...\n\n<br><br>\n"
    html_stuff.form_open('search.cgi')
    html_stuff.submit_button('Try Again')
    print '<br><a href="">New User</a><br>'
    html_stuff.form_close()
    html_stuff.print_footers()

On 30 Nov 2005 20:52:01 -0800, jbrewer [EMAIL PROTECTED] wrote: I need to update a CGI script I have been working on to perform validation of input files. The basic idea is this: 1.) HTML page is served 2.) User posts file and some other info 3.) Check file for necessary data * If data is missing, then post a 2nd page requesting needed data * If data is present, continue to 4 4.) Process input file and display results (processing path depends on whether data from 3 was present in file or had to be input by user) I'm unsure of the best way to implement step 3. Is it even possible to put all of this into one script? Or should I have one script check the file for the data, then pass the info on to a 2nd script? The cgi.FieldStorage() function should only be called once, so it seems like it's not possible to read a form and then repost another one and read it. If this is true and I need 2 scripts, how do I pass on the information in step 3 automatically if the file has the proper data and no user input is needed? Sorry if this is a dumb question (or unclear), but this is my (ever-growing) first CGI script. Thanks. Jeremy -- http://mail.python.org/mailman/listinfo/python-list -- But we also know the dangers of a religion that severs its links with reason and becomes prey to fundamentalism -- Cardinal Paul Poupard. It morphs into the Republican party! -- BJ --
http://mail.python.org/mailman/listinfo/python-list
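The single-script flow Jeremy asks about can be sketched: build the form data once, then branch on what it contains. In the sketch below a plain dict stands in for the cgi.FieldStorage object (in a real CGI you would call cgi.FieldStorage() exactly once and pass the result around), and "needed_field" is a hypothetical field name:

```python
# Sketch of the single-script flow: build `params` once, then branch.
# A plain dict stands in for cgi.FieldStorage() here so the dispatch
# logic is easy to test; "needed_field" is a hypothetical field name.

def generate_form():
    # Step 1 / re-prompt page: ask the user for the missing data.
    return '<form action="search.cgi"><input type="submit" value="Try Again"></form>'

def has_needed_data(params):
    # Step 3: does the posted data carry what processing needs?
    return "needed_field" in params

def handle_request(params):
    if not params:
        return generate_form()       # nothing posted yet: serve the form
    if has_needed_data(params):
        return "results for %s" % params["needed_field"]   # step 4
    return generate_form()           # posted, but incomplete: re-prompt

print(handle_request({}))                          # the bare form
print(handle_request({"needed_field": "query"}))   # results for query
```

Because all branches live in one handler, no state needs to be passed between separate scripts; the incomplete-data page can simply re-post to the same URL.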
Re: ANN: Dao Language v.0.9.6-beta is release!
The best way to compare Dao with Python in detail would be to join the SF project PLEAC and provide solutions in Dao to the common programming problems listed in the Perl Cookbook (by Tom Christiansen and Nathan Torkington, published by O'Reilly). The solutions in Python are there; when solutions in Dao become available, one can quickly compare it with other languages using the PLEAC project. But unfortunately, I don't have enough time for it. So I will spend some time grabbing some examples from Python tutorials and showing how they can be done in Dao, but you will have to wait for some days :-) For the second question, I will list some. First I should admit I don't know Python well, so maybe there are convenient solutions for some things that I think are not convenient in Python. 1. Multi-threaded programming: In Dao, for any function or expression, one can evaluate them in an independent thread as: thread.create( myfunc() ); or thread.create( myexprs ); In Python, one probably has to subclass a thread class and reimplement a method (something like run(), if I remember correctly), and then call that method. In Dao, one can create and access thread-specific data through a hash/dictionary data structure thread.my[data_key], which is thread global. In Python, I don't know how to do it yet. 2. Numeric array: Dao has a built-in numeric array type; one can create a numeric array in the following ways: array1 = [ 1, 2, 3 ]; # {1,2,3} will create a normal array array2 = [ 0: 2 :4 ]; # create [0,2,4,6] array3 = [5] : 1; # [1,1,1,1,1] array4 = [3] : [1,2]; # [[1,2],[1,2],[1,2]] ... One can use the normal operators +,-,*,/,++,--, +=,-=, etc. to operate on a numeric array and a scalar number, or on two numeric arrays of the same size, or on two numeric arrays of different sizes by constraining the operations to specific elements by subindex. There are also other features that make operations on numeric arrays convenient. I am sure Python can do them, but I wonder if they are as convenient.
3. Transient variables and magic functions: Dao provides so-called transient variables (composed of @ and digits) and magic functions, which are offered as a kind of functional programming tool; in particular, transient variables provide explicit control during implicit parameter passing in such magic functions. I think this is not something available in Python. As examples: sort( array, @1@2 ); Or: sort( array, exprs( @1, @2 ) ); where @1 represents the first of the two elements for comparison during sorting, and @2 represents the second. It will sort the array so that any two neighboring elements satisfy (if possible) the second expression in sort(). iterate( array, exprs( @1 ) ); will iterate over the array and execute the expressions after the first parameter. Here @1 represents each element of the array. This function can be nested; in that case one may use @1, @2, @3 ... These two features are even more useful in numeric array operations; I will not show an example here. If you want to find out, please have a look at the documentation for Dao. 4. Extending using C++: To extend Dao, one must use C++ (at least as an intermediate interface). However, the C++ needed to extend Dao is very simple and transparent. And one only needs two header files to build a Dao plugin WITHOUT linking to the Dao library! I believe extending Dao using C++ is much simpler and more convenient than extending Python. That's enough. If I say something wrong about Python, please point it out. Limin -- http://mail.python.org/mailman/listinfo/python-list
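For the record, the Python side of point 1 is less awkward than the post suggests: subclassing is optional, since threading.Thread accepts any callable as target, and thread-specific data (Dao's thread.my[data_key]) corresponds to threading.local(). A minimal sketch in modern Python:

```python
import threading

# No subclassing needed: pass any callable as target=, much like
# Dao's thread.create(myfunc()).
results = []

def myfunc(n):
    results.append(n * n)

t = threading.Thread(target=myfunc, args=(7,))
t.start()
t.join()
print(results)        # [49]

# Thread-specific data (Dao's thread.my[data_key]) maps to threading.local():
local = threading.local()
out = []

def worker(value):
    local.data = value        # each thread sees its own `data` slot
    out.append(local.data)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(out))    # [0, 1, 2]
```

The sort-with-comparison point (Dao's sort(array, @1@2)) similarly maps to Python's key= argument to sort()/sorted().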
Re: FW: Python Query: How to i share variable between two processes without IPC.
Mukesh Question: how do I share a variable between two processes without Mukesh IPC. ... Mukesh A similar thing is available in Perl (check the mail attached). What makes you think the Perl code isn't using IPC? use IPC::Shareable; $handle = tie $buffer, 'IPC::Shareable', undef, { destroy => 1 }; Sure looks like IPC to me. From the README: IPC::Shareable allows you to tie a variable to shared memory, making it easy to share the contents of that variable with other Perl processes. Scalars, arrays, and hashes can be tied. The variable being tied may contain arbitrarily complex data structures - including references to arrays, hashes of hashes, etc. That the variable $buffer uses Perl's tie mechanism to hide most of the details doesn't make it not shared memory. You might check out Pyro: http://pyro.sourceforge.net/ It's not based on shared memory, but will also work across networks. Skip -- http://mail.python.org/mailman/listinfo/python-list
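If shared memory (rather than Pyro's network approach) is what is wanted, Python's mmap module is the usual substrate. The sketch below shows two mappings of one file observing each other's writes; across real processes the mechanics are the same (each process maps the same file). The file name and buffer size are arbitrary choices for illustration:

```python
import mmap
import os
import tempfile

# Two independent mmap views of one file see each other's writes.
# Across processes the mechanics are identical (each process maps the
# same file); this is the substrate that tools like Perl's
# IPC::Shareable build on.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * 64)          # reserve 64 shared bytes
os.close(fd)

with open(path, "r+b") as f1, open(path, "r+b") as f2:
    view_a = mmap.mmap(f1.fileno(), 64)
    view_b = mmap.mmap(f2.fileno(), 64)
    view_a[0:5] = b"hello"        # "process" A writes...
    shared = bytes(view_b[0:5])   # ...and B observes it
    view_a.close()
    view_b.close()

print(shared)                     # b'hello'
os.remove(path)
```

Note this shares raw bytes only; structured data would still need a serialization layer on top, which is exactly the part IPC::Shareable's tie() hides.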
Re: Python as Guido Intended
On 2005-11-30, Mike Meyer [EMAIL PROTECTED] wrote: Antoon Pardon [EMAIL PROTECTED] writes: On 2005-11-29, Mike Meyer [EMAIL PROTECTED] wrote: Antoon Pardon [EMAIL PROTECTED] writes: You see, you can make languages more powerful by *removing* things from them. You cast this in way too general terms. The logical conclusion from this statement is that the most powerful language is the empty language. The only way you reach that conclusion is if you read the statement as saying that removing things *always* makes a language more powerful. That's not what I said. I would say it is the common interpretation for such a sentence. You'd be wrong. Can denotes a possibility, not a certainty. You didn't write: Removing things can make a language more powerful. You wrote: You can make a language more powerful by removing things. In the first case, can describes a possible outcome of removing things. In the second case, can describes that this is a decision you are allowed and able to make, not the possibility of the outcomes should you make the decision. What your sentence states is that you can make this decision, and that if you do so, removing things will accomplish this goal. If I'd said You *will* make languages more powerful by removing features, then you'd be right. But that isn't what I said. No, this states that I don't have a choice in removing features or not. Anyway, that is how I read those sentences. I just want to add that I'm not interested in whether your interpretation is the right one or mine. I understand now what you meant and that is enough for me. The above should be considered as an explanation of how I understood it, not as a correction of how you should have written it. We don't talk much about how you produce buffer overflows in Python, but people have asked for that as well. Adding ways to write hard-to-read code is frowned upon. And so on. Do you mean people have asked for the possibility that a buffer overflow would overwrite other variables?
Buffer overflows don't have to overwrite variables. They just asked how you create buffer overflows in Python. I do wonder what they mean by a buffer overflow. Would the following qualify: buf = range(10) buf[10] = 10 Well, you'd have to ask them. Personally, I wouldn't count that, because no data outside the scope of buf got written to. I find this odd. You seem to argue that you don't want buffer overflows allowed in Python, but then you don't seem to know what is meant by it. If you don't know what they mean, how can you decide you don't want it? And I seem to remember people asking about the possibility of overflow in Python, but I never understood those inquiries in the sense that they would want it, but more in the sense of how Python protects against it. So I have a bit of a problem understanding the relevance here. I won't speak for others, but I wouldn't reject it out of hand. You haven't provided enough information. Accepting it just because it adds a way to do something is wrong. First, you have to ask whether or not that something is something we want in Python at all. Then you consider how the way proposed fits with the language: is it ugly? That is a highly subjective question; answering it says more about the person than about the proposal. True. But whether or not a language is good is a highly subjective question. Since the goal - keeping Python good to the people who already consider it good - is subjective, it's only natural that part of the evaluation process be subjective. Is it prone to abuse? I don't see why this is always brought up, given the number of features that can be abused in Python. Just because Python isn't perfect is no reason to make it worse. Why is it worse? You seem to think that if one has a toolbox which lacks a hammer, the fact that the hammer can be abused makes your toolbox less useful if you add a hammer to it. Look again. I never said it would make Python less useful, I said it would make it worse.
Those aren't the same thing. It's possible to make a language both more useful and worse at the same time. For instance, Python would clearly be more useful if it could interpret Perl 6 scripts. I disagree. Having more possibilities doesn't imply more useful. One of my nephews has this kind of Swiss army knife with tons of possibilities. But I find the few tools I have, which total fewer possibilities, more useful. Now adding an extra possibility to the Swiss army knife may make it worse, less useful. Putting an extra tool in my toolbox doesn't make it worse or less useful, since I can just ignore the tools I don't use. Personally, I think adding the features required to do that would make the language (much, much) worse. Oddly enough, I think adding the features to Perl so it could interpret Python scripts would make it better as well as more useful :-). We have a toolbox, full of equipment that can be abused, yet that
Re: General question about Python design goals
Chris Mellon schrieb: First, remember that while your idea is obvious and practical and straightforward to you, everybody's crummy idea is like that to them. And while I'm not saying your idea is crummy, bear in mind that not everyone shares your viewpoint. That's completely OK. What I wanted to know is *why* people do not share my viewpoint and whether I can understand their reasons. Often, there are good reasons that I do understand after some discussion. In this case, there were some arguments, but they were not convincing for me. I think the rest of your arguments have already been discussed in this thread. People seem to have different opinions here. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
Re: New Ordered Dictionery to Criticise
Fuzzyman wrote: Sorry for this hurried message - I've done a new implementation of our ordered dict. This comes out of the discussion on this newsgroup (see blog entry for link to archive of discussion). See the latest blog entry to get at it: http://www.voidspace.org.uk/python/weblog/index.shtml Hello all, I've just done a new beta 2 version. It has a full version of FancyODict with the custom callable sequence objects for keys, values and items. They are almost completely covered by tests. You can download the new(er) version from: http://www.voidspace.org.uk/cgi-bin/voidspace/downman.py?file=odictbeta2.py Full discussion of the remaining issues below, or at: http://www.voidspace.org.uk/python/weblog/arch_d7_2005_11_26.shtml#e147 Progress on the updated implementation of dict continues. (I hesitate to say *new* version, as it's just a heavy makeover for the old code - which was basically sound.) ``FancyODict`` is now a full implementation of an Ordered Dictionary, with custom *callable sequence objects* for ``keys``, ``values``, and ``items``. These can be called like normal methods, but can also be accessed directly as sequence objects. This includes assigning to, indexing, and slicing - as well as all the other relevant sequence methods. I've also added an optional index to ``OrderedDict.popitem``. I'm sure there are lots of ways this can be optimised for efficiency - but the new objects have got pretty full test coverage. You can download the new version (for testing) from `odict Beta 2 <http://www.voidspace.org.uk/cgi-bin/voidspace/downman.py?file=odictbeta2.py>`_ The following issues still remain: * ``FancyODict`` is a separate class from ``OrderedDict``. Because this version is *undoubtedly* less efficient than OrderedDict, my current thinking is that I should leave them separate (and have both available). Being able to operate on the keys/values/items as sequences is for convenience only.
Anyone got a suggestion for a better name than ``FancyODict``? * You can no longer access the key order directly. The old ``sequence`` attribute is deprecated and will eventually go away. You can currently alter the order (of keys, values *and* items) by passing an iterable into those methods. Someone has suggested that this smells bad - and that it ought to be done through separate ``setkeys``, ``setvalues``, and ``setitems`` methods. I'm *inclined* to agree, but I don't feel strongly about it. Anyone else got any opinions? * ``repr`` ought to return a value that ``eval`` could use to turn back into an OrderedDict. I have actually done an implementation of this, but it would mean that *all* the doctests need to be changed. I *will* do this at some point. * Slice assignment. The semantics for slice assignment are fiddly. For example, what do you do if in a slice assignment a key collides with an existing key? My current implementation does what an ordinary dictionary does: the new value overwrites the previous one. This means that the dictionary can shrink as the assignment progresses. I think this is preferable to raising an error and preventing assignment. It does prevent an optimisation whereby I calculate the indexes of all the new items in advance. It also means you can't rely on the index of a key from a slice assignment, unless you know that there will be no key collisions. In general I'm *against* preventing programmers from doing things, so long as the documentation carries an appropriate warning. An example will probably help illustrate this:

>>> d = OrderedDict()
>>> d[1] = 1
>>> d[2] = 2
>>> d[3] = 3
>>> d[4] = 4
>>> d.keys()
[1, 2, 3, 4]
>>> # fetching every other key using an extended slice
>>> # (this actually returns an OrderedDict)
>>> d[::2]
{1: 1, 3: 3}
>>> # we can assign to every other key using an ordered dict
>>> d[::2] = OrderedDict([(2, 9), (4, 8)])
>>> len(d) == 4
False
>>> d
{2: 9, 4: 8}

Because of the key collisions the length of d has changed - it now only has two keys instead of four. Criticism solicited (honestly) :-) We (Nicola Larosa and I) haven't yet made any optimisations - but there are two implementations to play with. One allows you to access the keys attribute as if it was a sequence (as well as a method). All the best, Fuzzyman http://www.voidspace.org.uk/python/index.shtml -- http://mail.python.org/mailman/listinfo/python-list
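For readers who want the flavour of the technique without downloading the beta, here is a deliberately minimal ordered-dict sketch (not Fuzzyman's implementation): a dict plus a list recording key insertion order, with the optional popitem index mentioned above. Since Python 3.7 plain dicts preserve insertion order anyway, so this only illustrates the 2005-era idea:

```python
class MiniODict(dict):
    """A deliberately small ordered-dict sketch: a dict plus a list that
    remembers key insertion order. Not the FancyODict implementation --
    just the underlying technique."""

    def __init__(self):
        dict.__init__(self)
        self._order = []

    def __setitem__(self, key, value):
        if key not in self:
            self._order.append(key)
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._order.remove(key)

    def keys(self):
        return list(self._order)

    def popitem(self, index=-1):
        # the optional index discussed in the post
        key = self._order[index]
        value = self[key]
        del self[key]
        return key, value

d = MiniODict()
for k in "badc":
    d[k] = k.upper()
print(d.keys())      # ['b', 'a', 'd', 'c'] -- insertion order, not sorted
print(d.popitem(0))  # ('b', 'B')
print(d.keys())      # ['a', 'd', 'c']
```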
Re: How to get a network resource on a windows server
Hi Tim: The 3rd case fits me, and all the tips are useful ^_^ Thanks for your help! -- On Thu, 1 Dec 2005 08:49:42 -, Tim Golden wrote: It's not entirely clear exactly what you're trying to do. I'll present some (hopefully) useful tips based on a bit of guesswork. I'm assuming you're familiar with the various ways of copying things around (copy, xcopy, robocopy, Python's shutil module) but feel free to come back and ask if that's where the stumbling block is. -- http://mail.python.org/mailman/listinfo/python-list
Re: Why are there no ordered dictionaries?
Hmmm... it would be interesting to see if these tests can be used with odict. The odict implementation now has full functionality by the way. Optimisations to follow and maybe a few *minor* changes. Fuzzyman http://www.voidspace.org.uk/python/index.shtml -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Fredrik Lundh wrote: Christoph Zwerschke wrote: now, I'm no expert on data structures for chess games, but I find it hard to believe that any chess game implementer would use a data structure that requires linear searches for everything... Using linear arrays to represent chess boards is pretty common in computer chess. Often, the array is made larger than 64 elements to make sure moves do not go off the board but hit unbeatable pseudo pieces standing around the borders. But in principle, linear arrays of that kind are used, and for good reasons. I already feared that a discussion about the details and efficiency of this implementation would follow. But this was not the point here. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
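The oversized-board idea described above is the classic 10x12 "mailbox" layout from the chess-programming literature. The sketch below (the indices follow the common convention, but the helper names are mine) shows how the ring of sentinel cells works:

```python
# 10x12 "mailbox" board: the 64 real squares sit inside a ring of
# sentinel cells, so a move that walks off the edge lands on a sentinel
# instead of wrapping around to the other side of the board.
OFF_BOARD = None
EMPTY = 0

def make_board():
    board = [OFF_BOARD] * 120
    for rank in range(8):
        for file in range(8):
            board[21 + 10 * rank + file] = EMPTY
    return board

def mailbox(square64):
    """Map a 0..63 square index into the 120-cell array."""
    return 21 + 10 * (square64 // 8) + (square64 % 8)

board = make_board()
print(board.count(EMPTY))                  # 64 playable squares
print(board[mailbox(0)] == EMPTY)          # True: a1 is on the board
print(board[mailbox(0) - 1] is OFF_BOARD)  # True: its neighbour is a sentinel
```

Move generation then only needs one check per step: if the target cell holds the sentinel, the move runs off the board.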
Re: Why are there no ordered dictionaries?
The semantics of assigning slices to d.keys[i:j] and d.values[i:j] are kind of tricky when the size changes and/or key names match or don't match in various ways, or the incoming data represents collapsing redundant keys that are legal sequential assignment overrides but change the size, etc. I have come against the same problem with slice assignment, when doing odict. :-) Allowing the size to change prevents a useful optimisation - but I dislike *preventing* programmers from doing things. All the best, Fuzzyman http://www.voidspace.org.uk/python/index.shtml Regards, Bengt Richter -- http://mail.python.org/mailman/listinfo/python-list
Re: UnicodeDecodeError
Thanks Fredrik, I decoded both qu[i][0] and self.query to latin_1 (self.query.decode("latin_1")) and I am not getting the error now. If you don't mind, I have another question for you. I use wxPython for GUI development. When I use a string containing the character as a label for a StaticText, the character doesn't appear. It is replaced by a short _. I have tried different encodings but have had no success. What should I do so that it appears on screen? Thanks once again for your help. regards, Ashoka -- http://mail.python.org/mailman/listinfo/python-list
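The decode step, restated as a runnable sketch (modern Python bytes/str, but the principle is the same as the 2.x str/unicode split): decode byte strings to text before mixing them or handing them to a GUI toolkit.

```python
# Latin-1 bytes for "café"; in Python 2 this would be a str, today bytes.
raw = b"caf\xe9"
text = raw.decode("latin_1")
print(text)                    # café
print(len(raw), len(text))     # 4 4 -- Latin-1 is one byte per character

# Decoding with the wrong codec is what raises UnicodeDecodeError:
wrong_codec_failed = False
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    wrong_codec_failed = True
print(wrong_codec_failed)      # True: a lone 0xE9 byte is not valid UTF-8
```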
Re: FW: Python Query: How to i share variable between two processes without IPC.
[EMAIL PROTECTED] wrote: Mukesh Question: how do i share variable between two processes without Mukesh IPC. Using some subtle application of quantum mechanics? ;-) Paul P.S. I suppose it depends on any definition of interprocess communication, but if the processes weren't to talk directly with each other then any kind of mechanism employing a shared resource would be appropriate. If non-POSIX IPC is actually meant, one could still employ shared files, TCP/IP communications or one's favourite distributed objects technology: CORBA, COM, etc. -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Antoon Pardon wrote: On 2005-12-01, Fredrik Lundh [EMAIL PROTECTED] wrote: Mike Meyer wrote: So why the $*@ (please excuse my Perl) does for x in 1, 2, 3 work? because the syntax says so: http://docs.python.org/ref/for.html Seriously. Why doesn't this have to be phrased as for x in list((1, 2, 3)), just like you have to write list((1, 2, 3)).count(1), etc.? because anything that supports [] can be iterated over. This just begs the question. If tuples are supposed to be such heterogeneous sequences, one could indeed question why they support []. Presumably because it's necessary to extract the individual values (though os.stat results recently became addressable by attribute name as well as by index, and this is an indication of the originally intended purpose of tuples). And even if good arguments are given why tuples should support [], the fact that the intentions of tuples and lists are so different casts doubt on the argument that supporting [] is enough reason to support iteration. One could equally also argue that since iteration is at the heart of methods like index, find and count, supporting iteration is sufficient reason to support these methods. One could even go so far as to prepare a patch to implement the required methods and see if it were accepted (though wibbling is a much easier alternative). Personally I find the collective wisdom of the Python developers, while not infallible, a good guide as to what's pythonistic and what's not. YMMV. regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC www.holdenweb.com PyCon TX 2006 www.python.org/pycon/ -- http://mail.python.org/mailman/listinfo/python-list
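The os.stat aside is easy to demonstrate: a stat result is tuple-like (indexable, unpackable) yet also exposes named attributes, with both spellings addressing the same slot:

```python
import os
import stat
import tempfile

# A stat result is tuple-like -- indexable, unpackable -- but its fields
# also carry attribute names (added in Python 2.2). Both address the
# same slot.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

st = os.stat(path)
print(st[stat.ST_SIZE])                 # index access: 5
print(st.st_size)                       # attribute access: 5
print(st[stat.ST_SIZE] == st.st_size)   # True

os.remove(path)
```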
Re: General question about Python design goals
Christoph Zwerschke wrote: I think this all boils down to the following: * In their most frequent use case where tuples are used as lightweight data structures keeping together heterogeneous values (values with different types or meanings), index() and count() do not make much sense. I completely agree that this is the most frequent case. Still, there are cases where tuples are used to keep homogeneous values together (for instance, RGB values, points in space, rows of a matrix). In these cases it would be useful in principle to have index() and count() methods. Why? Why does it make sense to ask whether an RGB color has a particular value for one of red, green or blue? Why does it make sense to ask how many elements there are in an RGB color? It doesn't, so you must be talking about (ordered) *collections* of such items. If you want a list of RGB colors then use a list. If you want a list of points in space then use a list. Why is a tuple preferable? [If the answer is because a tuple can't be changed, go to the bottom of the class.] But: * Very frequently you will use only 2- or 3-tuples, where direct queries may be faster than index() and count(). (That's probably why Antoon's RGB example was rejected as a use case, though it was in principle a good one.) * Very frequently you want to perform operations on these objects and change their elements, so you would use lists instead of tuples anyway. See my use case where you would determine whether a vector is zero by count()ing its zero entries, or the rank of a matrix by count()ing zero rows. * You will use index() and count() in situations where you are dealing with a small discrete range of values in your collection. Often you will use strings instead of tuples in these cases, if you don't need to sum() the items, for instance. So, indeed, very few use cases will remain if you filter through the above. But this does not mean that they do not exist. And special cases aren't special enough to break the rules.
It should be easy to imagine use cases now. Take for example, a chess game. You are storing the pieces in a 64-tuple, where every piece has an integer value corresponding to its value in the game (white positive, black negative). You can approximate the value of a position by building the sum(). You want to use the tuple as a key for a dictionary of stored board constellations (e.g. an opening dictionary), therefore you don't use a list. This is a pretty bogus use case. Seems to me like a special case it's not worth breaking the rules for! Now you want to find the field where the king is standing. Very easy with the index() method. Or you want to find the number of pawns on the board. Here you could use the count() method. Bearing in mind the (likely) performance impact of using these items as dict keys don't you think some other representation would be preferable? regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC www.holdenweb.com PyCon TX 2006 www.python.org/pycon/ -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Christoph Zwerschke wrote: Christoph Zwerschke wrote: now, I'm no expert on data structures for chess games, but I find it hard to believe that any chess game implementer would use a data structure that requires linear searches for everything... Using linear arrays to represent chess boards is pretty common in computer chess. Often, the array is made larger than 64 elements to make sure moves do not go off the board but hit unbeatable pseudo pieces standing around the borders. But in principle, linear arrays of that kind are used, and for good reasons. really? a quick literature search only found clever stuff like bitboards, pregenerated move tables, incremental hash algorithms, etc. the kind of stuff you'd expect from a problem domain like chess. I already feared that a discussion about the details and efficiency of this implementation would follow. But this was not the point here. so pointing out that the use case you provided has nothing to do with reality is beside the point? you're quickly moving into kook-country here. /F -- http://mail.python.org/mailman/listinfo/python-list
Re: Python as Guido Intended
Antoon Pardon [EMAIL PROTECTED] wrote: On 2005-11-30, Mike Meyer [EMAIL PROTECTED] wrote: You'd be wrong. Can denotes a possibility, not a certainty. You didn't write: Removing things can make a language more powerfull. You wrote: You can make a language more powerfull by removing things. In the first case can describes a possible outcome of removing things. In the second case can describes that this is a decision you are allowed and able to make. Not the possibility of the outcomes should you make the decision. That's a distinction that exists. I don't see how it's significant here. Do you believe that these two statements are contradictory? You can make X more powerful by removing things. You can make X less powerful by removing things. How about these two, do you think they're contradictory? You can make X more powerful by removing things. You can make X more powerful by adding things. In fact, neither of those pairs are contradictory. All of them can be truthful about the same X simultaneously. What your sentence states is that you can make this decision and that if you do so, removing things will accomplish this goal No. The statement says nothing about *which* things need to be removed to meet the goal. It doesn't say that removing *any* thing from the language will make it more powerful. I think this is an understandably subtle linguistic point for someone who doesn't have English as a first language; it's ambiguous enough that even native speakers could twist it to read more that it actually says. Now that you understand what Mike was trying to say, and Mike has clarified what his statement meant, can we move on to another argument? I just want to add that I'm not interrested in whether your interpretation is the right one or mine. I understand now what you meant and that is enough for meu. The above should be considered as an explanation of how understood, not as a correction of how yuo should have wrote it. 
An object lesson in taking programmers at their word :-) -- \ I went to San Francisco. I found someone's heart. -- Steven | `\Wright | _o__) | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Is there no compression support for large sized strings in Python?
What started as a simple test of whether it is better to load uncompressed data directly from the hard disk or to load compressed data and uncompress it (Windows XP SP 2, Pentium4 3.0 GHz system with 3 GByte RAM) seems to show that none of the compression libraries available in Python really works for large (i.e. 500 MByte) strings. Test the provided code and see for yourself. At least on my system: zlib fails to decompress, raising a memory error; pylzma fails to decompress, running endlessly and consuming 99% of CPU time; bz2 fails to compress, running endlessly and consuming 99% of CPU time. The same code works with a 10 MByte string without any problem. So what? Is there no compression support for large strings in Python? Am I doing something the wrong way here? If there is, what is the theoretical upper limit of string size which can be processed by each of the compression libraries? The only limit I know about is 2 GByte for the python.exe process itself, but this seems not to be the actual problem in this case. There are also some other strange effects when trying to create large strings using the following code:

m = 'm'*1048576
# str1024MB = 1024*m  # fails with memory error, but:
str512MB_01 = 512*m   # works ok
# str512MB_02 = 512*m # fails with memory error, but:
str256MB_01 = 256*m   # works ok
str256MB_02 = 256*m   # works ok
etc. etc.

and so on, down to allocating each single MB in a separate string, to push python.exe to the experienced upper limit of memory available to it, reported by the Windows task manager as 2.065.352 KByte. Is the question of why the str1024MB = 1024*m instruction fails, when the memory is apparently there and the target size of 1 GByte can be achieved, out of the scope of this discussion thread, or is it the same problem that causes the compression libraries to fail? Why is no memory error raised then? Any hints towards understanding what is going on and why, and/or towards a workaround, are welcome.
Claudio

# HDvsArchiveUnpackingSpeed_WriteFiles.py
strSize10MB = '1234567890'*1048576 # 10 MB
strSize500MB = 50*strSize10MB
fObj = file(r'c:\strSize500MB.dat', 'wb')
fObj.write(strSize500MB)
fObj.close()

fObj = file(r'c:\strSize500MBCompressed.zlib', 'wb')
import zlib
strSize500MBCompressed = zlib.compress(strSize500MB)
fObj.write(strSize500MBCompressed)
fObj.close()

fObj = file(r'c:\strSize500MBCompressed.pylzma', 'wb')
import pylzma
strSize500MBCompressed = pylzma.compress(strSize500MB)
fObj.write(strSize500MBCompressed)
fObj.close()

fObj = file(r'c:\strSize500MBCompressed.bz2', 'wb')
import bz2
strSize500MBCompressed = bz2.compress(strSize500MB)
fObj.write(strSize500MBCompressed)
fObj.close()

print
print ' Created files: '
print '%s \n%s \n%s \n%s' %(
     r'c:\strSize500MB.dat'
    ,r'c:\strSize500MBCompressed.zlib'
    ,r'c:\strSize500MBCompressed.pylzma'
    ,r'c:\strSize500MBCompressed.bz2'
)
raw_input(' EXIT with Enter / ')

# HDvsArchiveUnpackingSpeed_TestSpeed.py
import time
startTime = time.clock()
fObj = file(r'c:\strSize500MB.dat', 'rb')
strSize500MB = fObj.read()
fObj.close()
print
print ' loading uncompressed data from file: %7.3f seconds'%(time.clock()-startTime,)

startTime = time.clock()
fObj = file(r'c:\strSize500MBCompressed.zlib', 'rb')
strSize500MBCompressed = fObj.read()
fObj.close()
print
print 'loading compressed data from file: %7.3f seconds'%(time.clock()-startTime,)
import zlib
try:
    startTime = time.clock()
    strSize500MB = zlib.decompress(strSize500MBCompressed)
    print 'decompressing zlib data: %7.3f seconds'%(time.clock()-startTime,)
except:
    print 'decompressing zlib data FAILED'

startTime = time.clock()
fObj = file(r'c:\strSize500MBCompressed.pylzma', 'rb')
strSize500MBCompressed = fObj.read()
fObj.close()
print
print 'loading compressed data from file: %7.3f seconds'%(time.clock()-startTime,)
import pylzma
try:
    startTime = time.clock()
    strSize500MB = pylzma.decompress(strSize500MBCompressed)
    print 'decompressing pylzma data: %7.3f seconds'%(time.clock()-startTime,)
except:
    print 'decompressing pylzma data FAILED'

startTime = time.clock()
fObj = file(r'c:\strSize500MBCompressed.bz2', 'rb')
strSize500MBCompressed = fObj.read()
fObj.close()
print
print 'loading compressed data from file: %7.3f seconds'%(time.clock()-startTime,)
import bz2
try:
    startTime = time.clock()
    strSize500MB = bz2.decompress(strSize500MBCompressed)
    print 'decompressing bz2 data: %7.3f seconds'%(time.clock()-startTime,)
except:
    print 'decompressing bz2 data FAILED'

raw_input(' EXIT with Enter / ')
-- http://mail.python.org/mailman/listinfo/python-list
Re: Which License Should I Use?
On Tue, 29 Nov 2005 22:04:50 -0800, Paul Rubin wrote: Please note that merely putting the code under a GPL or other OSS licence is NOT sufficient -- they must agree to let you DISTRIBUTE the code. If it's under the GPL, they're not allowed to prevent you from distributing it, if you have a copy. Only if the copy is licenced to you by the copyright owner under the GPL in the first place. You can't just take source code you have no rights to, or some other set of rights, stick the GPL on it without the copyright owner's permission, and then legally distribute it. -- Steven. -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Mike Meyer wrote: Paul Rubin http://[EMAIL PROTECTED] writes: [...] Did somebody actually use "Practicality beats purity" as an excuse for not making list.count and string.count have the same arguments? If so, I missed it. I certainly don't agree with that - count ought to work right in this case. I agree that the two methods are oddly inconsonant. But then since the introduction of the "has substring" meaning for the in operator, strings and lists are also inconsistent over that operation. A suitably factored implementation might handle lists and strings with the exact same code and not incur any extra cost at all. That type of thing happens all the time here. I don't think this would make much sense in present-day Python, mostly because there's no such thing as a character, only a string of length one. So a string is actually a sequence of sequences of length one. I guess this is one of those cases where practicality beat purity. It's as though an indexing operation on a list got you a list of length one rather than the element at that index. Strings have always been anomalous in this respect. If it happens all the time, you shouldn't have any trouble naming a number of things that a majority of users think are misfeatures that aren't being fixed. Could you do that? Doubtless he could. Paul's a smart guy. The missing element will be unanimity on the rationale. regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC www.holdenweb.com PyCon TX 2006 www.python.org/pycon/ -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
On 12/1/05, Christoph Zwerschke [EMAIL PROTECTED] wrote: I think this all boils down to the following: * In their most frequent use case where tuples are used as lightweight data structures keeping together heterogenous values (values with different types or meanings), index() and count() do not make much sense. I completely agree that this is the most frequent case. Still there are cases where tuples are used to keep homogenous values together (for instance, RGB values, points in space, rows of a matrix). In these cases it would be principally useful to have index() and count() methods. But: * Very frequently you will use only 2- or 3-tuples, where direct queries may be faster than index() and count(). (That's probably why Antoon's RGB example was rejected as a use case though it was principally a good one). * Very frequently you want to perform operations on these objects and change their elements, so you would use lists instead of tuples anyway. See my use case where you would determine whether a vector is zero by count()ing its zero entries or the rank of a matrix by count()ing zero rows. * You will use index() and count() in situations where you are dealing with a small discrete range of values in your collection. Often you will use strings instead of tuples in these cases, if you don't need to sum() the items, for instance. So, indeed, very few use cases will remain if you filter through the above. But this does not mean that they do not exist. And special cases aren't special enough to break the rules. It should be easy to imagine use cases now. Take for example, a chess game. You are storing the pieces in a 64-tuple, where every piece has an integer value corresponding to its value in the game (white positive, black negative). You can approximate the value of a position by building the sum(). You want to use the tuple as a key for a dictionary of stored board constellations (e.g. an opening dictionary), therefore you don't use a list. 
This really looks to me like you have your priorities inverted. Practically everything you want to do with this structure is list-like, so why not make it a list and convert it to a tuple when you need to use it as an index? Even better, since you're doing a lot of list operations, why not make it a list and define unique IDs or something to use as indices? A minor change in your design/thinking (not trying to use a tuple as a frozen list) instead of a change in the language (making tuples more like frozen lists) seems to be the warranted solution. But maybe that's just me. Now you want to find the field where the king is standing. Very easy with the index() method. Or you want to find the number of pawns on the board. Here you could use the count() method. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
Re: Database Module in a Web Application
mohammad babaei wrote: I'm going to write my first web application in Python; is it a good idea to write a database module that handles the connection to the database and executes queries? No, it's not a good idea to reinvent the wheel if someone else has already done the work for you. All databases which you would probably want to use for this already have Python wrappers which will do the job. -Peter -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Claudio Grondi wrote: What started as a simple test if it is better to load uncompressed data directly from the harddisk or load compressed data and uncompress it (Windows XP SP 2, Pentium4 3.0 GHz system with 3 GByte RAM) seems to show that none of the in Python available compression libraries really works for large sized (i.e. 500 MByte) strings. Test the provided code and see yourself. At least on my system: zlib fails to decompress raising a memory error pylzma fails to decompress running endlessly consuming 99% of CPU time bz2 fails to compress running endlessly consuming 99% of CPU time The same works with a 10 MByte string without any problem. So what? Is there no compression support for large sized strings in Python? you're probably measuring windows' memory management rather than the compression libraries themselves (Python delegates all memory allocations larger than 256 bytes to the system). I suggest using incremental (streaming) processing instead; from what I can tell, all three libraries support that. /F -- http://mail.python.org/mailman/listinfo/python-list
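The incremental (streaming) processing Fredrik suggests can be sketched with zlib's compressobj/decompressobj interface (bz2 offers the analogous BZ2Compressor/BZ2Decompressor). This is an editorial sketch written for current Python, so the strings are byte strings:

```python
import zlib

def compress_stream(data, chunk=1 << 20):
    """Feed the compressor one chunk at a time instead of one huge call."""
    co = zlib.compressobj()
    out = [co.compress(data[i:i + chunk]) for i in range(0, len(data), chunk)]
    out.append(co.flush())
    return b''.join(out)

def decompress_stream(blob, chunk=1 << 20):
    """Decompress incrementally; memory use is bounded per call."""
    do = zlib.decompressobj()
    out = [do.decompress(blob[i:i + chunk]) for i in range(0, len(blob), chunk)]
    out.append(do.flush())
    return b''.join(out)

data = b'1234567890' * 1048576   # 10 MByte, as in the original test
blob = compress_stream(data)
print(len(blob) < len(data), decompress_stream(blob) == data)
```

In a real program the chunks would go straight to and from a file rather than being accumulated, which is what keeps the peak memory use bounded.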
Re: Database Module in a Web Application
Peter Hansen wrote: mohammad babaei wrote: I'm going to write my first web application in Python; is it a good idea to write a database module that handles the connection to the database and executes queries? No, it's not a good idea to reinvent the wheel if someone else has already done the work for you. All databases which you would probably want to use for this already have Python wrappers which will do the job. unless you interpret database module as a database abstraction layer for my application, in which case it's a good idea -- unless you prefer to use a ready-made ORM: http://projects.amor.org/dejavu http://skunkweb.sourceforge.net/pydo.html http://sqlobject.org/ (etc) or a web framework that contains an ORM: http://www.djangoproject.com/ http://www.turbogears.org/ (etc) /F -- http://mail.python.org/mailman/listinfo/python-list
need reading file in binary mode
this part of my code:

f = file(work_dir + filename,'r')
n = int(totalSize/recordLenth)
i = 0
while i < n:
    buf = f.read(recordLenth);

sometimes (when it finds something like \0A\00\00 in the data) it returns fewer bytes than the file has. Q: how-to read all data from binary file with constant_length portions? -- http://mail.python.org/mailman/listinfo/python-list
Re: A bug in struct module on the 64-bit platform?
I'm guessing that the expected behavior is struct.calcsize('idi') == 20 because the double should be aligned to an 8-byte boundary. This is the case on my linux/x86_64 machine:

$ python -c 'import struct; print struct.calcsize("idi")'
20

I don't know much about 'itanium', but I'd be surprised if they chose 4-byte alignment for doubles. http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,180,00.html Jeff -- http://mail.python.org/mailman/listinfo/python-list
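Jeff's alignment guess is easy to check with struct itself: native mode (the default) inserts padding to satisfy the platform ABI's alignment rules, while standard mode ('=') packs with no padding. The exact native size depends on the platform, so the sketch below only pins down the padding-free case:

```python
import struct

# Native mode (the default, '@'): sizes and alignment follow the platform
# C ABI, so on a typical 64-bit platform the double in 'idi' is aligned
# to 8 bytes: int(4) + pad(4) + double(8) + int(4) = 20.
native = struct.calcsize('idi')

# Standard mode ('='): fixed sizes, no padding at all: 4 + 8 + 4 = 16.
standard = struct.calcsize('=idi')

print(native, standard)
```

Comparing the two sizes on the machine in question is a quick way to see whether the bug report reflects a genuine alignment difference.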
Re: Problem cmpiling M2Crypto
Fredrik Lundh [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Thomas G. Apostolou wrote: C:\Program Files\Plone 2\Python\lib\distutils\extension.py:128: UserWarning: Unknown Extension options: 'swig_opts' Hmm, I don't remember seeing that. But then again, I haven't compiled with the same setup as you either. I am in a quest to find where this comes from. google can provide a hint, at least: http://www.mail-archive.com/distutils-sig@python.org/msg00084.html That was really some help! Thank you. I changed %rename back to %name as Heikki Toivonen suggested and changed the setup.py to read as:

m2crypto = Extension(name = '__m2crypto',
                     sources = ['SWIG/_m2crypto.i'],
                     include_dirs = include_dirs,
                     library_dirs = library_dirs,
                     libraries = libraries,
                     extra_compile_args = ['-DTHREADING', '-DSWIG_COBJECT_PYTHON'],
                     #swig_opts = [swig_opts_str] # only works for 2.4
                     )

What I get now is the warnings about %name being deprecated - which I think is better than the errors about Syntax error in input(1). The message about C:\Program Files\Plone 2\Python\lib\distutils\extension.py:128: UserWarning: Unknown Extension options: 'swig_opts' is gone. I still get the error: SWIG/_m2crypto.c(80) : fatal error C1083: Cannot open include file: 'Python.h': No such file or directory error: command 'C:\Program Files\Microsoft Visual Studio\VC98\BIN\cl.exe' failed with exit status 2 which is - as Heikki Toivonen said - because of the lack of Python.h I do not have this file anywhere on my PC. Could anyone suggest a package to install that contains Python.h and that would be of use for me later on? Thanks in advance Thomas G. Apostolou /F -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
On this system (Linux 2.6.x, AMD64, 2 GB RAM, python2.4) I am able to construct a 1 GB string by repetition, as well as compress a 512MB string with gzip in one gulp.

$ cat claudio.py
s = '1234567890'*(1048576*50)
import zlib
c = zlib.compress(s)
print len(c)
open("/tmp/claudio.gz", "wb").write(c)
$ python claudio.py
1017769
$ python -c 'print len("m" * (1048576*1024))'
1073741824

I was also able to create a 1GB string on a different system (Linux 2.4.x, 32-bit Dual Intel Xeon, 8GB RAM, python 2.2).

$ python -c 'print len("m" * 1024*1024*1024)'
1073741824

I agree with another poster that you may be hitting Windows limitations rather than Python ones, but I am certainly not familiar with the details of Windows memory allocation. Jeff -- http://mail.python.org/mailman/listinfo/python-list
Automate webpage refresh
I am trying to write a script (python2.3) which will be used with Linux konqueror to retrieve 2 webpages alternately every 2 minutes. My question is how can I send alternative args (the url) to the same invocation of konqueror, which I started with:

def pipedreams(program, *args):
    pid = os.fork()
    if not pid:
        os.execvp(program, (program,) + args)
    return os.wait()[0]

pipedreams("konqueror", url1)

Obviously every time I call pipedreams I would open a new instance, which is exactly what I do not want. My pseudocode:

if timecounter == 60:
    send url1 to konqueror to fetch & display
elif timecounter == 120:
    send url2 to konqueror to fetch & display
    timecounter = 0

Thanks for any hints D.B. -- http://mail.python.org/mailman/listinfo/python-list
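One way to structure this, sketched for illustration: keep the every-two-minutes alternation in a small loop and isolate the browser-specific dispatch in one stub function. The kfmclient command in the stub is an assumption about the KDE 3 setup (it asks an already-running Konqueror to load a URL rather than spawning a new instance); swap in whatever remote-control mechanism the desktop actually provides:

```python
import itertools
import subprocess
import time

URLS = ['http://example.com/page1', 'http://example.com/page2']  # placeholders
INTERVAL = 120  # seconds: 2 minutes per page

def show_in_konqueror(url):
    # Assumption: on KDE 3, 'kfmclient openURL <url>' asks a running
    # Konqueror to load the URL instead of starting a new instance.
    subprocess.call(['kfmclient', 'openURL', url])

def refresh_loop(urls, interval, dispatch, ticks=None):
    """Dispatch each URL in turn, waiting `interval` seconds between them.
    `ticks` bounds the number of iterations (None = run forever)."""
    for n, url in enumerate(itertools.cycle(urls)):
        if ticks is not None and n >= ticks:
            return
        dispatch(url)
        time.sleep(interval)

# Dry run: collect what would have been dispatched, without sleeping.
seen = []
refresh_loop(URLS, 0, seen.append, ticks=4)
print(seen)
```

The real invocation would be refresh_loop(URLS, INTERVAL, show_in_konqueror); separating the loop from the dispatch is what lets the alternation logic be tested without a browser.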
Re: need reading file in binary mode
Problem solved. Sergey P. Vazulia [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:

this part of my code:

f = file(work_dir + filename,'rb')
                              ^
n = int(totalSize/recordLenth)
i = 0
while i < n:
    buf = f.read(recordLenth);

sometimes (when it finds something like \0A\00\00 in the data) it returns fewer bytes than the file has. Q: how-to read all data from binary file with constant_length portions? -- http://mail.python.org/mailman/listinfo/python-list
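Putting both fixes together ('rb' mode plus stopping on a short read), a fixed-length record reader can be sketched as below; the file name and record length here are made up for the demonstration:

```python
import os
import tempfile

def read_records(path, record_len):
    """Yield fixed-length records from a binary file."""
    with open(path, 'rb') as f:        # 'rb': no newline/EOF translation
        while True:
            buf = f.read(record_len)
            if len(buf) < record_len:  # end of file (or a partial tail)
                break
            yield buf

# Demonstration with a throwaway file containing the byte values that
# trip up text mode on Windows (\x0a, \x0d, \x1a, \x00, ...).
path = os.path.join(tempfile.mkdtemp(), 'records.bin')
with open(path, 'wb') as f:
    f.write(bytes(range(64)))          # four 16-byte records

records = list(read_records(path, 16))
print(len(records))  # prints: 4
```

Without the 'b' flag, Windows text mode would translate or truncate exactly the bytes written above, which is the behavior the original post observed.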
[[x,f(x)] for x in list that maximizes f(x)] --newbie help
I just started learning python and I have been wondering. Is there a short pythonic way to find the element, x, of a list, mylist, that maximizes an expression f(x). In other words I am looking for a short version of the following:

pair = [mylist[0], f(mylist[0])]
for x in mylist[1:]:
    if f(x) > pair[1]:
        pair = [x, f(x)]

-- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Dao Language v.0.9.6-beta is release!
From An Introduction to Dao: So I realized the goodness of scripting languages, and spent about two weeks to write some Perl scripts to retrieve information from GO and construct the DAG. But that experience with Perl was not very nice, because of its complicated syntax. Then I started to think about the possibility to design a new language with simple syntax, and formed some rough ideas. Maybe trying Python requires less time :-) But trying to implement new languages (like Dao, Boo, etc) is useful because they can start again with fresh ideas and less legacy syntax (and some of the new ideas can be backported to Python). sort( array, exprs( @1, @2 ) ); This @ syntax reminds me of a similar one in Mathematica. It can be useful, but using lambdas is acceptable enough, I think. I believe the extending of Dao using C++ is much simpler and more convenient than python. If this is true, then this is an interesting (useful) advantage. Maybe Python can learn something from Dao. Bye, bearophile -- http://mail.python.org/mailman/listinfo/python-list
Re: sax.make_parser() segfaults
Frank Millman [EMAIL PROTECTED] writes: If I call sax.make_parser() from the interpreter or from a stand-alone program, it works fine on all machines, but in the following setup it works correctly on MSW, but segfaults on both FC4 and RH9. [...] Progress report - I have narrowed it down to wxPython. I wrote small stand-alone programs, one using Twisted, one using wxPython. Twisted works fine, wxPython segfaults. Could this be the following python bug: https://sourceforge.net/tracker/index.php?func=detail&aid=1075984&group_id=5470&atid=105470 Bernhard -- Intevation GmbH http://intevation.de/ Skencil http://skencil.org/ Thuban http://thuban.intevation.org/ -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Steve Holden wrote: Christoph Zwerschke wrote: I completely agree that this is the most frequent case. Still there are cases where tuples are used to keep homogenous values together (for instance, RGB values, points in space, rows of a matrix). In these cases it would be principally useful to have index() and count() methods. Why? Why does it make sense to ask whether an RGB color has a particular value for one of red, green or blue? Why does it make sense to ask how many elements there are in an RGB color? It doesn't, so you must be talking about (ordered) *collections* of such items. If you want a list of RGB colors then use a list. If you want a list of points in space then use a list. Why is a tuple preferable? [If the answer is because a tuple can't be changed go to the bottom of the class]. I cannot follow you here. How would you store RGB values? I think they are a perfect use case for tuples in the spirit of Guido's lightweight C structs. So, in the spirit of Guido I would store them as tuples, or return them as tuples by a function getpixel(x,y). Why should I not be allowed to check for getpixel(x, y).count(0) == n for black pixels in an image with n layers? Yes, you could set BLACK=(0,)*n and test against BLACK. You can always do things differently. Take for example, a chess game. You are storing the pieces in a 64-tuple, where every piece has an integer value corresponding to its value in the game (white positive, black negative). You can approximate the value of a position by building the sum(). You want to use the tuple as a key for a dictionary of stored board constellations (e.g. an opening dictionary), therefore you don't use a list. This is a pretty bogus use case. Seems to me like a special case it's not worth breaking the rules for! I already explained why use cases for count() and index() on tuples will principally be rare. So it will always be easy for you to call them special cases. But they are there, and they make sense. 
Also, we are not talking about breaking a rule or any existing code here, but about generalizing or *broadening* a rule. (What do you do if a rule itself is broken?) Here is another example illustrating the problem - coincidentally, from Fredrik's blog, http://effbot.org/zone/image-histogram-optimization.htm (found it when googling for getpixel):

def histogram4(image):
    # wait, use the list.count operator
    data = list(image.getdata())
    result = []
    for i in range(256):
        result.append(data.count(i))
    return result

Here, we could imagine the getdata() method returning a tuple. Again, why must it be cast to a list, just to use count()? In reality, getdata() returns a sequence. But again, wouldn't it be nice if all sequences provided count() and index() methods? Then there would be no need to create a list from the sequence before counting. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
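For comparison, here is an editorial sketch (not from the blog post) of the count()-per-value histogram next to a single-pass version that needs no count() method at all, and so works on any iterable. In current Python the count() version also accepts a tuple directly, since tuples gained count() in 2.6:

```python
def histogram_count(data):
    # The blog post's approach: one count() pass per possible value.
    return [data.count(i) for i in range(256)]

def histogram_single_pass(data):
    # One pass over the data; needs no count() method, so it accepts
    # any iterable of small integers.
    result = [0] * 256
    for value in data:
        result[value] += 1
    return result

pixels = (0, 5, 5, 255, 0, 5)  # a tuple of "pixel values"
assert histogram_count(pixels) == histogram_single_pass(pixels)
print(histogram_count(pixels)[5])  # prints: 3
```

The count() version makes 256 passes over the data, so for a big image the single-pass version is also the faster design regardless of which sequence type holds the pixels.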
Re: [[x,f(x)] for x in list that maximizes f(x)] --newbie help
If the elements of mylist can be compared (e.g. not complex values), then this can be a solution for you:

from math import sin as f
mylist = [float(2*i)/10 for i in xrange(10)]
pairs = [(f(x), x) for x in mylist]
ymax = max(pairs)[1]
print pairs, "\n"
print ymax

You can also try this, for Py2.4:

print max((f(x), x) for x in mylist)[1]

Bye, bearophile -- http://mail.python.org/mailman/listinfo/python-list
Re: python speed
Pypy is not the only promising project we have for seeing Python run like compiled languages. Shed Skin is already a quite usable Python-to-C++ compiler which, in version 0.5.1, can actually compile many python scripts to fully optimized stand-alone executables. The next version will probably support the use of the standard python library and many, many exciting enhancements. The main difference between these two projects is that PyPy aims to support 100% of Python semantics, including all its dynamic features, on top of a virtual machine. It uses type inference for static compilation and just-in-time techniques. On the other hand, Shed Skin is purely a static compiler that translates Python to C++ and then compiles to a stand-alone executable. It will never support the most dynamic features of Python, but in most cases it only requires us to restrict our coding style a little to avoid these features and produce highly optimized code. http://shedskin.sf.net http://shed-skin.blogspot.com/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Which License Should I Use?
On 11/30/05, Robert Kern [EMAIL PROTECTED] wrote: Paul Boddie wrote: Paul Rubin wrote: That is the guy who claims it is impossible to release anything into the public domain, other than by dying and then waiting 70 years. Is that an indirect reference to the following article? http://www.linuxjournal.com/article/6225 Among other places where Rosen has said it, like his book. In fairness, when on the one hand a lawyer (or 2, in this case) who specializes in IP law tells you that something is uncertain, and on the other hand, a non-lawyer (but certainly a smart guy) dismisses the whole thing as stupid, I kinda tend toward listening to the lawyer. Especially as, if you carefully read what is and isn't said, DJB doesn't actually contradict Rosen or Lessig - he says that as far as he knows nobody has ever bothered the court with it, which is one way of telling he's not a lawyer - a lawyer would say (as Lessig does in his blog post) that there have been no test cases but his analysis of the law is that there are inconsistencies and that were such a case to occur, he is not sure who would prevail. I'm not a legal expert or a lawyer. But I certainly find Rosen's detailed and well-explained analysis of the situation to be much more convincing than Dan's hand-waving. -- Robert Kern [EMAIL PROTECTED] In the fields of hell where the grass grows high Are the graves of dreams allowed to die. -- Richard Harter -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Fredrik Lundh [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Claudio Grondi wrote: What started as a simple test if it is better to load uncompressed data directly from the harddisk or load compressed data and uncompress it (Windows XP SP 2, Pentium4 3.0 GHz system with 3 GByte RAM) seems to show that none of the in Python available compression libraries really works for large sized (i.e. 500 MByte) strings. Test the provided code and see yourself. At least on my system: zlib fails to decompress raising a memory error pylzma fails to decompress running endlessly consuming 99% of CPU time bz2 fails to compress running endlessly consuming 99% of CPU time The same works with a 10 MByte string without any problem. So what? Is there no compression support for large sized strings in Python? you're probably measuring windows' memory management rather than the compression libraries themselves (Python delegates all memory allocations larger than 256 bytes to the system). I suggest using incremental (streaming) processing instead; from what I can tell, all three libraries support that. /F

Have solved the problem with bz2 compression the way Fredrik suggested:

fObj = file(r'd:\strSize500MBCompressed.bz2', 'wb')
import bz2
objBZ2Compressor = bz2.BZ2Compressor()
lstCompressBz2 = []
for indx in range(0, len(strSize500MB), 1048576):
    lowerIndx = indx
    upperIndx = indx + 1048576
    if upperIndx > len(strSize500MB):
        upperIndx = len(strSize500MB)
    lstCompressBz2.append(objBZ2Compressor.compress(strSize500MB[lowerIndx:upperIndx]))
#:for
lstCompressBz2.append(objBZ2Compressor.flush())
strSize500MBCompressed = ''.join(lstCompressBz2)
fObj.write(strSize500MBCompressed)
fObj.close()

:-) so I suppose that the decompression problems can also be solved that way, but: this still doesn't answer for me the question of what the core of the problem was, how to avoid it, and what memory request limits should be considered when working with large strings? 
Is it actually so, that on other systems than Windows 2000/XP there is no problem with the original code I have provided? Maybe a good reason to go for Linux instead of Windows? Does e.g. Suse or Mandriva Linux have also a memory limit a single Python process can use? Please let me know about your experience. Claudio -- http://mail.python.org/mailman/listinfo/python-list
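For the decompression side the post above wonders about, bz2's incremental decompressor can be fed chunks the same way. A small-scale editorial sketch follows (current Python, byte strings, and a deliberately small test string standing in for the 500 MByte one):

```python
import bz2

CHUNK = 1048576  # 1 MByte, as in the compression loop above

def compress_chunked(data):
    comp = bz2.BZ2Compressor()
    parts = [comp.compress(data[i:i + CHUNK])
             for i in range(0, len(data), CHUNK)]
    parts.append(comp.flush())   # the compressor buffers; flush the tail
    return b''.join(parts)

def decompress_chunked(blob):
    decomp = bz2.BZ2Decompressor()
    parts = [decomp.decompress(blob[i:i + CHUNK])
             for i in range(0, len(blob), CHUNK)]
    return b''.join(parts)       # BZ2Decompressor needs no flush()

data = b'1234567890' * 500000    # 5 MByte stand-in for the 500 MByte string
blob = compress_chunked(data)
print(len(blob) < len(data), decompress_chunked(blob) == data)
```

As with compression, a real program would write each decompressed chunk to a file as it is produced instead of joining them, so that no single giant string ever has to be allocated.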
Re: General question about Python design goals
[Since part of my post seems to have gotten lost in this thread, I figured I would repeat it] In article [EMAIL PROTECTED], Aahz [EMAIL PROTECTED] wrote: In article [EMAIL PROTECTED], Christoph Zwerschke [EMAIL PROTECTED] wrote: Or, another example, the index() method has start and end parameters for lists and strings. The count() method also has start and end parameters for strings. But it has no such parameters for lists. Why? That's a fair cop. Submit a patch and it'll probably get accepted. This is one of those little things that happens in language evolution; not everything gets done right the first time. But Python is developed by volunteers: if you want this fixed, the first step is to submit a bug report on SF (or go ahead and submit a patch if you have the expertise). (I'm quite comfortable channeling Guido and other developers in saying a patch will get accepted.) -- Aahz ([EMAIL PROTECTED]) * http://www.pythoncraft.com/ Don't listen to schmucks on USENET when making legal decisions. Hire yourself a competent schmuck. --USENET schmuck (aka Robert Kern) -- http://mail.python.org/mailman/listinfo/python-list
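The asymmetry under discussion is easy to demonstrate; for what it's worth, list.count still takes no start/end arguments in current Python, while list.index does:

```python
s = 'abcabc'
assert s.index('b') == 1
assert s.index('b', 2) == 4      # str.index takes start/end
assert s.count('a', 1) == 1      # str.count takes start/end too

lst = list(s)
assert lst.index('b') == 1
assert lst.index('b', 2) == 4    # list.index also takes start/end
assert lst.count('a') == 2       # ...but list.count does not:
try:
    lst.count('a', 1)
except TypeError as exc:
    print('list.count:', exc)
```

So index() is consistent across the two types while count() is not, which is exactly the kind of patch-sized inconsistency the post describes.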
Re: Newbie: Python Serial Port question
On 2005-11-30, Sanjay Arora [EMAIL PROTECTED] wrote: Am a python newbie. Does python have a native way to communicate with a PC serial port? Yes. Under Unix you can use os.open() os.read() os.write() and the termios module. I found that pyserial needs java. No it doesn't. I am using Linux..CentOS 4.2, to be exact, no java installed and zilch/no/none/maybe never experience of java too. You don't need Java. I have an application where I want to read logs from the serial port of an EPABX and log them to a postgreSQL database. What's the best way to do this using python? Use pyserial and a database interface module. Does anyone know of an existing software/python library/module for this? Sorry. -- Grant Edwards grante Yow! I'm ANN LANDERS!! I at can SHOPLIFT!! visi.com -- http://mail.python.org/mailman/listinfo/python-list
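A hedged sketch of the "native" Unix route mentioned above (os.open plus termios); the device path and 8N1 settings are assumptions for illustration, and pyserial wraps these same calls more portably:

```python
import os
import termios

def open_serial(device='/dev/ttyS0', baud=termios.B9600):
    """Open a serial device raw at 8N1 using only the standard library.
    Sketch only: the device path and settings are assumptions."""
    fd = os.open(device, os.O_RDWR | os.O_NOCTTY)
    iflag, oflag, cflag, lflag, ispeed, ospeed, cc = termios.tcgetattr(fd)
    cflag = (cflag & ~termios.CSIZE & ~termios.PARENB) | termios.CS8  # 8N1
    lflag &= ~(termios.ICANON | termios.ECHO | termios.ISIG)          # raw mode
    termios.tcsetattr(fd, termios.TCSANOW,
                      [iflag, oflag, cflag, lflag, baud, baud, cc])
    return fd

# Reading the EPABX log would then be a loop around os.read(fd, 1024),
# inserting each decoded line through a PostgreSQL interface module.
```

pyserial reduces all of the above to a single constructor call, which is why it is the usual recommendation.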
Re: Newbie: Python Serial Port question
On 2005-12-01, Kinsley Turner [EMAIL PROTECTED] wrote: Am a python newbie. Does python have a native way to communicate with a PC serial port? I found that pyserial needs java. You can't just open the serial port like a file? Yes. Perhaps you'd need to set the correct port parameters with some other app, Just use termios. Or better yet, pyserial. but it should be do-able. -- Grant Edwards   grante at visi.com   Yow! Intra-mural sports results are filtering through th' plumbing... -- http://mail.python.org/mailman/listinfo/python-list
Re: [[x,f(x)] for x in list that maximizes f(x)] --newbie help
Niels L Ellegaard wrote: I just started learning python and I have been wondering. Is there a short pythonic way to find the element, x, of a list, mylist, that maximizes an expression f(x). In other words I am looking for a short version of the following:

    pair = [mylist[0], f(mylist[0])]
    for x in mylist[1:]:
        if f(x) > pair[1]:
            pair = [x, f(x)]

this is already very short, what else do you want? Maybe this:

    max((f(x), x) for x in mylist)

That is, first generate the (f(x), x) pairs, then find the max one (first compare f(x), then x) -- http://mail.python.org/mailman/listinfo/python-list
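A small runnable illustration of that one-liner (the list and f here are invented for the example):

```python
mylist = [1, 2, 3, 4]
f = lambda x: -(x - 3) ** 2      # peaks at x == 3

# build (f(x), x) pairs and take the max; tuples compare element-wise,
# so the pair with the largest f(x) wins
best_value, best_x = max((f(x), x) for x in mylist)
assert (best_value, best_x) == (0, 3)
```

Note that on a tie in f(x) the comparison falls through to x itself, which is exactly the caveat addressed later in this thread.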
Re: General question about Python design goals
Fredrik Lundh wrote: Christoph Zwerschke wrote: Using linear arrays to represent chess boards is pretty common in computer chess. Often, the array is made larger than 64 elements to make sure moves do not go off the board but hit unbeatable pseudo pieces standing around the borders. But in principle, linear arrays of that kind are used, and for good reasons. really? a quick literature search only found clever stuff like bitboards, pregenerated move tables, incremental hash algorithms, etc. the kind of stuff you'd expect from a problem domain like chess. I don't know where you googled, but my sources do not say that bitboards are the *only* possible or reasonable representation: http://chess.verhelst.org/1997/03/10/representations/ http://en.wikipedia.org/wiki/Computer_chess#Board_representations http://www.aihorizon.com/essays/chessai/boardrep.htm http://www.oellermann.com/cftchess/notes/boardrep.html Many programs still use the array representation. For example: http://www.nothingisreal.com/cheops/ http://groups.msn.com/RudolfPosch/technicalprogamdescription1.msnw Even GNU Chess did not use bitboards before version 5. Here is an example in Python: http://www.kolumbus.fi/jyrki.alakuijala/pychess.html I did not say that there aren't more sophisticated and elaborate board representations than linear or two-dimensional arrays. But they are the simplest and most immediate and intuitive solution, and they have indeed been used for a long time in the 8-bit era. Bitboards may be more performant, particularly if you are directly programming in assembler or C on a 64-bit machine, but not necessarily in Python. But they are also more difficult to handle. Which representation to use also depends on the algorithms you are using. You wouldn't write a performant chess engine in Python anyway.
But assume you want to test a particular chess tree pruning algorithm (that does not depend on board representation) and write a prototype for that in Python, later making a performant implementation in assembler. You would not care so much about the effectiveness of your board representation in the prototype, but rather about how easily it can be handled. I think it is telling that you have to resort to a debate about bitboards vs. arrays in order to dismiss my simple use case for index() and count() as unreal. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
Re: [[x,f(x)] for x in list that maximizes f(x)] --newbie help
wrote: In other words I am looking for a short version of the following:

    pair = [mylist[0], f(mylist[0])]
    for x in mylist[1:]:
        if f(x) > pair[1]:
            pair = [x, f(x)]

this is already very short, what else do you want? Maybe this:

    max((f(x), x) for x in mylist)

That is, first generate the (f(x), x) pairs, then find the max one (first compare f(x), then x) It might be better to do:

    max((f(x), i, x) for i, x in enumerate(mylist))[2]

as that will handle the case where x is not comparable but f(x) is. e.g.

    >>> mylist = (3j, 5j+2, 1j)
    >>> max((abs(x), i, x) for i, x in enumerate(mylist))[2]
    (2+5j)

-- http://mail.python.org/mailman/listinfo/python-list
Re: sax.make_parser() segfaults
Bernhard Herzog wrote: Frank Millman [EMAIL PROTECTED] writes: If I call sax.make_parser() from the interpreter or from a stand-alone program, it works fine on all machines, but in the following setup it works correctly on MSW, but segfaults on both FC4 and RH9. [...] Progress report - I have narrowed it down to wxPython. I wrote small stand-alone programs, one using Twisted, one using wxPython. Twisted works fine, wxPython segfaults. Could this be the following python bug: https://sourceforge.net/tracker/index.php?func=detail&aid=1075984&group_id=5470&atid=105470 Bernhard -- The symptoms certainly look the same - thanks for this link. It also fits in with my workaround, as someone commented 'if you import pyexpat first, it looks fine', which is effectively what I am doing. Many thanks for the response. Frank -- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Dao Language v.0.9.6-beta is release!
Maybe trying Python requires less time :-) Yes. Maybe if I had tried python instead of perl, there would probably be no Dao language :). However there is one thing I don't like in python, that is, scoping by indentation. But it would not annoy me so much as to make me decide to implement a new language^_^. Regards, Limin -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Donn Cave [EMAIL PROTECTED] wrote in news:[EMAIL PROTECTED]: [...] Tuples and lists really are intended to serve two fundamentally different purposes. We might guess that just from the fact that both are included in Python, in fact we hear it from Guido van Rossum, and one might add that other languages also make this distinction (more clearly than Python.) As I'm sure everyone still reading has already heard, the natural usage of a tuple is as a heterogeneous sequence. I would like to explain this using the concept of an application type, by which I mean the set of values that would be valid when applied to a particular context. For example, os.spawnv() takes as one of its arguments a list of command arguments, time.mktime() takes a tuple of time values. A homogeneous sequence is one where a and a[x:y] (where x:y is not 0:-1) have the same application type. A list of command arguments is clearly homogeneous in this sense - any sequence of strings is a valid input, so any slice of this sequence must also be valid. (Valid in the type sense, obviously the value and thus the result must change.) A tuple of time values, though, must have exactly 9 elements, so it's heterogeneous in this sense, even though all the values are integer. One doesn't count elements in this kind of a tuple, because it's presumed to have a natural predefined number of elements. One doesn't search for values in this kind of a tuple, because the occurrence of a value has meaning only in conjunction with its location, e.g., t[4] is how many minutes past the hour, but t[5] is how many seconds, etc. I have to confess that this wasn't obvious to me, either, at first, and in fact probably about half of my extant code is burdened with the idea that a tuple is a smart way to economize on the overhead of a list. Somewhere along the line, I guess about 5 years ago? maybe from reading about it here, I saw the light on this, and since then my code has gotten easier to read and more robust.
Lists really are better for all the kinds of things that lists are for -- just for example, [1] reads a lot better than (1,) -- and the savings on overhead is not worth the cost to exploit it. My tendency to seize on this foolish optimization is however pretty natural, as is the human tendency to try to make two similar things interchangeable. So we're happy to see that tuple does not have the features it doesn't need, because it helps in a small way to make Python code better. If only by giving us a chance to have this little chat once in a while. Donn, this is a reasonable argument, and in general I don't have a problem with the distinction between tuples and lists. I have heard and understand the argument that the intended purpose of tuple creation is to mimic C structs, so it seems reasonable to suppose that one knows what was placed in them. Lists are dynamic by nature, so you need a little more help getting information about their current state. However, there is at least one area where this distinction is bogus. Lists cannot be used as dictionary keys (as it now stands). But in practice, it is often useful to create a list of values, cast the list to a tuple, and use that as a dictionary key. It makes little sense to keep a list of that same information around, so in practice, the tuple/key is the container that retains the original information. But that tuple was dynamically created, and it isn't always true that items were placed in it deliberately. In other words, the fact that the key is now a tuple is unrelated to the essential nature of tuples. Not all of the tools used in examining lists are available to the key as a tuple, though it is really nothing more than a frozen list. Sure, you can cast it to a list to use the list methods, but that requires creating objects just to throw away, which seems a little wasteful, especially since that's what you had to do to create the key to begin with. 
I'm sure Antoon wouldn't object if lists were to be allowed as dictionary keys, which would eliminate the multiple castings for that situation. I wouldn't, either. I'd extend this a little to say that tuples are (at least potentially) created dynamically quite often in other contexts as well, so that despite their designed intent, in practice they are used a little differently a good bit of the time. So why not adjust the available features to the practice? -- rzed -- http://mail.python.org/mailman/listinfo/python-list
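The cast-to-tuple-for-a-key pattern described above, as a minimal runnable sketch (the names and values here are invented for illustration):

```python
# a list built dynamically, then frozen so it can serve as a dict key
features = ['red', 'small', 'round']
key = tuple(features)

cache = {key: 'apple'}
assert cache[('red', 'small', 'round')] == 'apple'

# list-style inspection of the key requires casting back to a list,
# creating a throwaway object each time -- the cost the post complains about
assert list(key).count('red') == 1
assert list(key).index('small') == 1
```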
Re: [[x,f(x)] for x in list that maximizes f(x)] --newbie help
Duncan Booth wrote: wrote: In other words I am looking for a short version of the following:

    pair = [mylist[0], f(mylist[0])]
    for x in mylist[1:]:
        if f(x) > pair[1]:
            pair = [x, f(x)]

this is already very short, what else do you want? Maybe this:

    max((f(x), x) for x in mylist)

That is, first generate the (f(x), x) pairs, then find the max one (first compare f(x), then x) It might be better to do:

    max((f(x), i, x) for i, x in enumerate(mylist))[2]

as that will handle the case where x is not comparable but f(x) is. e.g.

    >>> mylist = (3j, 5j+2, 1j)
    >>> max((abs(x), i, x) for i, x in enumerate(mylist))[2]
    (2+5j)

thanks. I don't know what max can or cannot compare. -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Did you consider the mmap library? Perhaps it is possible to avoid holding these big strings in memory. BTW: AFAIK it is not possible in 32bit windows for an ordinary program to allocate more than 2 GB. That restriction comes from the jurassic MIPS processors, that reserved the upper 2 GB for the OS. HTH, Gerald Claudio Grondi schrieb: Fredrik Lundh [EMAIL PROTECTED] schrieb im Newsbeitrag news:[EMAIL PROTECTED] Claudio Grondi wrote: What started as a simple test if it is better to load uncompressed data directly from the harddisk or load compressed data and uncompress it (Windows XP SP 2, Pentium4 3.0 GHz system with 3 GByte RAM) seems to show that none of the compression libraries available in Python really works for large sized (i.e. 500 MByte) strings. Test the provided code and see yourself. At least on my system: zlib fails to decompress raising a memory error pylzma fails to decompress running endlessly consuming 99% of CPU time bz2 fails to compress running endlessly consuming 99% of CPU time The same works with a 10 MByte string without any problem. So what? Is there no compression support for large sized strings in Python? you're probably measuring windows' memory management rather than the compression libraries themselves (Python delegates all memory allocations > 256 bytes to the system). I suggest using incremental (streaming) processing instead; from what I can tell, all three libraries support that.
/F Have solved the problem with bz2 compression the way Fredrik suggested:

    import bz2
    fObj = file(r'd:\strSize500MBCompressed.bz2', 'wb')
    objBZ2Compressor = bz2.BZ2Compressor()
    lstCompressBz2 = []
    for indx in range(0, len(strSize500MB), 1048576):
        lowerIndx = indx
        upperIndx = indx + 1048576
        if upperIndx > len(strSize500MB):
            upperIndx = len(strSize500MB)
        lstCompressBz2.append(objBZ2Compressor.compress(strSize500MB[lowerIndx:upperIndx]))
    #:for
    lstCompressBz2.append(objBZ2Compressor.flush())
    strSize500MBCompressed = ''.join(lstCompressBz2)
    fObj.write(strSize500MBCompressed)
    fObj.close()

:-) so I suppose that the decompression problems can also be solved that way, but: This still doesn't answer for me the question what the core of the problem was, how to avoid it, and what memory request limits should be considered when working with large strings. Is it actually so, that on other systems than Windows 2000/XP there is no problem with the original code I have provided? Maybe a good reason to go for Linux instead of Windows? Does e.g. Suse or Mandriva Linux also have a memory limit a single Python process can use? Please let me know about your experience. Claudio -- http://mail.python.org/mailman/listinfo/python-list
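The decompression side can indeed be handled the same way, with bz2.BZ2Decompressor fed in chunks. A small self-contained sketch (a few MB of data here stands in for the 500 MB string of the original test; bytes literals are used so it also runs on Python 3):

```python
import bz2

data = b'm' * (4 * 1048576)          # stand-in for the 500 MB string

# incremental compression, as in the posted code
comp = bz2.BZ2Compressor()
chunks = []
for i in range(0, len(data), 1048576):
    chunks.append(comp.compress(data[i:i + 1048576]))
chunks.append(comp.flush())
compressed = b''.join(chunks)

# incremental decompression: feed the compressed stream in small pieces,
# never materializing more than one chunk of input at a time
decomp = bz2.BZ2Decompressor()
out = []
for i in range(0, len(compressed), 65536):
    out.append(decomp.decompress(compressed[i:i + 65536]))
assert b''.join(out) == data
```

For truly huge data the output chunks would be written to a file as they arrive instead of joined in memory, which is the point of the streaming API.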
Re: General question about Python design goals
Christoph Zwerschke wrote: I think it is telling that you have to resort to a debate about bitboards vs. arrays in order to dismiss my simple use case for index() and count() as unreal. kook. /F -- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Louie-1.0b2 - Signal dispatching mechanism
[EMAIL PROTECTED] writes: Louie 1.0b2 is available: http://cheeseshop.python.org/pypi/Louie Louie provides Python programmers with a straightforward way to dispatch signals between objects in a wide variety of contexts. It is based on PyDispatcher_, which in turn was based on a highly-rated recipe_ in the Python Cookbook. .. _PyDispatcher: http://pydispatcher.sf.net/ .. _recipe: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/87056 For more information, visit the Louie project site: http://louie.berlios.de/ Patrick K. O'Brien and contributors [EMAIL PROTECTED] What is the difference between PyDispatcher and Louie? (I'm still using a hacked version of the original cookbook recipe...) Thanks, Thomas -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Gerald Klix a écrit : Did you consider the mmap library? Perhaps it is possible to avoid holding these big strings in memory. BTW: AFAIK it is not possible in 32bit windows for an ordinary program to allocate more than 2 GB. That restriction comes from the jurassic MIPS processors, that reserved the upper 2 GB for the OS. As a matter of fact, it's Windows which reserved the upper 2 GB. There is a simple setting to change that value so that you have 3 GB available and another setting which can even go as far as 3.5 GB available per process. -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Aahz wrote: This is one of those little things that happens in language evolution; not everything gets done right the first time. But Python is developed by volunteers: if you want this fixed, the first step is to submit a bug report on SF (or go ahead and submit a patch if you have the expertise). (I'm quite comfortable channeling Guido and other developers in saying a patch will get accepted.) Ok, I submitted it as feature request #1370948. I currently don't have the time and expertise to submit a patch. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Rick Wotnaz wrote: I'm sure Antoon wouldn't object if lists were to be allowed as dictionary keys, which would eliminate the multiple castings for that situation. I wouldn't, either. so what algorithm do you suggest for the new dictionary implementation? /F -- http://mail.python.org/mailman/listinfo/python-list
RE: Making immutable instances
Delaney, Timothy (Tim) [EMAIL PROTECTED] wrote in news:[EMAIL PROTECTED]: Was it *really* necessary to send 4 separate emails to reply to four sections of the same email? Good netiquette is to intersperse your comments with quoted sections in a single email. Tim Delaney Good netiquette might also suggest quoting what you're replying to, wouldn't you think? -- rzed -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Fredrik Lundh [EMAIL PROTECTED] wrote in news:[EMAIL PROTECTED]: Rick Wotnaz wrote: I'm sure Antoon wouldn't object if lists were to be allowed as dictionary keys, which would eliminate the multiple castings for that situation. I wouldn't, either. so what algorithm do you suggest for the new dictionary implementation? Beats the heck outta me. I seem to remember that Antoon supplied one awhile ago (for allowing lists to serve as dictionary keys, that is). That's why I mentioned him in the first place. I didn't pay much attention to it at the time, and I imagine it would raise some issues, like everything does. I'm fairly indifferent to the idea in any case; I'm just saying I wouldn't object if lists could function as dictionary keys. It would save that casting step I was talking about. -- rzed -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Steve Holden [EMAIL PROTECTED] wrote: ... Presumably because it's necessary to extract the individual values (though os.stat results recently became addressable by attribute name as well as by index, and this is an indication of the originally intended purpose of tuples). Yep -- time tuples have also become pseudo-tuples (each element can be accessed by name as well as by index) a while ago, and I believe there's one more example besides stats and times (but I can't recall which one). Perhaps, if the tuple type _in general_ allowed naming the items in a smooth way, that might help users see a tuple as a kind of ``struct''... which also happens to be immutable. There are a few such supertuples (with item-naming) in the cookbook, but I wonder if it might not be worth having such functionality in the standard library (for this clarification as well as, sometimes, helping the readability of some user code). Alex -- http://mail.python.org/mailman/listinfo/python-list
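The item-naming Alex describes did later land in the standard library as collections.namedtuple (Python 2.6, after this thread); a sketch of the "supertuple" idea using it:

```python
from collections import namedtuple

# a time-tuple-like struct whose fields are reachable by name and by index
TimeTuple = namedtuple('TimeTuple', 'year month day hour minute second')
t = TimeTuple(2005, 12, 1, 9, 30, 0)

assert t.minute == 30        # by name, like a struct field
assert t[4] == 30            # still an ordinary tuple by index
assert isinstance(t, tuple)  # and still immutable
```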
DCOracle2/python/zope
Hi, I have problems getting ZOracleDA to run in Zope - I cannot find a solution either in the Zope groups or by searching Google, and I think it's a python problem. The connection in python - without zope - works without problems and the missing library (see errors later) is obviously found. Is it possible to set a library path for python? I tested changing ld.so.conf and running ldconfig, or setting the LD_LIBRARY_PATH environment var, but without success. I am running SuSE 9.3/Python 2.4/DCOracle2 1_3b/PDO 1.3.1 I get the following error starting the zope server:

2005-12-01T16:33:08 ERROR(200) Zope Couldn't install ZOracleDA
Traceback (most recent call last):
  File /opt/zope/lib/python/OFS/Application.py, line 723, in install_product
    global_dict, global_dict, silly)
  File /opt/zope/lib/python/Products/ZOracleDA/__init__.py, line 91, in ?
    import DA
  File /opt/zope/lib/python/Products/ZOracleDA/DA.py, line 90, in ?
    from db import DB
  File /opt/zope/lib/python/Products/ZOracleDA/db.py, line 89, in ?
    import DCOracle2, DateTime
  File /opt/zope/lib/python/Products/ZOracleDA/DCOracle2/__init__.py, line 37, in ?
    from DCOracle2 import *
  File /opt/zope/lib/python/Products/ZOracleDA/DCOracle2/DCOracle2.py, line 104, in ?
    import dco2
ImportError: libclntsh.so.10.1: cannot open shared object file: No such file or directory

-- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
Christophe wrote: Did you consider the mmap library? Perhaps it is possible to avoid holding these big strings in memory. BTW: AFAIK it is not possible in 32bit windows for an ordinary program to allocate more than 2 GB. That restriction comes from the jurassic MIPS processors, that reserved the upper 2 GB for the OS. As a matter of fact, it's Windows which reserved the upper 2 GB. There is a simple setting to change that value so that you have 3 GB available and another setting which can even go as far as 3.5 GB available per process. random raymond chen link: http://blogs.msdn.com/oldnewthing/archive/2004/08/05/208908.aspx /F -- http://mail.python.org/mailman/listinfo/python-list
Re: Is there no compression support for large sized strings in Python?
I was also able to create a 1GB string on a different system (Linux 2.4.x, 32-bit Dual Intel Xeon, 8GB RAM, python 2.2).

$ python -c 'print len("m" * 1024*1024*1024)'
1073741824

I agree with another poster that you may be hitting Windows limitations rather than Python ones, but I am certainly not familiar with the details of Windows memory allocation. Jeff -- Here my experience with hunting after the memory limit exactly the way Jeff did it (Windows 2000, Intel Pentium4, 3GB RAM, Python 2.4.2):

>python -c "print len('m' * 1024*1024*1024)"
1073741824
>python -c "print len('m' * 1136*1024*1024)"
1191182336
>python -c "print len('m' * 1236*1024*1024)"
Traceback (most recent call last):
  File "<string>", line 1, in ?
MemoryError

Anyone on a big Linux machine able to do e.g.:

python -c "print len('m' * 2500*1024*1024)"

or even more without a memory error? I suppose, that on the Dual Intel Xeon, even with 8 GByte RAM the upper limit for available memory will be not larger than 4 GByte. Can someone point me to an Intel compatible PC which is able to provide more than 4 GByte RAM to Python? Claudio -- http://mail.python.org/mailman/listinfo/python-list
SCU3 Python package 0.2 released: wxPython support added
Dear all, I have released SCU3Python.u3p (www.snakecard.com/src) . It includes: - SCU3 0.1: a python wrapper for the U3 SDK (www.u3.com) - Python 2.4.2 - wxPython 2.6 (ansi) - A hello-World-style basic wxPython application which shows how to retrieve the U3 information (ex: current virtual disks) necessary for U3 compliance. Best regards, Philippe -- http://mail.python.org/mailman/listinfo/python-list
ANN: SPE 0.8.0.b Python IDE (new: Mac support, doc viewer and workspaces)
:**What's new?**: SPE 'Kay release' 0.8.0.b This release is a major step forward for all platforms, especially for MacOS X. It offers you basic project management through workspaces (thanks to Thurston Stone), an improved sidebar and pydoc viewer. This is the first release which is also developed and tested on the Mac. If SPE is stable enough, I'll try to build SPE as an application for the Mac. This version aims at stability, so please help to report and fix all bugs! You can get involved on the SPE dev mailing list: https://developer.berlios.de/mail/?group_id=4161 The advertisements on SPE's websites generate a moderate, but necessary income. From now on if you donate, you can ask me by email for an ad-free, nice pdf version of the manual. :**About SPE**: SPE is a python IDE with auto-indentation, auto completion, call tips, syntax coloring, uml viewer, syntax highlighting, class explorer, source index, auto todo list, sticky notes, integrated pycrust shell, python file browser, recent file browser, drag&drop, context help, ... Special is its blender support with a blender 3d object browser and its ability to run interactively inside blender. Spe ships with wxGlade (gui designer), PyChecker (source code doctor) and Kiki (regular expression console). :**New features**: - stability - workspaces to manage your projects - documentation viewer (pydoc) - smart indentation: dedent after break, return, etc...
- realtime updating of sidebar: explore, todo & index - check source realtime for syntax errors (optional) - backup files (.py.bak) are now optional :**Improved features**: - find in files :**Bug fixes**: - Workspace bugs (closing files) - Installer (Windows) - all these mac bugs are fixed: + Find dialog focus + Sticky notes + Find in files tab + Toolbar keyboard shortcut display + Kiki: regex console + SPE crashes on close + preferences dialog: combobox bug + debug dialog: combobox bug + wxGlade won't launch + winPdb won't launch + 2x window menu bug + context menu bug + open & run in terminal emulator + pythonw debug message + spe script bug + exception when closing file + PyChecker :**Installation**: See http://pythonide.stani.be/manual/html/manual2.html There is now an update section for MacOS X! :**Websites**: - homepage: http://pythonide.stani.be - manual: http://pythonide.stani.be/manual/html/manual.html - news blog: http://pythonide.stani.be/blog :**Contributors**: - Thurston Stone (workspaces) - Glenn Washburn (patches) - Henning Hraban Ramm (Mac support) - Kay Fricke (Mac support) - Stefan Csomor (wxMac support) - and many more... :**Donations**: Special thanks to Richard Brown of Dartware for starting the fund raising campaign of the mini-mac for SPE. A lot of the donations below are done specifically for this purpose. Impressive that the smallest user base of SPE generously generated the largest donations ever. Many thanks. Maybe it's time now for Windows and Linux users to catch up!
- Dartware (250 euro) - Rick Thomas (161 euro) - Brian Myers (100 euro) - Advanced Concepts AG (50 euro) - Ernesto Costa (50 euro) - Henning Hraban Ramm (50 euro) - Peter Koppatz (50 euro) - Kevin Murphy (50 euro) - Stefano Morandi (50 euro) - Kenneth McDonald (40 euro) - Barrett Smith (25 euro) - Irakli Nadareishvili (20 euro) - Jonathan Roadley-Battin (20 euro) - Brendan Simons (15 euro) - Edwin Ellis (15 euro) - Ian Ozsvald (15 euro) - Michael Smith (15 euro) - Eduardo Gonzalez (10 euro) - Phil Hargett (10 euro) - David Diskin (5 euro) Thanks for this generous support! Stani -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Fredrik Lundh wrote: Christoph Zwerschke wrote: I think it is telling that you have to resort to a debate about bitboards vs. arrays in order to dismiss my simple use case for index() and count() as unreal. kook. Thanks. Just to clarify: I was referring to the weakness of the argument tuple.count() is evil when I wrote telling. It was not meant to be derogatory towards you in any way - if you understood it that way I apologize. Maybe I write in a way that can be easily misunderstood, but I would not deliberately libel others, least of all if it is only about a petty issue of a programming language. -- Christoph -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Rick Wotnaz wrote: Rick Wotnaz wrote: I'm sure Antoon wouldn't object if lists were to be allowed as dictionary keys, which would eliminate the multiple castings for that situation. I wouldn't, either. so what algorithm do you suggest for the new dictionary im- plementation? Beats the heck outta me. I seem to remember that Antoon supplied one awhile ago (for allowing lists to serve as dictionary keys, that is). anyone has a pointer? all I can remember is that people have posted various if the key is mutated, I don't care if the value can no longer be found proposals (besides the usual SEP variants, of course), but Antoon's endless stream of I claim that it be argued that posts makes it easy to miss things... /F -- http://mail.python.org/mailman/listinfo/python-list
Re: [[x,f(x)] for x in list that maximizes f(x)] --newbie help
[EMAIL PROTECTED] wrote: ... thanks. I don't know what max can or cannot compare. Just the same things that you can compare with, say, the < operator. I believe in 2.5 max and min will also accept a key= argument (like sorted etc) to tweak what to compare, so max(thelist, key=f) should then work (but in 2.4 you do need to more explicitly use some kind of decorate-sort-undecorate idiom, as explained in previous posts). Alex -- http://mail.python.org/mailman/listinfo/python-list
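The key= form did arrive in Python 2.5 as Alex anticipated; side by side with the 2.4-era decorate idiom from earlier in the thread (the sample list is the one used there):

```python
mylist = [3j, 2 + 5j, 1j]    # complex numbers: not comparable themselves

# 2.5-and-later style: compare by abs(x) directly
assert max(mylist, key=abs) == 2 + 5j

# 2.4-era decorate idiom, with the index as tie-breaker so the complex
# values themselves are never compared
assert max((abs(x), i, x) for i, x in enumerate(mylist))[2] == 2 + 5j
```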
Re: After migrating from debian to ubuntu, tkinter hello world doesn't work
yes, but everybody using ubuntu tells me it works fine for them. The problem must be something very specific to my laptop and X window. I am using 855resolution, and I'd like to know if it works for somebody else with ubuntu and 855resolution. thanks for your interest -- http://mail.python.org/mailman/listinfo/python-list
Re: General question about Python design goals
Alex Martelli wrote: Steve Holden [EMAIL PROTECTED] wrote: ... Presumably because it's necessary to extract the individual values (though os.stat results recently became addressable by attribute name as well as by index, and this is an indication of the originally intended purpose of tuples). Yep -- time tuples have also become pseudo-tuples (each element can be accessed by name as well as by index) a while ago, and I believe there's one more example besides stats and times (but I can't recall which one). Perhaps, if the tuple type _in general_ allowed naming the items in a smooth way, that might help users see a tuple as a kind of ``struct''... which also happens to be immutable. There are a few such supertuples (with item-naming) in the cookbook, but I wonder if it might not be worth having such functionality in the standard library (for this clarification as well as, sometimes, helping the readability of some user code). iirc, providing a python-level API to the SequenceStruct stuff has been proposed before, and rejected. (fwiw, I'm not sure the time and stat tuples would have been tuples if the standard library had been designed today; the C-level stat struct doesn't have a fixed number of members, and the C-level time API would have been better off as a lightweight time type (similar to sockets, stdio-based files, and other C-wrapper types)) /F -- http://mail.python.org/mailman/listinfo/python-list
Re: After migrating from debian to ubuntu, tkinter hello world doesn't work
thank you very much, but now I don't think it is a problem of dependencies -- http://mail.python.org/mailman/listinfo/python-list
Re: [[x,f(x)] for x in list that maximizes f(x)] --newbie help
Alex Martelli wrote: [EMAIL PROTECTED] wrote: ... thanks. I don't know what max can or cannot compare. Just the same things that you can compare with, say, the < operator. I believe in 2.5 max and min will also accept a key= argument (like sorted etc) to tweak what to compare, so max(thelist, key=f) should then work (but in 2.4 you do need to more explicitly use some kind of decorate-sort-undecorate idiom, as explained in previous posts). Thanks. In that case, would it be easier to understand (besides the original iterative loop) if I use reduce and lambda?

    reduce(lambda (mv, mx), (v, x): mv > v and (mv, mx) or (v, x),
           ((f(x), x) for x in mylist))

As while DSU is a very smart way to guard the max compare thing, it is still being introduced as a way that is not related to the original problem, i.e. I just want to compare f(x) -- http://mail.python.org/mailman/listinfo/python-list
Re: Problem cmpiling M2Crypto
Thomas G. Apostolou wrote: I still get the error: SWIG/_m2crypto.c(80) : fatal error C1083: Cannot open include file: 'Python.h': No such file or directory error: command 'C:\Program Files\Microsoft Visual Studio\VC98\BIN\cl.exe' failed with exit status 2 which is - as Heikki Toivonen told - because of the lack of Python.h. I do not have this file anywhere on my PC. Could anyone suggest a package to install that contains Python.h and that would be of use for me later on? most standard Python installers come with this file (it's usually in ./include relative to the installation directory). I suggest installing a python.org release: http://www.python.org/ and using that to build the extensions. (make sure you get the same version as your Plone uses) /F -- http://mail.python.org/mailman/listinfo/python-list
Re: Automate webpage refresh
Hi! DarkBlue wrote: I am trying to write a script (python2.3) which will be used with linux konqueror to retrieve 2 webpages alternately every 2 minutes. My question is how can I send alternative args (the url) to the same invocation of konqueror which I started with This can be done using dcop, the KDE interprocess communication system. dcop can be used as a simple command line utility that sends messages to running applications. The easiest way to use it is to just play around with it on the command line. For the konqueror use case you would do something like:

    dcop konqueror-pid konqueror-mainwindow#1 openURL http://google.net

This command can easily be executed with os.system. Hope this gave some hints, Carl Friedrich Bolz -- http://mail.python.org/mailman/listinfo/python-list
Re: Death to tuples!
Antoon Pardon [EMAIL PROTECTED] writes: I know what happens, I would like to know why they made this choice. One could argue that the expression for the default argument belongs to the code for the function and thus should be executed at call time, not at definition time. Just as other expressions in the function are not evaluated at definition time. The idiom to get a default argument evaluated at call time with the current behavior is:

    def f(arg=None):
        if arg is None:
            arg = BuildArg()

What's the idiom to get a default argument evaluated at definition time if it were as you suggested? mike -- Mike Meyer [EMAIL PROTECTED] http://www.mired.org/home/mwm/ Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information. -- http://mail.python.org/mailman/listinfo/python-list
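A minimal demonstration of the trade-off under discussion: with the current definition-time rule, a mutable default is built once and shared across calls, and the call-time behavior has to be spelled out with the None idiom from the post (function names here are invented for the example):

```python
def append_to(item, bucket=[]):      # the [] is evaluated once, at def time
    bucket.append(item)
    return bucket

assert append_to(1) == [1]
assert append_to(2) == [1, 2]        # same list object reused across calls

# call-time evaluation must be requested explicitly, as in the post's idiom:
def append_to_fresh(item, bucket=None):
    if bucket is None:
        bucket = []                  # built anew on every call
    bucket.append(item)
    return bucket

assert append_to_fresh(1) == [1]
assert append_to_fresh(2) == [2]     # no state carried over
```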