Re: Development time vs. runtime performance (was: Fibonacci series recursion error)
Teemu Likonen tliko...@iki.fi writes: * 2011-05-08T12:59:02Z * Steven D'Aprano wrote: On Sun, 08 May 2011 01:44:13 -0400, Robert Brown wrote: Python requires me to rewrite the slow bits of my program in C to get good performance. Python doesn't require you to re-write anything in C. If you want to make a different trade-off (faster runtime, slower development time) then you use a language that has made the appropriate trade-off. I believe that Robert Brown wanted to imply that Common Lisp is quite optimal on both sides. It supports dynamic interactive development and yet it has implementations with very efficient compilers. The trade-off is not necessarily between the two. Yes, exactly. Sometimes I don't know in advance which parts of my Python program will run too slowly or take too much memory. I only find out after the code is written, when it fails to run fast enough or doesn't fit in my computer's 32-bit address space. Then the slow parts get recoded in C. Common Lisp supports a larger performance range. A developer can write slow code quickly or faster code by giving the compiler more information, which requires more developer time. But of course development time is a nicely vague concept. Depending on the argument it can include just the features of language and implementation. Other times it could include all the available resources such as documentation, library archives and community mailing lists. All these can reduce development time. This is definitely true. The total time to complete any task always depends on tons of factors outside of the programming language. The ecosystem is important ... How much code can I avoid writing myself? bob -- http://mail.python.org/mailman/listinfo/python-list
Re: Fibonacci series recursion error
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes: If you value runtime efficiency over development time, sure. There are plenty of languages which have made that decision: Pascal, C, Java, Lisp, Forth, and many more. I don't understand why you place Lisp and Forth in the same category as Pascal, C, and Java. Lisp and Forth generally have highly interactive development environments, while the other languages generally require an edit, compile, run it again debugging cycle. Lisp in particular does a better job than Python in optimizing developer time. I can change class definitions interactively without restarting my program. I can add type declarations to a single function and recompile it without restarting my program. Python requires me to rewrite the slow bits of my program in C to get good performance. Why is that an efficient use of developer time? bob -- http://mail.python.org/mailman/listinfo/python-list
Re: python simply not scaleable enough for google?
Vincent Manis vma...@telus.net writes: The false statement you made is that `... Python *the language* is specified in a way that makes executing Python programs quickly very very difficult. I refuted it by citing several systems that implement languages with semantics similar to those of Python, and do so efficiently. The semantic details matter. Please read Willem's reply to your post. It contains a long list of specific differences between Python (CPython) language semantics and Common Lisp language semantics that cause Python performance to suffer. OK, let me try this again. My assertion is that with some combination of JITting, reorganization of the Python runtime, and optional static declarations, Python can be made acceptably fast, which I define as program runtimes on the same order of magnitude as those of the same programs in C (Java and other languages have established a similar goal). I am not pushing optional declarations, as it's worth seeing what we can get out of JITting. If you wish to refute this assertion, citing behavior in CPython or another implementation is not enough. You have to show that the stated feature *cannot* be made to run in an acceptable time. It's hard to refute your assertion. You're claiming that some future hypothetical Python implementation will have excellent performance via a JIT. On top of that you say that you're willing to change the definition of the Python language, say by adding type declarations, if an implementation with a JIT doesn't pan out. If you change the Python language to address the semantic problems Willem lists in his post and also add optional type declarations, then Python becomes closer to Common Lisp, which we know can be executed efficiently, within the same ballpark as C and Java. bob -- http://mail.python.org/mailman/listinfo/python-list
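Willem's list of semantic differences is not quoted in this post, but one representative obstacle is easy to sketch in plain Python (the function names below are mine, for illustration only): any global, including a module attribute, can be rebound at runtime, so a compiler cannot statically resolve even an innocuous-looking call site.

```python
import math

def hypot_all(pairs):
    # A compiler cannot safely inline math.sqrt here: both the global
    # name "math" and the attribute "sqrt" may be rebound at any time,
    # by any module, so the lookup must happen on every call.
    return [math.sqrt(x * x + y * y) for x, y in pairs]

assert hypot_all([(3, 4)]) == [5.0]

# Some other module can later do this, silently changing the meaning
# of every existing call site:
math.sqrt = lambda v: 0.0
assert hypot_all([(3, 4)]) == [0.0]
```

This is the kind of behavior that a JIT must guard against (and invalidate compiled code for), and that optional declarations would rule out.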
Re: python simply not scaleable enough for google?
Vincent Manis vma...@telus.net writes: On 2009-11-11, at 14:31, Alain Ketterlin wrote: I'm having some trouble understanding this thread. My comments aren't directed at Terry's or Alain's comments, but at the thread overall. 1. The statement `Python is slow' doesn't make any sense to me. Python is a programming language; it is implementations that have speed or lack thereof. This is generally true, but Python *the language* is specified in a way that makes executing Python programs quickly very very difficult. I'm tempted to say it's impossible, but great strides have been made recently with JITs, so we'll see. 2. A skilled programmer could build an implementation that compiled Python code into Common Lisp or Scheme code, and then used a high-performance Common Lisp compiler such as SBCL, or a high-performance Scheme compiler such as Chez Scheme, to produce quite fast code ... A skilled programmer has done this for Common Lisp. The CLPython implementation converts Python source code to Common Lisp code at read time, which is then compiled. With SBCL you get native machine code for every Python expression. http://github.com/franzinc/cl-python/ http://common-lisp.net/project/clpython/ If you want to know why Python *the language* is slow, look at the Lisp code CLPython generates and at the code implementing the run time. Simple operations end up being very expensive. Does the object on the left side of a comparison implement compare? No, then does the right side implement it? No, then try something else. I'm sure someone can come up with a faster Python implementation, but it will have to be very clever. This whole approach would be a bad idea, because the compile times would be dreadful, but I use this example as an existence proof that Python implementations can generate reasonably efficient executable programs. The compile times are fine, not dreadful. Give it a try. 3. 
It is certainly true that CPython doesn't scale up to environments where there are a significant number of processors with shared memory. Even on one processor, CPython has problems. I last seriously used CPython to analyze OCRed books. The code read in the OCR results for one book at a time, which included the position of every word on every page. My books were long (2000 pages) and dense, and I was constantly fighting address space limitations and CPython slowness related to memory usage. I had to resort to packing and unpacking data into Python integers in order to fit all the OCR data into RAM. bob -- http://mail.python.org/mailman/listinfo/python-list
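The packing trick described above can be sketched like this (the field widths and layout here are hypothetical, not from the original code; the point is trading unpack time for the per-object overhead of tuples):

```python
# Pack a word's location (page, left, top, right, bottom) into a single
# Python integer instead of a five-element tuple of boxed ints.
COORD_BITS = 12                       # assumed width per coordinate
COORD_MASK = (1 << COORD_BITS) - 1

def pack(page, left, top, right, bottom):
    n = page
    for coord in (left, top, right, bottom):
        n = (n << COORD_BITS) | coord
    return n

def unpack(n):
    coords = []
    for _ in range(4):
        coords.append(n & COORD_MASK)
        n >>= COORD_BITS
    coords.reverse()                  # undo the reversal from shifting out
    return (n,) + tuple(coords)       # remaining high bits are the page

box = (1999, 100, 200, 300, 400)
assert unpack(pack(*box)) == box
```

One integer replaces a tuple plus five int objects, which is a large saving when multiplied by every word on every page of a 2000-page book.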
Re: python simply not scaleable enough for google?
Vincent Manis vma...@telus.net writes: My point in the earlier post about translating Python into Common Lisp or Scheme was essentially saying `look, there's more than 30 years experience building high-performance implementations of Lisp languages, and Python isn't really that different from Lisp, so we ought to be able to do it too'. Common Lisp and Scheme were designed by people who wanted to write complicated systems on machines with a tiny fraction of the horsepower of current workstations. They were carefully designed to be compiled efficiently, which is not the case with Python. There really is a difference here. Python the language has features that make fast implementations extremely difficult. bob -- http://mail.python.org/mailman/listinfo/python-list
Re: python simply not scaleable enough for google?
J Kenneth King ja...@agentultra.com writes: mcherm mch...@gmail.com writes: I think you have a fundamental misunderstanding of the reasons why Python is slow. Most of the slowness does NOT come from poor implementations: the CPython implementation is extremely well-optimized; the Jython and Iron Python implementations use best-in-the-world JIT runtimes. Most of the speed issues come from fundamental features of the LANGUAGE itself, mostly ways in which it is highly dynamic. -- Michael Chermside You might be right for the wrong reasons in a way. Python isn't slow because it's a dynamic language. All the lookups you're citing are highly optimized hash lookups. It executes really fast. Sorry, but Michael is right for the right reason. Python the *language* is slow because it's too dynamic. All those hash table lookups are unnecessary in some other dynamic languages and they slow down Python. A fast implementation is going to have to be very clever about memoizing method lookups and invalidating assumptions when methods are dynamically redefined. As an implementation though, the sky really is the limit and Python is only getting started. Yes, but Python is starting in the basement. bob -- http://mail.python.org/mailman/listinfo/python-list
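The cost of those repeated dynamic lookups, and the hand "memoization" a clever implementation would do automatically, shows up in a standard CPython micro-optimization (a minimal sketch):

```python
def slow(n):
    out = []
    for i in range(n):
        out.append(i)        # "append" is looked up on every iteration
    return out

def fast(n):
    out = []
    append = out.append      # hoist the bound-method lookup out of the loop
    for i in range(n):
        append(i)
    return out

assert slow(100) == fast(100) == list(range(100))
```

The two functions compute the same thing; `fast` is quicker in CPython only because it does the method lookup once. A compiler for a less dynamic language could perform this hoisting itself, which is the point being argued here.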
Re: python simply not scaleable enough for google?
Vincent Manis vma...@telus.net writes: On 2009-11-13, at 17:42, Robert Brown wrote, quoting me: ... Python *the language* is specified in a way that makes executing Python programs quickly very very difficult. That is untrue. I have mentioned before that optional declarations integrate well with dynamic languages. Apart from CL and Scheme, which I have mentioned several times, you might check out Strongtalk (typed Smalltalk), and Dylan, which was designed for high-performance compilation, though to my knowledge no Dylan compilers ever really achieved it. You are not making an argument, just mentioning random facts. You claim I've made a false statement, then talk about optional type declarations, which Python doesn't have. Then you mention Smalltalk and Dylan. What's your point? To prove me wrong you have to demonstrate that it's not very difficult to produce a high performance Python system, given current Python semantics. I'm tempted to say it's impossible, but great strides have been made recently with JITs, so we'll see. If you want to know why Python *the language* is slow, look at the Lisp code CLPython generates and at the code implementing the run time. Simple operations end up being very expensive. Does the object on the left side of a comparison implement compare? No, then does the right side implement it? No, then try something else. I've never looked at CLPython. Did it use a method cache (see Peter Deutsch's paper on Smalltalk performance in the unfortunately out-of-print `Smalltalk-80: Bits of History, Words of Advice')? That technique is 30 years old now. Please look at CLPython. The complexity of some Python operations will make you weep. CLPython uses Common Lisp's CLOS method dispatch in various places, so yes, those method lookups are definitely cached. Method lookup is just the tip of the iceberg. How about comparison? Here are some comments from CLPython's implementation of compare. There's a lot going on. It's complex and SLOW. 
;; This function is used in comparisons like <, <=, ==.
;;
;; The CPython logic is a bit complicated; hopefully the following
;; is a correct translation.
;; If the class is equal and it defines __cmp__, use that.
;; The rich comparison operations __lt__, __eq__, __gt__ are
;; now called before __cmp__ is called.
;;
;; Normally, we take these methods of X. However, if class(Y)
;; is a subclass of class(X), then first look at Y's magic
;; methods. This allows the subclass to override its parent's
;; comparison operations.
;;
;; It is assumed that the subclass overrides all of
;; __{eq,lt,gt}__. For example, if sub.__eq__ is not defined,
;; first super.__eq__ is called, and after that sub.__lt__
;; (or super.__lt__).
;;
;; object.c - try_rich_compare_bool(v,w,op) / try_rich_compare(v,w,op)
;; Try each `meth'; if the outcome is True, return `res-value'.
;; So the rich comparison operations didn't lead to a result.
;;
;; object.c - try_3way_compare(v,w)
;;
;; Now, first try X.__cmp__ (even if y.class is a subclass of
;; x.class) and Y.__cmp__ after that.
;; CPython now does some number coercion attempts that we don't
;; have to do because we have first-class numbers. (I think.)
;; object.c - default_3way_compare(v,w)
;;
;; Two instances of same class without any comparison operator,
;; are compared by pointer value. Our function `py-id' fakes
;; that.
;; None is smaller than everything (excluding itself, but that
;; is catched above already, when testing for same class;
;; NoneType is not subclassable).
;; Instances of different class are compared by class name, but
;; numbers are always smaller.
;; Probably, when we arrive here, there is a bug in the logic
;; above. Therefore print a warning.
-- http://mail.python.org/mailman/listinfo/python-list
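The fallback chain those comments describe can be sketched in Python itself. The following is a simplified illustration of the classic-Python dispatch for `x < y` (not CPython's actual code, and it omits the coercion and default-comparison steps): reflected operands, subclass priority, and three-way `__cmp__` as a last resort.

```python
def less_than(x, y):
    # If y's class is a proper subclass of x's, y's methods get first try,
    # so a subclass can override its parent's comparison operations.
    if type(y) is not type(x) and issubclass(type(y), type(x)):
        attempts = (("__gt__", y, x), ("__lt__", x, y))
    else:
        attempts = (("__lt__", x, y), ("__gt__", y, x))
    for name, left, right in attempts:
        method = getattr(type(left), name, None)
        if method is not None:
            result = method(left, right)
            if result is not NotImplemented:
                return result
    # Rich comparison gave no result: fall back to three-way __cmp__.
    cmp3 = getattr(type(x), "__cmp__", None)
    if cmp3 is not None:
        return cmp3(x, y) < 0
    raise TypeError("unorderable types")

class Num:
    def __init__(self, v): self.v = v
    def __lt__(self, other): return self.v < other.v
    def __gt__(self, other): return self.v > other.v

assert less_than(Num(1), Num(2)) is True
assert less_than(Num(3), Num(2)) is False
```

Even this stripped-down version does several dynamic lookups per comparison; that is the per-operation cost the post is pointing at.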
Re: python simply not scaleable enough for google?
Vincent Manis vma...@telus.net writes: On 2009-11-13, at 18:02, Robert Brown wrote: Common Lisp and Scheme were designed by people who wanted to write complicated systems on machines with a tiny fraction of the horsepower of current workstations. They were carefully designed to be compiled efficiently, which is not the case with Python. There really is a difference here. Python the language has features that make fast implementations extremely difficult. Not true. Common Lisp was designed primarily by throwing together all of the features in every Lisp implementation the design committee was interested in. Although the committee members were familiar with high-performance compilation, the primary impetus was to achieve a standardized language that would be acceptable to the Lisp community. At the time that Common Lisp was started, there was still some sentiment that Lisp machines were the way to go for performance. Common Lisp blends together features of previous Lisps, which were designed to be executed efficiently. Operating systems were written in these variants. Execution speed was important. The Common Lisp standardization committee included people who were concerned about performance on C-optimized hardware. As for Scheme, it was designed primarily to satisfy an aesthetic of minimalism. Even though Guy Steele's thesis project, Rabbit, was a Scheme compiler, the point here was that relatively simple compilation techniques could produce moderately reasonable object programs. Chez Scheme was indeed first run on machines that we would nowadays consider tiny, but so too was C++. Oh, wait, so was Python! The Scheme standard has gone through many revisions. I think we're up to version 6 at this point. The people working on it are concerned about performance. For instance, see the discussions about whether the order of evaluating function arguments should be specified. 
Common Lisp evaluates arguments left to right, but Scheme leaves the order unspecified so the compiler can better optimize. You can't point to Rabbit (1978 ?) as representative of the Scheme programming community over the last few decades. Using Python 3 annotations, one can imagine a Python compiler that does the appropriate thing (shown in the comments) with the following code. I can imagine a lot too, but we're talking about Python as it's specified *today*. The Python language as it's specified today is hard to execute quickly. Not impossible, but very hard, which is why we don't see fast Python systems. bob -- http://mail.python.org/mailman/listinfo/python-list
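The left-to-right rule is observable from side effects, which is exactly why it constrains a compiler. A minimal Python demonstration:

```python
order = []

def note(tag):
    # Record the moment this argument expression is evaluated.
    order.append(tag)
    return tag

def f(a, b, c):
    return (a, b, c)

# Python, like Common Lisp, specifies left-to-right evaluation of
# arguments, so an optimizer may not reorder these calls.
assert f(note(1), note(2), note(3)) == (1, 2, 3)
assert order == [1, 2, 3]
```

Scheme's unspecified order lets the compiler pick whatever sequence generates the best code; a language that pins the order down gives up that freedom.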
Re: What's the perfect (OS independent) way of storing filepaths ?
Stef Mientki [EMAIL PROTECTED] writes: I (again) wonder what's the perfect way to store, OS-independent, filepaths ? I can think of something like: - use a relative path if drive is identical to the application (I'm still a Windows guy) - use some kind of OS-dependent translation table if on another drive - use ? if on a network drive There is no perfect solution, since file names and semantics differ from one operating system to the next. Genera, the Lisp Machine operating system, has facilities to abstract over the details of the file systems in use in the 1980s: Multics, ITS, TOPS-20, VMS, Unix, etc. Many of the concepts were incorporated into the Common Lisp standard. Here are a couple of references: http://gigamonkeys.com/book/files-and-file-io.html#filenames http://www.lispworks.com/documentation/HyperSpec/Body/19_.htm The system described is not simple. Briefly, there's a machine-independent way (logical pathnames) to specify file names that a program can use to manipulate the files it knows about. There's no guarantee that you can access an arbitrary file with these names. However, there's also the concept of a machine-specific file namestring. Users can type in these machine-specific namestrings, allowing the code to access arbitrary files. Both types of pathnames can be manipulated via an API to derive other file names. Here's how I create a pathname that refers to a subdirectory of my home directory:

(merge-pathnames
 (make-pathname :directory '(:relative ".sbcl" "systems"))
 (user-homedir-pathname))

The code should work so long as the target file system supports subdirectories, as Windows and Unix do. bob -- http://mail.python.org/mailman/listinfo/python-list
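A rough Python analog of that merge-pathnames example, for readers on this list, is pathlib (added in Python 3.4, long after this post): build the path from components and let the library apply the host OS's separator conventions.

```python
from pathlib import Path

# Analog of (user-homedir-pathname) merged with a relative directory.
config_dir = Path.home() / ".sbcl" / "systems"

# The components survive intact regardless of the platform's separator.
assert config_dir.parts[-2:] == (".sbcl", "systems")
```

As with the Lisp version, this only abstracts path construction; it makes no guarantee that the directory exists or is accessible.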
Re: Bizarre method keyword-arg bug.
Fredrik Lundh [EMAIL PROTECTED] writes: Robert Brown wrote: You may find the above surprising, but Common Lisp users expect the default argument expression to be evaluated anew when needed by a function call: well, I'd say an argument based on Common Lisp users is a lot more dubious ;-) Actually, it's really not dubious. Because Lisp is extensible, Lisp *users* have evolved the language considerably over the years. It's an excellent place to look for alternative design ideas. For instance, Lisp users experimented with several ways (LOOPS, Flavors, etc.) of supporting the object oriented style of programming before CLOS became part of the Common Lisp standard. If you're designing a language feature, it's often the case that Lisp users have tried several alternatives over the last few decades and have decided what works best, for them of course. In any case, chances are high that Lisp's way of handling default arguments would have been changed had it been shown to cause performance problems. We're talking about a language used to implement operating systems -- performance is always a consideration. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bizarre method keyword-arg bug.
Steven D'Aprano [EMAIL PROTECTED] writes: On Wed, 20 Aug 2008 13:09:21 -0400, Robert Brown wrote: In any case, chances are high that Lisp's way of handling default arguments would have been changed had it been shown to cause performance problems. But nobody is suggesting that it would cause performance problems in *Lisp*. It might, or it might not. Who cares? We're not talking about Lisp, we're talking about *Python*, and evaluating default arguments every time the function is called would certainly cause a performance hit in Python. Please explain why it's a performance problem for Python but not for other languages. -- http://mail.python.org/mailman/listinfo/python-list
Re: Bizarre method keyword-arg bug.
Steven D'Aprano [EMAIL PROTECTED] writes: On Mon, 18 Aug 2008 03:20:11 -0700, Jasper wrote: And no, the alternative /does not/ have an equivalent set of surprises -- it's not like Python is unique in having default arguments. That's simply not true. I would find this behaviour very surprising, and I bet you would too:

>>> x = "parrot"
>>> def foo(obj=x):
...     print obj
...
>>> foo()  # this is the current behaviour
parrot
>>> x = "shrubbery"
>>> foo()  # but this is not
shrubbery
>>> del x
>>> foo()  # nor is this
Traceback (most recent call last):
NameError: name 'x' is not defined

You may find the above surprising, but Common Lisp users expect the default argument expression to be evaluated anew when needed by a function call:

* (defvar *x* "parrot")
*X*
* (defun foo (&optional (obj *x*))  ;; optional arg, default is *x*
    obj)
FOO
* (foo)
"parrot"
* (setf *x* "shrubbery")
"shrubbery"
* (foo)
"shrubbery"
* (makunbound '*x*)
*X*
* (foo)
debugger invoked on a UNBOUND-VARIABLE in thread
#<THREAD "initial thread" RUNNING {10023EDE81}>:
  The variable *X* is unbound.

I find the Lisp approach more reasonable. Also, an argument based on performance for Python's current behavior seems dubious, given the language's other performance robbing design choices. bob -- http://mail.python.org/mailman/listinfo/python-list
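For comparison, the usual Python idiom for getting Lisp-style "evaluate the default at each call" behavior is a sentinel plus evaluation inside the body:

```python
x = "parrot"

def foo(obj=None):
    # The default expression is effectively re-evaluated per call,
    # like the Common Lisp version, because the lookup of x happens
    # in the body rather than at def time.
    if obj is None:
        obj = x
    return obj

assert foo() == "parrot"
x = "shrubbery"
assert foo() == "shrubbery"
```

The cost is that `None` can no longer be passed as a meaningful argument; a private sentinel object avoids that when it matters.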
Re: definition of a highlevel language?
inhahe [EMAIL PROTECTED] writes: I like to think of a language that would combine low-level and high-level features to be used at the programmer's whim. C--, High Level Assembly, and C++ with in-line assembly are examples, but none of them come as high-level as Python. Other possible examples might be ctypes, numpy, array.array, and I heard a rumor that Python 3.0 might have optional type declarations. My ideal language would be like a version of C++ (including in-line asm), or C-- with classes, that's compiled, but supports Python abstractions and features wherever possible (including doing away with {}'s and ;'s). Maybe you should give Common Lisp a try. It combines the high-level features you enjoy in Python with features like optional type declarations, which can be used to create high-performance code. You will also have fun playing around with syntactic abstraction (macros) to define very high-level domain specific languages. -- http://mail.python.org/mailman/listinfo/python-list
Re: Article of interest: Python pros/cons for the enterprise
Paul Rubin http://[EMAIL PROTECTED] writes: Robert Brown [EMAIL PROTECTED] writes: This is the approach taken by Common Lisp. Often just a few type declarations, added to code in inner loops, results in vastly faster code. That is just a dangerous hack of improving performance by turning off some safety checks, I'd say. Static typing in the usual sense of the phrase means that the compiler can guarantee at compile time that a given term will have a certain type. That can be done by automatic inference or by checking user annotations, but either way, it should be impossible to compile code that computes improperly typed values. Unfortunately, performance often comes at the cost of safety and correctness. Optimized C programs can crash when pointers walk off the end of arrays or they can yield incorrect results when integers overflow the limits of the hardware. Common Lisp compilers are allowed to completely ignore type declarations, but the compiler I use, SBCL, uses a combination of compile-time type inference and run-time checking to ensure that my variables have the types I've declared them to have. Sometimes I see an error message at compile time but otherwise I get an exception at run time. It works this way because my code contains (declaim (optimize (debug 3) (safety 3) (speed 0))) which indicates I prefer correctness and ease of debugging to run-time speed. Very rarely, say inside a loop, I temporarily change my default compiler settings. Inside the lexical scope of these declarations, the compiled code does no run-time type checking and trusts me. Here, broken Lisp code can crash the system (just as broken C code can), but the compiled code runs very fast. I trade off safety for speed, but only where necessary. bob -- http://mail.python.org/mailman/listinfo/python-list
Re: Article of interest: Python pros/cons for the enterprise
Larry Bugbee [EMAIL PROTECTED] writes: Python's dynamic typing is just fine. But if I know the type, I want the ability to nail it. ...local variables, arguments, return values, etc And if I don't know or care, I'd leave it to dynamic typing. This is the approach taken by Common Lisp. Often just a few type declarations, added to code in inner loops, results in vastly faster code. Also, although I don't tend to use type declarations while interactively developing code, I often add them later. Mostly, I add them to document the code, but the added safety and faster execution time are nice benefits. bob -- http://mail.python.org/mailman/listinfo/python-list
Re: Why must implementing Python be hard unlike Scheme?
[EMAIL PROTECTED] [EMAIL PROTECTED] writes: I'm learning Scheme and I am amazed how easy it is to start building a half baked Scheme implementation that somewhat works. After knowing Python for *years* I have no idea how to actually implement the darn thing. Since you know Scheme, perhaps the CLPython implementation, which is written in Common Lisp, will be interesting to you: http://common-lisp.net/project/clpython/ bob -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is this loop heavy code so slow in Python? Possible Project Euler spoilers
Neil Cerutti [EMAIL PROTECTED] writes: On 2007-09-02, Steven D'Aprano [EMAIL PROTECTED] wrote: A big question mark in my mind is Lisp, which according to aficionados is just as dynamic as Python, but has native compilers that generate code running as fast as highly optimized C. Lisp, as far as I know, requires type declarations, discipline, deep knowledge of Lisp, and more than passing knowledge of your Lisp implementation in order to generate code that's competitive with C. On my Mac Mini, the original Python code runs in 6 minutes 37 seconds using Python 2.3.5. The Common Lisp code below, a straightforward translation, containing *no* type declarations, runs in 27 seconds on the same Mini using SBCL. When the commented out optimization declaration is included in the code, the run time drops to 3.3 seconds. For comparison, run times with GCC on the C version posted earlier are 3.5 seconds unoptimized and 0.58 seconds with optimization flag -O3. So for this example, deep knowledge of the Lisp implementation and type declarations are not required to get speed equivalent to unoptimized C code. Approaching the speed of optimized C code does require more work.

(defun doit ()
  ;; (declare (optimize (speed 3) (safety 0) (debug 0)))
  (let ((solutions (make-array 1001 :initial-element 0)))
    (loop for a upfrom 1 below 1000
          do (loop for b upfrom 1 below (- 1000 a)
                   do (loop for c upfrom 1 below (- 1000 a b)
                            do (let ((p (+ a b c)))
                                 (when (and (< p 1000)
                                            (= (+ (* a a) (* b b)) (* c c)))
                                   (incf (aref solutions p)))))))
    (loop with max-index = 0
          with max = 0
          for solution across solutions
          for index upfrom 0
          do (when (> solution max)
               (setf max solution)
               (setf max-index index))
          finally (print max-index))))
-- http://mail.python.org/mailman/listinfo/python-list
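The original Python code being timed is not reproduced in the thread, so here is a sketch of the same brute-force search in Python, with the perimeter limit made a parameter (the post's timings used 1000; the function name is mine):

```python
def best_perimeter(limit):
    # Count, for each perimeter p < limit, the ordered triples (a, b, c)
    # with a^2 + b^2 = c^2, then return the perimeter with the most.
    solutions = [0] * (limit + 1)
    for a in range(1, limit):
        for b in range(1, limit - a):
            for c in range(1, limit - a - b):
                p = a + b + c
                if p < limit and a * a + b * b == c * c:
                    solutions[p] += 1
    return max(range(len(solutions)), key=solutions.__getitem__)

# A small limit keeps this quick; 120 has the most right triangles
# of any perimeter below 130 ((30,40,50), (20,48,52), (24,45,51)).
assert best_perimeter(130) == 120
```

With `limit=1000` this triple loop is exactly the kind of interpreted hot loop the thread is about: the same algorithm, minutes in CPython 2.3 versus seconds in compiled Lisp or C.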
Re: status of Programming by Contract (PEP 316)?
[EMAIL PROTECTED] (Alex Martelli) writes: DbC and allegedly rigorous compile-time typechecking (regularly too weak due to Eiffel's covariant vs contravariant approach, btw...), based on those empirical results, appear to be way overhyped. My experience with writing Eiffel code was a bit different. Integrating code from multiple sources happened much faster than I expected, and the code ran reliably. There were a couple of instances where previously uncombined code was linked together and worked on the first run. Perhaps more important, however, is that method contracts provide important documentation about how each method is supposed to work -- what it assumes and what must be true when it returns. Using Eiffel changed my coding process. Often I'd write the pre- and postconditions first, then write the method body, just as programmers today often write unit tests first. Thinking carefully about the contracts and writing them down, so they could be verified, makes the code more reliable and maintainable. The contracts are part of the source code, not a fuzzy concept in each programmer's head. The contracts are also useful when discussing the code with domain experts who are not programmers. They can read and understand the flat short view of a class, which includes all the method names, method comments, and contracts, but leaves out the method implementations. Here's an example, Eiffel's String class: http://www.nenie.org/eiffel/flatshort/string.html In any case, I'm still not sure whether it would be useful to integrate DbC into Python. A library that implements DbC for Common Lisp has not gotten much traction in that community, which has a similar style of software development. Perhaps it's just too much to ask that programmers write both unit tests and method contracts. bob -- http://mail.python.org/mailman/listinfo/python-list
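A bare-bones version of the pre/postcondition discipline described above can be layered onto Python with a decorator. This is a sketch, not PEP 316 syntax or any existing library; the `contract` name is invented for illustration:

```python
def contract(pre=None, post=None):
    # Wrap a function with optional precondition and postcondition
    # predicates, in the spirit of Eiffel's require/ensure clauses.
    def wrap(fn):
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition failed"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), "postcondition failed"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda result, xs: result in xs)
def largest(xs):
    return max(xs)

assert largest([3, 1, 4]) == 4
```

As in Eiffel, the contract doubles as documentation: the decorator line states what `largest` assumes and guarantees, separately from how it is implemented. (Python's `assert` is stripped under `-O`, which mirrors Eiffel's ability to disable contract checking in production.)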
Re: Python's only one way to do it philosophy isn't good?
Steven D'Aprano [EMAIL PROTECTED] writes: Graham talks about 25% of the Viaweb code base being macros. Imagine how productive his coders would have been if the language was not quite so minimalistic, so that they could do what they wanted without the _lack_ of syntax getting in the way. Paul Graham's Viaweb code was written in Common Lisp, which is the least minimalistic dialect of Lisp that I know. Even though they were using this powerful tool, they still found it useful to create new syntactic abstractions. How much less productive would they have been had they not had this opportunity? -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP 3107 and stronger typing (note: probably a newbie question)
Stephen R Laniel [EMAIL PROTECTED] writes: Granted, in a dynamic language we won't always (maybe won't often) have a situation where the types are known this well at compile time. But sometimes we will. And it would be nice to catch these before the program even runs. So my question is: would bolting on static type checking when we can, no type checking when we can't be too much to ask? Common Lisp allows the programmer to optionally provide type declarations to improve readability or performance. Certain implementations of Common Lisp, such as cmucl and sbcl, check type declarations at compile time, employ type inference to generate efficient machine code, and insert run time checks when the compiler can't prove at compile time that variables have their declared types. -- http://mail.python.org/mailman/listinfo/python-list
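The "check when we can" half can be approximated today on top of PEP 3107 annotations, with the checking done at call time rather than compile time. This is a sketch with an invented decorator name, not anything Python itself provides:

```python
import inspect

def check_types(fn):
    # Enforce simple class annotations at call time. Only plain classes
    # are handled; unannotated parameters are left unchecked, matching
    # the "no type checking when we can't" half of the proposal.
    sig = inspect.signature(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = fn.__annotations__.get(name)
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError("%s must be %s" % (name, ann.__name__))
        return fn(*args, **kwargs)
    return wrapper

@check_types
def double(n: int) -> int:
    return 2 * n

assert double(21) == 42
```

Unlike cmucl and sbcl, nothing here happens before the program runs; catching the error at compile time would require the kind of type inference those Lisp compilers do.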
Re: Python's only one way to do it philosophy isn't good?
Neil Cerutti [EMAIL PROTECTED] writes: On 2007-06-21, Douglas Alan [EMAIL PROTECTED] wrote: A prime example of this is how CLOS, the Common Lisp Object System was implemented completely as a loadable library (with the help of many macros) into Common Lisp, which was not an OO language prior to the adoption of CLOS. Is there a second example? ;) There are many useful macro packages that syntactically extend Common Lisp. Here are a few representative examples.

comp         an implementation of list comprehensions
             http://rali.iro.umontreal.ca/Publications/urls/LapalmeLispComp.pdf
iterate      a domain specific language for expressing complex iteration
             http://common-lisp.net/project/iterate/
screamer     support for nondeterministic programming
             http://www.cis.upenn.edu/~screamer-tools/screamer-intro.html
cl-who       a domain specific language for HTML generation
             http://weitz.de/cl-who/
parenscript  a domain specific language for JavaScript generation
             http://common-lisp.net/project/parenscript/
-- http://mail.python.org/mailman/listinfo/python-list
Re: Towards faster Python implementations - theory
sturlamolden [EMAIL PROTECTED] writes: On May 10, 7:18 pm, Terry Reedy [EMAIL PROTECTED] wrote: CMUCL and SBCL depends on the dominance of the x86 architecture. CMUCL and SBCL run on a variety of architectures, including x86, 64-bit x86, PowerPC, Sparc, Alpha, and Mips. See http://www.sbcl.org/platform-table.html for platform support information. Or one could translate between Python and Lisp on the fly, and use a compiled Lisp (CMUCL, SBCL, Franz, GCL) as runtime backend. This has been done by Willem Broekema. CLPython is a Python implementation that translates Python source into Common Lisp at read time. Under the covers, the Lisp is compiled into machine code and then run. See http://trac.common-lisp.net/clpython/ Currently, CLPython uses some non-standard Allegro Common Lisp features, so it does not run on all the free implementations of ANSI Common Lisp. The implementation is interesting, in part because it shows how expensive and complex some Python primitives are. -- http://mail.python.org/mailman/listinfo/python-list
Re: merits of Lisp vs Python
Paul Rubin http://[EMAIL PROTECTED] writes: Robert Brown [EMAIL PROTECTED] writes: Luckily, Willem Broekema has written a Python to Lisp compiler called clpython that can be consulted to answer questions like these. http://trac.common-lisp.net/clpython/

Does this count as one of the children of a lesser Python? How does clpython implement Python's immutable strings, for example? http://dirtsimple.org/2005/10/children-of-lesser-python.html

I think CLPython is in the "children of a lesser Python" category, on the grounds that it doesn't implement the complete language and there's no obvious way to reuse the C packages that make CPython so useful. However, the other distinguishing feature of the "children" category is bending semantics to gain speed, and CLPython doesn't appear to be doing much of that. The author says it runs at about the same speed as CPython.

Python strings are implemented in CLPython as instances of a CLOS class, not as raw Common Lisp strings, so they appear to be immutable.

-- http://mail.python.org/mailman/listinfo/python-list
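The immutability question can at least be pinned down on the CPython side. A minimal sketch of the behavior that CLPython's CLOS wrapper class has to reproduce (raw Common Lisp strings, by contrast, are mutable):

```python
s = "hello"

# Strings are immutable: any attempt at in-place mutation raises
# TypeError, so an implementation is free to share the storage.
try:
    s[0] = "H"              # item assignment is not supported
    mutated = True
except TypeError:
    mutated = False

# "Modifying" a string really constructs a new object.
t = "H" + s[1:]

assert mutated is False
assert s == "hello" and t == "Hello"
```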
Re: merits of Lisp vs Python
Paul Rubin http://[EMAIL PROTECTED] writes: Espen Vestre [EMAIL PROTECTED] writes: Can you redefine CLOS methods without calling CLOS functions that tell the object system what to expect (so it can do things like update the MRO cache)? I.e. can you redefine them by poking some random dictionary? You can in Python. I don't claim that's a good thing. Just as I said: Less manageable, but not more dynamic. I'm not getting through to you. Yes, you could create a Python-like object system in Lisp that's separate from CLOS, but nobody would use it

I think you are not understanding the point that Espen is trying to make. He is not suggesting a different object system for Lisp. Espen is saying that Common Lisp often offers the same dynamic features as Python, such as the ability to redefine a method at runtime. Lisp, however, forces you to call a CLOS function or use a well-defined interface when redefining a method. You can't just change a value in a hash table.

Does this make Lisp less dynamic than Python? Espen would say it's not less dynamic, but rather that a similar level of dynamism is achieved in Common Lisp via well-defined interfaces. The compiler knows the interfaces, so it can do a better job optimizing the code.

-- http://mail.python.org/mailman/listinfo/python-list
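For concreteness, here is the Python behavior under discussion as a minimal sketch (the Greeter class is invented for illustration). A method is redefined at runtime simply by rebinding a class attribute; interestingly, even here CPython mediates the update, since a class __dict__ is exposed as a read-only mappingproxy rather than a raw dictionary you can poke:

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
assert g.greet() == "hello"

# Rebinding the class attribute redefines the method for every
# instance, existing and future; no restart is needed.
Greeter.greet = lambda self: "hi"
assert g.greet() == "hi"

# The class namespace itself is not a plain dict: Greeter.__dict__
# is a read-only mappingproxy, so updates go through setattr on the
# class rather than direct dictionary mutation.
setattr(Greeter, "greet", lambda self: "hey")
assert g.greet() == "hey"
```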
Re: merits of Lisp vs Python
Stephen Eilert [EMAIL PROTECTED] writes: So, let's suppose I now want to learn LISP (I did try, on several occasions). What I would like to do would be to replace Python and code GUI applications. Yes, those boring business-like applications that have to access databases and consume those new-fangled web-services and whatnot. Heck, maybe even code games using DirectX. So, how would I do that?

First, get a copy of Practical Common Lisp, which shows how to build, well, practical programs in Lisp: http://www.gigamonkeys.com/book/

Next, download a Common Lisp implementation. I happen to prefer SBCL, but any of the free commercial trial products will do, as will CLISP, CMUCL, ABCL, OpenMCL, etc.

To find libraries, look in Cliki, in the Common Lisp Directory, and on Common-Lisp.net. The Directory is especially good for finding obscure stuff.
http://www.cliki.net/
http://www.cl-user.net/
http://common-lisp.net/

Bookmark the HyperSpec, which is an HTML version of the ANSI Common Lisp standard. Skim the whole thing once, so you have a vague idea of what functions are available in the standard library. http://www.lispworks.com/documentation/HyperSpec/Front/index.htm

Finally, ask questions on the #lisp IRC channel or in comp.lang.lisp when you are stuck.

-- http://mail.python.org/mailman/listinfo/python-list
Re: merits of Lisp vs Python
greg [EMAIL PROTECTED] writes: From another angle, think about what a hypothetical Python-to-Lisp translator would have to do. It couldn't just translate a + b into (+ a b). It would have to be something like (*python-add* a b) where *python-add* is some support function doing all the dynamic dispatching that the Python interpreter would have done.

Luckily, Willem Broekema has written a Python to Lisp compiler called clpython that can be consulted to answer questions like these. http://trac.common-lisp.net/clpython/

It turns out that addition is fairly straightforward. Attribute lookup, however, turns out to be complex, as is comparing objects. Here is Willem's description of the attribute lookup algorithm from the file doc/pres.txt:

To look up person.name (the name attribute of object person):

a. If the class of person (say, Person), or one of Person's base classes (say, Animal), defines __getattribute__, that will intercept all attribute lookups. Call: Animal.__getattribute__(person, name)

b. Look in the instance dictionary: val = person.__dict__[name]
   - but if it has fixed slots: look in person.__slots__ and give an error if name is not one of the fixed slots
   - unless __dict__ is specified as one of the fixed slots: in that case, don't give an error if name is not one of the fixed slots, but search the instance dictionary too

c. Look in the classes Person, Animal for an attribute called name
   - if it is a `descriptor', call its __get__ method
   - else, if it is a method, make it a bound method
   - else, return it unchanged

d. If nothing is found so far: look for a __getattr__ method in the classes, and call it: C.__getattr__(person, name)

-- http://mail.python.org/mailman/listinfo/python-list
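The lookup order described above can be exercised directly in Python. A simplified sketch (the class names follow Willem's example; the real CPython rules additionally distinguish data from non-data descriptors and handle metaclasses):

```python
class Animal:
    def __getattr__(self, name):
        # Step d: a last-resort fallback, called only after the
        # normal lookup in the instance and the classes fails.
        return f"<no {name}>"

class Person(Animal):
    species = "human"              # found via step c (class lookup)

    def __init__(self):
        self.name = "Alice"        # found via step b (instance dict)

p = Person()
assert p.name == "Alice"          # the instance dictionary wins
assert p.species == "human"       # falls back to the class
assert p.age == "<no age>"        # nothing found, __getattr__ runs
```

A compiler targeting Lisp has to emit code that walks all of these cases (or proves most of them impossible) for every attribute access, which is why this turns out to be one of the expensive primitives.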
Re: merits of Lisp vs Python
Paul Rubin http://[EMAIL PROTECTED] writes: For a long time Scheme had no macros, and Scheme developers who were exceedingly familiar with Common Lisp were nonetheless willing to get by without them. So I have to think macros aren't all THAT important. Scheme did eventually get macros, but somewhat different from CL's.

Macros are important enough that all the old Scheme implementations I used offered macros in the style of Lisp's defmacro. Lisp hackers did not have to suffer without them when writing Scheme code. Relatively recently, the Scheme standard was augmented with hygienic macros, a different beast. Scheme standardizes something only when there's nearly universal support for it, so features appear in the language standard very slowly.

bob -- http://mail.python.org/mailman/listinfo/python-list