Dave Benjamin wrote:
> ...
> I think Python's decision to use reference counting was an instance of
> worse-is-better: at the time, reference counting was already known not
> to be "the right thing", but it worked, and the implementation was
> simple. Likewise with dynamic typing versus type inference. It seems
> that Guido shares Alan Kay's viewpoint that type inference is really
> "the right thing", but modern language technology is really not ready
> to make it mainstream, whereas dynamic typing works today, and is
> arguably a better "worse" solution than explicit/manifest static typing
> due to the latter's verbosity.
>
> From the perspective of writing C extensions, Guido claims that the
> amount of pain (regarding memory management) is about the same whether
> you're extending a reference-counted language like CPython or a
> garbage-collected language like Java. Anyone with experience writing C
> extensions want to comment on this?

This is probably true for extensions written from scratch. From the point
of view of connecting an existing block of code, though, Python made a
great choice. If allocated memory doesn't need to move, and the garbage
collector needn't trace _all_ the references in the system, the extension
can make its own blocks of memory and keep references to Python objects
in its own code, in its own format. It simply needs to hold those objects
in a Python-visible way, that is, to own a counted reference to each of
them. If the memory system needs to move allocated blocks, many data
structures lose their efficiency in a haze of either identifying and
adjusting references or indirecting through piles of tables. One of the
hard problems of talking cross-language is "who is in charge of memory."
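A minimal sketch of that "Python-visible" idea (not from the original
post; the Cache struct and function names are invented, and only the
standard CPython C API calls Py_XINCREF/Py_XDECREF are assumed): the C
code keeps an ordinary pointer in its own data structure and simply owns
a reference, so the collector never has to trace the extension's private
memory.

    #include <Python.h>

    /* Extension-private structure that stores a pointer to a Python
       object in whatever layout the C code finds convenient. */
    typedef struct {
        PyObject *cached;           /* a reference this struct owns */
    } Cache;

    /* Store an object: take our own reference before dropping the old
       one.  Since CPython objects never move, the raw pointer stays
       valid for as long as we hold the reference. */
    static void
    cache_set(Cache *c, PyObject *obj)
    {
        Py_XINCREF(obj);            /* claim the new reference */
        Py_XDECREF(c->cached);      /* release the old one, if any */
        c->cached = obj;
    }

    /* Release the reference when the C side is done with the object. */
    static void
    cache_clear(Cache *c)
    {
        Py_XDECREF(c->cached);
        c->cached = NULL;
    }

Under a moving or fully tracing collector, that raw PyObject pointer
would instead have to be registered with the collector or reached through
a handle and an indirection table, which is exactly the loss of
efficiency described above.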
Lots of languages are happy to allow interaction with another language as
long as the other language is restricted to writing subroutines that fit
the model of "the language really in charge."  The fights are usually
about memory allocation, non-standard flow of control, and intermediate
data structures.  C is easy to talk to because it was conceived as a
"portable assembly language".  Only straight assembler is easier to write
language extensions in, because assembly has even less of a preconception
about how memory is used and what code is and is not allowed to do.  If
you want to know pure hell, try to write a program with significant code
in both Smalltalk and Lisp (or even better, ML) in the same address
space.  They both want to be in charge, and you will quickly decide the
best thing to do is put each language into its own process.

> Type inferencing is especially difficult to add to a dynamically typed
> language, and in general I think results are much better if you have
> type inference from the very beginning (like ML and Haskell) rather
> than trying to retrofit it later. Guido says that they've only been
> able to get it to work reliably in Python with constants, which isn't
> very useful.

Look at the Self language.  That language has even less of a surface
concept of type than Python (classes are not even defined).  Still, Self
was/is used as a platform to investigate type inferencing, code
specialization, and native code generation.

--Scott David Daniels
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list