Re: How to implement Variant/Tagged Unions/Pattern Matching in Python?
> BTW, Please don't ask "Why do you want to do like this" No, I won't ask, although that would be the interesting aspect for me ;) -- http://mail.python.org/mailman/listinfo/python-list
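For what it's worth, one common way to emulate a tagged union in plain Python is a small class hierarchy plus a dispatch table keyed on the tag (the class). This is only a minimal sketch; all names are made up, and it is just one of several possible encodings:

```python
class Shape(object):
    """Base class playing the role of the union type."""
    pass

class Circle(Shape):
    def __init__(self, r):
        self.r = r

class Rect(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h

def area(shape):
    # poor man's pattern matching: dispatch on the tag, i.e. the class
    match = {
        Circle: lambda s: 3.14159 * s.r ** 2,   # pi approximated
        Rect:   lambda s: s.w * s.h,
    }
    return match[type(shape)](shape)

print(area(Rect(3, 4)))   # 12
```

Adding a new case means adding one class and one entry in the dispatch dict; forgetting the entry fails loudly with a KeyError, which is the closest plain Python gets to a non-exhaustive-match warning.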
Re: What does Guido want in a GUI toolkit for Python?
On 27 Jun., 23:06, "Martin v. Löwis" wrote: > > I sorta' wish he'd just come out and say, "This is what I think would > > be suitable for a GUI toolkit for Python: ...". > > He is not in the business of designing GUI toolkits, but in the business > of designing programming languages. So he abstains from specifying > (or even recommending) a GUI library. ... which isn't all that different today. One might just take a look at JavaFX and how gracefully it handles declarative data flow a.k.a. data binding. The evolution of programming languages goes on, with or rather without Python. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.0.1 and mingw
On 24 Jun., 00:59, smartmobili wrote: > I wanted to know if you have some patch to compile python 3.x on mingw > platform because I found some > but doesn't work very well : > > make > > gcc -o python.exe \ > Modules/python.o \ > libpython3.0.a-lm > Could not find platform independent libraries > Could not find platform dependent libraries > Consider setting $PYTHONHOME to [:] > Fatal Python error: Py_Initialize: can't initialize sys standard > streams > ImportError: No module named encodings.utf_8 > > I have some questions about posixmodule.c, config.c and makesetup, > I can see in posixmodule that you define a INITFUNC like this : > > #if (defined(_MSC_VER) || defined(__WATCOMC__) || defined > (__BORLANDC__)) && > !defined(__QNX__) > #define INITFUNC PyInit_nt > #define MODNAME "nt" > > #elif defined(PYOS_OS2) > #define INITFUNC PyInit_os2 > #define MODNAME "os2" > > #else > #define INITFUNC PyInit_posix > #define MODNAME "posix" > #endif > > So first I tried to add || defined(MINGW32) to declare a > PyInit_nt > but after config.c > was still using PyInit_posix. How does makesetup choose to include one > function or another ? > So finally python is using PyInit_posix... > > and after any idea why I got the compilation error ? Why on earth do you want to compile Python 3.0.1? -- http://mail.python.org/mailman/listinfo/python-list
Re: Status of Python threading support (GIL removal)?
On 20 Jun., 17:28, Stefan Behnel wrote: > Kay Schluehr wrote: > >> You might want to read about "The Problem with Threads": > > >>http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf > > >> and then decide to switch to an appropriate concurrency model for your use > >> case. > > > and to a programming language that supports it. > > Maybe, yes. But many different concurrency models are supported by a larger > number of programming languages in one way or another, so the choice of an > appropriate library is often sufficient - and usually a lot easier than > using the 'most appropriate' programming language. Matter of available > skills, mostly. There's usually a lot less code to be written that deals > with concurrency than code that implements what the person paying you makes > money with, so learning a new library may be worth it, while learning a new > language may not. > > Stefan This implies that people stay defensive concerning concurrency ( like me right now ) and do not embrace it like e.g. Erlang does. Sometimes there is a radical change in the way we design applications and a language is the appropriate medium to express it succinctly. Concurrency is one example, writing GUIs and event driven programs in a declarative style ( Flex, WPF, JavaFX ) is another one. In particular the latter group shows that new skills are adopted rather quickly. I don't see, though, that a concurrency-oriented language has really broken through yet. -- http://mail.python.org/mailman/listinfo/python-list
Re: Status of Python threading support (GIL removal)?
> You might want to read about "The Problem with Threads": > > http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf > > and then decide to switch to an appropriate concurrency model for your use > case. and to a programming language that supports it. -- http://mail.python.org/mailman/listinfo/python-list
Re: Measuring Fractal Dimension ?
On 14 Jun., 16:00, Steven D'Aprano wrote: > Incorrect. Koch's snowflake, for example, has a fractal dimension of log > 4/log 3 ≈ 1.26, a finite area of 8/5 times that of the initial triangle, > and a perimeter given by lim n->inf (4/3)**n. Although the perimeter is > infinite, it is countably infinite and computable. No, the Koch curve is continuous in R^2 and uncountable. Lawrence is right, and one can trivially cover a countably infinite set with disks of diameter 0, namely by itself. The sum of those diameters raised to an arbitrary power is also 0, which yields that the Hausdorff dimension of any countable set is 0. -- http://mail.python.org/mailman/listinfo/python-list
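The covering argument can also be made quantitative rather than using diameter-0 disks: for a countable set A = {a_1, a_2, ...} and any eps > 0, cover each a_i by a disk of diameter eps * 2^(-i). Then for every s > 0:

```latex
\sum_{i=1}^{\infty} \bigl(\varepsilon\,2^{-i}\bigr)^{s}
  \;=\; \varepsilon^{s}\sum_{i=1}^{\infty} 2^{-is}
  \;=\; \frac{\varepsilon^{s}}{2^{s}-1}
  \;\xrightarrow{\;\varepsilon \to 0\;}\; 0,
```

so the s-dimensional Hausdorff measure of A is 0 for every s > 0, and hence the Hausdorff dimension of any countable set is 0.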
Re: unladen swallow: python and llvm
On 8 Jun., 00:31, bearophileh...@lycos.com wrote: > ShedSkin (SS) is a beast almost totally different from CPython, SS > compiles an implicitly static subset of Python to C++. So it breaks > most real Python programs, and it doesn't use the Python std lib (it > rebuilds one in C++ or compiled Python), and so on. > SS may be useful for people that don't want to mess with the > intricacies of Cython (ex-Pyrex) and its tricky reference count, to > create compiled python extensions. I don't understand your Cython complaint. The only tricky part of Cython is the doublethink regarding Python types and C types. I once attempted to write a ShedSkin-like code transformer from Python to Cython based on type recordings but never found the time for it because I have to work on EasyExtend on all fronts at the same time. Maybe next year or when Unladen Swallow becomes a success - never. The advantage of this approach over ShedSkin was that every valid Cython program is also a Python extension module, so one can advance the translator in small increments and still make continuous progress on the execution speed front. -- http://mail.python.org/mailman/listinfo/python-list
Re: Using C++ and ctypes together: a vast conspiracy? ;)
On 3 Jun., 11:13, Lawrence D'Oliveiro wrote: > In message c0e4-479a-85ed-91c26d3bf...@c36g2000yqn.googlegroups.com>, Kay Schluehr > wrote: > > > > > On 3 Jun., 05:51, Lawrence D'Oliveiro > central.gen.new_zealand> wrote: > > >> In message , Sebastian Wiesner wrote: > > >> > > > >> >> That said I've used C++ with ctypes loads of times, but I always wrap > >> >> the exported stuff in extern "C" { } blocks. > > >> > No wonder, you have never actually used C++ with C types. An extern > >> > "C" clause tells the compiler to generate C functions (more precisely, > >> > functions that conform to the C ABI conventions), so effectively you're > >> > calling into C, not into C++. > > >> Seems like the only sane way to do it. In all other directions lies > >> madness. > > > Yes but creating C stubs is also hard in presence of everything that > > is not basic C++. How would you wrap the STL? > > What does the STL offer that Python doesn't already do more flexibly and > more simply? I do not quite understand your concern. Wasn't the whole point of Joseph's post that he intended to create C++ bindings such that ctypes can be used? -- http://mail.python.org/mailman/listinfo/python-list
Re: Using C++ and ctypes together: a vast conspiracy? ;)
On 3 Jun., 05:51, Lawrence D'Oliveiro wrote: > In message , Sebastian Wiesner wrote: > > > > > >> That said I've used C++ with ctypes loads of times, but I always wrap > >> the exported stuff in extern "C" { } blocks. > > > No wonder, you have never actually used C++ with C types. An extern "C" > > clause tells the compiler to generate C functions (more precisely, > > functions that conform to the C ABI conventions), so effectively you're > > calling into C, not into C++. > > Seems like the only sane way to do it. In all other directions lies madness. Yes but creating C stubs is also hard in presence of everything that is not basic C++. How would you wrap the STL? I suspect one ends up in creating a variant of SWIG and I wonder if it's not a good idea to just use SWIG then. -- http://mail.python.org/mailman/listinfo/python-list
Re: I need help building a data structure for a state diagram
On 25 Mai, 01:46, Matthew Wilson wrote: > On Sun 24 May 2009 03:42:01 PM EDT, Kay Schluehr wrote: > > > > > General answer: you can encode finite state machines as grammars. > > States as non-terminals and transition labels as terminals: > > > UNSTARTED: 'start' STARTED > > STARTED: 'ok' FINISHED | 'cancel' ABANDONED > > ABANDONED: 'done' > > FINISHED: 'done' > > > In some sense each state-machine is also a little language. > > I've never formally studied grammars, but I've worked through trivial > stuff that uses BNF to express ideas like > > ::= > > I don't really understand how to apply that notion to this statement: > > UNSTARTED: 'start' STARTED > > That doesn't seem to be BNF, and that's all I know about grammar stuff.

Some comments:

1) The separator is arbitrary. You can use ':' or '->' or '::=' etc.

2) A full EBNF grammar can be described in itself:

    GRAMMAR: RULE+
    RULE: NAME ':' RHS
    RHS: ALT ( '|' ALT )*
    ALT: ITEM+
    ITEM: '[' RHS ']' | ATOM [ '*' | '+' ]
    ATOM: '(' RHS ')' | NAME | STRING
    STRING: '"' any* '"'
    NAME: char (digit | char)*

with the following meaning:

    [A]      zero or one repetition of A
    A*       zero or more repetitions of A
    A+       one or more repetitions of A
    A | B    A or B
    A B      first A, then B
    ( .. )   parentheses for grouping
    "A"      keyword A

In some sense all the words 'start', 'done', 'ok' etc. are keywords of the language. If I actually attempted to use the grammar for parsing it could parse a few sentences like 'start ok done' or 'start cancel done' and create parse trees

    [UNSTARTED, start, [STARTED, ok, [FINISHED, done]]]

etc. One can however use the finite state machine generated from the grammar for totally different purposes: interpret each rule as a state and the keywords as events that cause state transitions.

    UNSTARTED -- start  --> STARTED
    STARTED   -- ok     --> FINISHED
    STARTED   -- cancel --> ABANDONED
    FINISHED  -- done   --> .
    ABANDONED -- done   --> .

That's basically the same formal language in a different, more intuitive notation. -- http://mail.python.org/mailman/listinfo/python-list
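The transition table in that post translates almost verbatim into code. A minimal sketch (state and event names are taken from the thread; the dict-based implementation is one of several obvious encodings):

```python
# (state, event) -> next state; None marks a terminal state
TRANSITIONS = {
    ('UNSTARTED', 'start'):  'STARTED',
    ('STARTED',   'ok'):     'FINISHED',
    ('STARTED',   'cancel'): 'ABANDONED',
    ('FINISHED',  'done'):   None,
    ('ABANDONED', 'done'):   None,
}

def run(events, state='UNSTARTED'):
    # consume one "sentence" of the little language and
    # return the list of visited states
    trace = [state]
    for ev in events:
        state = TRANSITIONS[(state, ev)]   # KeyError = illegal transition
        trace.append(state)
    return trace

print(run(['start', 'ok', 'done']))
# ['UNSTARTED', 'STARTED', 'FINISHED', None]
```

Users extending the workflow (the original question) only ever touch the TRANSITIONS dict, which could just as well be loaded from a config file.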
Re: I need help building a data structure for a state diagram
On 24 Mai, 20:16, Matthew Wilson wrote: > I'm working on a really simple workflow for my bug tracker. I want > filed bugs to start in an UNSTARTED status. From there, they can go to > STARTED. > > From STARTED, bugs can go to FINISHED or ABANDONED. > > I know I can easily hard-code this stuff into some if-clauses, but I > expect to need to add a lot more statuses over time and a lot more > relationships. > > This seems like a crude state diagram. So, has anyone on this list done > similar work? > > How should I design this so that users can add arbitrary new statuses > and then define how to get to and from those statuses? > > TIA > > MAtt

General answer: you can encode finite state machines as grammars. States as non-terminals and transition labels as terminals:

    UNSTARTED: 'start' STARTED
    STARTED: 'ok' FINISHED | 'cancel' ABANDONED
    ABANDONED: 'done'
    FINISHED: 'done'

In some sense each state-machine is also a little language. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to get path.py ? http://www.jorendorff.com/ is down
On 21 Mai, 21:43, Jorge Vargas wrote: > Hello. > > Anyone knows what is the problem with this package? apparently the > author's site is down which prevents pip from installing it. I can > download the zip and go from there but It seems most of the docs are > gone with the site. The code comments should be sufficient. It is a single, not very complex module that basically unifies several stdlib APIs as methods of a string subclass. -- http://mail.python.org/mailman/listinfo/python-list
Re: Parsing Strings in Enclosed in Curly Braces
> Since when is a list a number? Perhaps the help needs clarification, > in line with the docs. Everyone is supposed to use reduce() here ;) Kay -- http://mail.python.org/mailman/listinfo/python-list
Re: complementary lists?
On 29 Apr., 05:41, Ross wrote: > If I have a list x = [1,2,3,4,5,6,7,8,9] and another list that is a > subset of x: y = [1,4,7] , is there a quick way that I could return > the complementary subset to y z=[2,3,5,6,8,9] ? > > The reason I ask is because I have a generator function that generates > a list of tuples and I would like to divide this list into > complementary lists.

    z = [u for u in x if u not in y]

or, with a precomputed set for faster membership tests:

    ys = set(y)
    z = [u for u in x if u not in ys]

Since you are dealing with tuples and a natural order might not be that relevant, you can also use set objects in the first place:

    z = x - y

-- http://mail.python.org/mailman/listinfo/python-list
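With the concrete lists from the question, both routes give the same elements (modulo order for the set variant):

```python
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [1, 4, 7]

# list comprehension keeps the order of x
ys = set(y)                       # precompute for O(1) membership tests
z = [u for u in x if u not in ys]
print(z)                          # [2, 3, 5, 6, 8, 9]

# with sets from the start, order is not guaranteed
z2 = set(x) - set(y)
print(sorted(z2))                 # [2, 3, 5, 6, 8, 9]
```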
Re: Too early implementation
Start to like blogging about your ideas, results and findings. Writing is a process of clarification of the mind. It doesn't matter much whether you design upfront, or mix coding and writing in an incremental process. If I could I'd just write specs, draft my ideas in Python in order to verify that I really know what I'm talking about and let others work out the details. Of course I'm not in the splendid position of a university professor or an intellectual capitalist. -- http://mail.python.org/mailman/listinfo/python-list
Re: Domain Driven Design and Python
On 16 Apr., 19:44, José María wrote: > Hi, > > I've been searching for information about the application of DDD > principles in > Python and I did'nt found anything! > Is DDD obvious in Python or is DDD inherent to static languages like > Java or C#? If you couldn't find anything I conclude that no one in the Python community takes much care about mapping concepts like layers, entities, aggregates, value objects etc. onto Python or even feels that this improves the state of the art. I'm pretty sure though that DDD is not bound to a particular type system. -- http://mail.python.org/mailman/listinfo/python-list
Re: compiler package vs parser
> I realize that I probably ought to be trying this out with the newer ast stuff, > but currently I am supporting code back to 2.3 and there's not much hope of > doing it right there without using the compiler package.

You might consider using the *builtin* parser module and forget about the compiler package if it is broken ( I take your word that it is ) or modern ast representations which aren't really necessary for Python anyway.

    >>> import parser
    >>> tree = parser.suite("def foo():\n    print 42\n")
    >>> code = tree.compile()
    >>> exec code
    >>> foo()
    42

This is also not 100% reliable ( at least not for all statements in all Python versions ) but it uses the internal parser/compiler and not a standard library compiler package that might not be that well maintained. -- http://mail.python.org/mailman/listinfo/python-list
Re: compiler package vs parser
On 16 Apr., 11:41, Robin Becker wrote: > Is the compiler package actually supposed to be equivalent to the parser > module? No. The parser module creates a concrete parse tree ( CST ) whereas the compiler package transforms this CST into an AST for subsequent computations. In more recent versions those CST -> AST transformations are performed by the runtime and the Python compiler uses those internally produced ASTs. The Python 2.6 API to ASTs is the ast module. -- http://mail.python.org/mailman/listinfo/python-list
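Since Python 2.6 the same round trip - source to tree to executable code - can be done with the ast module mentioned above; a quick sketch:

```python
import ast

source = "def foo():\n    return 42\n"

tree = ast.parse(source)              # an AST, not a CST as with parser.suite
code = compile(tree, '<string>', 'exec')   # compile() accepts AST objects

ns = {}
exec(code, ns)
print(ns['foo']())                    # 42
```

The AST is also much friendlier to inspect and transform than the nested tuples the old parser module produces.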
Re: setuptools catch 22
On 16 Apr., 17:39, Mac wrote: > We've got ActiveState Python 2.6 installed on a Windows XP box, and I > pulled down the latest archgenxml package (2.2) in order to get it > running under this installation of Python. I unpacked the tarball for > the package and tried running `python setup.py build' but got an > ImportError exception: "no module named setuptools." So back to > Google, where I find http://pypi.python.org/pypi/setuptools, which > says "[For Windows] install setuptools using the provided .exe > installer." I go down to the bottom of the page and I see that there > is no .exe installer for Python 2.6. All there is for that version of > Python is setuptools-0.6c9-py2.6.egg. I get the impression from the > references to "Python Eggs" on the setuptools page that setuptools is > a utility for installing Python Eggs. So we're supposed to use a > utility that isn't installed yet to install that utility? Does anyone > else understand how lame this is? Yes, but there is a known workaround: just download the mentioned setuptools egg and unpack it - it's basically just a zipped python package - and place it on your PYTHONPATH. Then it will also be found by every tool that imports setuptools. -- http://mail.python.org/mailman/listinfo/python-list
Re: design question, metaclasses?
On 11 Apr., 20:15, Darren Dale wrote: > I am working on a project that provides a high level interface to hdf5 > files by implementing a thin wrapper around h5py. > I would like to > generalize the project so the same API can be used with other formats, > like netcdf or ascii files. The format specific code exists in File, > Group and Dataset classes, which I could reimplement for each format. > But there are other classes deriving from Group and Dataset which do > not contain any format-specific code, and I would like to find a way > to implement the functionality once and apply uniformly across > supported formats. Seems like you are doing it wrong. The classical OO approach is to add more details / refining classes in subclasses instead of doing it the other way round and derive the less specific classes from the more specific ones. -- http://mail.python.org/mailman/listinfo/python-list
nonlocal in Python 2.6
I always wondered about the decision to omit the nonlocal statement from the Python 2.X series because it seems to be orthogonal to Python 2.5. Are there any chances of it being backported? -- http://mail.python.org/mailman/listinfo/python-list
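Until such a backport exists, the usual 2.x workaround is to close over a mutable container and mutate it instead of rebinding the name; a sketch of both spellings (the 3.x version shown as a comment so the snippet stays 2.x-compatible):

```python
def make_counter():
    # Python 2.x workaround for the missing nonlocal statement:
    # close over a mutable cell instead of rebinding a name.
    count = [0]
    def step():
        count[0] += 1        # mutation, not rebinding, so no nonlocal needed
        return count[0]
    return step

# With nonlocal (Python 3) the same closure reads:
#     def make_counter():
#         count = 0
#         def step():
#             nonlocal count
#             count += 1
#             return count
#         return step

c = make_counter()
print(c(), c(), c())         # 1 2 3
```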
Re: Painful?: Using the ast module for metaprogramming
> -It would be nice if decorators were passed a function's AST instead > of a function object. As it is I have to use inspect.getsource to > retrieve the source for the function in question, and then use > ast.parse, which is a bit inefficient because the cpython parser has > to already have done this once before. It doesn't matter that much though because the Python parser is very efficient and the decorator is applied only once. The PyGPU project used this approach if I remember correctly: http://www.cs.lth.se/home/Calle_Lejdfors/pygpu/ -- http://mail.python.org/mailman/listinfo/python-list
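The inspect.getsource + ast.parse detour mentioned in the post looks roughly like this. In a sketch: getsource only works for functions living in a real source file, so here a source string stands in for its result, and the function name is made up:

```python
import ast

# stand-in for inspect.getsource(func) on a file-based function
source = """\
def scale(xs, k):
    return [k * x for x in xs]
"""

tree = ast.parse(source)
funcdef = tree.body[0]
print(funcdef.name)            # 'scale'

# e.g. collect the variable names before generating specialized code
names = sorted({n.id for n in ast.walk(tree) if isinstance(n, ast.Name)})
print(names)                   # ['k', 'x', 'xs']
```

A metaprogramming decorator would do this parse once at decoration time, rewrite or inspect the tree, and compile the result.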
Re: Generators/iterators, Pythonicity, and primes
On 5 Apr., 18:47, John Posner wrote:
> Kay Schluehr wrote:
> > That's because it is *one* expression. The avoidance of named
> > functions makes it look obfuscated or prodigious. Once it is properly
> > dissected it doesn't look that amazing anymore.
> >
> > Start with:
> >
> >     (n for n in count(2) if is_prime(n, primes))
> >
> > The is_prime function has the following implementation:
> >
> >     def is_prime(n, primes):
> >         if primes and n <= primes[-1]:
> >             return n in primes
> >         else:
> >             if all(n % p for p in primes if p <= sqrt(n)):
> >                 primes.append(n)
> >                 return True
> >             else:
> >                 return False
>
> Your explication is excellent, Kay! In the is_prime() function, can't we
> omit the first three lines (if ... else), because it will *always* be
> the case that "n" exceeds all the primes we've gathered so far.

Yes, sure. I developed is_prime as a standalone function with the primes list as a cache and just lambda-fied it.

> I've tested the code with these lines omitted -- both with the separate
> is_prime() function and with the generator-expression-only
> implementation. It seems to work fine. Ex:
>
> ---
> from itertools import count
> from math import sqrt
>
> g = (lambda primes = []:
>         (n for n in count(2) if
>             (lambda x, primes:
>                 (primes.append(x) or True
>                  if all(x%p for p in primes if p <= sqrt(x))
>                  else False)
>             )(n, primes)
>         )
>     )()

This is of course much more accessible and doesn't even look weird.

Regards -- http://mail.python.org/mailman/listinfo/python-list
Re: Generators/iterators, Pythonicity, and primes
On 5 Apr., 17:14, John Posner wrote:
> Kay Schluehr said:
>
> > g = (lambda primes = []:
> >         (n for n in count(2)
> >          if (lambda n, primes:
> >                 (n in primes if primes and n <= primes[-1]
> >                  else (primes.append(n) or True
> >                        if all(n%p for p in primes if p <= sqrt(n))
> >                        else False)))(n, primes)))()
>
> Um ... did you say "easy"? :-) This is prodigious, and it's definitely a
> solution to the generator-expression-only challenge I laid down.

That's because it is *one* expression. The avoidance of named functions makes it look obfuscated or prodigious. Once it is properly dissected it doesn't look that amazing anymore.

Start with:

    (n for n in count(2) if is_prime(n, primes))

The is_prime function has the following implementation:

    def is_prime(n, primes):
        if primes and n <= primes[-1]:
            return n in primes
        else:
            if all(n % p for p in primes if p <= sqrt(n)):
                primes.append(n)
                return True
            else:
                return False

But we need a lambda-fied function expression instead of the function definition. This yields:

    is_prime = lambda n, primes: (n in primes
        if primes and n <= primes[-1]
        else (primes.append(n) or True
              if all(n % p for p in primes if p <= sqrt(n))
              else False))

Finally the trick with the primes definition within an expression by means of an optional argument in the lambda is applied.

> But I think this is a case in which using a generator expression causes
> a loss in "Pythonicity", rather than a gain. Many, many thanks for this!

I think so too. The solution with the simple generator expression and the fully defined is_prime function may be just adequate. -- http://mail.python.org/mailman/listinfo/python-list
Re: Generators/iterators, Pythonicity, and primes
> Question: Is there a way to implement this algorithm using generator
> expressions only -- no "yield" statements allowed?

Yes. Avoiding the yield statement is easy but one might eventually end up with two statements because one has to produce a side effect on the primes list. However we can use default parameters in lambdas and finally get a single expression which is a generator expression:

    g = (lambda primes = []:
            (n for n in count(2)
             if (lambda n, primes:
                    (n in primes if primes and n <= primes[-1]
                     else (primes.append(n) or True
                           if all(n % p for p in primes if p <= sqrt(n))
                           else False)))(n, primes)))()

    assert g.next() == 2
    assert g.next() == 3
    assert g.next() == 5
    assert g.next() == 7

-- http://mail.python.org/mailman/listinfo/python-list
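Unfolded into a named helper, the expression becomes a self-contained, runnable script (same algorithm, just de-lambda-fied; note that g.next() is spelled next(g) from Python 2.6 on):

```python
from itertools import count
from math import sqrt

def is_prime(n, primes):
    # primes doubles as a cache of all primes found so far
    if primes and n <= primes[-1]:
        return n in primes
    if all(n % p for p in primes if p <= sqrt(n)):
        primes.append(n)
        return True
    return False

primes = []
g = (n for n in count(2) if is_prime(n, primes))

print([next(g) for _ in range(6)])   # [2, 3, 5, 7, 11, 13]
```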
Re: Python Goes Mercurial
> Meh. Use the command line like God intended. I'm sorry to say this Rhodri but there is probably no god ;) The reason I like overlays is that they are data displays that highlight changes without letting me do any action. The VCS works for me before I'm doing any work with it and that's a good thing because I'm *always* lazy. -- http://mail.python.org/mailman/listinfo/python-list
Re: PEP 382: Namespace Packages
On 2 Apr., 17:32, "Martin v. Löwis" wrote: > I propose the following PEP for inclusion to Python 3.1. > Please comment. > > Regards, > Martin > > Abstract > > > Namespace packages are a mechanism for splitting a single Python > package across multiple directories on disk. In current Python > versions, an algorithm to compute the packages __path__ must be > formulated. With the enhancement proposed here, the import machinery > itself will construct the list of directories that make up the > package. > > Terminology > === > > Within this PEP, the term package refers to Python packages as defined > by Python's import statement. The term distribution refers to > separately installable sets of Python modules as stored in the Python > package index, and installed by distutils or setuptools. The term > vendor package refers to groups of files installed by an operating > system's packaging mechanism (e.g. Debian or Redhat packages install > on Linux systems). > > The term portion refers to a set of files in a single directory (possibly > stored in a zip file) that contribute to a namespace package. > > Namespace packages today > > > Python currently provides the pkgutil.extend_path to denote a package as > a namespace package. The recommended way of using it is to put:: > > from pkgutil import extend_path > __path__ = extend_path(__path__, __name__) > > int the package's ``__init__.py``. Every distribution needs to provide > the same contents in its ``__init__.py``, so that extend_path is > invoked independent of which portion of the package gets imported > first. As a consequence, the package's ``__init__.py`` cannot > practically define any names as it depends on the order of the package > fragments on sys.path which portion is imported first. As a special > feature, extend_path reads files named ``*.pkg`` which allow to > declare additional portions. 
> > setuptools provides a similar function pkg_resources.declare_namespace > that is used in the form:: > > import pkg_resources > pkg_resources.declare_namespace(__name__) > > In the portion's __init__.py, no assignment to __path__ is necessary, > as declare_namespace modifies the package __path__ through sys.modules. > As a special feature, declare_namespace also supports zip files, and > registers the package name internally so that future additions to sys.path > by setuptools can properly add additional portions to each package. > > setuptools allows declaring namespace packages in a distribution's > setup.py, so that distribution developers don't need to put the > magic __path__ modification into __init__.py themselves. > > Rationale > = > > The current imperative approach to namespace packages has lead to > multiple slightly-incompatible mechanisms for providing namespace > packages. For example, pkgutil supports ``*.pkg`` files; setuptools > doesn't. Likewise, setuptools supports inspecting zip files, and > supports adding portions to its _namespace_packages variable, whereas > pkgutil doesn't. > > In addition, the current approach causes problems for system vendors. > Vendor packages typically must not provide overlapping files, and an > attempt to install a vendor package that has a file already on disk > will fail or cause unpredictable behavior. As vendors might chose to > package distributions such that they will end up all in a single > directory for the namespace package, all portions would contribute > conflicting __init__.py files. > > Specification > = > > Rather than using an imperative mechanism for importing packages, a > declarative approach is proposed here, as an extension to the existing > ``*.pkg`` mechanism. > > The import statement is extended so that it directly considers ``*.pkg`` > files during import; a directory is considered a package if it either > contains a file named __init__.py, or a file whose name ends with > ".pkg". 
> > In addition, the format of the ``*.pkg`` file is extended: a line with > the single character ``*`` indicates that the entire sys.path will > be searched for portions of the namespace package at the time the > namespace packages is imported. > > Importing a package will immediately compute the package's __path__; > the ``*.pkg`` files are not considered anymore after the initial import. > If a ``*.pkg`` package contains an asterisk, this asterisk is prepended > to the package's __path__ to indicate that the package is a namespace > package (and that thus further extensions to sys.path might also > want to extend __path__). At most one such asterisk gets prepended > to the path. > > extend_path will be extended to recognize namespace packages according > to this PEP, and avoid adding directories twice to __path__. > > No other change to the importing mechanism is made; searching > modules (including __init__.py) will continue to stop at the first > module encountered. > > Discussion > == > > With the addition of ``*.
Re: Python Goes Mercurial
On 2 Apr., 15:05, David Smith wrote: > Kay Schluehr wrote: > > On 1 Apr., 07:56, Lawrence D'Oliveiro > central.gen.new_zealand> wrote: > >> In message <35d429fa-5d13-4703- > > >> a443-6a95c740c...@o6g2000yql.googlegroups.com>, John Yeung wrote: > >>> Here's one that clearly expresses strong antipathy: > >>> http://mail.python.org/pipermail/python-dev/2009-March/087971.html > >> There are lots of GUI- and Web-based front ends to Git. And look at on-line > >> services like GitHub and Gitorious. The level of support for it is huge. > > > Ironically Mercurials most popular UI frontend Tortoise is going to > > crash Python tools ( like Wing-IDE ) on Windows. That's a known issue > > for about a year and more and the developers are not inclined to fix > > it. This doesn't really increase my trust that Mercurials UI tools are > > of a higher quality than Git's no matter which platform is used. > > The conflict between TortoiseHg and Wing IDE can be fixed by simply > uninstalling the Tortoise Overlays. You loose the graphic overlay on > folders, but otherwise everything works. > > --David Good to know. Uninstalling a major feature that enhances usability just to make it usable isn't much less ironic though. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Goes Mercurial
On 1 Apr., 07:56, Lawrence D'Oliveiro wrote: > In message <35d429fa-5d13-4703- > > a443-6a95c740c...@o6g2000yql.googlegroups.com>, John Yeung wrote: > > Here's one that clearly expresses strong antipathy: > > > http://mail.python.org/pipermail/python-dev/2009-March/087971.html > > There are lots of GUI- and Web-based front ends to Git. And look at on-line > services like GitHub and Gitorious. The level of support for it is huge. Ironically Mercurials most popular UI frontend Tortoise is going to crash Python tools ( like Wing-IDE ) on Windows. That's a known issue for about a year and more and the developers are not inclined to fix it. This doesn't really increase my trust that Mercurials UI tools are of a higher quality than Git's no matter which platform is used. -- http://mail.python.org/mailman/listinfo/python-list
Re: Beazley on Generators
On 1 Apr., 07:03, Terry Reedy wrote: > At PyCon2008, David Beazley presented an excellent talk on generators. > Generator Tricks for Systems > Programmers http://www.dabeaz.com/generators/index.html > > At PyCon2009, he followed up with another talk on more advanced > generator usage, which Guido commended on the python-ideas list: > A Curious Course on Coroutines and Concurrency http://dabeaz.com/coroutines/ > > I have just started (this one will take more than one sitting ;-) but it > looks just as good. > > tjr There is just one thing I find disappointing. Since the talk is almost a compendium of advanced uses of generators I'm missing a reference to Peter Thatcher's implementation of monads: http://www.valuedlessons.com/2008/01/monads-in-python-with-nice-syntax.html Peter's implementation can be simplified but it already contains all relevant ideas. -- http://mail.python.org/mailman/listinfo/python-list
Re: Thoughts on language-level configuration support?
> "Discoverable", as in built-in tools that let you have the following > conversation: "Program, tell me all the things I can configure about > you" - "Okay, here they all are". No digging through the source > required. But this doesn't have any particular meaning. If I run a dir(obj) command all attributes of obj will be returned and I can be sure these are all. In C# I can reflect on attributes of an assembly, which is a well defined entity. "Program" is not an entity. It's kind of a user interface name in Star Trek ( which prefers "computer" for this purpose though ). This way we cannot design systems. -- http://mail.python.org/mailman/listinfo/python-list
Re: Relative Imports, why the hell is it so hard?
On 1 Apr., 00:38, Carl Banks wrote: > On Mar 31, 12:08 pm, Kay Schluehr wrote: > > > > And your proposal is? > > > I have still more questions than answers. > > That's obvious. > > Perhaps you should also refrain from making sweeping negative > judgments about a system you have more questions than answers about. > > (Feel free to make sweeping negative judgments once you have the > answers, though.) > > Carl Banks Diagnosing a problem means having a detailed cure? Wow, that's critical thinking I guess. O.K. I have got some ideas for a new import system and I'm going to blog about them within the next days. If I have some results I'll leave a notice in this thread. -- http://mail.python.org/mailman/listinfo/python-list
Re: Relative Imports, why the hell is it so hard?
On 31 Mrz., 20:50, Terry Reedy wrote: > Nothing is added to sys.modules, except the __main__ module, unless > imported (which so are on startup). Yes. The startup process is opaque but at least user-defined modules are not accidentally imported. > > > Although the ceremony has been performed > > basically correctly the interpreter god is not pacified and doesn't > > respond. > > But the import 'ceremony' has not been performed. There is no import ceremony. Imports are just stated in the source. There is a package ceremony for whatever reason. > > But why not? Because it looks up *living* imported > > packages ( in sys.modules ). > > > I don't think there is any particular design idea behind it. The > > module cache is just a simple flat dictionary; a no-brainer to > > implement and efficient for look ups. > > This all dates to the time before packages and imports from zip files > and such. > > > But it counteracts a domain model. > > What is that? Object-oriented programming. > > > All you are left with is those Finders, Loaders and Importers > > in Brett Cannon's importlib. Everything remains deeply mysterious and I > > don't wonder that it took him so long to work this out. > > And your proposal is? I have still more questions than answers. -- http://mail.python.org/mailman/listinfo/python-list
Re: Relative Imports, why the hell is it so hard?
On 31 Mrz., 18:48, s4g wrote: > Hi, > > I was looking for a nice idiom for interpackage imports as I found > this thread. > Here come a couple of solutions I came up with. Any discussion is > welcome. > > I assume the same file structure > > \ App > | main.py > +--\subpack1 > | | __init__.py > | | module1.py > | > +--\subpack2 > | | __init__.py > | | module2.py > > When you run main.py all imports relative to \App work fine, so the > only problem is running a module from within a subpackage as a script. > I therefore assume we want to run module1.py as a script, which wants > to import module2. > > I hope the following solutions are self-evident > > = solution 1 > --> in module1.py > import sys, os > if __name__ == '__main__': > sys.path.append(os.path.normpath(__file__+'/../..')) > > import subpack2.module2 > > = solution 2 > --> in subpack1/__init__.py > import sys, os > > _top_package_level = 1 # with current package being level 0 > > _top_package = os.path.normpath(__file__ + '/..'*(_top_package_level > +1)) > if _top_package not in sys.path: > sys.path.append(_top_package) > > --> in module1 or any module in the package, which requires import > relative to the package top > import __init__ > import subpack2.module2 > > = solution 3 > --> in package_import.py, somewhere on the PYTHONPATH ( perhaps in > standard lib ;) > > def set_top_package(module, level): > _top_package = os.path.normpath(module + '/..'*(level+1)) > if _top_package not in sys.path: > sys.path.append(_top_package) > > class absolute_import(object): > def __init__(self, module, level): > self.level = level > self.module = module > > def __enter__(self): > sys.path.insert( 0, > os.path.normpath(self.module + '/..'*(self.level+1)) > ) > > def __exit__(self, exc_type, exc_val, exc_tb): > del sys.path[0] > > --> in module1 > import package_import > package_import.set_top_package(__file__, 1) > import subpack2.module2 > > --> or in module1 > import package_import > with 
package_import.absolute_import(__file__, 1): > import subpack2.module2 > ... This and similar solutions ( see Istvan Albert's ) point me to a fundamental problem of the current import architecture. Suppose you really want to run a module as a script without a prior import from a module path: ...A\B\C> python my_module.py then the current working directory C is added to sys.path which means that the module finder searches in C but C isn't a known package. There is no C package in sys.modules even if the C directory is "declared" as a package by placing an __init__.py file in it. Same goes of course for B and A. Although the ceremony has been performed basically correctly, the interpreter god is not pacified and doesn't respond. But why not? Because it looks up *living* imported packages in the module cache ( in sys.modules ). I don't think there is any particular design idea behind it. The module cache is just a simple flat dictionary; a no-brainer to implement and efficient for look ups. But it counteracts a domain model. All you are left with is those Finders, Loaders and Importers in Brett Cannon's importlib. Everything remains deeply mysterious and I don't wonder that it took him so long to work this out. -- http://mail.python.org/mailman/listinfo/python-list
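The path arithmetic behind solution 3's set_top_package can at least be checked in isolation; this is a sketch assuming a POSIX-style path, with the App layout from the quoted post:

```python
import os

def top_package_dir(module_file, level):
    # Climb level+1 directories above the module file, exactly as
    # set_top_package does: one ".." removes the file name itself,
    # each further ".." climbs one package level.
    return os.path.normpath(module_file + "/.." * (level + 1))

# module1.py sits one level below the top package App, so level = 1
# (matching the call set_top_package(__file__, 1) in the post):
assert top_package_dir("/App/subpack1/module1.py", 1) == "/App"
```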
Re: Thoughts on language-level configuration support?
On 30 Mrz., 15:40, jfager wrote: > I've written a short post on including support for configuration down > at the language level, including a small preliminary half-functional > example of what this might look like in Python, available > at http://jasonfager.com/?p=440. > > The basic idea is that a language could offer syntactic support for > declaring configurable points in the program. The language system > would then offer an api to allow the end user to discover a programs > configuration service, as well as a general api for providing > configuration values. > > The included example implements the first bit and hints at the third, > defining a function that looks up what variable its output will be > assigned to and tries to find a corresponding value from a > configuration source. It's very preliminary, but I hope it gives a > flavor of the general idea. > > Any thoughts or feedback would be greatly appreciated. The problem with your idea is that those declared configuration points can be overlooked no matter how much syntactical support is added. Let's say a resource file is loaded and a few of the config properties are declared in modules you have written. Now an object wants to access a resource defined in the file and fails because the resource-providing property could not be found: the module defining the property wasn't loaded yet, so the property couldn't register itself. That's why such things are either centralized, as in optparse, with the workflow designed upfront, or implemented locally, where individual units have to take care of their own configuration. -- http://mail.python.org/mailman/listinfo/python-list
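A minimal sketch of the failure mode described above, with all names hypothetical: a registry of configurable points only learns about a property when the declaring module is actually imported, so lookups that happen earlier come up empty:

```python
# Hypothetical registry of configurable points.
REGISTRY = {}

def config_property(name, default):
    # Registration happens only as a side effect of importing the
    # module that contains the declaration.
    REGISTRY[name] = default
    return default

# Before the declaring module is imported, the point is invisible:
assert REGISTRY.get("timeout") is None

# This line stands in for the declaration inside some module,
# executed when that module is finally imported:
TIMEOUT = config_property("timeout", 30)

# Only now is the configuration point discoverable:
assert REGISTRY["timeout"] == 30
```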
Re: Relative Imports, why the hell is it so hard?
On 31 Mrz., 04:55, "Gabriel Genellina" wrote: > En Mon, 30 Mar 2009 21:15:59 -0300, Aahz escribió: > > > In article , > > Gabriel Genellina wrote: > > >> I'd recommend the opposite - use relative (intra-package) imports when > >> possible. Explicit is better than implicit - and starting with 2.7 -when > >> "absolute" import semantics will be enabled by default- you'll *have* to > >> use relative imports inside a package, or fail. > > > Really? I thought you would still be able to use absolute imports; you > > just won't be able to use implied relative imports instead of explicit > > relative imports. > > You're right, I put it wrongly. To make things clear, inside a package > "foo" accessible thru sys.path, containing a.py and b.py: > > site-packages/ >foo/ > a.py > b.py > __init__.py > > Currently, the "a" module can import "b" this way: > > from foo import b > import foo.b > from . import b > import b > > When implicit relative imports are disabled ("from __future__ import > absolute_import", or after 2.7 supposedly) the last one won't find b.py > anymore. > (I hope I put it right this time). > > -- > Gabriel Genellina So it even breaks more code which is great ;) Do you know of any near or far past attempts to re-design the import system from the ground up? I do not mean a rather faithful and accessible reconstruction such as Brett Cannon's work but a radical re-design which starts with a domain model and does not end with Loaders, Importers and Finders which are actually services that pretend to be objects. Kay -- http://mail.python.org/mailman/listinfo/python-list
Re: Another form of dynamic import
On 25 Mrz., 15:23, Marco Nawijn wrote: > Hello, > > In short I would like to know if somebody knows if it is possible to > re-execute a statement that raised an exception? I will explain the > reason by providing a small introduction on why this might be nice in > my case > and some example code. > > I am using the python bindings to a *very* large C++ library. About > 5000 classes divided over approx. 450 different > packages are exposed through the Python interface. To reduce the > number of import statements that need to be inserted and to limit the > number of wildcard imports it would be very helpful if class names > could be automatically imported from the proper module. There is no > problem in finding out the proper module given a (valid) class name. > > As an example, look at the following statement > > >> aPoint = gp_Pnt(1.0, 0.0, 0.0) # Oops, this will raise a NameError, > >> since > > # gp_Pnt class > is unknown > > NameError: name 'gp_Pnt' is not defined > > As indicated, this will raise a NameError exception. What I would like > to do is something like the following (pseudo-code): > > try: > > > aPoint = gp_Pnt(1.0, 0.0, 0.0)[1] > > > > except NameError, e: > > name = e.args[0].split[1] > > if isValid(name): > doImport(name) > ===> Can I go back to statement [1] from this point? > else: > raise e > > There is no problem in catching the exception, finding out which name > is unknown to python and check if this is a valid name for my library. > My question is, is there any possibility of going back to the > statement that raised the error, re-execute the statement and > continue? > > Thanks for any thoughts and suggestions. > > Marco There is no call/cc-style continuation in Python, which is what you are asking for here. I wonder, however, why you don't try lazy attribute access instead?
Instead of making a raw function call like that to gp_Pnt, one can thread all calls to the C++ system through an object that implements __getattr__ and loads new names incrementally if one is missing. -- http://mail.python.org/mailman/listinfo/python-list
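A sketch of that lazy-access idea, with the math module standing in for the C++ bindings and a hypothetical name-to-module mapping (the poster says deriving the proper module from a valid name is not a problem for his library):

```python
import importlib

class LazyNamespace:
    """Thread all access through __getattr__ and import names lazily."""

    # Hypothetical mapping from name to defining module; in the real
    # library this would be computed from the 450 packages.
    _module_of = {"pi": "math", "sqrt": "math"}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            module = importlib.import_module(self._module_of[name])
        except KeyError:
            raise AttributeError(name) from None
        value = getattr(module, name)
        setattr(self, name, value)  # cache: __getattr__ won't fire again
        return value

lib = LazyNamespace()
# First access triggers the import; later accesses hit the cache.
assert abs(lib.sqrt(lib.pi) ** 2 - lib.pi) < 1e-9
```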
Re: Relative Imports, why the hell is it so hard?
On 25 Mrz., 05:56, Carl Banks wrote: > On Mar 24, 8:32 pm, Istvan Albert wrote: > > > On Mar 24, 9:35 pm, Maxim Khitrov wrote: > > > > Works perfectly fine with relative imports. > > > This only demonstrates that you are not aware of what the problem > > actually is. > > > Try using relative imports so that it works when you import the module > > itself. Now run the module as a program. The same module that worked > > fine when you imported it will raise the exception: > > PEP 366 addresses this issue. > > Not the best solution, one that still involves boilerplate, but it is > much less of a hack than your version, and at least it has the > blessing of the language designers so it won't unceremoniously break > at some point. > > Carl Banks A workaround that is hardly acceptable when we are working with / debugging 3rd party packages. Python was simpler without relative imports and occasional ambiguity resolutions by means of absolute imports. Unfortunately Brett Cannon's reconstruction of import semantics comes a little late for Python 3 and I suppose we have to live with the current mess. -- http://mail.python.org/mailman/listinfo/python-list
Re: Does Python have certificate?
On 24 Mrz., 05:30, Steve Holden wrote: > > No, there is no certification for Python. Maybe in the future... > > O'Reilly School of Technology have plans to offer a Python > certification. But I have to write the courses first :) If you're done with it I'd additionally suggest the honorary title of VIPP: Very Important Python Programmer. VIPPs receive a sticker with a Python logo and are immediately spotted this way by their numerous fans. Regards -- http://mail.python.org/mailman/listinfo/python-list
Re: what features would you like to see in 2to3?
On 22 Mrz., 20:39, Benjamin Peterson wrote: > It's GSoC time again, and I've had lots of interested students asking about > doing on project on improving 2to3. What kinds of improvements and features > would you like to see in it which student programmers could accomplish? It would suffice to write some proper 2to3 tutorial-level documentation that explains how to extend 2to3 i.e. how to write new fixers. Most of the material I came across was rather specialized, and cheap student labor is hardly ever available, or on time. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to interface with C# without IronPython
On 17 Mrz., 16:22, Mudcat wrote: > On Mar 17, 6:39 am, Kay Schluehr wrote: > > > > > On 16 Mrz., 23:06, Mudcat wrote: > > > > On Mar 13, 8:37 pm, Christian Heimes wrote: > > > > > Chris Rebert wrote: > > > > > Haven't used it, but Python for .NET sounds like it might be what you > > > > > want: http://pythonnet.sourceforge.net/ > > > > > I've done some development for and with PythonDotNET. It's definitely > > > > the right thing. It works with .NET, Mono and Python 2.4 to 2.6. > > > > > Christian > > > > That looks exactly like what I'm searching for. I'll give it a shot. > > > > One question, the last update for that is back in '07. Do you know if > > > it's still in active development? > > > > Thanks > > > Don't think it's maintained right now. But be aware that it runs well > > for the current CLR 2.0. > > > For using .NET 3.5 features you just have to add > > > c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\ > > > to your Python path and add assemblies with clr.AddReference as > > convenient. > > Based on the archived emails I know that it can work on Python 2.6, > but it's not clear what modifications are necessary. I downloaded the > source files to run independently, and that was capable of running > with installs of Python 2.4 and 2.5. > If I download the installer will > it automatically recognize 2.6, or will I need to change paths and > possibly re-compile? > > Thanks You'll have to change the BUILD option in the project settings of the Python.Runtime assembly. There is already a conditional compilation switch for Python 2.6 available in the source. So after this has been done it will build pythonnet for Python 2.6. -- http://mail.python.org/mailman/listinfo/python-list
Re: How to interface with C# without IronPython
On 16 Mrz., 23:06, Mudcat wrote: > On Mar 13, 8:37 pm, Christian Heimes wrote: > > > Chris Rebert wrote: > > > Haven't used it, but Python for .NET sounds like it might be what you > > > want: http://pythonnet.sourceforge.net/ > > > I've done some development for and with PythonDotNET. It's definitely > > the right thing. It works with .NET, Mono and Python 2.4 to 2.6. > > > Christian > > That looks exactly like what I'm searching for. I'll give it a shot. > > One question, the last update for that is back in '07. Do you know if > it's still in active development? > > Thanks Don't think it's maintained right now. But be aware that it runs well for the current CLR 2.0. For using .NET 3.5 features you just have to add c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\ to your Python path and add assemblies with clr.AddReference as convenient. -- http://mail.python.org/mailman/listinfo/python-list
Re: Indentations and future evolution of languages
On 6 Mrz., 02:53, bearophileh...@lycos.com wrote: > This is an interesting post, it shows me that fitness plateau where > design of Python syntax lives is really small, you can't design > something just similar: > > http://unlimitednovelty.com/2009/03/indentation-sensitivity-post-mort... > > Living on a small fitness plateau isn't good, even if it's very high, > because it's evolutionary unstable :-( > Indentation-wise Haskell syntax seems one of the very few local maxima > that is close enough to the little fitness plateau where Python is. > > Bye, > bearophile Here is a somewhat longer comment: http://fiber-space.de/wordpress/?p=121 Take it with a grain of salt and have much fun. -- http://mail.python.org/mailman/listinfo/python-list
Re: Indentations and future evolution of languages
On 6 Mrz., 02:53, bearophileh...@lycos.com wrote: > This is an interesting post, it shows me that fitness plateau where > design of Python syntax lives is really small, you can't design > something just similar: > > http://unlimitednovelty.com/2009/03/indentation-sensitivity-post-mort... > > Living on a small fitness plateau isn't good, even if it's very high, > because it's evolutionary unstable :-( > Indentation-wise Haskell syntax seems one of the very few local maxima > that is close enough to the little fitness plateau where Python is. > > Bye, > bearophile This is all derived from a blog post about a language no one cares about? -- http://mail.python.org/mailman/listinfo/python-list
Re: Python parser
On 2 Mrz., 23:14, Clarendon wrote: > Thank you, Lie and Andrew for your help. > > I have studied NLTK quite closely but its parsers seem to be only for > demo. It has a very limited grammar set, and even a parser that is > supposed to be "large" does not have enough grammar to cover common > words like "I". > > I need to parse a large amount of texts collected from the web (around > a couple hundred sentences at a time) very quickly, so I need a parser > with a broad scope of grammar, enough to cover all these texts. This > is what I mean by 'random'. > > An advanced programmer has advised me that Python is rather slow in > processing large data, and so there are not many parsers written in > Python. He recommends that I use Jython to use parsers written in > Java. What are your views about this? > > Thank you very much. You'll most likely need a GLR parser. There is http://www.lava.net/~newsham/pyggy/ which I tried once and found it to be broken. Then there is the Spark toolkit http://pages.cpsc.ucalgary.ca/~aycock/spark/ I checked it out years ago and found it was very slow. Then there is bison which can be used with a %glr-parser declaration and PyBison bindings http://www.freenet.org.nz/python/pybison/ Bison might be solid and fast. I can't say anything about the quality of the bindings though. -- http://mail.python.org/mailman/listinfo/python-list
Re: is python Object oriented??
On 29 Jan., 11:21, Gary Herron wrote: > Python *is* object-oriented, but it is not (as your definition suggests) > object-fascist. I'd put it more mildly. Python is object oriented. The orientation is there but the fanaticism is gone. Kay -- http://mail.python.org/mailman/listinfo/python-list
Re: Dynamic methods and lambda functions
On 26 Jan., 15:13, Steve Holden wrote: > Mark Wooding wrote: > > unine...@gmail.com writes: > [...] > > * Assignment stores a new (reference to a) value in the variable. > > > * Binding modifies the mapping between names and variables. > > I realise I have omitted what was doubtless intended to be explanatory > detail, but I am having trouble reconciling those sentences. Would you > mind explaining "in vacuuo" what you see as the difference between > assignment and binding? > > regards > Steve > -- > Steve Holden+1 571 484 6266 +1 800 494 3119 > Holden Web LLC http://www.holdenweb.com/ "Assignment" is binding values to names whereas "binding" is binding names to scopes. Mark's terminology is a bit more standard than Python's in this respect. As you know, Python avoids the idea of variables as if they were storage cells having a fixed logical address. -- http://mail.python.org/mailman/listinfo/python-list
Re: Dynamic methods and lambda functions
On 23 Jan., 13:28, unine...@gmail.com wrote: > Hi, > I want to add some properties dynamically to a class, and then add the > corresponding getter methods. Something resulting in this: > > class Person: > def Getname(self): > return self.__name > > def Getage(self): > return self.__age > > I've implemented the next code, creating the properties from a list: > > props = [ > ("name", "peter"), > ("age", 31), > ("wife", "mary") > ] > > class Person: > def __init__(self): > for prop in props: > setattr(self, "__" + prop[0], prop[1]) > setattr(Person, "Get" + prop[0], lambda self: getattr > (self, "__" + prop[0])) > > if __name__ == "__main__": > > person = Person() > > print person.__name > print person.__age > print person.__wife > print > print person.Getname() > print person.Getage() > print person.Getwife() > > And the resulting execution of this program is: > > peter > 31 > mary > > mary > mary > mary > > The attributes are right, but the getter are not working. The problem > is that the lambda function always execute the last parameter passed > for all instances of the methods. How could it be done the right way? > > Thanks in advance The standard trick is to "de-closure" the lambda using a default argument. So instead of writing lambda self: getattr(self, "__" + prop[0]) you might write lambda self, prop = prop: getattr(self, "__" + prop[0]) Now prop is local to the lambda, and the lambda doesn't look up prop in the enclosing environment, which by the time of the call holds only its last value. -- http://mail.python.org/mailman/listinfo/python-list
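The effect of the trick can be shown in isolation, stripped of the class machinery:

```python
# The pitfall: every lambda closes over the same variable, so after
# the loop has run they all see its final value.
getters = [lambda: prop for prop in ("name", "age", "wife")]
assert [g() for g in getters] == ["wife", "wife", "wife"]

# The fix: a default argument is evaluated when the lambda is created,
# so each lambda keeps the value prop had at that moment.
getters = [lambda prop=prop: prop for prop in ("name", "age", "wife")]
assert [g() for g in getters] == ["name", "age", "wife"]
```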
Re: Two import questions in Python 3.0
On 24 Jan., 18:51, Scott David Daniels wrote: > Kay Schluehr wrote: > > On 24 Jan., 09:21, "Gabriel Genellina" wrote: > >> If you run A.py as a script, it does not "know" it lives inside a package. > >> You must *import* A for it to become aware of the package. > >> Also, the directory containing the script comes earlier than PYTHONPATH > >> entries in sys.path -- so watch for that case too. > > Thanks, yes. I always make the same error thinking that a directory > > with the ritual __init__ file is actually a package ( as some kind of > > platonic entity ), something that is more obvious to me than it is to > > the runtime. The relative import semantics introduced with Python 2.5 > > has made the error just visible that was hidden to me for about a > > decade. Shit. > > Temper the language a bit. You lose your effectiveness by some people > reading the color of your words, rather than their meaning in code. Sigh, yes... sorry. I'm just too frustrated. Actually I don't even know why the import machinery is such a mess and I don't want to spend a huge amount of time ( like Brett Cannon ) to figure it out. I'll spend a few hours of time writing a script that turns all relative paths into absolute ones without changing the source otherwise. Then I'm at least done with that and won't ever see the "relative import in non-package" exceptions anymore in any code I touch ( I can also ignore __package__, -m and all the other workarounds ). It's not the first time Python is in my way but this time it hurts. -- http://mail.python.org/mailman/listinfo/python-list
Re: What's the business with the asterisk?
On 24 Jan., 13:31, mk wrote: > Hello everyone, > > From time to time I spot an asterisk (*) used in the Python code > _outside_ the usual *args or **kwargs application. > > E.g. here:http://www.norvig.com/python-lisp.html > > def transpose (m): > return zip(*m) > >>> transpose([[1,2,3], [4,5,6]]) > [(1, 4), (2, 5), (3, 6)] > > What does *m mean in this example and how does it do the magic here? > > Regards, > mk If zip is specified as def zip(*args): ... one can pass zero or more arguments into zip. In the zip body one has access to the argument tuple args. So zip(a, b, c) yields args = (a, b, c). Now suppose you want to pass the tuple t = (a, b, c) to zip. If you call zip(t) then args = ((a, b, c),). When calling zip(*t) instead the tuple is passed as variable arguments just like they are specified in the signature of zip. So args = (a, b, c). Same holds for def foo(**kwd): ... and foo(**kwd) versus foo(kwd). -- http://mail.python.org/mailman/listinfo/python-list
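The same behavior in a compact, checkable form (using a throwaway show function to make the args tuple visible):

```python
def show(*args):
    # args collects whatever positional arguments were passed
    return args

t = (1, 2, 3)
assert show(t) == ((1, 2, 3),)   # one argument: the tuple itself
assert show(*t) == (1, 2, 3)     # the tuple unpacked into three arguments

# The transpose from the quoted article works exactly this way:
m = [[1, 2, 3], [4, 5, 6]]
assert list(zip(*m)) == [(1, 4), (2, 5), (3, 6)]
```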
Re: Two import questions in Python 3.0
On 24 Jan., 09:21, "Gabriel Genellina" wrote: > If you run A.py as a script, it does not "know" it lives inside a package. > You must *import* A for it to become aware of the package. > Also, the directory containing the script comes earlier than PYTHONPATH > entries in sys.path -- so watch for that case too. Thanks, yes. I always make the same error thinking that a directory with the ritual __init__ file is actually a package ( as some kind of platonic entity ), something that is more obvious to me than it is to the runtime. The relative import semantics introduced with Python 2.5 has made the error just visible that was hidden to me for about a decade. Shit. -- http://mail.python.org/mailman/listinfo/python-list
Two import questions in Python 3.0
1. I'd expected that absolute imports are used in Python 3.0 by default. I may be wrong. I've written two versions of a module sucks.py sucks.py - print ("import from lib.sucks") sucks.py - print ("import from package.sucks") The first is placed in the lib directory that is globally visible by means of PYTHONPATH. The second one is placed in a package package/ __init__.py sucks.py A.py The package also contains a module A.py defined by A.py -- import sucks Running A yields "import from package.sucks" which means unconditional relative import. It shadows the globally visible sucks.py module. I'd expected it the other way round. 2. This is kind of a change request. In a former life I used to call test-scripts as test-scripts. The dumb idea was to point e.g. to lib/tests and run python test_ast.py test_nodeclasses (__main__.AST_Tests) ... ok test_parse (__main__.ASTHelpers_Test) ... ok -- Ran 12 tests in 0.219s OK The new style is the one implemented in lib2to3. If I point to lib/ lib2to3/tests and run python test_parser.py Traceback (most recent call last): File "test_parser.py", line 12, in from . import support ValueError: Attempted relative import in non-package This is the standard error of the years to come. It makes working with Python harder and reminds me that it is not a scripting language anymore, because you can't run anything as a script, not even a test. For pedagogical reasons, test_ast.py and the other standard library tests shall behave uniformly when called from the command line i.e. they shall all fail with this import error message. What do you think? -- http://mail.python.org/mailman/listinfo/python-list
Re: The First Law Of comp.lang.python Dynamics
On 23 Jan., 08:13, Philip Semanchuk wrote: > On Jan 23, 2009, at 12:39 AM, Kay Schluehr wrote: > > > Whatever sufficiently sophisticated topic was initially discussed > > it ends all up in a request for removing reference counting and the > > GIL. > > Is this a variant of Godwin's Law for Python? Definitely. It's a stable fixed point attractor. No matter how often it was discussed to death in the past months the likelihood that someone mentions the GIL or ref-counting approaches 1. This is particularly remarkable because it is inversely proportional to the observable activity in this domain so there are really no news. Other similarly strange phenomena: whenever Xah Lee posts one of his infamous rants it attracts at least a dozen of newsgroup readers that try to persuade each other not to respond which will inevitably grow his thread and keep it alive for a long time. -- http://mail.python.org/mailman/listinfo/python-list
The First Law Of comp.lang.python Dynamics
Whatever sufficiently sophisticated topic is initially discussed, it all ends up in a request for removing reference counting and the GIL. -- http://mail.python.org/mailman/listinfo/python-list
Re: what's the point of rpython?
On 17 Jan., 01:37, "Brendan Miller" wrote: > Is this going anywhere or is this just architecture astronautics? > > The RPython project seems kind of interseting to me and I'd like to > see more python implementations, but looking at the project I can't > help but think that they haven't really explained *why* they are doing > the things they are doing. Remember that the original objective of PyPy was to improve the JIT. Psyco is limited by the fact that the whole runtime is implemented in C. The infamous "faster than C" actually refers to work on program specializers on C code i.e. treating C as a language that is JIT compiled on a fine grained level ( block structure - whole function JIT compilation wouldn't obviously yield any advantages ). So it is not just the application-level Python code that shall run through the JIT but also the interpreter-level code. So why not equate them and think about interpreter-level code as Python as well? This might be the idea. But then the problem of bootstrapping comes up: there is no interpreter-level code with the required properties. Hence RPython, which can serve as a foundation. I'm also not sure I like the approach. Rather than designing a whole new runtime for having a fully reflective system I'd model C in Python ( call it CiPy ) and create a bijection between CiPy code and C code. Once this has been established one can study other translations of CiPy or Python into CiPy ( not unlike Pyrex/Cython ), do systematic refactorings on CiPy code in Python, study properties using runtime reflection, experiment with macro systems etc. All of this is done in PyPy as well but sometimes it seems the team has created more new problems than they solved. > Anyway, I can tell this is the sort of question that some people will > interpret as rude. Asking hard questions is never polite, but it is > always necessary :) I think it is a good question. -- http://mail.python.org/mailman/listinfo/python-list
Re: English-like Python
On 16 Jan., 02:02, The Music Guy wrote: > Just out of curiousity, have there been any attempts to make a version > of Python that looks like actual English text? No, but I once wrote a Python dialect that uses German text. Just look at how amazing this result is !!! But be warned it requires knowledge of the German language. http://www.fiber-space.de/EasyExtend/doc/teuton/teuton.htm > I mean, so much of Python > is already based on the English language that it seems like the next > natural step would be to make a programming language which is actually a > spoken one. As you know Python 3.0 has full unicode support. Python 4.0 will surely be written in Mandarin or Hindi. > For example, the following code... > > >>> import os > > >>> def list_files(dirname): > >>> for p in os.listdir(dirname): > >>> print p > > >>> list_files("some_dir") > > foo > bar > etc > > ...might be translated as... > > >>> Import the operating system module. > > >>> Define a new function as "list files" which accepts > > "a path" and does the following: > For every item in the list returned by the operating system's > directory listing of the given path, do the following: > Print the item. > > >>> List files from "some_dir". > > foo > bar > etc > > Obviously, creating a parser capable of handling such "code" would > require a very good understanding not only of the English language but > also of how ideas expressed in spoken languages are represented in terms > that a computer can understand. Yep. Resolving ambiguities in natural languages is actually an open research topic. Moving from Python to a language that is more context dependent than Larry Wall ever dreamed about and launching an interpreter on the Enterprise is actually a worthwhile project for future generations. Kay -- http://mail.python.org/mailman/listinfo/python-list
Re: import relative (with a directory)
On 11 Jan., 03:27, Chris Rebert wrote: > You should probably check out the relative import syntax introduced in > PEP 328:http://www.python.org/dev/peps/pep-0328/ > It should be able to do exactly what you want. This will lead precisely to exceptions in all of his demo code because the modules are not "scripts" anymore but "modules in a package" - a semantic difference I wasn't even aware of until relative imports were introduced in Python 2.5. I'd rather avoid the sour grapes and use absolute imports. Kay -- http://mail.python.org/mailman/listinfo/python-list
Re: looking for tips on how to implement "ruby-style" Domain Specific Language in Python
O.K. Mark. Since you seem to accept the basic requirement to build an *external* DSL I can provide some help. I'm the author of EasyExtend ( EE ) which is a system to build external DSLs for Python. http://www.fiber-space.de/EasyExtend/doc/EE.html EE is very much work in progress and in the last year I was more engaged with increasing power than with enhancing accessibility for beginners. So be warned. A DSL in EE is called a *langlet*. Download the EE package and import it in a Python shell. A langlet can then be built this way: >>> import EasyExtend >>> EasyExtend.new_langlet("my_langlet", prompt = "myl> ", source_ext = ".dsl") This creates a bunch of files in a directory /EasyExtend/langlets/my_langlet Among them are run_my_langlet.py and langlet.py. You can cd to the directory and apply $python run_my_langlet.py which opens a console with prompt 'myl>'. Each langlet is immediately interactive. A user can also run a langlet-specific module like $python run_my_langlet.py mod.dsl with the suffix .dsl defined in the langlet builder function. Each module xxx.dsl can be imported from other modules of the my_langlet langlet. EE provides a generic import hook for user-defined suffixes. In order to do anything meaningful one has to implement langlet transformations in the langlet.py module. The main transformations are defined in a class called LangletTransformer. It defines a set of visitor methods that are marked by a decorator called @trans. Each @trans method is named like a terminal/non-terminal in a grammar file and responds to a terminal or non-terminal node of the parse tree which is traversed. The structure of the parse tree is the same as what you'd get from Python's builtin parser. It is entirely determined by 4 files: - Grammar which is precisely the Python grammar found in the Python source distribution. - Grammar.ext which defines new non-terminals and overwrites old ones. - Token which defines Python's tokens.
- Token.ext, which is the analog of Grammar.ext for token definitions.

The Grammar.ext file is in the directory my_langlet/parsedef. There is
also an analog lexdef directory for Token.ext. A possible Grammar.ext
extension that overwrites two non-terminals of the Python grammar looks
like this:

Grammar.ext
---
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME | NAME | NUMBER | STRING
atom: ('(' [yield_expr|testlist_gexp] ')' |
       '[' [listmaker] ']' |
       '{' [dictmaker] '}' |
       '`' testlist1 '`' |
       NAME | NUMBER | STRING)
---

Once this has been defined you can start a new my_langlet session and type

myl> navigate_to 'www.openstreetmap.org' website
Traceback (most recent call last):
  File "C:\lang\Python25\lib\site-packages\EasyExtend\eeconsole.py", line 270, in compile_cst
    _code = compile(src,"","single", COMPILER_FLAGS)
  File "", line 1
    navigate_to 'www.openstreetmap.org' website
              ^
SyntaxError: invalid syntax
myl>

It will raise a SyntaxError, but notice that this error stems from the
*compiler*, not the parser. The parser perfectly accepts the modified
non-terminals and produces a parse tree. This parse tree has to be
transformed into a valid Python parse tree that can be accepted by
Python's bytecode compiler. I'm not going into detail here but recommend
reading the tutorial

http://www.fiber-space.de/EasyExtend/doc/tutorial/EETutorial.html

which walks through a complete example that defines a few terminals,
non-terminals and the corresponding transformations. It also shows how
to use command line options to display parse tree properties, unparse
parse trees back into source code ( you can eliminate DSL code from the
code base entirely and replace it by equivalent Python code ), do some
validation on transformed parse trees etc.
-- 
http://mail.python.org/mailman/listinfo/python-list
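The LangletTransformer dispatch described above can be illustrated with a stand-alone sketch. This is not EasyExtend's actual API; the @trans marker and the (name, children) node layout are merely mimicked in plain Python for illustration:

```python
# Stand-alone sketch of the visitor pattern a LangletTransformer uses:
# methods marked with a decorator are dispatched by grammar-node name.
# NOT EasyExtend's real API; names and node layout are illustrative only.

def trans(method):
    method.is_trans = True          # mark the method as a node transformer
    return method

class LangletTransformer:
    def run(self, node):
        """Recursively rewrite a (name, children) parse-tree tuple."""
        name, children = node
        children = [self.run(c) if isinstance(c, tuple) else c
                    for c in children]
        handler = getattr(self, name, None)
        if handler is not None and getattr(handler, "is_trans", False):
            return handler((name, children))
        return (name, children)

class MyTransformer(LangletTransformer):
    @trans
    def trailer(self, node):
        # rewrite the hypothetical bare-NAME trailer `obj name`
        # into an ordinary attribute access `obj.name`
        name, children = node
        if len(children) == 1:
            return (name, [".", children[0]])
        return node

tree = ("trailer", ["website"])
result = MyTransformer().run(tree)
```

The real transformer of course operates on Python's concrete parse-tree node format rather than tuples, but the dispatch-by-nonterminal-name idea is the same.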
Re: looking for tips on how to implement "ruby-style" Domain Specific Language in Python
On 8 Jan., 16:25, J Kenneth King wrote:
> As another poster mentioned, eventually PyPy will be done and then
> you'll get more of an "in-Python" DSL.

May I ask why you consider it important that the interpreter is written
in Python? I see no connection between PyPy and syntactical Python
extensions, and the latter aren't an objective of PyPy anyway.

You can write Python extensions with virtually any Python-aware parser.
M.A. Lemburg already mentioned PLY, and PLY is used for Cython. Then
there is ANTLR, which provides a Python grammar. I also know about two
other Python-aware parsers; one of them was written by myself.
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: looking for tips on how to implement "ruby-style" Domain Specific Language in Python
On 7 Jan., 16:50, J Kenneth King wrote:
> Python expressions are not
> data types either and hence no macros -- I can't write a python function
> that generates python code at compile time.

Have you ever considered that there are languages other than Lisp that
provide macros? Macros have nothing to do with homoiconicity.

> I can only write a python
> program that parses some other string and generates code that can be run
> by another interpreter.

No, it is the same interpreter, and it is also possible to modify Python
parsers on the fly. This is just not possible with Python's builtin
parser.

> Consider:
>
> for i in range(0, 100):
>     do_something_interesting(i)
>
> That's a pretty straight forward Python expression, but I can't do
> anything with it -- it's not a unit of data, it's a string.
>
> The distinction is not subtle by any means.
-- 
http://mail.python.org/mailman/listinfo/python-list
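For what it's worth, the stdlib ast module (available since Python 2.6) does let one treat Python source as data at the library level: parse, transform, recompile. A small sketch with an invented transformer, just to make the point concrete:

```python
import ast

# Parse the disputed `for` example into a tree, rewrite it, and
# recompile it. The transformer renames every reference to
# `do_something_interesting` into `noted`; purely illustrative.
source = """
for i in range(0, 3):
    do_something_interesting(i)
"""

class Rename(ast.NodeTransformer):
    def visit_Name(self, node):
        if node.id == "do_something_interesting":
            node.id = "noted"
        return node

tree = Rename().visit(ast.parse(source))
ast.fix_missing_locations(tree)   # repair line/col info after the rewrite

seen = []
namespace = {"noted": seen.append}
exec(compile(tree, "<dsl>", "exec"), namespace)
```

This is library-level code-as-data, not compile-time macros, so it doesn't settle the macro debate; it only shows the "can't do anything with it" claim is too strong.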
Re: looking for tips on how to implement "ruby-style" Domain Specific Language in Python
> How would one approach this in Python? Do I need to build a custom > loader which compiles *.dsl files to *.pyc files? Is it possible to > switch between the custom DSL and the standard Python interpreter? Sure, but there is no way to avoid extending the Python parser and then your DSL becomes external. I remember having had a similar discussion a while ago with Kevin Dangoor the original TurboGears developer who has also written Paver [1]. In the end DSL syntax wasn't worth the hassle and Kevin developed Paver entirely in Python. Kay [1] http://www.blueskyonmars.com/projects/paver/ -- http://mail.python.org/mailman/listinfo/python-list
Re: If your were going to program a game...
On 1 Jan., 12:37, Tokyo Dan wrote: > If your were going to program a game in python what technologies would > you use? > > The game is a board game with some piece animations, but no movement > animation...think of a chess king exploding. The game runs in a > browser in a window of a social site built around the game. The social > site has login, chat, player stats, list of active games, etc. AND > there is also be a desktop client that accesses the game server via > the same communication mechanism (like an AIR-based desktop client/ > app) as the browser-based version - I guess using JSON/RPC. There is no solution to this problem from a Python perspective. Do what everyone does right now: use Flash for the game and manage your site with Python if you like the language. -- http://mail.python.org/mailman/listinfo/python-list
Re: Code coverage to Python code
On 4 Jan., 12:35, Hussein B wrote: > Hey, > What is the best code coverage tool available for Python? > Thanks. It depends. What are your requirements? -- http://mail.python.org/mailman/listinfo/python-list
Re: New Python 3.0 string formatting - really necessary?
On 20 Dez., 02:54, Steven D'Aprano wrote: > Debated by who? The entire Python-using community? Every single Python > programmer? Or just the small proportion of Python developers who are > also core developers? "If I'd asked people what they wanted, they would have asked for a better horse." Henry Ford -- http://mail.python.org/mailman/listinfo/python-list
Re: Relative imports in Python 3.0
On 17 Dez., 11:01, Nicholas wrote:
> I am sure I am not the first to run into this issue, but what is the
> solution?

When you use 2to3, just disable or delete the file fix_import.py in
lib2to3/fixes/.
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: ActivePython 2.6.1.1 and 3.0.0.0 released!
On 13 Dez., 00:16, Trent Mick wrote: > Note that currently PyWin32 is not included in ActivePython 3.0. Is there any activity in this direction? -- http://mail.python.org/mailman/listinfo/python-list
Re: python3.0 - any hope it will get faster?
On 9 Dez., 11:51, Helmut Jarausch <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I was somewhat surprised when I ran pystones with python-2.5.2 and
> with python-3.0
>
> On my old/slow machine I get
>
> python-2.5.2
> from test import pystone
> pystone.pystones()
> gives (2.73, 18315.018315018315)
>
> python-3.0
> from test import pystone
> pystone.pystones()
> gives (4.2705, 11709.601873536298)
>
> That's a drop of 36% !
>
> I know that processing unicode is inherently slower,
> but still I was surprised that it's so much slower.

On my WinXP notebook:

python-3.0
>>> from test import pystone
>>> pystone.pystones()
(1.1734318188484849, 42610.059823557647)

python-2.5.1
>>> from test import pystone
>>> pystone.pystones()
(1.2927221197107452, 38678.072601703308)
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Guido's new method definition idea
On 6 Dez., 03:21, "Daniel Fetchinson" <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
> The story of the explicit self in method definitions has been
> discussed to death and we all know it will stay. However, Guido
> himself acknowledged that an alternative syntax makes perfect sense
> and having both (old and new) in a future version of python is a
> possibility since it maintains backward compatibility. The alternative
> syntax will be syntactic sugar for the old one. This blog post of his
> is what I'm talking about:
>
> http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay...
>
> The proposal is to allow this:
>
> class C:
>     def self.method( arg ):
>         self.value = arg
>         return self.value
>
> instead of this:
>
> class C:
>     def method( self, arg ):
>         self.value = arg
>         return self.value
>
> I.e. explicit self stays, only the syntax is slightly different and may
> seem attractive to some. As pointed out by Guido, classmethods would
> work similarly:
>
> class C:
>     @classmethod
>     def cls.method( arg ):
>         cls.val = arg
>         return cls.val
>
> The fact that Guido says,
>
> "Now, I'm not saying that I like this better than the status quo. But
> I like it a lot better than [...] but it has the great advantage that
> it is backward compatible, and can be evolved into a PEP with a
> reference implementation without too much effort."
>
> shows that the proposal is viable.

So both forms are dual to each other ( "backward compatibility" ) and
both can be used? I'm -0 on this, although I think the proposal fits
better with the method call syntax.
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Porting to 3.0, test coverage
On 4 Dez., 23:40, Paul Hildebrandt <[EMAIL PROTECTED]> wrote:
> I was just reading what's new with Python 3.0:
> http://docs.python.org/dev/3.0/whatsnew/3.0.html
>
> I like this prerequisite to porting: "Start with excellent test
> coverage"
>
> May I suggest looking into Pythoscope for those looking to boost test
> coverage.
>
> http://pythoscope.org
>
> Paul

This is surely interesting, but occasionally it would be just too nice if
*someone else* would advertise one's own projects ;) Here is more stuff:

http://www.pycheesecake.org/wiki/PythonTestingToolsTaxonomy
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Debugging a Python Program that Hangs
On 2 Dez., 17:19, Kevin D. Smith <[EMAIL PROTECTED]> wrote: > I have a fairly large python program that, when a certain combination > of options is used, hangs. I have no idea where it is hanging, so > simply putting in print statements to locate the spot would be quite > difficult. Unfortunately, ctrl-C'ing the program doesn't print a > traceback either. Looking through the python debugger documentation, I > don't see how to run a python program and interactively stopping it > while it is running. Is there a way to stop within a running python > program to see where it is getting hung up? > > -- > Kevin D. Smith You might approximate the critical location using exceptions instead of prints. That's more costly of course because the program has to be restarted more often but it will serve the same purpose. -- http://mail.python.org/mailman/listinfo/python-list
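Kay's suggestion, bisecting by raising exceptions instead of printing, can be sketched like this (the names and the checkpoint helper are invented for illustration):

```python
import traceback

class Checkpoint(Exception):
    """Raised to abort the program at a suspected location."""

def checkpoint(label, armed=True):
    # Drop checkpoint("...") calls into the suspect code path; the first
    # armed one aborts with a full traceback. If a checkpoint is reached,
    # the hang lies past it, so you move the armed checkpoint and rerun,
    # bracketing the hang by bisection.
    if armed:
        raise Checkpoint(label)

def suspect_code():
    checkpoint("before the loop")      # reached, so the hang is later
    while True:                        # the hang we are hunting
        pass

try:
    suspect_code()
except Checkpoint as exc:
    report = traceback.format_exc()    # full traceback to the checkpoint
    located = str(exc)
```

Each rerun narrows the interval in which the program gets stuck, which is exactly the restart cost Kay mentions.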
Re: Python+Pyjamas+V8=ftw
On 2 Dez., 14:57, lkcl <[EMAIL PROTECTED]> wrote:
> as a general-purpose plugin replacement for /usr/bin/python, however,
> it's not quite there. and, given that javascript cheerfully goes
> about its way with the "undefined" concept, it's always going to be a
> _bit_ tricky to provide absolutely _every_ language feature,
> faithfully.

For some reasons I'd do it the other way round and model JS entirely in
Python first. This is greatly supported by two facts:

1) Python is far more powerful than JS and supports lots of
metaprogramming facilities. For example, providing a Python type that has
JS prototype semantics isn't really a big deal.

2) JS is standardized and the standard is very well documented.

When JS has been mapped onto a dedicated Python framework, one can open
the backchannel and translate the Python framework code to JS. Ideally
the mapping P_JS -> P_PyJS -> P'_JS is an identity. PyJS is unambiguously
fixed by this mapping. Now people may define mappings from Python types
to PyJS types, and all the checks and tests are performed as usual checks
in Python code by API functions of the PyJS framework. The compiler
hasn't anything to do with it anymore and can be somewhat liberated from
hard work.

This roundtrip breaks with the GWT scheme of one-way translation for an
obvious reason: Python is not Java.

Regards, Kay
-- 
http://mail.python.org/mailman/listinfo/python-list
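The claim in 1), that a Python type with JS prototype semantics isn't a big deal, can be backed with a minimal sketch (illustrative only, not PyJS code):

```python
class JSObject:
    """Minimal sketch of JavaScript prototype-chain lookup in Python."""

    def __init__(self, proto=None, **slots):
        # bypass our own __setattr__ while wiring up the internals
        object.__setattr__(self, "proto", proto)
        object.__setattr__(self, "slots", dict(slots))

    def __setattr__(self, name, value):
        self.slots[name] = value       # assignment always hits the object

    def __getattr__(self, name):
        # walk the prototype chain, as JS property lookup does
        obj = self
        while obj is not None:
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.proto
        raise AttributeError(name)

base = JSObject(greeting="hello")
child = JSObject(proto=base)
inherited = child.greeting        # found on the prototype
child.greeting = "hi"             # shadows the prototype slot
shadowed = child.greeting
```

Real JS semantics (property descriptors, `undefined`, delete) need more work, but the core lookup rule is a dozen lines.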
Re: Python docs and enumeration of sections
On 29 Nov., 09:47, Robert Kern <[EMAIL PROTECTED]> wrote:
> Kay Schluehr wrote:
> > Is there a reason why enumeration of sections and subsections has been
> > dropped after the switch to the Sphinx documentation tool?
> >
> > It doesn't really make quoting library sections easier or do you know
> > what I mean when I refer to "How It Works"?
>
> If you hover the mouse over the area just to the right of a section
> title, you will see a paragraph mark which will be a link to the
> section. You can copy-and-paste that URL. The URL is a more convenient
> answer than a section number.
>
> http://docs.python.org/library/pdb.html#how-it-works

I disagree. This might be adequate for web communication like this, which
is bound to HTML, but nothing else. No one who uses e.g. the python26.chm
document or a derived PDF can easily find what we are talking about. I
also wonder about the convenience: not a single RFC, public spec,
standards document or any other important reference omits chapter
enumeration.

Kay
-- 
http://mail.python.org/mailman/listinfo/python-list
Python docs and enumeration of sections
Is there a reason why enumeration of sections and subsections has been
dropped after the switch to the Sphinx documentation tool?

It doesn't really make quoting library sections easier. Or do you know
what I mean when I refer to "How It Works"?
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Getting in to metaprogramming
On 27 Nov., 06:11, Rafe <[EMAIL PROTECTED]> wrote: > On Nov 27, 11:41 am, "Hendrik van Rooyen" <[EMAIL PROTECTED]> > wrote: > > > > > "Steven D'Aprano" wrote: > > > > Well, I don't know about "any problem". And it's not so much about > > > whether metaprograms can solve problems that can't be solved by anything > > > else, as whether metaprograms can solve problems more effectively than > > > other techniques. > > > > If you include factory functions, class factories, the builder design > > > pattern, metaclasses, etc. as "metaprogramming", then I use it all the > > > time, and find it an excellent technique to use. > > > > But if you mean using a Python program to generate Python source code, > > > then I can't think of any time I used it. Which doesn't mean that others > > > don't find it helpful, only that I haven't yet. > > > I am using the term in the restricted sense of Python writing Python source. > > > Given that, can anybody think of an example that you could not do with > > a class? (excepting the "stored procedure" aspect) > > > Or can I claim a new a new meta - rule - I would call it van Rooyen's > > folly... > > > > Thinking further back, when I was young and programming in Apple's > > > Hypercard 4GL, I used to frequently use Hypercard scripts to generate new > > > Hypercard scripts. That was to work around the limitations of the > > > scripting language. > > > What sort of stuff did you do, and would having had simple OO available > > have rendered it unnecessary? > > > > I don't think metaprogramming in the limited sense (programs to output > > > source code) is a bad idea, but I do think that there are probably better > > > alternatives in a language like Python. > > > True. No argument here - I was just wondering if the relationship holds. > > > - Hendrik > > "Given that, can anybody think of an example that you could not do > with a class?" > > Generating a template for a specific script application. 
For example, > a script with pre-defined callbacks that only require the addition of > the contents. > > I was really interested in exploring the idea of using python output, > instead of XML, to record something a user did in a GUI. I have seen > it done and it is really advantageous in the 3D industry because it > means the script files can be edited directly, in a pinch, to generate > something slightly different. > > For example, say we have code which generates a cube by plotting it's > points. A user then changes a point position in the GUI. The change is > saved by outputting the function call to a file with new arguments > (the new point location). If I wanted to make 100s of copies of the > cube, but with a slightly different point position, I could edit the > custom cube's python code and hand it back for creation without using > the GUI. I could do this with XML, but it can be harder to work with > in a text editor (though I have seen some XML editors that make it a > bit easier.) In fact, in most 3D applications, the app prints > everything the user does to a log. Sometimes in a choice of languages, > so I guess I am looking to do the same thing with my own custom tools. > > In a real situation the generated code file can build some pretty > complex 3D object hierarchies. It moves beyond simple storage of data > and becomes a real script that can be hacked as necessary. > > It is nice to have everything as python scripts because we always have > a blend of GUI users and developers to get a job done. > > - Rafe A rather advanced approach to deal with "writing XML in Python" and generating "Python from XML" has been chosen in P4D: http://pypi.python.org/pypi/P4D%20Langlet/1.2.4 It is of course a cultural mismatch to do such stuff because it targets a proper superset of Python but I don't think it's worse than what has been done in a plethora of template languages ( P4D is none of them though ), YAML or related approaches. 
I did it mostly because I wanted to learn Adobe Flex and I wanted to use it as a scripting language, not as something I had to edit in a particular editor like FlexBuilder. -- http://mail.python.org/mailman/listinfo/python-list
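Rafe's record-as-Python idea from the quoted part can be sketched in a few lines; plot_point is a hypothetical scene-building call, not a real API:

```python
# Sketch of recording GUI actions as replayable Python source (Rafe's
# idea). `plot_point` is a hypothetical scene-building call.

def record(calls):
    """Serialize a list of (funcname, args) actions as a Python script."""
    lines = ["# generated session script, edit and re-run at will"]
    for funcname, args in calls:
        lines.append("%s(%s)" % (funcname, ", ".join(repr(a) for a in args)))
    return "\n".join(lines)

session = [("plot_point", (0.0, 0.0, 0.0)),
           ("plot_point", (1.0, 0.0, 0.5))]   # user moved a cube corner
script = record(session)

# Replaying is just exec() with the builder functions in scope:
points = []
exec(script, {"plot_point": lambda *xyz: points.append(xyz)})
```

Because the log is ordinary source, a hand edit of the script (say, changing a coordinate and duplicating a line 100 times) replays exactly as if the user had done it in the GUI, which is the advantage Rafe describes over an XML log.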
Re: Getting in to metaprogramming
On 27 Nov., 05:41, "Hendrik van Rooyen" <[EMAIL PROTECTED]> wrote:
> Given that, can anybody think of an example that you could not do with
> a class? (excepting the "stored procedure" aspect)

I just noticed that corepy 1.0 [1] has been released. Corepy is an
embedded DSL for synthesizing machine code by chaining Python commands.
This means it provides objects, and exploits control structures, to
create machine code that can finally be executed interactively.

Let's say you have an ordinary Python function that computes a CRC-32.
Now you could attempt to translate that function into other Python code
that expresses a corepy routine. You could create a decorator that works
as follows:

1) it reads the source of the decorated function,
2) it transforms the source into corepy source and compiles it, or
3) if 2) fails, it just returns the passed code object.

Kay

[1] http://www.corepy.org/
-- 
http://mail.python.org/mailman/listinfo/python-list
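The three decorator steps can be sketched as a skeleton; to_corepy stands in for the actual source-to-corepy translation, which is the hard part and is deliberately left unimplemented here:

```python
import inspect

def specialize(func):
    """Try to compile `func` into an accelerated version; fall back to
    the original function on any failure (steps 1-3 above)."""
    try:
        source = inspect.getsource(func)          # 1) read the source
        accelerated = to_corepy(source)           # 2) transform + compile
        return accelerated
    except Exception:
        return func                               # 3) graceful fallback

def to_corepy(source):
    # placeholder: a real implementation would emit a corepy routine here
    raise NotImplementedError

@specialize
def crc32_table_index(byte, crc):
    # toy stand-in for a CRC-32 inner step
    return (crc ^ byte) & 0xFF

result = crc32_table_index(0x12, 0xFF)
```

The fallback in 3) is what makes the decorator safe to sprinkle over code: functions that can't be translated simply keep their Python semantics.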
Re: Getting in to metaprogramming
On 25 Nov., 11:08, Rafe <[EMAIL PROTECTED]> wrote: > Hi, > > In the name of self-education can anyone share some pointers, links, > modules, etc that I might use to begin learning how to do some > "metaprogramming". That is, using code to write code (right?) > > Cheers, > > - Rafe http://www.letmegooglethatforyou.com/?q=python+metaprogramming -- http://mail.python.org/mailman/listinfo/python-list
Re: regular expressions ... slow
On 18 Nov., 18:47, Stefan Behnel <[EMAIL PROTECTED]> wrote:
> Kay Schluehr wrote:
> > All of this is prototyped in Python and it is still work in progress.
> > As long as development has not reached a stable state I refuse to
> > rebuild the system in an optimized C version.
>
> And rightfully so:
>
> 1) the approach is algorithmically better, so it may even beat the current
> C implementation by design.
>
> 2) switching languages before finishing and benchmarking the prototype is a
> premature optimisation. It wouldn't be the first prototype going into
> production.
>
> 3) even before considering a reimplementation, you should throw it into
> Cython to translate it into C code, and then benchmark that.
>
> Stefan

I fully agree, and in fact the Trail parser generator contains a single
extension module called cyTrail which is written in Cython. It's not just
a trivial recompilation of Python in Cython but uses all kinds of cdefs.

There is just a difference between optimizing existing code using the
best techniques available and writing code from scratch for speed. I
consider this a different, subsequent project ( call it cTrail ), and I
want to have more fine-grained control than is possible with Cython.
Actually I do want to understand the code in a simple language such as
C, and I have no idea what the code generated by Cython does. There are
entire layers that can be stripped off when not dealing with Python
types and dynamic memory allocation.

Kay
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: regular expressions ... slow
On 17 Nov., 22:37, Uwe Schmitt <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is anobody aware of this post: http://swtch.com/~rsc/regexp/regexp1.html
> ?
>
> Are there any plans to speed up Pythons regular expression module ?
> Or
> is the example in this artricle too far from reality ???
>
> Greetings, Uwe

Some remarks. The basic idea outlined in the article is something that is
re-discovered from time to time. Last year I started working on a parse
tree validator, a tool that checks whether a parse tree P, on which
transformative actions were performed, belongs to a language described by
a grammar G. To check this, G was initially transformed into a set of
NFAs representing the grammar's rules. The checker is a tracing tool that
selects nodes in a parse tree in order to step through the transitions
described in the NFA. If it can't select a state, the validation fails.

Generally there are rules which are not left-factored, just like

R: a b c | a b d

The NFA transition table accordingly looks roughly like this:

(R,0) -> (a,1), (a,2)
(a,1) -> (b,1)
(b,1) -> (c,1)
(c,1) -> (None,'-')
(a,2) -> (b,2)
(b,2) -> (d,2)
(d,2) -> (None,'-')

A valid parse tree P which corresponds to rule R has one of two forms:

P1 = [R, a, b, c] or P2 = [R, a, b, d]

Suppose we want to validate P1 by our NFA. First we select the NFA using
the root node-id R. R corresponds to the start state (R,0). Next we
select the follow states, which are [(a,1), (a,2)]. The projection on the
first argument yields a. Obviously we can shift to a in P1 and match the
follow states (a,1) and (a,2):

P1 = [R, a, b, c]
         ^

Next we shift to b:

P1 = [R, a, b, c]
            ^

Now we have two traces we can follow: the one assigned by (a,1) and the
other one by (a,2). Since we can't decide which one to take, we follow
*both at the same time*. This yields the follow states (b,1) and (b,2),
which can be projected on b. Next we shift to c:

P1 = [R, a, b, c]
               ^

Again we follow both traces and get (c,1) and (d,2). (c,1) fits well.
Now we have to check that termination of P in this state is at least an
option. The follow state of (c,1) is (None,'-'), which is the EXIT symbol
of the NFA. The validation was successful.

The same NFAs used to validate parse trees can be used within a top-down
parser generator as "input tables". Such a parser generator handles
left-factoring automatically and is LL(*). It is also O(n), where n is
the length of the input token stream. Unlike parser generators such as
ANTLR it achieves LL(*) without any advanced lookahead scheme. Instead it
produces traces of NFA states and cancels unused traces ( e.g. there are
two initial traces in our example, ((R,0)(a,1)(b,1)) and
((R,0)(a,2)(b,2)). The first one is canceled once d is selected;
otherwise the second one gets canceled when c gets selected ).

A full parser generator for CFGs is more complex in many respects than a
regular-expression matcher, and AFAIK the one I've built over the last
year is unique in its category. It is also simpler in a few aspects,
because full regexp matchers also deal with lots of context-sensitive
issues that aren't expressed in a CFG.

I used the same parsing scheme to produce a lexer. There is considerable
overhead though, because it produces one parse tree per character, and I
intend to avoid this by using a hybrid lexer in the future that contains
some very fast and some slower components. The reason why I used the
parser generator in the lexer is that it has one significant advantage
over classic regexp parsing schemes: it avoids dependence on order.
Whether you write R = ab|abc or R = abc|ab is insignificant. Longest
match is always preferred unless otherwise stated. This makes extending
the grammar of the lexer very simple.

All of this is prototyped in Python and it is still work in progress. As
long as development has not reached a stable state I refuse to rebuild
the system in an optimized C version. Otherwise, if someone, e.g. a
student, intends to write his master thesis about a next-generation
regexp engine built on that stuff, or even intends to contribute to the
mentioned lexer, I would of course cooperate.

Kay

www.fiber-space.de
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list
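The trace-following validation Kay walks through in this post can be condensed into a short sketch; the state names are taken from his example, and the flat [R, a, b, c] tree form is an assumption for illustration:

```python
# Sketch of the NFA-based parse tree validator described above.
# States are (symbol, trace-id) pairs; None marks the exit state.

NFA = {
    ("R", 0): [("a", 1), ("a", 2)],
    ("a", 1): [("b", 1)],    ("a", 2): [("b", 2)],
    ("b", 1): [("c", 1)],    ("b", 2): [("d", 2)],
    ("c", 1): [(None, "-")], ("d", 2): [(None, "-")],
}

def validate(tree, nfa):
    """Check tree = [R, sym, sym, ...] by following all traces at once."""
    states = nfa[(tree[0], 0)]                 # start from the root rule
    for symbol in tree[1:]:
        # keep every trace whose next state matches the shifted symbol
        states = [s for s in states if s[0] == symbol]
        if not states:
            return False                       # all traces cancelled
        states = [f for s in states for f in nfa[s]]
    return any(s[0] is None for s in states)   # termination must be possible

ok1 = validate(["R", "a", "b", "c"], NFA)
ok2 = validate(["R", "a", "b", "d"], NFA)
bad = validate(["R", "a", "b", "e"], NFA)
```

Note how the ambiguity of the non-left-factored rule never forces a decision: both traces are carried until the input cancels one of them, which is the LL(*)-without-lookahead point made above.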
Re: best python unit testing framwork
On 11 Nov., 23:59, Brendan Miller <[EMAIL PROTECTED]> wrote:
> What would heavy python unit testers say is the best framework?
>
> I've seen a few mentions that maybe the built in unittest framework
> isn't that great.

The UT frameworks follow the same principles and are all more or less
alike. Of course doctest makes a difference and implies another approach
to testing, but whether you use unittest, nosetest or py.test is mostly a
matter of taste. Taste does matter a lot for various people, and when
they have to use CamelCased method names like setUp or assertEqual in
Python they feel alienated and are reminded that Java is out there.

Personally I use unittest.py for a pretty obvious reason: other people
can simply run the test scripts without prior installation of a test
framework.

Kay
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Is psyco available for python 2.6?
On 9 Nov., 20:44, Fuzzyman <[EMAIL PROTECTED]> wrote: > On Nov 9, 2:18 pm, Anton Vredegoor <[EMAIL PROTECTED]> wrote: > > > > > On Thu, 30 Oct 2008 17:45:40 +0100 > > > Gerhard Häring <[EMAIL PROTECTED]> wrote: > > > psyco seems to just work on Linux with Python 2.6. So it is probably > > > "only" a matter of compiling it on Windows for Python 2.6. > > > Yes. I compiled it using "wp setup.py build --compiler=mingw32" with > > cygwin, where wp was an alias for my windows xp python executable. > > > For the OP and other people interested in windows binaries: > > > I am in no way connected to or involved in the psyco development > > process -- except that I downloaded and compiled it -- but I have put a > > zip file on line for people who have a lot of trust in me and little > > patience for waiting for the official distribution. Just unpack it and > > put it in your site-packages directory. > > >http://members.home.nl/anton.vredegoor/psyco/ > > > A. > > I've built a Windows installer if anyone is interested: > > http://www.voidspace.org.uk/python/modules.shtml > > Michael > --http://www.ironpythoninaction.com/ Thanks. You guys rock! -- http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.0 - is this true?
On 9 Nov., 17:49, Terry Reedy <[EMAIL PROTECTED]> wrote: > I was asking the OP ;-) Thank you for the discussion. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.0 - is this true?
On 9 Nov., 09:26, Rhamphoryncus <[EMAIL PROTECTED]> wrote:
> On Nov 8, 10:14 pm, Kay Schluehr <[EMAIL PROTECTED]> wrote:
> > I guess building a multiset is a little more expensive than just O(n).
> > It is rather like building a dict from a list which is O(k*n) with a
> > constant but small factor k. The comparison is of the same order. To
> > enable the same behavior as the applied sorted() a multiset must
> > permit unhashable elements. dicts don't do the job.
>
> Although it has a runtime of k*n, it is still O(n). big-O notation
> ignores constant factors, dealing only with scalability.

You are right. I remembered this shortly after my posting was sent.
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.0 - is this true?
On 9 Nov., 07:06, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote:
> On Sat, 08 Nov 2008 20:36:59 -0800, Kay Schluehr wrote:
> > On 9 Nov., 05:04, Terry Reedy <[EMAIL PROTECTED]> wrote:
> >> Have you written any Python code where you really wanted the old,
> >> unpredictable behavior?
> >
> > Sure:
> >
> > if len(L1) == len(L2):
> >     # check whether two lists contain the same elements
> >     return sorted(L1) == sorted(L2)
> > else:
> >     return False
> >
> > It doesn't really matter here what the result of the sorts actually is
> > as long as the algorithm leads to the same result for all permutations
> > on L1 ( and L2 ).
>
> How often do you care about equality ignoring order for lists containing
> arbitrary, heterogeneous types?

A few times. Why do you care, Steven?

> In any case, the above doesn't work now, since either L1 or L2 might
> contain complex numbers. The sorted() trick only works because you're
> making an assumption about the kinds of things in the lists. If you want
> to be completely general, the above solution isn't guaranteed to work.

You are right. I never used complex numbers in Python, so the problem was
not visible to me. Otherwise a comp function along these lines does the
job in Python 2.X:

def comp(x1, x2):
    try:
        if x1 < ...

> Here is a way to solve the problem assuming only that the items support
> equality:
>
> def unordered_equals2(L1, L2):
>     if len(L1) != len(L2):
>         return False
>     for item in L1:
>         if L1.count(item) != L2.count(item):
>             return False
>     return True

Which is O(n**2), as you might have noted.
-- 
http://mail.python.org/mailman/listinfo/python-list
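Kay's comp function was truncated in the archive; it was presumably a cmp()-style comparison with a fallback for unorderable values such as complex numbers. A reconstruction under that assumption (the fallback key is my invention, not the lost original):

```python
def comp(x1, x2):
    """cmp()-style comparison with a fallback for unorderable values
    (e.g. complex numbers). Reconstruction of the truncated posting;
    the fallback key is an assumption."""
    try:
        if x1 < x2:
            return -1
        elif x1 > x2:
            return 1
        return 0
    except TypeError:
        # unorderable: fall back to an arbitrary but consistent key so
        # that sorting still yields the same result for all permutations
        key1 = (type(x1).__name__, repr(x1))
        key2 = (type(x2).__name__, repr(x2))
        return -1 if key1 < key2 else (1 if key1 > key2 else 0)

r1 = comp(1, 2)
r2 = comp(1 + 2j, 3)
```

Any total order works here, since the point of the original snippet was only that sorting be permutation-independent, not that the order be meaningful.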
Re: Python 3.0 - is this true?
On 9 Nov., 05:49, Alex_Gaynor <[EMAIL PROTECTED]> wrote:
> On Nov 8, 11:36 pm, Kay Schluehr <[EMAIL PROTECTED]> wrote:
> > On 9 Nov., 05:04, Terry Reedy <[EMAIL PROTECTED]> wrote:
> > > Have you written any Python code where you really wanted the old,
> > > unpredictable behavior?
> >
> > Sure:
> >
> > if len(L1) == len(L2):
> >     # check whether two lists contain the same elements
> >     return sorted(L1) == sorted(L2)
> > else:
> >     return False
> >
> > It doesn't really matter here what the result of the sorts actually is
> > as long as the algorithm leads to the same result for all permutations
> > on L1 ( and L2 ).
>
> that same thing could be done with a multiset type, which would also
> have better performance (O(n) vs. O(n log n)).

I guess building a multiset is a little more expensive than just O(n). It
is rather like building a dict from a list, which is O(k*n) with a
constant but small factor k. The comparison is of the same order. To
enable the same behavior as the applied sorted(), a multiset must permit
unhashable elements; dicts don't do the job.
-- 
http://mail.python.org/mailman/listinfo/python-list
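For hashable elements, the multiset comparison Alex suggests is exactly what collections.Counter (added in Python 2.7, after this thread) provides; a sketch:

```python
from collections import Counter

def unordered_equals(L1, L2):
    """O(n) order-insensitive comparison for hashable elements.
    Counter is a dict-based multiset, so this inherits the k*n
    constant Kay mentions, and the hashability restriction too."""
    return Counter(L1) == Counter(L2)

same = unordered_equals([3, 1, 2, 1], [1, 1, 2, 3])
diff = unordered_equals([3, 1, 2], [1, 2, 2])
```

The hashability restriction is Kay's point: for lists of unhashable elements this approach is unavailable, which is why the sorted() trick was attractive in the first place.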
Re: Python 3.0 - is this true?
On 9 Nov., 05:04, Terry Reedy <[EMAIL PROTECTED]> wrote:
> Have you written any Python code where you really wanted the old,
> unpredictable behavior?

Sure:

if len(L1) == len(L2):
    # check whether two lists contain the same elements
    return sorted(L1) == sorted(L2)
else:
    return False

It doesn't really matter here what the result of the sorts actually is
as long as the algorithm leads to the same result for all permutations
on L1 ( and L2 ).
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: How to make money with Python!
On 31 Okt., 15:30, Duncan Booth <[EMAIL PROTECTED]> wrote: > If that subject line didn't trip everyone's killfiles, see > http://pythonide.blogspot.com/2008/10/how-to-make-money-with-free-sof... > for a fantastic story involving Python. > > -- > Duncan Boothhttp://kupuguy.blogspot.com Masterpiece. I dare to say it's much better than SPE ... -- http://mail.python.org/mailman/listinfo/python-list
Re: beutifulsoup
On 30 Okt., 18:28, luca72 <[EMAIL PROTECTED]> wrote:
> hello
> Another stupid question: instead of using
>
> sito = urllib.urlopen('http://www.prova.com/')
> esamino = BeautifulSoup(sito)
>
> i do
>
> sito = urllib.urlopen('http://onlygame.helloweb.eu/')
> file_sito = open('sito.html', 'wb')
> for line in sito:
>     file_sito.write(line)
> file_sito.close()
>
> how can i pass the file sito.html to beautifulsoup?
>
> Regards
>
> Luca

download = urllib.urlopen("http://www.fiber-space.de/downloads/downloads.html")
BeautifulSoup(download.read())

Ciao
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: beutifulsoup
On 29 Okt., 17:45, luca72 <[EMAIL PROTECTED]> wrote:
> Hello
> I try to use beautifulsoup
> i have this:
> sito = urllib.urlopen('http://www.prova.com/')
> esamino = BeautifulSoup(sito)
> luca = esamino.findAll('tr', align='center')
> print luca[0]
> [garbled HTML snippet: a <tr> row containing "Only|G|BoT|05" inside an
> onclick link, "#1", "44.4MB" and "Pc-prova.rar"]
> I need to get the following information:
> 1) Only|G|BoT|05
> 2) #1
> 3) 44.4MB
> 4) Pc-prova.rar
> with print luca[0].a.string i get #1
> with print luca[0].td.string i get 44.4MB
> can you explain me how to get the other two values
> Thanks
> Luca

The same way you got `luca`:
1,2) luca[0].find("a")["onclick"].split("'") and search through the result list
3) luca[0].find("td").string
4) luca[0].find("font").string
-- http://mail.python.org/mailman/listinfo/python-list
Re: parsing MS word docs -- tutorial request
On 28 Okt., 15:25, [EMAIL PROTECTED] wrote:
> All,
> I am trying to write a script that will parse and extract data from a
> MS Word document. Can / would anyone refer me to a tutorial on how to
> do that? (perhaps from tables). I am aware of, and have downloaded
> the pywin32 extensions, but am unsure of how to proceed -- I'm not
> familiar with the COM API for word, so help for that would also be
> welcome.
> Any help would be appreciated. Thanks for your attention and
> patience.
> ::bp::

One can convert MS-Word documents into a class of XML documents called MHTML. If I remember correctly those documents had an .mht extension. The result is a huge amount of ( nevertheless structured ) markup gibberish together with text. If one spends time and attention one can find patterns in the markup ( we have XML and it's human readable ). A few years ago I used this conversion to implement roughly the following algorithm:

1. I manually highlighted one or more sections in a Word doc using a background colour marker.
2. I searched for the colour-marked section and determined the structure. The structure information was fed into a state machine.
3. With this state machine I searched for all sections that were equally structured.
4. I applied an href link to the text that was surrounded by the structure and removed the colour marker.
5. In another document I searched for the same text and set an anchor.

This way I could link two documents ( those were public specifications being originally disconnected ).

Kay
-- http://mail.python.org/mailman/listinfo/python-list
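A rough sketch of step 2, under stated assumptions: the highlight is assumed to show up in the exported markup as a `background:yellow` style on a `<span>` element. The real attribute syntax varies between Word versions, so treat the pattern as purely illustrative:

```python
import re

# Hypothetical sketch: find text runs marked with a yellow background
# marker in Word's exported HTML/MHTML markup.  The exact style syntax
# is an assumption; real exports differ between Word versions.
HIGHLIGHT = re.compile(
    r'<span[^>]*style="[^"]*background:\s*yellow[^"]*"[^>]*>(.*?)</span>',
    re.IGNORECASE | re.DOTALL)

def highlighted_sections(markup):
    """Return the text of all colour-marked spans in the markup."""
    return HIGHLIGHT.findall(markup)

sample = '<p>plain <span style="background:yellow">marked text</span></p>'
print(highlighted_sections(sample))  # ['marked text']
```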
Re: PyGUI as a standard GUI API for Python?
On 11 Okt., 09:56, lkcl <[EMAIL PROTECTED]> wrote:
> > The role of Python is somewhat arbitrary. This could change only if
> > Python becomes a client side language executed by AVM, V8 etc.
> pyv8 - http://advogato.org/article/985.html
> pyjs.py - standalone python-to-javascript compiler, see http://pyjs.org.

O.K. you got me all. I give up! I've already implemented some small Flex supporting functionality in the P4D Langlet [1] and I'll also check out pyjamas ( I haven't known them for long ) and if they are fine I'll work out a bridge in the next days. Then only style sheets are left :)

[1] http://pypi.python.org/pypi/P4D Langlet/
-- http://mail.python.org/mailman/listinfo/python-list
Re: Linux.com: Python 3 makes a big break
On 18 Okt., 22:01, Jean-Paul Calderone <[EMAIL PROTECTED]> wrote:
> Perhaps it also omitted the fact that nothing prevents you from defining a
> function to write things to stdout (or elsewhere) in Python 2.5, making the
> Python 3.x change largely a non-feature. ;)
> Jean-Paul

Even more: if someone had solved the hard problem of finding a less cumbersome way of writing sys.stdout.write(...), the requests for multiline lambdas ( multi-expression lambdas, actually ) might have decreased by about 75-80%. -- http://mail.python.org/mailman/listinfo/python-list
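A sketch of such a helper (`echo` is a made-up name; it returns the written line purely for convenience). Because it is an expression-callable function rather than the Python 2 print statement, it can be used inside a lambda:

```python
import sys

def echo(*args):
    # One-expression substitute for repeated sys.stdout.write() calls;
    # usable inside a lambda, unlike the print statement in Python 2.
    line = " ".join(str(a) for a in args) + "\n"
    sys.stdout.write(line)
    return line

# Dispatch table of lambdas with side effects -- the use case that
# usually triggers the multiline-lambda request.
table = {1: lambda: echo("one"), 2: lambda: echo("two")}
table[1]()  # prints: one
```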
Re: Overloading operators
On 15 Okt., 14:34, Mr.SpOOn <[EMAIL PROTECTED]> wrote:
> Hi,
> in a project I'm overloading a lot of comparison and arithmetic
> operators to make them work with more complex classes that I
> defined.
> Sometimes I need a different behavior of the operator depending on the
> argument. For example, if I compare an object with an int, I get one
> result, but if I compare the same object with a string, or another
> object, I get another result.
> What is the best way to do this? Shall I use a lot of "if...elif"
> statements inside the overloaded operator? Or is there a more pythonic
> and dynamic way?

I can't see anything wrong with it. Sometimes I try to avoid isinstance() though because it is a rather slow operation. If the majority of operations is single-typed one can also use a try-stmt:

def __add__(self, other):
    try:
        return self._val + other
    except TypeError:
        return self.__add__(SomeWrapper(other))

and compare performance using a profiler. Notice also that

if type(obj) == T: BLOCK

is much faster than isinstance(), but it's not OO-ish and very rigid.
-- http://mail.python.org/mailman/listinfo/python-list
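As a toy illustration of the if/elif dispatch described above (the class and its semantics are invented for the example):

```python
class Pitch(object):
    # Made-up example class: equality behaves differently depending
    # on the type of the other operand.
    def __init__(self, value, name):
        self.value = value   # e.g. a MIDI number
        self.name = name     # e.g. a note name

    def __eq__(self, other):
        if isinstance(other, Pitch):
            return self.value == other.value
        elif isinstance(other, int):
            return self.value == other
        elif isinstance(other, str):
            return self.name == other
        # Let Python try the reflected operation for unknown types.
        return NotImplemented

p = Pitch(60, "C")
print(p == 60, p == "C", p == Pitch(60, "B#"))  # True True True
```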
Re: type-checking support in Python?
On 6 Okt., 16:19, Joe Strout <[EMAIL PROTECTED]> wrote:
> I'm just re-learning Python after nearly a decade away. I've learned
> a good healthy paranoia about my code in that time, and so one thing
> I'd like to add to my Python habits is a way to both (a) make intended
> types clear to the human reader of the code, in a uniform manner; and
> (b) catch any type errors as early and automatically as possible.
> I found the "typecheck" module (http://oakwinter.com/code/typecheck/),
> but I notice that it hasn't been updated since 2006, and it's not
> included with the Python distribution. Are there other modules
> providing similar functionality that I should consider instead?

Incidentally I started to use the typecheck package just yesterday. It's not that I'm using it in a typical application but I'm working on a mapping of a statically typed language onto a Python framework that emerges in parallel. So I have to rebuild the type semantics of the original language in Python but defer typechecks until runtime.

My impressions so far have been mixed. I stumbled across some strange failures that required uncommenting source code in the typecheck package which might cause failures elsewhere. I also hit a barrier when working with methods instead of functions. I also noticed that passing two strings to a function decorated like

@accepts(Number, Number)
def add(x, y):
    return x + y

is acceptable behaviour.

From all those experiences I'd rate the package alpha and I'm sad noticing that it is apparently abandonware. I'll continue to use it nevertheless.
-- http://mail.python.org/mailman/listinfo/python-list
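For comparison, a minimal stand-in for what `@accepts` is supposed to do, using plain isinstance() checks and `int` instead of typecheck's `Number` so the sketch stays dependency-free. Unlike the behaviour reported above, it does reject two strings:

```python
import functools

def accepts(*types):
    # Minimal sketch of a runtime type-checking decorator: verify each
    # positional argument against its declared type at call time.
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args):
            for a, t in zip(args, types):
                if not isinstance(a, t):
                    raise TypeError("%r is not a %s" % (a, t.__name__))
            return f(*args)
        return wrapper
    return decorate

@accepts(int, int)
def add(x, y):
    return x + y

print(add(1, 2))  # 3
# add("a", "b") now raises TypeError instead of returning "ab"
```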
Re: python 3: sorting with a comparison function
On 10 Okt., 23:04, Terry Reedy <[EMAIL PROTECTED]> wrote: > Kay Schluehr wrote: > > Me too because I don't get this: > > > "key specifies a function of one argument that is used to extract a > > comparison key from each list element: key=str.lower. The default > > value is None." > > I am not sure what you do not get, but it should say 'for example, > key=str.lower." None is the default value of key. See the discussion above. -- http://mail.python.org/mailman/listinfo/python-list
Re: python 3: sorting with a comparison function
On 10 Okt., 20:38, [EMAIL PROTECTED] wrote: > Kay Schluehr: > > > Sometimes it helps when people just make clear how they use technical > > terms instead of invoking vague associations. > > And generally Python docs can enjoy growing few thousands examples... Cleaning up and extending documentation is a large community effort that requires an informational PEP for guidelines and management support by the python-dev leads. The official documentation is ad hoc and just about better than nothing. A Python documentation guideline might also have positive impact on 3rd party package authors like us. Generally Python has become a very well managed project. I hope the docs as well as the stdlib will become the major concerns of Python 3.1. -- http://mail.python.org/mailman/listinfo/python-list
Re: python 3: sorting with a comparison function
On 10 Okt., 19:22, [EMAIL PROTECTED] wrote:
> On Oct 10, 8:35 am, Kay Schluehr <[EMAIL PROTECTED]> wrote:
> > On 9 Okt., 22:36, [EMAIL PROTECTED] wrote:
> > > Yes, that's a wonderful thing, because from the code I see around
> > > 99.9% of people see the cmp and just use it, totally ignoring the
> > > presence of the 'key' argument, that allows better and shorter
> > > solutions of the sorting problem.
> > Me too because I don't get this:
> > "key specifies a function of one argument that is used to extract a
> > comparison key from each list element: key=str.lower. The default
> > value is None."
> > Kay
> Don't know if further explanation is needed, but here is the deal:
> cmp is a function that receives two values and you return -1, 0 or 1
> depending if the first is smaller, equal or bigger. 99% of the time
> you will do some operation on the values that come in and then do an if
> statement with ">" or "<" and return -1, 0, 1.
> key is a function that receives one value and you return the value
> that you would normally compare against.
> Let me show an example:
> >>> data = [(4,'v'), (2,'x'), (1,'a')]
> >>> sorted(data)
> [(1, 'a'), (2, 'x'), (4, 'v')]
> OK, we sorted the data, but what if we want to sort by the letter
> instead of the number? Let's use cmp:
> >>> def comp(x, y):
> ...     key_of_x = x[1]
> ...     key_of_y = y[1]
> ...     if key_of_x < key_of_y:
> ...         return -1
> ...     elif key_of_x > key_of_y:
> ...         return 1
> ...     else:
> ...         return 0  # key_of_x == key_of_y
> >>> sorted(data, cmp=comp)
> [(1, 'a'), (4, 'v'), (2, 'x')]
> Very well, so how do we do this using key?
> >>> def keyfunc(x):
> ...     key_of_x = x[1]
> ...     return key_of_x
> >>> sorted(data, key=keyfunc)
> [(1, 'a'), (4, 'v'), (2, 'x')]
> Same output. Very good.
> ( Of course a smart python developer would use the operator module so
> he doesn't even have to write keyfunc but this was just an example )
> In summary, to transform most cmp functions to a key function you just
> take the code that calculates the first value to be compared and leave
> out the rest of the logic.
> Hope that was helpful.

Yes, thanks a lot. In essence the "key" is a function that maps each list element onto a value of a type for which a known order is defined, e.g. an integer. Applying sorted() then sorts the list elements according to the order of those key values. Sometimes it helps when people just make clear how they use technical terms instead of invoking vague associations. -- http://mail.python.org/mailman/listinfo/python-list
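The operator-module variant mentioned in passing in the quoted post looks like this:

```python
import operator

data = [(4, 'v'), (2, 'x'), (1, 'a')]
# operator.itemgetter(1) builds the key function "element -> element[1]",
# replacing the hand-written keyfunc above.
print(sorted(data, key=operator.itemgetter(1)))
# [(1, 'a'), (4, 'v'), (2, 'x')]
```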
Re: python 3: sorting with a comparison function
On 9 Okt., 22:36, [EMAIL PROTECTED] wrote: > Yes, that's a wonderful thing, because from the code I see around > 99.9% of people see the cmp and just use it, totally ignoring the > presence of the 'key' argument, that allows better and shorter > solutions of the sorting problem. Me too because I don't get this: "key specifies a function of one argument that is used to extract a comparison key from each list element: key=str.lower. The default value is None." Kay -- http://mail.python.org/mailman/listinfo/python-list
Re: How to read a jpg bytearray from a Flash AS3 file
On 26 Sep., 08:47, [EMAIL PROTECTED] wrote: > I'm trying to save an image from a Flash AS3 to my server as a jpg > file. I found some PHP code to do this, but I want to do this in > Python. I'd expect you use AS3 to save the image file ( just looking at Adobe's AS3 docs on how this works ) and load it with PIL if you want to post-process it in Python: http://www.pythonware.com/products/pil/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Does anybody use this web framework ?
On 24 Sep., 09:26, Bruno Desthuilliers wrote: > Phil Cataldo wrote: > > Hi, > > I just found this new? python web framework > > (http://pypi.python.org/pypi/nagare/0.1.0). > > Does anybody know or use it ? > First time I hear of it, but it looks interesting (note : Stackless > continuation-based framework). Thanks for the link. Ditto! -- http://mail.python.org/mailman/listinfo/python-list
Re: Python is slow?
On 23 Sep., 21:23, J Peyret <[EMAIL PROTECTED]> wrote: > On Sep 23, 8:31 am, [EMAIL PROTECTED] wrote: > > Guys, this looks like a great data structure/algo for something I am > working on. > > But... where do I find some definitions of the original BK-tree idea? *geometric data structures*. Just google for it. -- http://mail.python.org/mailman/listinfo/python-list
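For readers hunting for the definition: one possible sketch of the BK-tree idea itself, keyed on Levenshtein distance. The node layout (children indexed by their distance to the parent) is just one choice; the query prunes whole subtrees via the triangle inequality:

```python
def levenshtein(a, b):
    # Standard dynamic-programming edit distance (Wagner-Fischer).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

class BKTree(object):
    # Sketch of a BK-tree: each node stores a word plus a dict of
    # children keyed by their distance to that word.
    def __init__(self, distance=levenshtein):
        self.distance = distance
        self.root = None  # (word, {dist: child_node})

    def add(self, word):
        if self.root is None:
            self.root = (word, {})
            return
        node = self.root
        while True:
            d = self.distance(word, node[0])
            if d in node[1]:
                node = node[1][d]       # descend along the same distance
            else:
                node[1][d] = (word, {}) # attach a new leaf here
                return

    def query(self, word, radius):
        # Collect all stored words within `radius` of `word`; the
        # triangle inequality lets us skip children whose edge label
        # lies outside [d - radius, d + radius].
        results, stack = [], [self.root] if self.root else []
        while stack:
            node = stack.pop()
            d = self.distance(word, node[0])
            if d <= radius:
                results.append(node[0])
            for dist, child in node[1].items():
                if d - radius <= dist <= d + radius:
                    stack.append(child)
        return results

tree = BKTree()
for w in ["book", "books", "cake", "boo", "cape"]:
    tree.add(w)
print(sorted(tree.query("book", 1)))  # ['boo', 'book', 'books']
```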