Re: Python users in Stavanger, Norway?
On Apr 3, 3:32 am, Austin Bingham wrote: > Hei! > > I'm a Python developer in Stavanger, Norway looking for other Python > users/developers/etc. who might be interested in starting a local user > group. Anyone interested? This group might actually evolve into a > general programming/computer group, depending on the mix of people, > but I think that's probably a good thing. > > I know there are a lot of computer types in the area, but there > doesn't seem to be much of a "community". I'd like to change that if > we can, so let me know if you're interested. > > Austin In addition to trying here, try #python on Freenode. You're probably more likely to find users there. You may also want to consider expanding the geographic area if you don't have much luck (perhaps for the region of Norway that Stavanger is in). Rafe (also not from Stavanger) -- http://mail.python.org/mailman/listinfo/python-list
Re: Decorator Syntax
On Mar 21, 8:59 pm, Mike Patterson wrote: > In my Python class the other day, the professor was going over > decorators and he briefly mentioned that there had been this huge > debate about the syntax and using the @ sign to signify decorators. > > I read about the alternative forms proposed here > (http://www.python.org/dev/peps/pep-0318/#syntax-alternatives). > > Has anyone thought about just using dec to declare a decorator? > > For example: > dec dec2 > dec dec1 > def func(arg1, arg2, ...): > pass I personally love the @ syntax for two reasons: 1. It makes it very, very obvious that a decorator is being used 2. It feels closely tied to the function or class that it's decorating The 'dec' syntax isn't quite as good in those regards, IMO. It basically looks like any other statement, which makes it less visible, and it doesn't seem as closely tied syntactically. Of course, that's all opinion. But what's done is done; it's doubtful that the decorator syntax will ever change significantly. Rafe -- http://mail.python.org/mailman/listinfo/python-list
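For anyone newer to the syntax, here is roughly what the @ form buys you (a tiny sketch; the decorator itself is made up):

def shout(func):
    def wrapper(*args, **kwargs):
        print "calling %s" % func.__name__
        return func(*args, **kwargs)
    return wrapper

@shout
def greet(name):
    print "hello, %s" % name

# The @shout line above is just shorthand for:
#     greet = shout(greet)
# but it sits right on top of the def, which is why it reads as
# "closely tied" to the function it decorates.

greet("world")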
Re: What do you use with Python for GUI programming and why?
On Mar 10, 9:25 pm, Robert wrote: > Is there a push to one toolkit or the other? > > -- > Robert

I've mainly used Tkinter for a few reasons:
- It's what I already know
- It's pretty simple
- Most people who have Python have it too, so there are no crazy dependencies
- It looks decent on Gnome and KDE, good on Windows and Mac
- I can develop a full-featured GUI in a few hours, no sweat (partly because it's simple, partly because I know it)

I've tried PyGTK and PyQt before, but they were both more complicated than I'd like. I decided to stick with Tkinter because I was proficient with it.

Rafe
--
http://mail.python.org/mailman/listinfo/python-list
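A minimal Tkinter sketch of the kind of quick GUI I mean (the label, button and callback are just made-up placeholders):

import Tkinter as tk

def on_click():
    print "button pressed"

root = tk.Tk()
root.title("Example")
tk.Label(root, text="Hello from Tkinter").pack(padx=10, pady=5)
tk.Button(root, text="Press me", command=on_click).pack(pady=5)
root.mainloop()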
Re: SCM
On Mar 8, 7:21 am, Stefan Behnel wrote: > Cliff Scherer, 08.03.2011 12:42: > > > I am looking for a Python library, which can handle the modelling of > > material flows in Supply Chains. > > Note that TLAs do not always uniquely identify a subject. "SCM" is easily > read as "source code management" or "software configuration management" on > a programming related list. There's also a Scheme implementation with that > name. > > Stefan That's how I read it -- I was interested because I thought the thread was about "source control management". Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: having both dynamic and static variables
> Finally, Python 3 introduced type annotations, which are currently a > feature looking for a reason. By type annotations, do you mean function annotations (see PEP 3107, http://www.python.org/dev/peps/pep-3107/)? Or is this some other feature that I'm unaware of? Rafe -- http://mail.python.org/mailman/listinfo/python-list
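For anyone following along, PEP 3107 function annotations look like this in Python 3 (a small sketch; the annotations are simply stored on the function object, nothing enforces them):

def repeat(text: str, times: "how many copies" = 2) -> str:
    return text * times

print(repeat("ab"))            # abab
print(repeat.__annotations__)  # roughly: {'text': str, 'times': 'how many copies', 'return': str}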
Re: Python for embedded systems?
On Feb 23, 6:53 pm, Paulito wrote: > Apologies if this has been asked; I haven't yet Googled the archives. > > From a brief email conversation, Guido pointed me to this newsgroup to > ask the following questions: > > "Is Python 'mature' enough to be considered the primary language for > embedded systems? Is the performance there for real-time applications > (eg avionics, real-time control systems) or is it still more suitable > "...as a glue language, used to combine components written in C++" ?" > > And further: > "Has anyone tried to shorten development time when porting code to a > new embedded hardware platform, by trying to convert legacy code (C/C+ > +/Ada) to Python?" > > I'm currently thinking that Python isn't there yet but certainly would > like to hear any feedback. > > Your input is greatly appreciated; thanks! > - Paulito Python is probably not the best choice for embedded systems because it lacks fine control over hardware, something that C and C++ have in bunches. Also, it doesn't perform well enough to be considered for situations where resources are at a premium (think microcontrollers). Could you develop in Python for a platform like Android or iPhone? Yeah, they have the space, memory, and CPU to run Python stuff. But on weaker CPUs and less memory, Python would be a poor choice. It's not a matter of language maturity, Python is very mature, it's a matter of design. Python is a high-level, garbage-collected, interpreted language, and that's not the ideal type of language for embedded systems. Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Return Values & lambda
On Feb 21, 7:59 pm, Steven D'Aprano wrote: > On Mon, 21 Feb 2011 16:43:49 -0800, Rafe Kettler wrote: > > On Feb 21, 1:59 pm, pradeepbpin wrote: > >> I have a main program module that invokes an input dialog box via a > >> menu item. Now, the code for drawing and processing the input of dialog > >> box is in another module, say 'dialogs.py'. I connect the menu item to > >> this dialog box by a statement like, > > >> manu_item.connect('activate', lambda a: dialogs.open_dilaog()) > > >> If this function open_dialog() returns a list of dialog inputs values, > >> how can I access those values in the main module ? > > > Moreover, I don't see why you need a lambda in this case. Why not just > > pass the function itself? > > My guess is that the callback function is passed a single argument, and > open_dialog doesn't take any arguments, hence the wrapper which just > ignores the argument and calls the function. > > -- > Steven That would sound reasonable. Rafe -- http://mail.python.org/mailman/listinfo/python-list
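To spell out Steven's point with a toy stand-in for the GUI toolkit (FakeMenuItem is made up; it just mimics a connect() that calls the handler with the widget as its only argument):

class FakeMenuItem(object):
    def __init__(self):
        self.handlers = []
    def connect(self, signal, handler):
        self.handlers.append(handler)
    def activate(self):
        for handler in self.handlers:
            handler(self)  # the widget is passed as the single argument

def open_dialog():           # takes no arguments at all
    return ["some", "values"]

item = FakeMenuItem()
# the lambda exists only to swallow the widget argument:
item.connect('activate', lambda widget: open_dialog())
item.activate()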
Re: Return Values & lambda
On Feb 21, 1:59 pm, pradeepbpin wrote: > I have a main program module that invokes an input dialog box via a > menu item. Now, the code for drawing and processing the input of > dialog box is in another module, say 'dialogs.py'. I connect the menu > item to this dialog box by a statement like, > > manu_item.connect('activate', lambda a: dialogs.open_dilaog()) > > If this function open_dialog() returns a list of dialog inputs values, > how can I access those values in the main module ? Moreover, I don't see why you need a lambda in this case. Why not just pass the function itself? Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.2 and html.escape function
On Feb 20, 8:15 am, Gerald Britton wrote: > I see that Python 3.2 includes a new module -- html -- with a single > function -- escape. I would like to know how this function differs > from xml.sax.saxutils.escape and, if there is no difference (or only a > minor one), what the need is for this new module and its lone function > > -- > Gerald Britton I believe that they're trying to organize a new top-level package called html that will, at some point, contain all HTML functionality. It's sort of similar to what they're doing with the concurrent package. -- http://mail.python.org/mailman/listinfo/python-list
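A quick comparison of the two, for the record (Python 3.2; note that html.escape() escapes quotes by default, while xml.sax.saxutils.escape() only handles &, < and > unless you pass an entities dict):

from xml.sax.saxutils import escape as sax_escape
import html  # new in Python 3.2

text = '<a href="x">Tom & Jerry</a>'
print(sax_escape(text))                 # &lt;a href="x"&gt;Tom &amp; Jerry&lt;/a&gt;
print(html.escape(text))                # &lt;a href=&quot;x&quot;&gt;Tom &amp; Jerry&lt;/a&gt;
print(html.escape(text, quote=False))   # same as the saxutils version here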
Re: Python 3.2
On Feb 21, 10:54 am, Duncan Booth wrote: > Georg Brandl wrote: > > Please consider trying Python 3.2 with your code and reporting any bugs > > you may notice to: > > > http://bugs.python.org/ > > It looks like this release breaks the builtin `input()` function on Windows > by leaving a trailing '\r' on the end of the string. > > Reported as issue 11272: http://bugs.python.org/issue11272 > > -- > Duncan Booth http://kupuguy.blogspot.com

Are fixes for these bugs going to wait until the next release (3.2.1, I guess), or will updates be released as soon as the fixes are available? I want to know if I should be on the lookout for a better version of 3.2.

Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: Best way to dynamically get an attribute from a module from within the same module
> 3) using the bare name: > whatever > 1) and 2) are useful when the desired name is variable, not a constant like > "whatever".

I thought that went without saying.

> > I have been using #1 for two reasons. First, I will never run this > > module directly, so __name__ will always be the module name and not > > "__main__". > > (note that it works even with __main__)

Nice. Thanks.

> > Second, I can use this in a class to decide whether I want > > the module where the class came from or, if I make the class a base > I don't completely understand your use case. In the expression: > getattr(some_module, attribute_name) > you may use any module as the first argument (the current module, or any > other).

Right, I was just confused about the effects of inheritance and scope. If I put a method in a baseclass, which is inherited in another module, and then run the method from an instance of the sub-class...
- "sys.modules[__name__]" returns the baseclass module.
- "sys.modules[self.__class__.__module__]" returns the subclass module.

In the end I found it felt more logical to put the code in a module-level function and call it from the class. All of this was part of a fairly large factory method implementation. I now know that globals() refers to module-level names.

The pros and cons were the main point of this thread and so far I have...
1) getattr(module, "whatever") seems the most pythonic way to do it, but it takes more than one line and requires a little more thought about scope and inheritance.
2) globals()["whatever"] is concise, but it seems a little like a shortcut which requires special knowledge (though a very small amount).

I did a quick benchmark test:

< tmp2.py >
import time
import sys
import tmp

class A(tmp.A): pass
class B(tmp.A): pass
class C(tmp.A): pass
class D(tmp.A): pass
class E(tmp.A): pass
class F(tmp.A): pass
class G(tmp.A): pass
class H(tmp.A): pass
class I(tmp.A): pass
class J(tmp.A): pass

def test_globals_vs_getattr():
    t1 = time.time()
    for i in range(0, 100):
        H = globals()["H"]
    t2 = time.time()
    print "globals() took %s seconds." % str(t2-t1)

    t1 = time.time()
    mod = sys.modules[__name__]
    for i in range(0, 100):
        H = getattr(mod, "H")
    t2 = time.time()
    print "getattr() took %s seconds." % str(t2-t1)
< /tmp2.py >

tmp.py just has a simple class in it. I just wanted to add some complexity, but I doubt this had any effect on the times.

>>> import tmp2
>>> tmp2.test_globals_vs_getattr()
globals() took .146900010109 seconds.
getattr() took .43423515 seconds.

Just to see how much the call to sys.modules was affecting the test, I moved it outside the loop and reloaded the module for a second test.

>>> reload(tmp2)
>>> tmp2.test_globals_vs_getattr()
globals() took .13913242 seconds.
getattr() took .25460381 seconds.

This second test is pointless in practice since I would be calling sys.modules each time anyway. Even though the getattr() way is around 3.5 times slower, I had to run the code 1,000,000 times before the difference became humanly recognizable. I also realize benchmarks should be taken with a grain of salt since my setup may differ greatly from others'.

I guess, in the end, I'd use getattr() because it feels more pythonic, and more basic. I got pretty deep into learning python before I had to learn what the globals() dict could do for me.

Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: noob needs help
On Dec 1, 12:50 am, toveysnake <[EMAIL PROTECTED]> wrote: > I decided that I want to learn python, and have no previous > programming experience. I was reading the guide A byte of python and > got to the part where you create and run the program helloworld.py I > used kate to create this program and save it as helloworld.py. I then > entered the command python helloworld.py into the terminal(I am using > ubuntu 8.10) and I get this error: > > [EMAIL PROTECTED]:~$ python helloworld.py > python: can't open file 'helloworld.py': [Errno 2] No such file or > directory > > Am I saving the file in the wrong spot?(I saved it in documents) > Should I use a different editor? Is there a better python book > available online?

If you go to the directory where you put the file and run the python command from there, it should work (or supply the full path to the file instead of just 'helloworld.py').

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
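For example, assuming the file really is in your Documents folder, something like this in the terminal:

cd ~/Documents
python helloworld.py

# or run it from anywhere by giving the full path:
python ~/Documents/helloworld.py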
Re: Project structure - Best practices
On Nov 30, 11:43 pm, "Filip Gruszczyński" <[EMAIL PROTECTED]> wrote: > This is first time that I am building python application that is > larger than a single module and I would like to do it right. I google > it a bit, finding some stuff about not using src directory (which I > have seen so many times, that I believed it be standard) and using > packages. Still, there are few things, that I would like to achieve > with this structure: > * being able to use pychecker a lot - catching all typos in one shot > instead of running app many times really saves me a lot of time > * being able to write some unit tests > * having clean division of code among packages and modules (I have > seen some projects, where modules are pretty large - I would like to > keep them logically divided, event if they stay smaller) > > My project is a tool for people interested in role playing games. My > current structure looks something like this: > > /src > rpgDirectory.py (main script, running the app) > > src/rpg > plans.py > support.py > gui.py > iosystem.py > > src/rpg/character > model.py > sheet.py > gui.py > handlers.py > requirements.py > > The problem is, that modules from src/rpg/character use classes > defined in support.py. Therefore I have to use absolute paths to > import it and this works only, when I run rpgDirectory.py. When I use > pychecker, it can't import this module and fails. Any suggestions, how > can I avoid this and what structure should I use? > > -- > Filip Gruszczyński

Hi,

I have read in many places that relative imports aren't recommended as a standard. This includes PEP 8 (http://www.python.org/dev/peps/pep-0008/), which states:

" - Relative imports for intra-package imports are highly discouraged. Always use the absolute package path for all imports. Even now that PEP 328 [7] is fully implemented in Python 2.5, its style of explicit relative imports is actively discouraged; absolute imports are more portable and usually more readable."

...and I completely agree. I always use the standard import form unless something else is absolutely necessary. However, I use 'as' to shorten the path to the last module. For example:

>>> import app.foo.bar as bar
>>> instance = bar.Class()

The only "relative" import I use is when I am getting another module in the same package. If I have:

app/
    __init__.py
    constants.py
    foo/
        __init__.py
        bar.py
        here.py
    utils/
        __init__.py
    ...

and I am inside app/foo/here.py, I might have some imports at the top of the module which look like this...

import app.constants as appC
import app.utils
import bar

Python will look for 'bar' in the local package before looking through the python path. I could have imported constants as just "c", but single-letter variables are dangerous and I work with an application where it is common in the community to use 'c' for 'constants' (regardless of the danger). Lastly, I could import 'app.utils' as 'utils', but this is such a common module name that I like to preserve the name-space or at least prefix it (so I suppose something like 'apputils' would be acceptable, but I'd only be saving one character, the '.', so what's the point?).

I find that no matter how well I plan what my structure will be, I end up making small changes such as flattening a sub-package or converting a module to a sub-package to break things down further. As someone who recently started learning python, I would recommend that you just make a quick sketch of what you think might work and then just begin coding. Adjust the plan along the way.
At some point planning begins to eat time rather than save it. Get through the initial 80% quickly, maybe push for a few more %, then just go for it (I assume you have this luxury; if not, then you probably have a team that can help refine the plan anyway).

Hope this helps.

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: Do more imported objects affect performance
On Dec 1, 7:26 am, "Filip Gruszczyński" <[EMAIL PROTECTED]> wrote: > I have following question: if I use > > from module import * > > instead > > from module import Class > > am I affecting performance of my program? I believe, that all those > names must be stored somewhere, when they are imported and then > browsed when one of them is called. So am I putting a lot of "garbage" > to this storage and make those searches longer? > > -- > Filip Gruszczyński Why use it if you don't need it? Your post implies a choice and the '*' import can really make things muddy if it isn't actually necessary (rare). Why not just import the module and use what you need? It is way easier to read/debug and maintains the name-space. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
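A small example of the difference, for what it's worth:

# muddy: if several modules are star-imported, it's not obvious
# where sqrt came from, and same-named functions shadow each other
from math import *
print sqrt(9)

# clear: the module name travels with every call
import math
print math.sqrt(9)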
Re: pydoc enforcement.
On Dec 1, 7:27 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote: > I've been thinking about implementing (although no idea yet *HOW*) the > following features/extension for the python compile stage and would be > interested in any thoughts/comments/flames etc. > > Basically I'm interested adding a check to see if: > 1) pydoc's are written for every function/method. > 2) There are entries for each parameter, defined by some > predetermined syntax. > > My idea is that as much as I love dynamic typing, there are times when > using some modules/API's that have less than stellar documentation. I > was thinking that if it was possible to enable some switch that > basically forced compilation to fail if certain documentation criteria > weren't met. > > Yes, it should be up to developers to provide documentation in the > first place. Or, the client developer might need to read the source > (IF its available)... but having some "forced" documentation might at > least ease the problem a little. > > For example (half borrowing from Javadoc). > > class Foo( object ): > > def bar( self, ui ): > pass > > Would fail, since the bar method has an "unknown" parameter called > "ui". > What I think could be interesting is that the compiler forces some > documentation such as: > > class Foo( object ): > > def bar( self, ui ): > """ > @Param: ui : blah blah blah. > """ > pass > > The compiler could check for @Param matching each parameter passed to > the method/function. Sure, a lot of people might just not put a > description in, so we'd be no better off. But at least its getting > them *that* far, maybe it would encourage them to actually fill in > details. > > Now ofcourse, in statically typed language, they might have the > description as "Instance of UIClass" or something like that. For > Python, maybe just a description of "Instance of abstract class UI" or > "List of Dictionaries"... or whatever. Sure, precise class names > mightn't be mentioned (since we mightn't know what is being used > then), but having *some* description would certainly be helpful (I > feel). > > Even if no-one else is interested in this feature, I think it could > help my own development (and would be an interested "first change" > into Python itself). > > Apart from bagging the idea, does anyone have a suggestion on where in > the Python source I would start for implementing such an idea? > > Thanks > > Ken As long as it uses RST (reStructuredText) it could be interesting. Maybe as a wrapper on epydoc or something? I have been simply generating my docs and reading through them. This is fine for catching areas which are totally missing, but gets very time consuming to maintain small changes. What would be really great, is something which ties in to subversion to display an easy to see and find alert in the docs when the source has been updated. It would then require some manual action to hide the alert (hopefully someone would read the doc again before killing the alert). - Rafe -- http://mail.python.org/mailman/listinfo/python-list
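Not the compiler-level enforcement you're describing, but as a rough sketch of the checking side, something like this could be run over a module today (untested; 'mymodule' is a made-up name):

import inspect

def check_docstrings(module):
    """List module-level functions whose docstring has no @Param entry for a parameter."""
    problems = []
    for name, func in inspect.getmembers(module, inspect.isfunction):
        doc = inspect.getdoc(func) or ""
        args, varargs, varkw, defaults = inspect.getargspec(func)
        for arg in args:
            if arg == "self":
                continue
            if ("@Param: %s" % arg) not in doc:
                problems.append("%s: missing @Param entry for '%s'" % (name, arg))
    return problems

if __name__ == "__main__":
    import mymodule  # hypothetical module to check
    for problem in check_docstrings(mymodule):
        print problem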
Re: Getting in to metaprogramming
On Nov 27, 12:11 pm, Rafe <[EMAIL PROTECTED]> wrote: > On Nov 27, 11:41 am, "Hendrik van Rooyen" <[EMAIL PROTECTED]> > wrote: > > > > > "Steven D'Aprano" wrote: > > > > Well, I don't know about "any problem". And it's not so much about > > > whether metaprograms can solve problems that can't be solved by anything > > > else, as whether metaprograms can solve problems more effectively than > > > other techniques. > > > > If you include factory functions, class factories, the builder design > > > pattern, metaclasses, etc. as "metaprogramming", then I use it all the > > > time, and find it an excellent technique to use. > > > > But if you mean using a Python program to generate Python source code, > > > then I can't think of any time I used it. Which doesn't mean that others > > > don't find it helpful, only that I haven't yet. > > > I am using the term in the restricted sense of Python writing Python source. > > > Given that, can anybody think of an example that you could not do with > > a class? (excepting the "stored procedure" aspect) > > > Or can I claim a new a new meta - rule - I would call it van Rooyen's > > folly... > > > > Thinking further back, when I was young and programming in Apple's > > > Hypercard 4GL, I used to frequently use Hypercard scripts to generate new > > > Hypercard scripts. That was to work around the limitations of the > > > scripting language. > > > What sort of stuff did you do, and would having had simple OO available > > have rendered it unnecessary? > > > > I don't think metaprogramming in the limited sense (programs to output > > > source code) is a bad idea, but I do think that there are probably better > > > alternatives in a language like Python. > > > True. No argument here - I was just wondering if the relationship holds. > > > - Hendrik > > "Given that, can anybody think of an example that you could not do > with a class?" > > Generating a template for a specific script application. For example, > a script with pre-defined callbacks that only require the addition of > the contents. > > I was really interested in exploring the idea of using python output, > instead of XML, to record something a user did in a GUI. I have seen > it done and it is really advantageous in the 3D industry because it > means the script files can be edited directly, in a pinch, to generate > something slightly different. > > For example, say we have code which generates a cube by plotting it's > points. A user then changes a point position in the GUI. The change is > saved by outputting the function call to a file with new arguments > (the new point location). If I wanted to make 100s of copies of the > cube, but with a slightly different point position, I could edit the > custom cube's python code and hand it back for creation without using > the GUI. I could do this with XML, but it can be harder to work with > in a text editor (though I have seen some XML editors that make it a > bit easier.) In fact, in most 3D applications, the app prints > everything the user does to a log. Sometimes in a choice of languages, > so I guess I am looking to do the same thing with my own custom tools. > > In a real situation the generated code file can build some pretty > complex 3D object hierarchies. It moves beyond simple storage of data > and becomes a real script that can be hacked as necessary. > > It is nice to have everything as python scripts because we always have > a blend of GUI users and developers to get a job done. 
> > -Rafe

I was just thinking (hopefully I get some time to try this soon) that it wouldn't be difficult to decorate a function so that when called, a line of code is output. As long as the arguments can be stored as a string (strings, numbers, lists, etc., but no 'object' instances) it should be possible to execute that line later to get the same result. I think it would just have to:
1) Dynamically write the function name followed by a '('
2) Gather all the args in a list and ", ".join(args)
3) Gather kwargs as a list of '%s = %s' % (key, value) strings and then ", ".join(kwlist)
4) Add ')\n'

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
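A first stab at that decorator (untested sketch; it assumes every argument's repr() can be eval'd back later, as mentioned above):

def log_call(func, logfile="calls.py"):
    """Append a line of Python source that reproduces each call to 'func'."""
    def wrapper(*args, **kwargs):
        parts = [repr(a) for a in args]
        parts += ["%s=%r" % (key, value) for key, value in kwargs.items()]
        line = "%s(%s)\n" % (func.__name__, ", ".join(parts))
        open(logfile, "a").write(line)
        return func(*args, **kwargs)
    return wrapper

@log_call
def move_point(index, x, y, z=0.0):
    print "moving point", index, "to", (x, y, z)

move_point(3, 1.0, 2.5)            # appends "move_point(3, 1.0, 2.5)"
move_point(4, 0.0, 0.0, z=9.0)     # appends "move_point(4, 0.0, 0.0, z=9.0)"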
Re: Getting in to metaprogramming
On Nov 27, 11:41 am, "Hendrik van Rooyen" <[EMAIL PROTECTED]> wrote: > "Steven D'Aprano" wrote: > > > > > Well, I don't know about "any problem". And it's not so much about > > whether metaprograms can solve problems that can't be solved by anything > > else, as whether metaprograms can solve problems more effectively than > > other techniques. > > > If you include factory functions, class factories, the builder design > > pattern, metaclasses, etc. as "metaprogramming", then I use it all the > > time, and find it an excellent technique to use. > > > But if you mean using a Python program to generate Python source code, > > then I can't think of any time I used it. Which doesn't mean that others > > don't find it helpful, only that I haven't yet. > > I am using the term in the restricted sense of Python writing Python source. > > Given that, can anybody think of an example that you could not do with > a class? (excepting the "stored procedure" aspect) > > Or can I claim a new a new meta - rule - I would call it van Rooyen's folly... > > > > > Thinking further back, when I was young and programming in Apple's > > Hypercard 4GL, I used to frequently use Hypercard scripts to generate new > > Hypercard scripts. That was to work around the limitations of the > > scripting language. > > What sort of stuff did you do, and would having had simple OO available > have rendered it unnecessary? > > > > > I don't think metaprogramming in the limited sense (programs to output > > source code) is a bad idea, but I do think that there are probably better > > alternatives in a language like Python. > > True. No argument here - I was just wondering if the relationship holds. > > - Hendrik "Given that, can anybody think of an example that you could not do with a class?" Generating a template for a specific script application. For example, a script with pre-defined callbacks that only require the addition of the contents. I was really interested in exploring the idea of using python output, instead of XML, to record something a user did in a GUI. I have seen it done and it is really advantageous in the 3D industry because it means the script files can be edited directly, in a pinch, to generate something slightly different. For example, say we have code which generates a cube by plotting it's points. A user then changes a point position in the GUI. The change is saved by outputting the function call to a file with new arguments (the new point location). If I wanted to make 100s of copies of the cube, but with a slightly different point position, I could edit the custom cube's python code and hand it back for creation without using the GUI. I could do this with XML, but it can be harder to work with in a text editor (though I have seen some XML editors that make it a bit easier.) In fact, in most 3D applications, the app prints everything the user does to a log. Sometimes in a choice of languages, so I guess I am looking to do the same thing with my own custom tools. In a real situation the generated code file can build some pretty complex 3D object hierarchies. It moves beyond simple storage of data and becomes a real script that can be hacked as necessary. It is nice to have everything as python scripts because we always have a blend of GUI users and developers to get a job done. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Reflectiong capabilityof Python
On Nov 26, 9:18 am, [EMAIL PROTECTED] wrote: > Can Python create object by name? Like > clsName = "ClassA" > aObj = createObjectByName(clsName)

If you are talking about creating a class dynamically and naming it at runtime, use:

>>> obj = type("ClassA", (object,), {})  # the bases tuple here is just (object,)

If you know the name of the class and you want to get an instance (object) of it dynamically, there are two ways I have found.
1) Use getattr(module, "ClassA").
2) Use the globals() dict, which is handy when you know the class is always in the same module as the code that calls it (watch out for inheritance across more than one module though; this can put the globals() call in a different module).

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: Getting in to metaprogramming
On Nov 25, 5:41 pm, Aaron Brady <[EMAIL PROTECTED]> wrote: > On Nov 25, 4:08 am, Rafe <[EMAIL PROTECTED]> wrote: > > > Hi, > > > In the name of self-education can anyone share some pointers, links, > > modules, etc that I might use to begin learning how to do some > > "metaprogramming". That is, using code to write code (right?) > > > Cheers, > > > - Rafe > > Python programs can generate code for themselves. > > >>> for i in range( 10 ): > > ... d= { 'cls': i } > ... s=""" > ... class Cls%(cls)s: > ... def meth%(cls)s( self, arg ): > ... print 'in meth%(cls)s, arg:', arg > ... """% d > ... exec( s ) > ... s= """ > ... inst%(cls)s= Cls%(cls)s() > ... """% d > ... exec( s ) > ...>>> inst0.meth0( "arg" ) > in meth0, arg: arg > >>> inst1.meth1( "arg" ) > in meth1, arg: arg > >>> inst2.meth2( "arg" ) > > in meth2, arg: arg > > The 'Cls0', 'Cls1', 'Cls2' repetitiveness is taken care of with a for- > loop. Michele, I am thinking about python which writes python. Which makes Aaron's post accurate to my needs. More specifically, I am considering what it might be like to use python to build a script file which can be executed later. Until now I planned to store info to XML and then parse it to run later. There are good reasons that generating a script file would be more useful for me. Aaron, Is it really as simple as gathering strings of code? Sort of like generating HTML or XML directly? Is there any other framework or pattern set that is worth looking in to? Thanks for helping me explore this. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Instance attributes vs method arguments
On Nov 25, 5:48 pm, John O'Hagan <[EMAIL PROTECTED]> wrote: > On Tue, 25 Nov 2008, Marc 'BlackJack' Rintsch wrote: > > On Tue, 25 Nov 2008 07:27:41 +, John O'Hagan wrote: > > > Is it better to do this: > > > > class Class_a(): > > > def __init__(self, args): > > > self.a = args.a > > > self.b = args.b > > > self.c = args.c > > > self.d = args.d > > > def method_ab(self): > > > return self.a + self.b > > > def method_cd(self): > > > return self.c + self.d > > > > or this: > > > > class Class_b(): > > > def method_ab(self, args): > > > a = args.a > > > b = args.b > > > return a + b > > > def method_cd(self, args) > > > c = args.c > > > d = args.d > > > return c + d > > > > ? > > > > Assuming we don't need access to the args from outside the class, is > > > there anything to be gained (or lost) by not initialising attributes > > > that won't be used unless particular methods are called? > > > The question is if `args.a`, `args.b`, …, are semantically part of the > > state of the objects or not. Hard to tell in general. > > Would you mind elaborating a little on that first sentence? > > > > > I know it's a made up example but in the second class I'd ask myself if > > those methods are really methods, because they don't use `self` so they > > could be as well be functions or at least `staticmethod`\s. > > I guess I went overboard keeping the example simple :) : the real case has > many methods, and they all use "self" (except one, actually, so I'm looking > up "static methods" now; thanks). > > Regards, > > John I'm not sure if you are asking a technical question or a design question. If it helps, I try to think of an object as a thing which has a job to do. If the 'thing' needs information every time to define what it is, or give it a starting state, then that is an argument of __init__() . If I want the object to change or handle something which is a logical task of 'thing', then I give it what it needs via properties or methods (I find I almost never use "public" instance attributes, but then again I am usually writing SDKs which is all about interface). Not sure if that helps... - Rafe -- http://mail.python.org/mailman/listinfo/python-list
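A trivial illustration of that split (the names are made up):

class Exporter(object):
    def __init__(self, output_dir):
        # information that defines what this object *is* -> constructor argument
        self.output_dir = output_dir

    def export(self, scene_name):
        # information for one particular task -> method argument
        print "writing %s to %s" % (scene_name, self.output_dir)

exp = Exporter("/tmp/scenes")
exp.export("cube01")
exp.export("sphere02")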
Re: Instance attributes vs method arguments
snip > It is always good practice to provide default values for > instance variables in the class definition, both to enhance > readability and to allow adding documentation regarding > the variables, e.g. > > class Class_a: > > # Foo bar > a = None > > # Foo baz > b = None snip Those are not instance 'variables' (attributes), they are class attributes. I used to do that in JScript, so I did it in python when I moved over. It caused a lot of trouble for me as a beginner. I was eventually given the advice to stop it. I haven't looked back since (until now). I think you would just be adding a new self.a which blocks access to your class.a, but I'm still shaky on this knowledge. Instance attribute defaults would be inside __init__() and before unpacking the *args. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
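A quick interpreter session showing what I mean by an instance attribute shadowing the class attribute:

>>> class C(object):
...     a = None          # class attribute, shared by every instance
...     def __init__(self):
...         self.a = 1    # instance attribute, shadows C.a on this instance
...
>>> c = C()
>>> c.a
1
>>> C.a is None
True
>>> del c.a               # remove the instance attribute...
>>> c.a is None           # ...and the class attribute shows through again
True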
Getting in to metaprogramming
Hi, In the name of self-education can anyone share some pointers, links, modules, etc that I might use to begin learning how to do some "metaprogramming". That is, using code to write code (right?) Cheers, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Module Structure/Import Design Problem
...the time I use the standard 'import ...' form.

Examples: If I have:

package/
    __init__.py           # Can be empty
    subpackage/
        __init__.py       # Can be empty
        module.py         # has "ClassA" in it

I will almost always do this:

>>> import package.subpackage.module as module
>>> module.ClassA()

This makes it easy to understand where it is coming from (or at the very least offers a clue to look for import statements). Worse is:

>>> from package.subpackage import module
>>> module.ClassA()

I'm not sure if there is any difference to the above, but again, I try to use the simplest form possible, and this is just a more complicated version of the same thing (IMO). It isn't immediately obvious if module is a module, class, function, or other object in this last case. However, the plain import statement only allows module imports, so it IS obvious. Only use "from ... import ..." in very special cases where you need a module attribute, not a module. The next example is bad (again, IMO):

>>> from package.subpackage.module import ClassA
>>> ClassA()

It removes the name-space and makes it harder to guess where it is from. There is very little chance of overriding "module.ClassA", but it would be easy to override just "ClassA". Even worse!:

>>> from package.subpackage.module import ClassA as Foo
>>> Foo()

Talk about hiding the truth! Hope this helps. Importing isn't nearly as hard as it seems when first using it. Just put your package in the python path and start importing away. It should be quite logical if kept simple. If you have to add things to the python path using sys.path, only add the top level of the package.

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: strange pythoncom.com_error - it only happens once
On Nov 21, 5:02 pm, Rafe <[EMAIL PROTECTED]> wrote: > On Nov 21, 4:50 pm, Rafe <[EMAIL PROTECTED]> wrote: > > > > > Hi, > > > I'm getting this error: > > # File "C:\Python25\Lib\site-packages\win32com\client\dynamic.py", > > line 491, in __getattr__ > > # raise pythoncom.com_error, details > > # COM Error: Unspecified failure - [line 52] > > > ...when my program hits a line of code which I know should work. The > > strange thing is, when I run it again in the same python session, it > > DOES work (no error is thrown and the expected results occur). Then it > > happens again on a later line which also works. If I run the program > > again without restarting python, it works all the way through and > > forever after. > > > Any ideas as to what might cause this? An error that is not an error > > and only happens once? > > > I would show examples, but it is application-specific and wouldn't > > help. I have been using this application for about 10 years, so i know > > my usage is correct (especially since it works most times) > > > Cheers, > > > - Rafe > > Forgot to mention I am using win32com.client.dynamic. I'm not sure if > makepy will solve this or not. I need to ask around and find out how > to run it for this application. I tried using pyWin's tool but it > failed. There are 5 or 6 libraries though, so maybe I tried an invalid > one? > > - Rafe I'm still looking for help here. The maker of the application doesn't recommend using makepy because they want to support multiple version installation. I'm not sure how to handle this (or even if makepy would actually solve this). Has anyone else had to deal with COM instability? Thanks, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Best way to dynamically get an attribute from a module from within the same module
What are the pros and cons of these two patterns (and are there any others)? Which is the best way to dynamically get an attribute from a module from within the same module?

1) Using the name of the module:

this_module = sys.modules[__name__]
attr = getattr(this_module, "whatever", None)

2) Using globals:

globals()["whatever"]

I have been using #1 for two reasons. First, I will never run this module directly, so __name__ will always be the module name and not "__main__". Second, I can use this in a class to decide whether I want the module where the class came from or, if I make the class a base for a class in another module, the module where the sub-class is.

I am not familiar with the scope, pros or cons of globals(), so I would love to hear comments on this subject.

Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: how to dynamically instantiate an object inheriting from several classes?
On Nov 22, 9:02 am, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote: > On Fri, 21 Nov 2008 15:11:20 -0700, Joe Strout wrote: > > I have a function that takes a reference to a class, > > Hmmm... how do you do that from Python code? The simplest way I can think > of is to extract the name of the class, and then pass the name as a > reference to the class, and hope it hasn't been renamed in the meantime: > > def foo(cls_name, item_args): > # Won't necessarily work for nested scopes. > cls = globals()[cls_name] > item = cls(**itemArgs) > return item > > instance = foo(Myclass.__name__, {'a':1}) > > Seems awfully complicated. If I were you, I'd forget the extra layer of > indirection and just pass the class itself, rather than trying to > generate some sort of reference to it. Let the Python virtual machine > worry about what is the most efficient mechanism to use behind the scenes. > > [...] > > > But now I want to generalize this to handle a set of mix-in classes. > > Normally you use mixins by creating a class that derives from two or > > more other classes, and then instantiate that custom class. But in my > > situation, I don't know ahead of time which mixins might be used and in > > what combination. So I'd like to take a list of class references, and > > instantiate an object that derives from all of them, dynamically. > > > Is this possible? If so, how? > > It sounds like you need to generate a new class on the fly. Here's one > way: > > # untested > def foo(cls, item_args, mixins=None): > superclasses = [cls] + (mixins or []) > class MixedClass(*superclasses): > pass > item = MixedClass(**itemArgs) > return item > > instance = foo(MyClass, {'a':1}, [Aclass, Bclass, Cclass]) > > -- > Steven I find type() is the better way to go because it allows you to name the resulting class as well. It may make debugging a little easier. Using a hard-coded class, such as "MixedClass" in the above example, with dynamic bases produces lots of "MixedClass" instances with different interfaces/abilities. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
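Something like this is what I mean (a sketch; the class names are made up):

def build_instance(name, cls, mixins=None, **kwargs):
    """Create a *named* class from cls plus optional mixins, then instantiate it."""
    bases = tuple([cls] + (mixins or []))
    return type(name, bases, {})(**kwargs)

class Base(object):
    def __init__(self, a=None):
        self.a = a

class LoggingMixin(object):
    def log(self, msg):
        print "%s: %s" % (type(self).__name__, msg)

obj = build_instance("LoggedBase", Base, [LoggingMixin], a=1)
obj.log("created")           # prints "LoggedBase: created"
print type(obj).__name__     # "LoggedBase", instead of a generic "MixedClass"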
Re: Dynamic features used
On Nov 21, 4:17 pm, [EMAIL PROTECTED] wrote: > I often use Python to write small programs, in the range of 50-500 > lines of code. For example to process some bioinformatics data, > perform some data munging, to apply a randomized optimization > algorithm to solve a certain messy problem, and many different things. > For that I often use several general modules that I have written, like > implementation of certain data structures, and small general "utility" > functions/classes, plus of course several external modules that I keep > updated. > > Compared to other languages Python generally allows me to write a > correctly working program in the shorter time (probably because of the > Python shell, the built-in safeties, the doctests, the clean and short > and handy syntax, the quick write-run-test cycle, the logical design, > its uniformity, the availability of standard data structures and many > standard or external modules, and the low number (compared to other > languages) of corner cases and tricky semantic corners). > > Today Python is defined a dynamic language (and not a scripting > language, a term that few languages today seem to want attached to > them) but being dynamic isn't a binary thing, it's an analog quality, > a language can be less or be more dynamic. For example I think Ruby is > more dynamic than Python, Python is more dynamic than CLisp, CLips > seems more dynamic than C#/Java, Java is more dynamic than D, and D is > more dynamic than C++. Often such differences aren't sharp, and you > can find ways to things more dynamically, even with a less nice syntax > (or creating less idiomatic code). (In C#4 they have even added a > dynamic feature that may make languages like IronPython/Boo faster and > simpler to write on the dotnet). > > In the last two years I have seen many answers in the Python > newsgroups, and I have seen that some of the dynamic features of > Python aren't much used/appreciated: > - Metaclasses tricks > - exec and eval > - monkey patching done on classes > - arbitrary cmp among different types removed from Python 3 > While some new static-looking/related features are being introduced: > - ABCs and function signatures added > - More tidy exception tree > > So it seems that while C#/D are becoming more dynamic, Python/ > JavaScript are becoming a little less dynamic looking (and this I > think this is positive, because too much dynamism turns code into a > swamp, and too much rigid systems lead to bloat and other problems. > Note that there are another orthogonal solution: to use an advanced > flexible and handy static type system, as in Haskell). > > I have seen that in lot of those little programs of mine, or in a > significant percentage of their lines, I often don't use the dynamic > features of Python (this means that the same code can be written with > static types, especially if you can use templates like in C++/D, or a > flexible type system like in Haskell, and it also means that lot of > those small programs can be compiled by ShedSkin/Cython, with usually > a sharp decrease of running time). > > What are the dynamic features of Python more used in your programs? > (From this set try to remove the things that can be done with a > flexible static template system, like the D one, that for some things > is more handy and powerful than the C++ template system, and for other > things less powerful). 
> > If very little or no dynamic features are used in a program it may > seem a "waste" to use Python to write the code, because the final > program may be quite slow with no gain from the other features of > Python. (I think in such situations Python can be a good choice > anyway, because it's good to write working prototypes in a short > time). (The large number of solution of this page shows how a certain > class of Python programmers want more speed from their > programs:http://scipy.org/PerformancePython and note that page misses many > other solutions, like SIP, Boost Python, ShedSkin, Cinpy, Cython, > RPython, and so on). > > In the last year I have found two situations where exec/eval is a way > to reduce a lot of the complexity of the code, so if used with care > the dynamic features can be quite useful. > > Before ending this partially incoherent post, I'd also like to briefly > remind how the dynamic features are used in the Boo language: Boo > programs are generally statically typed, but duck types are used once > in a while to reduce the "pressure" of the static type system. You can > find more info on this on the Boo site. (Note that I have never seen a > good set of speed benchmarks to compare the performance of CPython to > Boo). > > Bye, > bearophile

http://scipy.org/PerformancePython is loading too slowly for my connection. How ironic (though it probably isn't Python's fault).

--
http://mail.python.org/mailman/listinfo/python-list
Re: strange pythoncom.com_error - it only happens once
On Nov 21, 4:50 pm, Rafe <[EMAIL PROTECTED]> wrote: > Hi, > > I'm getting this error: > # File "C:\Python25\Lib\site-packages\win32com\client\dynamic.py", > line 491, in __getattr__ > # raise pythoncom.com_error, details > # COM Error: Unspecified failure - [line 52] > > ...when my program hits a line of code which I know should work. The > strange thing is, when I run it again in the same python session, it > DOES work (no error is thrown and the expected results occur). Then it > happens again on a later line which also works. If I run the program > again without restarting python, it works all the way through and > forever after. > > Any ideas as to what might cause this? An error that is not an error > and only happens once? > > I would show examples, but it is application-specific and wouldn't > help. I have been using this application for about 10 years, so i know > my usage is correct (especially since it works most times) > > Cheers, > > - Rafe Forgot to mention I am using win32com.client.dynamic. I'm not sure if makepy will solve this or not. I need to ask around and find out how to run it for this application. I tried using pyWin's tool but it failed. There are 5 or 6 libraries though, so maybe I tried an invalid one? - Rafe -- http://mail.python.org/mailman/listinfo/python-list
strange pythoncom.com_error - it only happens once
Hi,

I'm getting this error:

# File "C:\Python25\Lib\site-packages\win32com\client\dynamic.py", line 491, in __getattr__
#   raise pythoncom.com_error, details
# COM Error: Unspecified failure - [line 52]

...when my program hits a line of code which I know should work. The strange thing is, when I run it again in the same python session, it DOES work (no error is thrown and the expected results occur). Then it happens again on a later line which also works. If I run the program again without restarting python, it works all the way through and forever after.

Any ideas as to what might cause this? An error that is not an error and only happens once?

I would show examples, but it is application-specific and wouldn't help. I have been using this application for about 10 years, so I know my usage is correct (especially since it works most times).

Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: function parameter scope python 2.5.2
On Nov 21, 6:31 am, J Kenneth King <[EMAIL PROTECTED]> wrote: > I recently encountered some interesting behaviour that looks like a bug > to me, but I can't find the appropriate reference to any specifications > to clarify whether it is a bug. > > Here's the example code to demonstrate the issue: > > class SomeObject(object): > > def __init__(self): > self.words = ['one', 'two', 'three', 'four', 'five'] > > def main(self): > recursive_func(self.words) > print self.words > > def recursive_func(words): > if len(words) > 0: > word = words.pop() > print "Popped: %s" % word > recursive_func(words) > else: > print "Done" > > if __name__ == '__main__': > weird_obj = SomeObject() > weird_obj.main() > > The output is: > > Popped: five > Popped: four > Popped: three > Popped: two > Popped: one > Done > [] > > Of course I expected that recursive_func() would receive a copy of > weird_obj.words but it appears to happily modify the object. > > Of course a work around is to explicitly create a copy of the object > property befor passing it to recursive_func, but if it's used more than > once inside various parts of the class that could get messy. > > Any thoughts? Am I crazy and this is supposed to be the way python works?

You are passing a mutable object, so it can be changed. If you want a copy, use a slice:

>>> L = [1, 2, 3, 4, 5]
>>> copy = L[:]
>>> L.pop()
5
>>> L
[1, 2, 3, 4]
>>> copy
[1, 2, 3, 4, 5]

...in your code...

def main(self):
    recursive_func(self.words[:])
    print self.words

...or...

def recursive_func(words):
    words = words[:]
    if len(words) > 0:
        word = words.pop()
        print "Popped: %s" % word
        recursive_func(words)
    else:
        print "Done"

>>> words = ["one", "two", "three"]
>>> recursive_func(words)
Popped: three
Popped: two
Popped: one
Done
>>> words
['one', 'two', 'three']

Though I haven't been doing this long enough to know if that last example has any drawbacks. If we knew more about what you are trying to do, perhaps an alternative would be even better.

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: How to get the class instance of a passed method ?
On Nov 21, 6:41 am, Stef Mientki <[EMAIL PROTECTED]> wrote: > Christian Heimes wrote: > > thanks Christian, > > cheers, > Stef OT: Just to pass along some great advice I got recently. Read PEP 8. It contains guidelines for formatting your code. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Module Structure/Import Design Problem
On Nov 21, 2:36 am, Stef Mientki <[EMAIL PROTECTED]> wrote: > >> I'm not an expert, I even don't fully understand your problem, > >> but having struggled with imports in the past, > >> I've a solution now, which seems to work quit well. > > > That's not very helpful, is it? Were you planning to keep the solution > > secret? > > sorry slip of the keyboard > ;-)http://mientki.ruhosting.nl/data_www/pylab_works/pw_importing.html > cheers, > Stef

I really don't understand what you are trying to accomplish in your article. I strongly disagree with this statement... "A second demand is that every module should be able to act as a main file by running it's main section." ...I am finding the best programs have only one entry point or interface (though some libraries can be useful from outside the package). Being able to run any file in a package seems like it creates a confusing user/developer experience. What kind of problem makes this solution applicable?

Next, you say... "...recursive searches for all subdirectories and adds them to the Python Path." ...it seems like you add every module in your packages directly to the sys.path. Doesn't this destroy the package name-spaces? For example, I have a module called 'types' in my package; if I add that to the python path, 'import types' still returns the built-in 'types' module. Wouldn't this collision be confusing?

Regardless, isn't putting the package in the right place enough? Please explain.

Cheers,

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Re: Module Structure/Import Design Problem
On Nov 21, 1:39 am, Steve Holden <[EMAIL PROTECTED]> wrote: > Rafe wrote: > > Hi, > > > I am in a situation where I feel I am being forced to abandon a clean > > module structure in favor of a large single module. If anyone can save > > my sanity here I would be forever grateful. > > > My problem is that classes in several modules share a common base > > class which needs to implement a factory method to return instances of > > these same classes. > > > An example to help illustrate what I mean: > > Lets say I have the following modules with the listed classes: > > - baselib.py with BaseClass > > - types.py with TypeA, ... > > - special.py with SpecialTypeA, ... > > > Which would be used a bit like this: > >>>> type_a = any_type_instance.get_type("TypeA") > >>>> special_type = type_a.get_type("SpecialTypeA") > > > Again, I can get around this by dumping everything in to one module, > > but it muddies the organization of the package a bit. This seems like > > a problem that would come up a lot. Are there any design paradigms I > > can apply here? > > Well a simple way to do this is to observe that even when a base class's > method is inherited by an instance of a subclass, when the method is > called the type of "self" is the subclass. And you can call the > subclass's type to create an instance. Perhaps the following code would > make it more obvious: > > $ cat baseclass.py > class Base(object): > def factory(self, arg): > return type(self)(arg) > > [EMAIL PROTECTED] /c/Users/sholden/Projects/Python > $ cat subclass.py > from baseclass import Base > > class sub(Base): > def __init__(self, arg): > print "Creating a sub with arg", arg > > s = sub("Manual") > > thing = s.factory("Auto") > print type(thing) > > [EMAIL PROTECTED] /c/Users/sholden/Projects/Python > $ python subclass.py > Creating a sub with arg Manual > Creating a sub with arg Auto > > > Hope this helps. > > regards > Steve > -- > Steve Holden +1 571 484 6266 +1 800 494 3119 > Holden Web LLC http://www.holdenweb.com/ Hi Steve, Correct me if I have this wrong, but the problem with your solution is that it only creates a new instance of the same class, type(self), while I need to return any number of different possibilities. I thought about getting the module from self... >>> class base(object): >>> def factory(self, type): >>> module = sys.modules[self.__class__.__module__] >>> return getattr(module, type) ...but my baseclass is used from several modules so this would be inaccurate for me (the factory method only uses my 'types' module, so a hard import works) I'm still wondering what Arnaud meant by "make types register themselves with the factory function" - Rafe -- http://mail.python.org/mailman/listinfo/python-list
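My guess at what "register themselves" might look like (untested sketch, squashed into one file; in the real layout the types would live in their own modules and call register() at import time):

_registry = {}

def register(cls):
    _registry[cls.__name__] = cls
    return cls

class BaseClass(object):
    def get_type(self, name, *args, **kwargs):
        # the base class never has to import the modules the types live in
        return _registry[name](*args, **kwargs)

class TypeA(BaseClass):
    pass
register(TypeA)

class SpecialTypeA(BaseClass):
    pass
register(SpecialTypeA)

root = TypeA()
child = root.get_type("SpecialTypeA")
print type(child).__name__   # SpecialTypeA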
Re: Module Structure/Import Design Problem
On Nov 20, 2:06 pm, Arnaud Delobelle <[EMAIL PROTECTED]> wrote: > Rafe <[EMAIL PROTECTED]> writes: > > Hi, > > > I am in a situation where I feel I am being forced to abandon a clean > > module structure in favor of a large single module. If anyone can save > > my sanity here I would be forever grateful. > > > My problem is that classes in several modules share a common base > > class which needs to implement a factory method to return instances of > > these same classes. > > > An example to help illustrate what I mean: > > Lets say I have the following modules with the listed classes: > > - baselib.py with BaseClass > > - types.py with TypeA, ... > > - special.py with SpecialTypeA, ... > > > Which would be used a bit like this: > >>>> type_a = any_type_instance.get_type("TypeA") > >>>> special_type = type_a.get_type("SpecialTypeA") > > > Again, I can get around this by dumping everything in to one module, > > but it muddies the organization of the package a bit. This seems like > > a problem that would come up a lot. Are there any design paradigms I > > can apply here? > > It's not very clear what your problem is. I guess your factory > functions are defined in baselib.py whereas types.py and special.py > import baselib, therefore you don't know how to make the factory > function aware of the types defined in special.py and types.py. > > You can use cyclic import in many cases. > > Or (better IMHO) you can make types register themselves with the factory > function (in which case it would have some state so it would make more > sense to make it a factory object). > > -- > Arnaud

Hi Arnaud,

You got my problem right, sorry it wasn't more clear. Can you elaborate on what you mean by 'register' with the factory function?

Also... holy [EMAIL PROTECTED], I got a clean import working! I swear I tried that before with unhappy results. I'll carefully try this in my real code. Is this the right way to implement the imports?

baselib.py:

class BaseClass(object):
    def factory(self):
        import typelib  # <-- import inside function
        return typelib.TypeA()

typelib.py:

import baselib  # <-- module level import

class TypeA(baselib.BaseClass):
    def __init__(self):
        print "TypeA : __init__()"

>>> import typelib
>>> type = typelib.TypeA()
TypeA : __init__()
>>> another_type = type.factory()
TypeA : __init__()
>>> another_type

I am curious (not directed at Arnaud), why not have an 'include'-like import for special cases in python (or do I not understand includes either?)

Thanks!

- Rafe
--
http://mail.python.org/mailman/listinfo/python-list
Module Structure/Import Design Problem
Hi, I am in a situation where I feel I am being forced to abandon a clean module structure in favor of a large single module. If anyone can save my sanity here I would be forever grateful. My problem is that classes in several modules share a common base class which needs to implement a factory method to return instances of these same classes. An example to help illustrate what I mean: Lets say I have the following modules with the listed classes: - baselib.py with BaseClass - types.py with TypeA, ... - special.py with SpecialTypeA, ... Which would be used a bit like this: >>> type_a = any_type_instance.get_type("TypeA") >>> special_type = type_a.get_type("SpecialTypeA") Again, I can get around this by dumping everything in to one module, but it muddies the organization of the package a bit. This seems like a problem that would come up a lot. Are there any design paradigms I can apply here? Cheers - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: inspect.findsource problem with llinecache
On Nov 15, 1:29 pm, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote: > En Wed, 12 Nov 2008 05:22:55 -0200,Rafe<[EMAIL PROTECTED]> escribió: > > > I think I have discovered two bugs with the inspect module and I would > > like to know if anyone can spot any traps in my workaround. > > They look like real bugs - please report them athttp://bugs.python.org > else this will be forgotten. > > > I welcome comments on the bugs or optimization pointers for my code. I > > am still quite new to Python. My primary question though is: will > > linecache.clearcache() cause me any problems? > > I don't think so, apart from performance degradation (but good performance > with bad results isn't good at all!) > > -- > Gabriel Genellina Thank you for replying Gabriel. I'll report it now. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Looking for Advice: Creation vs Access in OO Models
Hi, I'm going to try and keep this as general as possible. I'm building an object model/API for creating and working with 'things' (I'm actually working with a 3D application through COM, so I guess I'm making a kind of middleware or wrapper interface). I'm looking for some advice on structure and encapsulation regarding creation vs. access. Links to reading material or replies would be greatly appreciated! The primary job of the system is to automate many steps in the creation of 'things' , but because the 'things' are hierarchical, they are also used heavily for access (to create other things, output meta- data, etc.). For example, I might have a tree like: - root - thing(s) - thing(s) - thing(s) ...where "thing(s)" are objects of various types which share the same base class. Usage is pretty simple, even in the real system: >>> root = get_root() >>> thing_a = root.add_thing(type="a", name="thing a", ...) # factory >>> thing_b = thing_a..add_thing(type="b", name="thing b", ...) >>> print thing_b.name 'thing b' and access... >>> root = get_root() >>> thing_a = root.things["thing a"] >>> print thing_a.name 'thing a' >>> print thing_a.things ['thing b'] Now I start to get in to the problem at hand. Where is the best place to put the code that creates 'thing's. Right now I have a class for each type which takes the name to instantiate the class (find and wrap the 'thing'), but I've kept the class separate for access only. For creation I use a function (which is called by the factory method shown above) which creates the 'thing' and then returns an instance of the type class (note that in the real system, creation takes many parameters), e.g.: def new_type_a(name, **kwargs): thing = application.create_thing(name) # COM object method return TypeA(thing.name) This seems pretty simple and clean, but the more complex my package becomes, the less flexible this becomes. For example, I need to create 'thing's which are actually made from totally different objects in the COM application but still run common code to make it one of my types. I think an analogy is in order. Lets say I want to create a 'thing' which is a kind of desk. All desks get 4 legs, but some are made from oak and some from pine which have different properties. The different types might be a different color or other cosmetic extra. So the creation chain would be: pine --> desk --> desk_a oak --> desk --> desk_a pine --> desk --> desk_b oak --> desk --> desk_b At first it may seem strange to make desk_a out of different chains. Shouldn't it be two different types? In my case no, because the type class doesn't need to know which wood it comes from. The 'user' of the desk will intuitively know which one they want and even if they don't, they can easily check for it. The inheritance has to work this way because (still using the analogy) 'pine' and 'oak' are objects native to the COM accessed application while the 'desk' and desk types are mine. To accommodate this, I've had to add functionality between creation and returning the class. I'm not sure what to call these. Mutators? Constructors? Anyway, to implement this I actually add separate factory functions for each archetype ('pine' or 'oak in the analogy) which each run the same functions *after creation*. Something like: >>> root = get_root() >>> # These would have different arguments in reality... >>> thing_a = root.add_pine_thing(type="a", name="thing a", ...) # pine >>> factory >>> thing_b = thing_a.add_oak_thing(type="a", name="thing a", ...) 
# oak factory If the analogy is just confusing things, in real terms, I might use one factory to create a 3D object which represents a point in space and another to draw a curve. Both of these might result in a type which is used for the same purpose in the 3D application (the end user doesn't care what it is made out of as long as it does its job). I hope this all makes sense and gives enough background... The questions are: Should the code which creates the 'things' be encapsulated in the 'thing's class or in a chain of functions? Is there some design pattern in Python which would allow me to create new 'things' in different ways or access existing 'things' with the same class, without dirtying the instance name-space (since the 'user' will use an instance for access, but the 'system' would use it for creation)? Or something else? Let me know if I can clarify anything. This seems like a basic design question when dealing with things created and accessed by an object model, but the devil is in the details. Thanks for reading, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
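One possible reading of the desk analogy in code, with made-up names standing in for the real COM calls: keep the per-archetype creation step as a small function, and let a single factory run it and then apply the shared wrap, so the type class never needs to know which "wood" it came from.

def _create_from_pine(name):
    # Stand-in for the application call that builds a pine object.
    return {"name": name, "material": "pine"}

def _create_from_oak(name):
    return {"name": name, "material": "oak"}

class Desk(object):
    def __init__(self, raw):
        self.raw = raw    # the underlying application object
        self.legs = 4     # shared post-creation setup

_ARCHETYPES = {"pine": _create_from_pine, "oak": _create_from_oak}

def add_thing(archetype, type_cls, name):
    # One factory for all archetypes: create, then wrap.
    raw = _ARCHETYPES[archetype](name)
    return type_cls(raw)

desk_a = add_thing("pine", Desk, "desk a")
desk_b = add_thing("oak", Desk, "desk b")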
Re: inspect.findsource problem with llinecache
On Nov 12, 2:22 pm, Rafe <[EMAIL PROTECTED]> wrote: > Hi, > > I think I have discovered two bugs with the inspect module and I would > like to know if anyone can spot any traps in my workaround. > > I needed a function which takes a function or method and returns the > code inside it (it also adjusts the indent because I have to paste the > to a text string without indents, which is parsed later...long story). > At first this seemed trivial but then I started getting an error on > line 510 of inspect.py: > 504 if iscode(object): > 505 if not hasattr(object, 'co_firstlineno'): > 506 raise IOError('could not find function definition') > 507 lnum = object.co_firstlineno - 1 > 508 pat = re.compile(r'^(\s*def\s)|(.*(? (\s*@)') > 509 while lnum > 0: > 510 if pat.match(lines[lnum]): break > 511 lnum = lnum - 1 > 512 return lines, lnum > > I finally figured out that there was a caching problem. The function I > passed was changed, but the code lines (strings) retrieved by > linecache.getlines() (on lines 464 and 466) didn't update with the new > module contents. The resulting error I was getting occurred when real > module had less lines than the cached module (or the other way around, > whatever.) > > To get around this, I invoke linecache.clearcache(). Here is the > function (minus doc strings)... > > INDENT_SPACES = 4 > > def get_fn_contents(fn, remove_indents=1): > # Clear the cache so inspect.getsourcelines doesn't try to > # check an older version of the function's module. > linecache.clearcache() > source_lines, n = inspect.getsourcelines(fn) > > # Skip the first line which contains the function definition. > # Only want the code inside the function is needed. > fn_contents = source_lines[1:] > > # Remove indents > new_indent_lines = [remove_indent(line, remove_indents) for line > in fn_contents] > > return "".join(new_indent_lines) > > def remove_indent(in_str, num_indent=1): > s = in_str > for i in range(num_indent): > if s[:INDENT_SPACES] == " ": # Whitespace indents > s = s[INDENT_SPACES:] > elif s[:1] == "\t": # Tab characters indents > s = s[1:] > > return s > > [END CODE] > > The second issue is that the last line in the passed function's module > seems to be ignored. So, if the last line of the module is also the > last line of the function, the function is returned one line too > short. > > I welcome comments on the bugs or optimization pointers for my code. I > am still quite new to Python. My primary question though is: will > linecache.clearcache() cause me any problems? > > Thanks, > > - Rafe I forgot to add that while inspect uses the cached module to get the text, the line number used to find the block of source lines is retrieved from the passed object. So, if I pass a function which reports that it starts on line 50, but in the cache it starts on line 40 an error isn't raised but the lines of code returned are wrong. The error only occurs when the line number is higher than the number of lines in the cached module. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
inspect.findsource problem with llinecache
Hi,

I think I have discovered two bugs with the inspect module and I would like to know if anyone can spot any traps in my workaround.

I needed a function which takes a function or method and returns the code inside it (it also adjusts the indent because I have to paste the code into a text string without indents, which is parsed later...long story). At first this seemed trivial but then I started getting an error on line 510 of inspect.py:

504        if iscode(object):
505            if not hasattr(object, 'co_firstlineno'):
506                raise IOError('could not find function definition')
507            lnum = object.co_firstlineno - 1
508            pat = re.compile(r'^(\s*def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)')
509            while lnum > 0:
510                if pat.match(lines[lnum]): break
511                lnum = lnum - 1
512            return lines, lnum

I finally figured out that there was a caching problem. The function I passed was changed, but the code lines (strings) retrieved by linecache.getlines() (on lines 464 and 466) didn't update with the new module contents. The resulting error I was getting occurred when the real module had fewer lines than the cached module (or the other way around, whatever.)

To get around this, I invoke linecache.clearcache(). Here is the function (minus doc strings)...

INDENT_SPACES = 4

def get_fn_contents(fn, remove_indents=1):
    # Clear the cache so inspect.getsourcelines doesn't try to
    # check an older version of the function's module.
    linecache.clearcache()
    source_lines, n = inspect.getsourcelines(fn)

    # Skip the first line which contains the function definition.
    # Only the code inside the function is needed.
    fn_contents = source_lines[1:]

    # Remove indents
    new_indent_lines = [remove_indent(line, remove_indents)
                        for line in fn_contents]

    return "".join(new_indent_lines)

def remove_indent(in_str, num_indent=1):
    s = in_str
    for i in range(num_indent):
        if s[:INDENT_SPACES] == "    ":  # Whitespace indents
            s = s[INDENT_SPACES:]
        elif s[:1] == "\t":  # Tab character indents
            s = s[1:]
    return s

[END CODE]

The second issue is that the last line in the passed function's module seems to be ignored. So, if the last line of the module is also the last line of the function, the function is returned one line too short.

I welcome comments on the bugs or optimization pointers for my code. I am still quite new to Python. My primary question though is: will linecache.clearcache() cause me any problems?

Thanks,

- Rafe -- http://mail.python.org/mailman/listinfo/python-list
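Follow-up note: if clearing the whole cache ever becomes a performance problem, linecache.checkcache() is a narrower option; it only drops entries whose files appear to have changed on disk (judged by size and modification time), so unchanged modules keep their cached lines. A sketch of how it might slot into the function above:

import inspect
import linecache

def get_source_lines(fn):
    # Invalidate only the stale linecache entries, not the whole cache.
    linecache.checkcache()
    return inspect.getsourcelines(fn)

Since the check relies on file size and mtime, an edit that leaves both unchanged can still slip through, which is presumably why clearcache() is the safer, if blunter, choice here.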
Re: How can a function know what module it's in?
On Nov 12, 11:17 am, Joe Strout <[EMAIL PROTECTED]> wrote: > Some corrections, to highlight the depth of my confusion... > > On Nov 11, 2008, at 9:10 PM, Joe Strout wrote: > > > doctest.testmod(mymodule) > > > This actually works fine if I'm importing the module (with the > > standard name) somewhere else > > Actually, it does not. > > > I noticed that a function object has a __module__ attribute, that is > > a reference to the module the function is in. > > And no, it isn't; it's the NAME of the module the function is in. I'm > not sure what good that does me. docstring.testmod does take an > optional "name" parameter, but the documentation (at least in 2.5.2) > does not explain what this parameter is used for. I tried using it > thusly: > > doctest.testmod(name=_test.__module__) > > but that didn't work; it appears to still be testing the __main__ > module. (Perhaps the name parameter is only used to describe the > module in the output, in which case, all I've accomplished here is > getting doctest to lie.) > > > I'm sure there is a magic identifier somewhere that lets a code get > > a reference to its own module, but I haven't been able to find it. > > Can someone share a clue? > > This question remains open. :) > > Thanks, > - Joe import sys this_module = sys.modules[__name__] sys.modules is a dictionary of all modules which have been imported during the current session. Since the module had to be imported to access it, it will be in there. '__name__' is available inside functions because it is in the module scope. Classes are a little more tricky because doing something like: this_module = sys.modules[self.__class__.__module__] will return a different module if the class is inherited in a different module (since __class__ is now the subclass, not the class where the method was written). However, putting a function at the module level (in the super-class module) should anchor the results (untested though). I'm not sure if this is the answer you need with regards to doctest, but I think it answers the question in the subject of this thread. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
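A small, self-contained illustration of both points above (the names are placeholders):

import sys

def current_module():
    # __name__ is a module-level global, so any function defined in this
    # module gets this module back, no matter who calls it.
    return sys.modules[__name__]

class Base(object):
    def defining_module(self):
        # Naming the class explicitly pins the lookup to the module where
        # Base was written, even when self is a subclass from elsewhere.
        return sys.modules[Base.__module__]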
Re: Is there a time limit for replies?
On Oct 31, 5:21 pm, Ulrich Eckhardt <[EMAIL PROTECTED]> wrote: > Rafe wrote: > > I tried to post some follow-ups to some issues I posted in the hopes > > of helping others, but I only get "reply to author" and "forward", but > > no "reply" option (using GoogleGroups). Is there some kind of time > > limit to reply? > > Two things: > 1. Google Groups is by far not the best interface to the Usenet. Using a > dedicated newsserver and a real newsclient would be much much better. > > 2. This newsgroup is actually a mailinglist with a mail-to-news gateway > connecting it to the Usenet. Anything can happen with a mediocre newsclient > like GG and possibly wacky setups of mailclient and the gateway. > > If you are interested in this group/mailinglist, get at least a real > newsclient or simply subscribe to the mailinglist itself. > > cheers > > Uli > > -- > Sator Laser GmbH > Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932 Can you recommend anything? I would like to avoid 1,000s of emails flooding my account though. It also has to work from behind a strict corporate firewall. Thanks, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Is there a time limit for replies?
I tried to post some follow-ups to some issues I posted in the hopes of helping others, but I only get "reply to author" and "forward", but no "reply" option (using GoogleGroups). Is there some kind of time limit to reply? Thanks, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Exec and Scope
On Oct 31, 10:47 am, "Emanuele D'Arrigo" <[EMAIL PROTECTED]> wrote: > Hi everybody! > > I'm trying to do something in a way that is probably not particularly > wise but at this point I don't know any better, so bear with me. > > Suppose in main.py I have the following statements: > > myObject = MyObject() > execThis("myObject.myCommand()") > > Now suppose the method > > def execThis(aCommandInAString): > exec(aCommandInAString) > > is somewhere "far away" in terms of scope. Somewhere where the code > doesn't know anything about the instance myObject and even less about > its methods and attributes. How do I get myObject.myCommand() properly > executed? > > I'm guessing it's about taking a snapshot of or a reference to the > namespace that is correct for the execution of the command, but... is > that the case? And how do I get a handle to that? > > Thanks for your help! > > Manu If you are just looking to execute an attribute (be it a property, module-level function, instance or class method, or anything else which is an attribute of an object), just use getattr(). Execute a 'method' (which is just a callable object right?) of an instance of MyObject named "myCommand": >>> class MyObject(object): ...def my_command(self): ...print "hello" ... >>> myObject = MyObject() >>> attr = getattr(myObject, "my_command") >>> attr() hello - Rafe -- http://mail.python.org/mailman/listinfo/python-list
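If a whole statement really does have to be exec'd rather than resolved with getattr(), another option (a sketch, not tied to the original poster's code) is for the caller to pass its namespace along explicitly, so the "far away" code never needs to see myObject at all:

def exec_this(command, namespace):
    # Execute the command string in the dictionary the caller provided.
    exec command in namespace

class MyObject(object):
    def my_command(self):
        print "hello"

myObject = MyObject()
exec_this("myObject.my_command()", {"myObject": myObject})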
Re: @property decorator doesn't raise exceptions
On Oct 27, 2:47 pm, Peter Otten <[EMAIL PROTECTED]> wrote: > Rafewrote: > > Can anyone explain why this is happening? > > When an attribute error is raised that is an indication that the requested > attribute doesn't exist, and __getattr__() must be called as a fallback. > > > I can hack a work-around, > > but even then I could use some tips on how to raise the 'real' > > exception so debugging isn't guesswork. > > Look at the problem again, maybe you can find a solution without > __getattr__() and use only properties. > > Otherwise you have to wrap your getter with something like > > try: > ... > except AttributeError: > raise BuggyProperty, None, original_traceback > > If you put that functionality into a decorator you get: > > import sys > > class BuggyProperty(Exception): > pass > > def safe_getter(get): > def safe_get(self): > try: > return get(self) > except AttributeError: > t, e, tb = sys.exc_info() > raise BuggyProperty("AttributeError in getter %s(); " > "giving original traceback" > % get.__name__), None, tb > return property(safe_get) > > class A(object): > @safe_getter > def attr(self): > return self.m(3) > > def m(self, n): > if n > 0: > return self.m(n-1) > raise AttributeError("it's a bug") > > def __getattr__(self, name): > return "<%s>" % name > > A().attr > > Peter Thanks for the idea Peter. What confuses me is why this only happens to @Property (and I assume other decorator related bindings?). Does it have something to do with the way the class gets built? 'Normal' attributes will raise AttributeErrors as expected, without triggering __getattr__(). Considering this is a built-in decorator, it would be nice if this behavior was fixed if possible. Unfortunately, I need __getattr__() because my class is a wrapper (it is delegating calls to another object when attributes aren't found in the class). As a hack, I am testing the passed attr name against the instance, class and super-class attributes. If there is a match, I assume it is an error and raise an exception which points the developer in the right direction. It isn't ideal, but it helps. You're code is better, as it displays the 'real' traceback, but I need to know more about the effects of using an exception which is not an AttrbiuteError. Which brings me to my next question... In case it isn't obvious, I'm fairly new to Python (and this level of programming in general). I've been wondering about the optimal use of custom exceptions. Up until now, I've been sticking to the built-in exceptions, which seem to work in 90% of situations. Are you aware of any resources which talk about this aspect of programming (more about theory than code)? Thanks again, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
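A sketch of the detection hack described above, assuming a delegating wrapper along the lines discussed in this thread: before delegating, __getattr__ checks whether the name is actually defined on the class, which in practice only happens when a property getter itself raised AttributeError.

class Wrapper(object):
    def __init__(self, obj):
        self._obj = obj

    @property
    def name(self):
        return self._obj.name    # any AttributeError in here gets swallowed

    def __getattr__(self, attr):
        if hasattr(type(self), attr):
            # The class *does* define this attribute, so a property getter
            # must have failed; don't mask it as a missing attribute.
            raise RuntimeError("property %r raised AttributeError "
                               "internally" % attr)
        return getattr(self._obj, attr)

It points the developer at the right property, though Peter's sys.exc_info() approach is still the only way shown here to keep the original traceback.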
Re: @property decorator doesn't raise exceptions
OT... Sorry about the spam. Thanks for taking the time to post this Duncan. I had the same thought. I have posted to this list before but never experienced anything like this wait. I figured it was possible that I hit "Reply to Author" the first time so I sent it again. I waited about 8 hours before sending the third time (and I posted to GoogleGroups support as well). Even then I didn't see my original post. I'm surprised it took so long to update. Next time I'll just make sure the post was made successfully and wait...as long as it takes. Cheers, - Rafe On Oct 26, 4:23 pm, Duncan Booth <[EMAIL PROTECTED]> wrote: > Rafe<[EMAIL PROTECTED]> wrote: > > Peter Otten pointed me in the right direction. I tried to reply to his > > post 2 times and in spite of GoogleGroups reporting the post was > > successful, it never showed up. > > This is the third variant on your message that has shown up in the > newsgroup. > > Please be aware that messages take time to propagate through usenet: don't > repost just because Google groups hasn't yet got around to displaying your > message. If it says the post was successful then the post was successful. > Just be patient a bit longer for it to become visible to you. -- http://mail.python.org/mailman/listinfo/python-list
Re: @property decorator doesn't raise exceptions
On Oct 24, 1:47 am, Rafe <[EMAIL PROTECTED]> wrote: > Hi, > > I've encountered a problem which is making debugging less obvious than > it should be. The @property decorator doesn't always raise exceptions. > It seems like it is bound to the class but ignored when called. I can > see the attribute using dir(self.__class__) on an instance, but when > called, python enters __getattr__. If I correct the bug, the attribute > calls work as expected and do not call __getattr__. > > I can't seem to make a simple repro. Can anyone offer any clues as to > what might cause this so I can try to prove it? > > Cheers, > > - Rafe Peter Oten pointed me in the right direction. I tried to reply to his post 2 times and in spite of GoogleGroups reporting the post was successful, it never showed up. Here is the repro: The expected behavior... >>> class A(object): ... @property ... def attribute(self): ... raise AttributeError("Correct Error.") >>> A().attribute Traceback (most recent call last): File "", line 0, in File "", line 0, in attribute AttributeError: Correct Error. The misleading/unexpected behavior... >>> class A(object): ... @property ... def attribute(self): ... raise AttributeError("Correct Error.") ... def __getattr__(self, name): ... cls_name = self.__class__.__name__ ... msg = "%s has no attribute '%s'." % (cls_name, name) ... raise AttributeError(msg) >>> A().attribute Traceback (most recent call last): File "", line 0, in File "", line 0, in __getattr__ AttributeError: A has no attribute 'attribute'. Removing @property works as expected... >>> class A(object): ... def attribute(self): ... raise AttributeError("Correct Error.") ... def __getattr__(self, name): ... cls_name = self.__class__.__name__ ... msg = "%s has no attribute '%s'." % (cls_name, name) ... raise AttributeError(msg) >>> A().attribute() # Note the '()' Traceback (most recent call last): File "", line 0, in File "", line 0, in attribute AttributeError: Correct Error. I never suspected __getattr__ was the cause and not just a symptom. The docs seem to indicate __gettattr__ should never be called when the attribute exisits in the class: "Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception." Is this a bug? Any idea why this happens? I can write a hack in to __getattr__ in my class which will detect this, but I'm not sure how to raise the expected exception. Cheers, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: @property decorator doesn't raise exceptions
On Oct 24, 9:58 am, Peter Otten <[EMAIL PROTECTED]> wrote: > Rafe wrote: > > On Oct 24, 2:21 am, Christian Heimes <[EMAIL PROTECTED]> wrote: > >> Rafewrote: > >> > Hi, > > >> > I've encountered a problem which is making debugging less obvious than > >> > it should be. The @property decorator doesn't always raise exceptions. > >> > It seems like it is bound to the class but ignored when called. I can > >> > see the attribute using dir(self.__class__) on an instance, but when > >> > called, python enters __getattr__. If I correct the bug, the attribute > >> > calls work as expected and do not call __getattr__. > > >> > I can't seem to make a simple repro. Can anyone offer any clues as to > >> > what might cause this so I can try to prove it? > > >> You must subclass from "object" to get a new style class. properties > >> don't work correctly on old style classes. > > >> Christian > > > All classes are a sub-class of object. Any other ideas? > > Hard to tell when you don't give any code. > > >>> class A(object): > > ... @property > ... def attribute(self): > ... raise AttributeError > ... def __getattr__(self, name): > ... return "nobody expects the spanish inquisition" > ...>>> A().attribute > > 'nobody expects the spanish inquisition' > > Do you mean something like this? I don't think the __getattr__() call can be > avoided here. > > Peter Peter nailed it, thanks! I thought __getattr__ was a symptom, not a cause of the misleading exceptions. Here is a complete repro: The expected behavior... >>> class A(object): ... @property ... def attribute(self): ... raise AttributeError("Correct Error.") >>> A().attribute Traceback (most recent call last): File "", line 0, in File "", line 0, in attribute AttributeError: Correct Error. The misleading/unexpected behavior... >>> class A(object): ... @property ... def attribute(self): ... raise AttributeError("Correct Error.") ... def __getattr__(self, name): ... cls_name = self.__class__.__name__ ... msg = "%s has no attribute '%s'." % (cls_name, name) ... raise AttributeError(msg) >>> A().attribute Traceback (most recent call last): File "", line 0, in File "", line 0, in __getattr__ AttributeError: A has no attribute 'attribute'. Removing @property works as expected... >>> class A(object): ... def attribute(self): ... raise AttributeError("Correct Error.") ... def __getattr__(self, name): ... cls_name = self.__class__.__name__ ... msg = "%s has no attribute '%s'." % (cls_name, name) ... raise AttributeError(msg) >>> A().attribute() # Note the '()' Traceback (most recent call last): File "", line 0, in File "", line 0, in attribute AttributeError: Correct Error. The docs seem to suggest this is impossible: "Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception." Can anyone explain why this is happening? Is it a bug? I can write a workaround to detect this by comparing the attribute name passed __getattr__ with dir(self.__class__) = self.__dict__.keys(), but how can I raise the expected exception? Thanks, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: @property decorator doesn't raise exceptions
On Oct 24, 9:58 am, Peter Otten <[EMAIL PROTECTED]> wrote: > Rafe wrote: > > On Oct 24, 2:21 am, Christian Heimes <[EMAIL PROTECTED]> wrote: > >> Rafewrote: > >> > Hi, > > >> > I've encountered a problem which is making debugging less obvious than > >> > it should be. The @property decorator doesn't always raise exceptions. > >> > It seems like it is bound to the class but ignored when called. I can > >> > see the attribute using dir(self.__class__) on an instance, but when > >> > called, python enters __getattr__. If I correct the bug, the attribute > >> > calls work as expected and do not call __getattr__. > > >> > I can't seem to make a simple repro. Can anyone offer any clues as to > >> > what might cause this so I can try to prove it? > > >> You must subclass from "object" to get a new style class. properties > >> don't work correctly on old style classes. > > >> Christian > > > All classes are a sub-class of object. Any other ideas? > > Hard to tell when you don't give any code. > > >>> class A(object): > > ... @property > ... def attribute(self): > ... raise AttributeError > ... def __getattr__(self, name): > ... return "nobody expects the spanish inquisition" > ...>>> A().attribute > > 'nobody expects the spanish inquisition' > > Do you mean something like this? I don't think the __getattr__() call can be > avoided here. > > Peter You nailed it Peter! I thought __getattr__ was a symptom, not the cause of the misleading errors. Here is the repro (pretty much regurgitated): The expected behavior... >>> class A(object): ... @property ... def attribute(self): ... raise AttributeError("Correct Error.") >>> A().attribute Traceback (most recent call last): File "", line 0, in File "", line 0, in attribute AttributeError: Correct Error. The unexpected and misleading exception... >>> class A(object): ... @property ... def attribute(self): ... raise AttributeError("Correct Error.") ... def __getattr__(self, name): ... cls_name = self.__class__.__name__ ... msg = "%s has no attribute '%s'." % (cls_name, name) ... raise AttributeError(msg) Traceback (most recent call last): File "", line 0, in File "", line 0, in __getattr__ AttributeError: A has no attribute 'attribute'. The docs state: "Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception." Can anyone explain why this is happening? I can hack a work-around, but even then I could use some tips on how to raise the 'real' exception so debugging isn't guesswork. Cheers, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: @property decorator doesn't raise exceptions
On Oct 24, 2:21 am, Christian Heimes <[EMAIL PROTECTED]> wrote: > Rafewrote: > > Hi, > > > I've encountered a problem which is making debugging less obvious than > > it should be. The @property decorator doesn't always raise exceptions. > > It seems like it is bound to the class but ignored when called. I can > > see the attribute using dir(self.__class__) on an instance, but when > > called, python enters __getattr__. If I correct the bug, the attribute > > calls work as expected and do not call __getattr__. > > > I can't seem to make a simple repro. Can anyone offer any clues as to > > what might cause this so I can try to prove it? > > You must subclass from "object" to get a new style class. properties > don't work correctly on old style classes. > > Christian All classes are a sub-class of object. Any other ideas? - Rafe -- http://mail.python.org/mailman/listinfo/python-list
@property decorator doesn't raise exceptions
Hi, I've encountered a problem which is making debugging less obvious than it should be. The @property decorator doesn't always raise exceptions. It seems like it is bound to the class but ignored when called. I can see the attribute using dir(self.__class__) on an instance, but when called, python enters __getattr__. If I correct the bug, the attribute calls work as expected and do not call __getattr__. I can't seem to make a simple repro. Can anyone offer any clues as to what might cause this so I can try to prove it? Cheers, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Making class attributes non-case-sensitive?
Thanks for the COM pointers Matt. I'll definitely look into these. Perhaps this will become a non-issue when I use one of these COM wrappers... > Anybody who is used to developing at all is going to > accept that the software is case sensitive. Case sensitive? Yes. Letting typos create hard-to-debug behaviors that raise either no exceptions or strange ones? No. This is what I am trying to add. Protection. > It still isn't clear to me _why_ you are wrapping this COM object. You > aren't adding any functionality. I've actually been able to add a lot of functionality. I just didn't post the details of how I'm using it because I didn't think it had any bearing on the original question. I can add a lot of automation and convention enforcement to the API by wrapping and extending the application's object model. If you want me to give some real-world examples (which would be related to 3D animation production) I wouldn't mind doing so at all. I was just trying really hard to keep the question generic (and failed it seems). Thanks again for sticking with the discussion! - Rafe On Oct 15, 4:03 am, Matimus <[EMAIL PROTECTED]> wrote: > > So is iterating through dir() to force both the members of dir(), and > > the requested attribute name, to lower case for a comparison, really > > the easiest way? > > > Thanks again for sticking with me. I hope I didn't add to the > > confusion. What I learn I will of course pass on. > > > - Rafe > > It still isn't clear to me _why_ you are wrapping this COM object. You > aren't adding any functionality. If you are using win32com and the TLB > object you are using has a tlb, then you can generate wrapper classes > for them automatically using makepy. You can extend those. If you want > to do it by hand you should be able to just create a class and inherit > win32com.client.DispatchBaseClass (and object if you want to make it > new-style). Unless your users are screaming for this feature, or there > is some technical reason that it is required, then implementing it is > a waste of time. Anybody who is used to developing at all is going to > accept that the software is case sensitive. > > Matt -- http://mail.python.org/mailman/listinfo/python-list
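On the question quoted above about iterating through dir(): the lower-casing only has to be paid once per class if the map is built up front and cached, e.g. (a sketch; the COM delegation details are left out):

class CaseInsensitive(object):
    _name_map = None    # lowercase name -> real attribute name

    @classmethod
    def _build_name_map(cls):
        cls._name_map = dict((n.lower(), n) for n in dir(cls))

    def __getattr__(self, name):
        if type(self)._name_map is None:
            self._build_name_map()
        try:
            real = type(self)._name_map[name.lower()]
        except KeyError:
            raise AttributeError(name)
        if real == name:
            # The exact name already failed normal lookup; don't recurse.
            raise AttributeError(name)
        return getattr(self, real)

class AClass(CaseInsensitive):
    def getObject(self):
        return "an object"

AClass().getobject()    # resolves to getObject()

Per-access cost is then one .lower() and one dict lookup, instead of a scan of dir() on every miss.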
Re: Making class attributes non-case-sensitive?
self.__obj = obj def __repr__(self): """ Makes the object's name the string representation of the object, just like XSI does. """ return str(self.__obj.name) def __getattr__(self, name): """ Tries to delegate any attribute calls not found in this class to the X3DObject. """ # Try to delegate to the 3DObject. obj = self.__dict__["_DelegationWrapper__obj"] try: return getattr(obj, name) except: pass # Raise an attribute error (Python requires this # to avoid problems) className = self.__class__.__name__ msg = "%s has no attribute '%s'." % (className, name) raise AttributeError(msg) def __setattr__(self, name, val): """ Tries to delegate any attribute assignment not found in this class to the X3DObject. """ # This allows sub-classes to add "private" attributes # freely. dir is checked insteaf od __dict__ because # it contains bound attributes not available in the # instance __dict__. if name in dir(self) or name.startswith("_"): super(DelegationWrapper, self).__setattr__(name, val) return # Try to delegate to the X3DObject. obj = self.__dict__["_DelegationWrapper__obj"] try: obj.__setattr__(name, val) return except TypeError, err: raise TypeError(err) except AttributeError: pass # raised later except Exception, err: raise Exception(err) # Don't allow addition of new 'public' attributes # with AttributeError className = self.__class__.__name__ msg = "%s has no attribute '%s'." % (className, name) raise AttributeError(msg) def set_name(self, val): self.__obj.name = val def get_name(self): return self.__obj.name name = property(get_name, set_name) # Usage... class A(object): name = "a name" test = "a test" a = A() b = DelegationWrapper(a) print b.name print b.test b.name = "new name" print b.name So is iterating through dir() to force both the members of dir(), and the requested attribute name, to lower case for a comparison, really the easiest way? Thanks again for sticking with me. I hope I didn't add to the confusion. What I learn I will of course pass on. - Rafe On Oct 14, 12:14 am, Matimus <[EMAIL PROTECTED]> wrote: > On Oct 13, 4:08 am, Rafe <[EMAIL PROTECTED]> wrote: > > > > > Just so I don't hijack my own thread, the issue is 'how to wrap an > > object which is not case sensitive'. > > > The reason I am stuck dealing with this?... The application's API is > > accessed through COM, so I don't know if I can do anything but react > > to what I get. The API was written while the app (Softimage|XSI - one > > of 3 leading 3D applications for high-end visual effects) was owned by > > Microsoft. I'm not sure if it is standard for Microsoft or just the > > way this app was implemented (perhaps because under-users were > > scripting in VBscript which is not case sensitive). > > > XSI allows many languages to be used via COM, even from within the > > software (there are built-in code editors). In the early days, > > VBScript was the most common scripting language used while anything > > more hard-core was done in C++ (of course the C implementation is case > > sensitive - well as far as I know). Then JScript became the most > > common, now Python is considered standard. > > > Anyway, the standard practice is to use mixed-case, so I need to > > adhere to it as the resulting framework I am creating needs to be > > intuitive to use (my end-user is still writing code. It's an API for > > an API I guess...) 
> > > I don't *think* I need to worry too much about performance because I'm > > not doing any serious processing, this is more about convention > > enforcement and quality control rather than number crunching. I might > > try to write something generic which gets executed by the wrappers > > __getattr__ and __setattr__, but I was hoping for some nifty > > workaround, maybe in the form of a decorator or something? Again... > > any ideas? > > > Cheers, > > > - Rafe > > > On Oct 13, 4:15 pm, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: > > > > Rafe wrote: > > > > Hi, > > > > > I'm working within an application (
Re: Making class attributes non-case-sensitive?
I'm not sure what went wrong with the formatting in my last post. my code is under 80 characters wide. Here is a more narrow copy and paste... class DelegationWrapper(object): """ This is a new-style base class that allows python to extend, or override attributes of a given X3DObject. :parameters: obj : object instance If this class (or a sub-class of this class) do not have an attribute, this wrapped object will be checked before failing. """ def __init__(self, obj): """ Store the object to delegate to. """ self.__obj = obj def __repr__(self): """ Makes the object's name the string representation of the object, just like XSI does. """ return str(self.__obj.name) def __getattr__(self, name): """ Tries to delegate any attribute calls not found in this class to the X3DObject. """ # Try to delegate to the 3DObject. obj = self.__dict__["__obj"] try: return obj.__getattr__(name) except: pass # Raise an attribute error (Python requires this # to avoid problems) className = self.__class__.__name__ msg = "%s has no attribute '%s'." % (className, name) raise AttributeError(msg) def __setattr__(self, name, val): """ Tries to delegate any attribute assignment not found in this class to the X3DObject. """ # This allows sub-classes to add "private" attributes # freely. dir is checked insteaf od __dict__ because # it contains bound attributes not available in the # instance __dict__. if name in dir(self) or name.startswith("_"): object.__setattr__(self, name, val) return # Try to delegate to the X3DObject. try: self.__dict__["__obj"].__setattr__(name, val) return except TypeError, err: raise TypeError(err) except AttributeError: pass # raised later except Exception, err: raise Exception(err) # Don't allow addition of new 'public' attributes # with AttributeError className = self.__class__.__name__ msg = "%s has no attribute '%s'." % (className, name) raise AttributeError(msg) @property def name(self): """ This doesn't do anything here, but in my real code it does. The problem is, if the user types 'Name' this will be bypassed. """ return self.__obj.Name - Rafe On Oct 14, 11:29 am, Rafe <[EMAIL PROTECTED]> wrote: > I really appreciate the replies. I hope you gyus will stick with me > through one more round. > > super(C, self).__setattr__(attr.lower(), value) > > Unfortunately, this will not work because an attribute name such as > "getObject" is legal (I'll explain the convention in a moment.) I > think I would have to loop over all attributes and force both sides of > the compare to lower case to test for a match. > > just skip ahead to the example code if you don't want more confusing > background ;) > > Bear with me while I try to explain. > > Basically, I am working with this application like (I think) any > application would work through a COM object. That said, I can access > python from within the application as well because it is a kind of dev > environment. 3D applications are blended GUI and development > environment and users are expected to use it through both the API and > the GUI. What may seem strange to most people here, is that you will > get hard-core programmers and surface-level users (and everything in > between, like me) working and/or developing in the same environment. > These 3D software applications are quite large and complex. > > The application is called "Softimage|XSI", commonly called "XSI". It > is a 3D application. Most companies will licenses the software but > then build layers on top of it for pipeline productivity and > communication reasons. 
So, it is standard for a user of the > application to also write scripts or more complex OO models. I > mentioned it was written during the brief period of time where > Softimage was owned by Microsoft because I thought there might be some > precedence for the case sensitivity issues. It was not written by > Microsoft engineers directly, but they did enforce *some* standards. > > The common naming convention in XS
Re: Making class attributes non-case-sensitive?
def __getattr__(self, name): """ Tries to delegate any attribute calls not found in this class to the X3DObject. """ # Try to delegate to the 3DObject. obj = self.__dict__["__obj"] try: return obj.__getattr__(name) except: pass # Raise an attribute error (Python requires this to avoid problems) className = self.__class__.__name__ raise AttributeError("%s has no attribute '%s'." % (className, name)) def __setattr__(self, name, val): """ Tries to delegate any attribute assignment not found in this class to the X3DObject. """ # This allows sub-classes to add "private" attributes freely. # dir is checked insteaf od __dict__ because it contains bound # attributes not available in the instance __dict__. if name in dir(self) or name.startswith("_"): object.__setattr__(self, name, val) return # Try to delegate to the X3DObject. try: self.__dict__["__obj"].__setattr__(name, val) return except TypeError, err: raise TypeError(err) except AttributeError: pass # raised later except Exception, err: raise Exception(err) # Don't allow addition of new 'public' attributes with AttributeError className = self.__class__.__name__ raise AttributeError("%s has no attribute '%s'." % (className, name)) @property def name(self): """ This doesn't do anything here, but in my real code it does. The problem is, if the user types 'Name' this will be bypassed. """ return self.__obj.Name So is iterating through dir() to force both the members of dir(), and the requested attribute name, to lower case for a comparison, really the easiest way? Thanks again for sticking with me. I hope I didn't add to the confusion. What I learn I will of course pass on. - Rafe On Oct 14, 12:14 am, Matimus <[EMAIL PROTECTED]> wrote: > On Oct 13, 4:08 am, Rafe <[EMAIL PROTECTED]> wrote: > > > > > Just so I don't hijack my own thread, the issue is 'how to wrap an > > object which is not case sensitive'. > > > The reason I am stuck dealing with this?... The application's API is > > accessed through COM, so I don't know if I can do anything but react > > to what I get. The API was written while the app (Softimage|XSI - one > > of 3 leading 3D applications for high-end visual effects) was owned by > > Microsoft. I'm not sure if it is standard for Microsoft or just the > > way this app was implemented (perhaps because under-users were > > scripting in VBscript which is not case sensitive). > > > XSI allows many languages to be used via COM, even from within the > > software (there are built-in code editors). In the early days, > > VBScript was the most common scripting language used while anything > > more hard-core was done in C++ (of course the C implementation is case > > sensitive - well as far as I know). Then JScript became the most > > common, now Python is considered standard. > > > Anyway, the standard practice is to use mixed-case, so I need to > > adhere to it as the resulting framework I am creating needs to be > > intuitive to use (my end-user is still writing code. It's an API for > > an API I guess...) > > > I don't *think* I need to worry too much about performance because I'm > > not doing any serious processing, this is more about convention > > enforcement and quality control rather than number crunching. I might > > try to write something generic which gets executed by the wrappers > > __getattr__ and __setattr__, but I was hoping for some nifty > > workaround, maybe in the form of a decorator or something? Again... > > any ideas? > > > Cheers, > > > - Rafe > > > On Oct 13, 4:15 pm, "Diez B. 
Roggisch" <[EMAIL PROTECTED]> wrote: > > > > Rafe wrote: > > > > Hi, > > > > > I'm working within an application (making a lot of wrappers), but the > > > > application is not case sensitive. For example, Typing obj.name, > > > > obj.Name, or even object.naMe is all fine (as far as the app is > > > > concerned). The problem is, If someone makes a typo, they may get an > > > > unexpected error due accidentally calling the original attribute > > > > instead of the wrapped version. Does anyone
Re: Making class attributes non-case-sensitive?
Just so I don't hijack my own thread, the issue is 'how to wrap an object which is not case sensitive'. The reason I am stuck dealing with this?... The application's API is accessed through COM, so I don't know if I can do anything but react to what I get. The API was written while the app (Softimage|XSI - one of 3 leading 3D applications for high-end visual effects) was owned by Microsoft. I'm not sure if it is standard for Microsoft or just the way this app was implemented (perhaps because under-users were scripting in VBscript which is not case sensitive). XSI allows many languages to be used via COM, even from within the software (there are built-in code editors). In the early days, VBScript was the most common scripting language used while anything more hard-core was done in C++ (of course the C implementation is case sensitive - well as far as I know). Then JScript became the most common, now Python is considered standard. Anyway, the standard practice is to use mixed-case, so I need to adhere to it as the resulting framework I am creating needs to be intuitive to use (my end-user is still writing code. It's an API for an API I guess...) I don't *think* I need to worry too much about performance because I'm not doing any serious processing, this is more about convention enforcement and quality control rather than number crunching. I might try to write something generic which gets executed by the wrappers __getattr__ and __setattr__, but I was hoping for some nifty workaround, maybe in the form of a decorator or something? Again... any ideas? Cheers, - Rafe On Oct 13, 4:15 pm, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: > Rafe wrote: > > Hi, > > > I'm working within an application (making a lot of wrappers), but the > > application is not case sensitive. For example, Typing obj.name, > > obj.Name, or even object.naMe is all fine (as far as the app is > > concerned). The problem is, If someone makes a typo, they may get an > > unexpected error due accidentally calling the original attribute > > instead of the wrapped version. Does anyone have a simple solution for > > this? > > > I can protect against some cases just by making an 'alias': > > class AClass(object): > > def name(self): > > print "hello" > > > Name = name > > > ...but this doesn't protect against typos, it gets more complicated > > with multi-word attribute names, and it makes my epydocs confusing to > > read since all spelling versions are shown (I AM concerned about my > > docs being clear, but not as much as stopping typo related errors). > > > I thought about using my wrapper's __getattr__ and __setattr__, but I > > I am concerned about the overhead of every delegated attribute call > > running a search and compare (.lower() based compare?). > > > Any ideas or precedence? > > Ideas? Don't do that... > > Seriously: where does that code come from, who's typing it? If it is python, > then make people follow python's rules. If it is some sort of homebrewn > language you map to python, adapt the mapper to enforce lower-case and make > all your properties lower case. > > Diez -- http://mail.python.org/mailman/listinfo/python-list
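Since decorators came up: the aliases can also be generated once, at class-definition time, by a helper that walks the class dict (sketch only; the policy here is just "add an all-lowercase alias for each public name"):

def add_lowercase_aliases(cls):
    # Give every public attribute an all-lowercase alias, so obj.getobject
    # and obj.getObject reach the same function. Usable as a class
    # decorator on Python 2.6+, or called right after the class body.
    for name in list(vars(cls).keys()):
        if name.startswith("_"):
            continue
        alias = name.lower()
        if alias != name and not hasattr(cls, alias):
            setattr(cls, alias, vars(cls)[name])
    return cls

class AClass(object):
    def getObject(self):
        return "an object"

add_lowercase_aliases(AClass)

print AClass().getobject()    # same method as getObject()

This only covers the one most likely misspelling per name, and it has the same epydoc-duplication drawback mentioned earlier, but there is no per-access cost at all.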
Making class attributes non-case-sensitive?
Hi, I'm working within an application (making a lot of wrappers), but the application is not case sensitive. For example, typing obj.name, obj.Name, or even object.naMe is all fine (as far as the app is concerned). The problem is, if someone makes a typo, they may get an unexpected error due to accidentally calling the original attribute instead of the wrapped version. Does anyone have a simple solution for this? I can protect against some cases just by making an 'alias': class AClass(object): def name(self): print "hello" Name = name ...but this doesn't protect against typos, it gets more complicated with multi-word attribute names, and it makes my epydocs confusing to read since all spelling versions are shown (I AM concerned about my docs being clear, but not as much as stopping typo-related errors). I thought about using my wrapper's __getattr__ and __setattr__, but I am concerned about the overhead of every delegated attribute call running a search and compare (.lower() based compare?). Any ideas or precedents? Cheers, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: Clearing a session and reload() problem (with repro error)
On Sep 10, 2:28 pm, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote: > En Wed, 10 Sep 2008 00:56:43 -0300,Rafe<[EMAIL PROTECTED]> escribió: > > > > > On Sep 9, 11:03 pm, "Gabriel Genellina" <[EMAIL PROTECTED]> > > wrote: > >> En Mon, 08 Sep 2008 05:37:24 -0300,Rafe<[EMAIL PROTECTED]> escribió: > >> ... > >> This dependency between modules, applied to all modules in your project, > >> defines a "dependency graph". In some cases, one can define a partial > >> ordering of its nodes, such that no module depends on any other module > >> *after* it (it may depend only on modules *before* it). Look for > >> "topological sort". > > >> Doing that in the generic case is not easy. If you *know* your > >> dependencies, reload the modules in the right order by hand. > >> ... > > I was hoping there would be a way to just wipe out the module cache > > and let it get rebuilt by executing my code (since I'm not using > > reload as part of my program, but rather, to test it in an environment > > where I cannot restart the Python session). > > Ok, I think the following sequence *might* work: > > - replace the __import__ and reload builtins with a custom callable > object. This way you can hook into any import attempt. The object should > keep a list of already reloaded modules. When a module is imported: if it > is already in the list, just delegate to the original __import__; if it is > not in the list, locate the module in sys.modules and reload it. > > - iterate over sys.modules and reload the desired modules, as you did in > your previous attempt. > > - restore the original __import__ function. > > This way, you effectively transform any import statement into a recursive > reload (for the first time); subsequent imports of the same module behave > as usual. This may work for you, or perhaps not, or it may screw all your > running environment up, or even cause the next global thermonuclear war... > (I hope not!) > > Note: some modules don't work well with reload(). A common case: global > mutable values, like a list of objects which starts empty: > my_list = [] > To make it more "reload friendly", use this: > try: my_list > except NameError: my_list = [] > (this way the list will keep its previous values). > > The example below shows how to hook into the import mechanism - it just > prints the module being imported. Implementing the functionality outlined > above is left as an exercise to the reader :) > > py> class ImportHook(object): > ... _orig_import = None > ... # > ... def __call__(self, name, globals={}, locals={}, fromlist=[], > level=-1): > ... if fromlist: > ... print "-> from %s import %s" % (name, ','.join(fromlist)) > ... else: > ... print "-> import %s" % name > ... return self._orig_import(name, globals, locals, fromlist, > level) > ... # > ... def hook(self): > ... import __builtin__ > ... self._orig_import = __builtin__.__import__ > ... __builtin__.__import__ = self > ... # > ... def unhook(self): > ... assert self._orig_import is not None, "unhook called with no > previous hook" > ... import __builtin__ > ... __builtin__.__import__ = self._orig_import > ... del self._orig_import > ... # > ... # support the "with" statement > ... def __enter__(self): > ... self.hook() > ... return self > ... # > ... def __exit__(self, type, value, tb): > ... self.unhook() > ... 
> py> > py> ih = ImportHook() > py> ih.hook() > py> import htmllib > -> import htmllib > -> import sgmllib > -> import markupbase > -> import re > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import re > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> import sre_parse > -> from formatter import AS_IS > -> import sys > -> from htmlentitydefs import entitydefs > py> reload(htmllib) > -> import sgmllib > -> from formatter import AS_IS > -> from htmlentitydefs import entitydefs > > py> ih.unhook() > -> impor
Re: Clearing a session and reload() problem (with repro error)
On Sep 9, 11:03 pm, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote: > En Mon, 08 Sep 2008 05:37:24 -0300,Rafe<[EMAIL PROTECTED]> escribió: > ... > This dependency between modules, applied to all modules in your project, > defines a "dependency graph". In some cases, one can define a partial > ordering of its nodes, such that no module depends on any other module > *after* it (it may depend only on modules *before* it). Look for > "topological sort". > > Doing that in the generic case is not easy. If you *know* your > dependencies, reload the modules in the right order by hand. > ... > -- > Gabriel Genellina Hi Gabriel, Thank you for clarifying and re-presenting parts of my case. I appreciate the time. I was hoping there would be a way to just wipe out the module cache and let it get rebuilt by executing my code (since I'm not using reload as part of my program, but rather, to test it in an environment where I cannot restart the Python session). I have been keeping a diagram of my module inheritance to make sure it is as clean as possible, so I could just write a list of reloads as you suggest. However, one of the sub-packages is designed to allow users to add more modules. Because these get dynamically imported, I guess I could add an argument to the reload function to allow a user to give the 'add-on' module they are working on... so much work just to get a clean environment... Separate from my program, I was really hoping to write a generic reload tool for anyone developing in the same application as I am. I just don't see a way to trace import dependencies in systems which include dynamic imports. Any ideas? Thanks again, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
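A rough sketch of the dependency-ordered reload (the package prefix is a placeholder), with the caveat that it only sees dependencies that end up as module objects in another module's globals; "from x import thing" of a plain object, and imports done inside functions, are invisible to it:

import sys
from types import ModuleType

def _module_deps(mod):
    # Names of modules this module's globals refer to.
    return set(m.__name__ for m in vars(mod).values()
               if isinstance(m, ModuleType))

def reload_in_order(prefix):
    mods = dict((name, mod) for name, mod in sys.modules.items()
                if mod is not None and name.startswith(prefix))
    todo = dict((name, _module_deps(mod) & set(mods))
                for name, mod in mods.items())
    while todo:
        # Reload anything whose tracked dependencies are already done.
        ready = [name for name, deps in todo.items()
                 if not (deps & set(todo))]
        if not ready:
            ready = list(todo)    # dependency cycle; just take the rest
        for name in ready:
            reload(sys.modules[name])
            del todo[name]

reload_in_order("mypackage")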
Clearing a session and reload() problem (with repro error)
Hi, This seems to be an old question, and I've read back a bit, but rather than assume the answer is "you can't do that", I'd thought I'd post my version of the question along with a reproducible error to illustrate my confusion. My problem is that I'm using Python inside XSI (a 3D graphics application). If I want to restart Python, I have to restart XSI. This is no small amount of time wasted. The solution would be to somehow unload modules and all references and allow my imports to rebuild the cache. I suppose it would be great to clear the python session completely, but the 3D application might have added something to the session at startup (I doubt this though as I don't see any imported modules outside the norm). I've tried to use reload with a very simple algorithm. Simply run through every imported module, ignoring anything that is "None" or on the C: drive (all of our python is on a network drive so this hack works for me for now) and reload() it. I've come to realize that this isn't near intelligent enough to handle sub-packages. Before I post the repro, my questions are: 1) Does anyone have a work-flow for my situation that doesn't use Reload, and doesn't involve restarting the app for every edit/fix 2) Can anyone help point me in the right direction to build a dependable algorithm to cleanly reload all loaded modules? NOTE: I don't need to maintain anything in memory (i.e. instances, pointers, etc.) Everything will be initialized again each time. I'm not asking for code. Just some ideas or pseudo-code would do nicely. Here is a "simple" repro... Package Structure: --- inheritRepro __init__.py baseLib.py child __init__.py __init__.py: import sys, os def reloadModules(): """ Reload all imported modules that are not on the C: drive. """ print "Reloading Python Modules..." # Iterate over all IMPORTED modules modules = sys.modules for modName in modules: mod = modules[modName] # Skip None types and other itesm we don't care about if modName == "__main__" or not hasattr(mod,"__file__"): continue # Get the drive and skip anything on the C: drive drive = os.path.splitdrive(mod.__file__)[0] if drive != "C:": reload(mod) print "Reloaded %s" % mod baseLib.py: --- class BaseClassA(object): pass class BaseClassB(BaseClassA): def __init__(self): super(BaseClassB, self).__init__() child.__init__.py: import inheritRepro.baseLib as baseLib class ChildClass(baseLib.BaseClassB): def __init__(self): super(ChildClass, self).__init__() RUN: --- >>> import inheritRepro >>> import inheritRepro.child as child >>> obj = child.ChildClass() >>> print obj >>> inheritRepro.reloadModules()# Output omitted, but this worked. >>> import inheritRepro >>> import inheritRepro.child as child >>> obj = child.ChildClass() Traceback (most recent call last): File "", line 0, in File "\\nas6tb\PROJECTS\tech\users\rafe.sacks\python\inheritRepro \child\__init__.py", line 5, in __init__ super(ChildClass, self).__init__() File "\\nas6tb\PROJECTS\tech\users\rafe.sacks\python\inheritRepro \baseLib.py", line 6, in __init__ super(BaseClassB, self).__init__() TypeError: super(type, obj): obj must be an instance or subtype of type NOTE: this works if I don't use a sub-package for 'child' (child.py instead). Is it overly simple to assume reloading by file structure might work? Right now I'm getting around this by reload()-ing the right package after running reloadModules() if an error occurs. 
It's a bit frustrating that it cost me two days of work before I realized it was reload() causing this error and not super() or some other unknown-to-me inheritance/package structure problem. I rebuilt my code module by module until I noticed, quite by accident, that the thing worked once and then never again! Oh well, these are the joys of learning the hard way. I know this was a long one. If you made it this far, thanks for reading, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: exception handling in complex Python programs
On Aug 20, 12:47 am, Fredrik Lundh <[EMAIL PROTECTED]> wrote: > Rafe wrote: > > Again, this is probably too simple to help, but the only way to ignore > > certain types of exceptions, as far as I know, is to catch them and > > pass. > > e.g. this ignores type errors... > > > try: > > somethingBad() > > except TypeError, err: > > pass > > except Exception, err: > > raise TypeError(err) > > so what kind of code are you writing where *type errors* are not > considered programming errors? (catching them and proceeding is one > thing, but catching them and ignoring them?) > > I'd be really worried if I found that in a piece of source code I had to > maintain.

I'm not; it was just the first exception that came to mind. It is pretty rare that I would pass an exception, in fact. Maybe as a last-resort test in some cases.

- Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: how to add property "dynamically"?
On Aug 17, 5:09 pm, Bruno Desthuilliers <[EMAIL PROTECTED]> wrote: > akonsu a écrit :> hello, > > > i need to add properties to instances dynamically during run time. > > this is because their names are determined by the database contents. > > so far i found a way to add methods on demand: > > > class A(object) : > > def __getattr__(self, name) : > > if name == 'test' : > > def f() : return 'test' > > setattr(self, name, f) > > return f > > else : > > raise AttributeError("'%s' object has no attribute '%s'" % > > (self.__class__.__name__, name)) > > > this seems to work and i can invoke method test() on an object. > > Nope. This adds per-instance *function* attributes - not *methods*. > > class A(object) : > def __getattr__(self, name) : > if name == 'test' : > def f(self) : > return "%s.test" % self > setattr(self, name, f) > return f > else : > raise AttributeError( > "'%s' object has no attribute '%s'" \ > % (self.__class__.__name__, name) > ) > > a = A() > a.test() > => Traceback (most recent call last): > File "", line 1, in >TypeError: f() takes exactly 1 argument (0 given) > > To add methods on a per-instance basis, you have to manually invoke the > descriptor protocol's implementation of function objects: > > class A(object) : > def __getattr__(self, name) : > if name == 'test' : > def f(self) : > return "%s.test" % self > m = f.__get__(self, type(self)) > setattr(self, name, m) > return m > else : > raise AttributeError( > "'%s' object has no attribute '%s'" \ > % (self.__class__.__name__, name) > ) > > > it > > would be nice to have it as property though. so i tried: > > > class A(object) : > > def __getattr__(self, name) : > > if name == 'test' : > > def f() : return 'test' > > setattr(self, name, property(f)) > > return f > > else : > > raise AttributeError("'%s' object has no attribute '%s'" % > > (self.__class__.__name__, name)) > > > but this does not work, instance.test returns a callable but does not > > call it. > > Properties must be class attributes. The only way (the only way I know) > to get them to work as instance-attributes is to overload > __getattribute__, which is tricky and may have pretty bad impact on > lookup perfs - and ruins the whole point of using properties FWIW. > > > i am not an expert in python, would someone please tell me what i am > > doing wrong? > > Wrong solution to your problem, I'd say. Let's start again: > > """ > > i need to add properties to instances dynamically during run time. > > this is because their names are determined by the database contents. > """ > > Care to elaborate ? I may be wrong, but I suspect you're trying to roll > your own python/database mapper. If so, there are quite a couple Python > ORMs around. Else, please tell us more. I posted this to another thread, but... You can dynamically add properties (or anything else) to a CLASS just before returning the instance using __new__(): class AClass(object): def __new__(cls): setattr(cls,"propName", property(fget = ..., fset = ..., fdel = ..., doc = ...) ) obj = super(AClass, cls).__new__(cls) return obj - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Re: adding properties dynamically (how to?)
On Aug 17, 7:51 pm, André <[EMAIL PROTECTED]> wrote: > I didn't want to hijack the original thread but I have basically the > same request... > > On Aug 17, 7:09 am, Bruno Desthuilliers<[EMAIL PROTECTED]> wrote: > > akonsu a écrit :> hello, > > [SNIP] > > > > > Wrong solution to your problem, I'd say. Let's start again: > > > """ > > > i need to add properties to instances dynamically during run time. > > > this is because their names are determined by the database contents. > > """ > > > Care to elaborate ? I may be wrong, but I suspect you're trying to roll > > your own python/database mapper. If so, there are quite a couple Python > > ORMs around. Else, please tell us more. > > I'm not the original poster, but I'd like to do the same thing (for a > different reason). > > I have a program (crunchy) that is extensible via plugins. New > options available via plugins can be turned on or off (or selected > among a list of options). I have a module for user preferences (let's > call it prefs.py) that allows the setting of these options (and do > error checking, automatic saving of the options selected for future > sessions, etc.). These options are implemented as properties. > > Currently I have it simplified so that only two lines need to be added > to prefs.py to add new options; something like > options = { ... > 'new_option': [value1, value2, ..., valueN], > ...} > > and > class Preferences(object): > ... > >new_option = make_property('new_option', 'some nicely worded help > string') > > === > make_property is a custom define function that return fgets, fsets, > fdel and doc. > > Ideally, I'd like to be able to define new would-be properties from > the plugin and add them to the class prior to creating instances. In > other words, have something like > > === > for option in options_defined_in_plugins: >add_option_as_property_to_Preferences(Preferences, option, ...) > > user_preferences = Preferences() > > Performance in this case would not be an issue. > > Cheers, > > André Hi, You can dynamically add properties to a class just before returning the instance using __new__(): class AClass(object): def __new__(cls): setattr(cls,"propName", property(fget = ..., fset = ..., fdel = ..., doc = ...) ) obj = super(AClass, cls).__new__(cls) return obj You can put this in a for loop and add a property per option, etc. You can also do this with your own descriptor if you make a custom one. - Rafe -- http://mail.python.org/mailman/listinfo/python-list
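To make the idea above concrete, here is a small runnable sketch. The code and names (options, make_option_property, Preferences._values) are mine, not anything from crunchy; it just shows one property being attached per option inside __new__():

options = {'friendly_greeting': True, 'font_size': 12}

def make_option_property(name):
    # A fresh closure per option, so each property reads/writes its own key.
    def fget(self):
        return self._values.get(name)
    def fset(self, value):
        self._values[name] = value
    return property(fget, fset, None, "auto-generated option %r" % name)

class Preferences(object):
    def __new__(cls):
        # Attach one property per known option before the instance exists.
        for opt in options:
            setattr(cls, opt, make_option_property(opt))
        return super(Preferences, cls).__new__(cls)

    def __init__(self):
        self._values = {}

prefs = Preferences()
prefs.font_size = 14
print prefs.font_size    # -> 14

The same loop could just as well run right after the class statement (setattr(Preferences, opt, ...)), which avoids re-installing the properties on every instantiation; __new__() only really matters if the option list can still change before the first instance is created.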
Re: cathing uncaught exceptions
On Aug 18, 10:02 pm, Alexandru Mosoi <[EMAIL PROTECTED]> wrote: > how can I catch (globally) exception that were not caught in a try/catch block in any running thread? i had this weird case that an exception was raised in one thread, but nothing was displayed/logged.

Any chance you might have missed the word "raise"? E.g.

except Exception, err:
    Exception(err)

vs.

except Exception, err:
    raise Exception(err)

This is from the list of stupid things I have done myself,

- Rafe -- http://mail.python.org/mailman/listinfo/python-list
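On the original question (exceptions that escape a thread without being logged): in Python of this era, sys.excepthook is not consulted for exceptions raised inside worker threads, so one common trick is to wrap Thread.run. The sketch below is my own, not from the thread, and the names install_thread_excepthook and log_uncaught are invented:

import sys, threading, traceback

def install_thread_excepthook(handler):
    original_run = threading.Thread.run
    def run(self):
        try:
            original_run(self)
        except Exception:
            handler(*sys.exc_info())
    threading.Thread.run = run

def log_uncaught(exc_type, exc_value, exc_tb):
    print "Uncaught exception in a thread:"
    traceback.print_exception(exc_type, exc_value, exc_tb)

install_thread_excepthook(log_uncaught)
sys.excepthook = log_uncaught    # also covers the main thread

Threads that override run() themselves won't go through the wrapper, so those still need their own try/except.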
Re: exception handling in complex Python programs
On Aug 20, 12:19 am, eliben <[EMAIL PROTECTED]> wrote: > Python provides a quite good and feature-complete exception handling > mechanism for its programmers. This is good. But exceptions, like any > complex construct, are difficult to use correctly, especially as > programs get large. > > Most of the issues of exceptions are not specific to Python, but I > sometimes feel that Python makes them more acute because of the free-n-easy manner in which it employs exceptions for its own uses and allows > users to do the same. > > Now, what do I mean more specifically... When a program starts growing > large, I find myself a bit scared of all the exceptions that might be > thrown: Python's exceptions as a result of runtime-detection of errors > (Python's dynamic typing also comes into play here), exceptions from > libraries used by the code, and exceptions from my lower-level > classes. > Python doesn't allow to specify which exceptions are thrown (C++'s > feature adding 'throw' after a function/method declaration specifying > the exceptions that can be thrown), and this leaves me at loss - what > should be caught and where ? Which errors should be left to > propagate ? > > I've tried looking around the Python blogosphere, but there doesn't > seem to be much concern with this topic. > > Apologies for the not-too-coherent post, but I suspect you feel the > pain too and can understand my meaning. > > Eli > > P.S. There's a common case where a method is passed a filename, to do > something with a file (say, read data). Should the method catch the > errors possibly thrown by open(), or leave it to the caller ? > > P.P.S. There's a great post on conditions (Common Lisp's exceptions) > here:http://dlweinreb.wordpress.com/2008/03/24/what-conditions-exceptions-... > Not really CL specific, and can apply to Python's exceptions.

Maybe I am oversimplifying (and I am here to learn), but I catch all exceptions which would otherwise be hard for a user to understand. In other words, when a better error message is useful.

Again, this is probably too simple to help, but the only way to ignore certain types of exceptions, as far as I know, is to catch them and pass. E.g. this ignores type errors...

try:
    somethingBad()
except TypeError, err:
    pass
except Exception, err:
    raise TypeError(err)

I suppose you could write a decorator to do this if you want it at the function level, but that seems a bit too broad. Shouldn't exceptions be handled on a case-by-case basis to add protection and return information exactly where it is needed?

- Rafe -- http://mail.python.org/mailman/listinfo/python-list
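Since a decorator came up: a minimal sketch of what that might look like, purely as my own illustration (tolerate() and parse_age() are invented names, not anything from the thread). It swallows chosen exception types and re-raises everything else with a clearer message, mirroring the try/except pattern above:

import functools

def tolerate(*ignored):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except ignored:
                return None    # silently ignore these exception types
            except Exception, err:
                raise RuntimeError("%s failed: %s" % (func.__name__, err))
        return wrapper
    return decorator

@tolerate(TypeError, ValueError)
def parse_age(text):
    return int(text)

print parse_age("42")     # -> 42
print parse_age("n/a")    # -> None

As the post says, doing this per call site is usually clearer; a blanket decorator like this hides exactly which statement failed.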
Re: Handling Property and internal ('__') attribute inheritance and creation
On Aug 16, 1:22 am, Bruno Desthuilliers <[EMAIL PROTECTED]> wrote: > Rafea écrit : > > > Hi, > > > I've been thinking in circles about these aspects of Pythonic design > > and I'm curious what everyone else is doing and thinks. There are 3 > > issues here: > > > 1) 'Declaring' attributes > > There's nothing like "declaration" of variables/attributes/whatever in > Python. > > > - I always felt it was good code practice to > > declare attributes in a section of the class namespace. I set anything > > that is constant but anything variable is set again in __init__(): > > > Class A(object): > > name = "a name" > > type = "a typee" > > childobject = None > > > def __init__(self, obj): > > self.childobject = object > > > This makes it easy to remember and figure out what is in the class. > > Granted there is nothing to enforce this, but that is why I called it > > 'code practice'. Do you agree or is this just extra work? > > It's not only extra work, it's mostly a WTF. You create class attributes > for no other reasons than to mimic some other mainstream languages. If I > was to maintain such code, I'd loose valuable time wondering where these > class attributes are used. > > > > > 2) Internal attributes (starting with 2x'_') aren't inherited. > > Yes they are. But you need to manually mangle them when trying to access > them from a child class method. FWIW, that *is* the point of > __name_mangling : making sure these attributes won't be accidentally > overwritten in a child class. > > > Do you > > just switch to a single '_' when you want an "internal" attribute > > inherited? These are attributes I want the classes to use but not the > > user of these classes. Of course, like anything else in Python, these > > aren't really private. It is just a convention, right? (The example > > for #3 shows this.) > > Yes. The (*very* strong) convention is that > _names_with_simple_leading_underscore denote implementation attributes. > > > > > 3) It isn't possible to override a piece of a Property Descriptor. To > > get around this, I define the necessary functions in the class but I > > define the descriptor in the __new__() method so the inherting class > > can override the methods. Am I overlooking some basic design principle > > here? This seems like a lot of work for a simple behavior. Example: > > > class Base(object): > > def __new__(cls): > > setattr(cls, > > "state", > > property(fget = cls._Get_state, > > fset = cls._Set_state, > > fdel = None, > > doc = cls._doc_state)) > > > obj = super(Base, cls).__new__(cls) > > return obj > > > state = None# Set in __new__() > > _state = True > > _doc_state = "The state of this object" > > def _Get_state(self): return self._state > > def _Set_state(self, value): self._state = value > > pep08 : attribute names (including methods) should be all_lower. > > > class Child(Base): > > def _Get_state(self): > > # Do some work before getting the state. > > print "Getting the state using the child's method" > > return self._state > > > print Child().state > > How often do you really need to override a property ? (hint : as far as > I'm concerned, it never happened so far). Now you have two solutions : > either redefine the whole property in the derived class, or, if you > really intend your property to be overriden, provide a "template method" > hook. > > I'd say you're making things much more complicated than they need to be. Thanks Bruno, and everyone ! These are exactly the type of hard answers I was hoping for. 
I'm mostly converted, but my brain still needs a Pythonic push from time to time. Looks like I have some cleanup to perform... with confidence.

I'm interested to see the implementation of getter, etc. overrides in 2.6/3.0. I have two classes that could be simplified with this. For example, I have a class which does a lot of work and has about 5 key properties. I want to inherit it, and just trigger an event (update something only stored in the child) after 4 of these attributes are finished being set. I was thinking about using a callback which is empty in the parent, or __setattr__() (but I hate using that unless I have to; it is still troublesome to me).

- Rafe -- http://mail.python.org/mailman/listinfo/python-list
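A sketch of the empty-callback idea, in my own invented names (_changed(), Parent, Child; the real classes obviously have different attributes): the parent's property setters call a hook that does nothing, and the child overrides the hook to react once the attributes it cares about have all been set:

class Parent(object):
    def __init__(self):
        self._name = None
        self._size = None

    def _changed(self, attr):
        """Hook for subclasses; the base class does nothing here."""
        pass

    def _set_name(self, value):
        self._name = value
        self._changed("name")
    name = property(lambda self: self._name, _set_name)

    def _set_size(self, value):
        self._size = value
        self._changed("size")
    size = property(lambda self: self._size, _set_size)

class Child(Parent):
    watched = ("name", "size")

    def _changed(self, attr):
        # React only once every watched attribute has been given a value.
        if all(getattr(self, "_" + a) is not None for a in self.watched):
            print "all watched attributes set, updating child cache"

c = Child()
c.name = "thing"
c.size = 12    # -> all watched attributes set, updating child cache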
Re: Passing an object through COM which acts like str but isn't
On Aug 16, 1:25 am, Wolfgang Grafen <[EMAIL PROTECTED]> wrote: > Rafe schrieb: > > > On Aug 15, 10:27 pm, Wolfgang Grafen <[EMAIL PROTECTED]> > > wrote: > >> Rafe schrieb: > > >>> Now if I try to pass this as I would a string, roughly like so... > >> s = StrLike("test") > >> Application.AnObject.attribute = "test" # works fine > >> Application.AnObject.attribute = s > >>> ERROR : Traceback (most recent call last): > >>> File "
Handling Property and internal ('__') attribute inheritance and creation
Hi,

I've been thinking in circles about these aspects of Pythonic design and I'm curious what everyone else is doing and thinks. There are 3 issues here:

1) 'Declaring' attributes - I always felt it was good code practice to declare attributes in a section of the class namespace. I set anything that is constant, but anything variable is set again in __init__():

class A(object):
    name = "a name"
    type = "a type"
    childobject = None

    def __init__(self, obj):
        self.childobject = obj

This makes it easy to remember and figure out what is in the class. Granted, there is nothing to enforce this, but that is why I called it 'code practice'. Do you agree or is this just extra work?

2) Internal attributes (starting with 2x '_') aren't inherited. Do you just switch to a single '_' when you want an "internal" attribute inherited? These are attributes I want the classes to use but not the user of these classes. Of course, like anything else in Python, these aren't really private. It is just a convention, right? (The example for #3 shows this.)

3) It isn't possible to override a piece of a property descriptor. To get around this, I define the necessary functions in the class but I define the descriptor in the __new__() method so the inheriting class can override the methods. Am I overlooking some basic design principle here? This seems like a lot of work for a simple behavior. Example:

class Base(object):
    def __new__(cls):
        setattr(cls,
                "state",
                property(fget = cls._Get_state,
                         fset = cls._Set_state,
                         fdel = None,
                         doc = cls._doc_state))
        obj = super(Base, cls).__new__(cls)
        return obj

    state = None    # Set in __new__()
    _state = True
    _doc_state = "The state of this object"
    def _Get_state(self): return self._state
    def _Set_state(self, value): self._state = value

class Child(Base):
    def _Get_state(self):
        # Do some work before getting the state.
        print "Getting the state using the child's method"
        return self._state

print Child().state

Please share your thoughts, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
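For question 3, the "template method" hook that Bruno mentions in his reply boils down to one level of indirection, so the lookup happens on self at call time. The sketch below is my own wording of that idea, not code from the thread:

class Base(object):
    def __init__(self):
        self._state = True

    def _get_state(self):
        return self._state

    def _set_state(self, value):
        self._state = value

    # The lambdas delegate to methods looked up on the instance, so a
    # subclass can override _get_state()/_set_state() without redefining
    # the property and without any __new__() trick.
    state = property(lambda self: self._get_state(),
                     lambda self, value: self._set_state(value),
                     None, "The state of this object")

class Child(Base):
    def _get_state(self):
        print "Getting the state using the child's method"
        return self._state

print Child().state
# -> Getting the state using the child's method
# -> True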
Re: Passing an object through COM which acts like str but isn't
On Aug 15, 10:27 pm, Wolfgang Grafen <[EMAIL PROTECTED]> wrote: > Rafe schrieb: > > > Forgive me if I mangle any terminology here, but please correct me if > > I do... > > > I have an object which acts exactly like a string as long as I stay in > > Python land. However, I am using the object in Softimage|XSI, a 3D > > application on Windows. I'm getting variant erros when trying to use > > my instances as I would a string. > > > XSI was created while (briefly) owned by Microsoft, so knowledge of > > COM with excel, or anything else, should be applicable I should think. > > I should also say I am a COM novice and still learning the depths of > > Python. > > > Here is an example... > > > class StrLike(object): > > def __init__(self, s): self.__data = s > > def __repr__(self): return repr(self.__data) > > def __cmp__(self, string): return cmp(self.__data, string) > > def __contains__(self, char): return char in self.__data > > > __data = "" > > def __Set(self, value): self.__data = value > > def __Get(self): return self.__data > > data = property(fget = __Get, > > fset = __Set, > > fdel = None, > > doc = "string-like example") > > >>>> s = StrLike("test") > >>>> s > > 'test' > >>>> if s == "test": print "cmp works" > > cmp works > > > Now if I try to pass this as I would a string, roughly like so... > >>>> s = StrLike("test") > >>>> Application.AnObject.attribute = "test" # works fine > >>>> Application.AnObject.attribute = s > > ERROR : Traceback (most recent call last): > > File "
Re: Parsing and Editing Source
On Aug 15, 9:21 pm, "Paul Wilson" <[EMAIL PROTECTED]> wrote: > Hi all, > > I'd like to be able to do the following to a python source file > programmatically: > * Read in a source file > * Add/Remove/Edit Classes, methods, functions > * Add/Remove/Edit Decorators > * List the Classes > * List the imported modules > * List the functions > * List methods of classes > > And then save out the result back to the original file (or elsewhere). > > I've begun by using the tokenize module to generate a token-tuple list > and am building datastructures around it that enable the above > methods. I'm find that I'm getting a little caught up in the details > and thought I'd step back and ask if there's a more elegant way to > approach this, or if anyone knows a library that could assist. > > So far, I've got code that generates a line number to token-tuple list > dictionary, and am working on a datastructure describing where the > classes begin and end, indexed by their name, such that they can be > later modified. > > Any thoughts? > Thanks, > Paul

I can't help much... yet, but I am also heavily interested in this, as I will be approaching a project which will require me to write code which writes code back to a file or new file after being manipulated. I had planned on using the inspect module's getsource(), getmodule() and getmembers() methods rather than doing any sort of file reading. Have you tried any of these yet? Have you found any insurmountable limitations? It looks like everything needed is there.

Some quick thoughts regarding inspect.getmembers(module) results (see the sketch after this list)...

* Module objects can be written based on their attribute name and __name__ values. If they are the same, then just write "import %s" % mod.__name__. If they are different, write "import %s as %s" % (mod.__name__, name).

* Skipping built-in stuff is easy, and everything else is either an attribute name/value pair or an object of type 'function' or 'class', both of which work with inspect.getsource(), I believe.

* If the module used any from-import-* lines, it doesn't look like there is any difference between items defined in the module and those imported into the module's namespace. Writing this back directly would 'flatten' those lines into individual module imports and local module attributes. Maybe reading the file just to test for this would be the answer. You could then import the module and subtract items which haven't changed. This is easy for attributes but harder for functions and classes... right?

Beyond this initial bit of code, I'm hoping to be able to write new code where I only want the new object to have attributes which were changed. So if I have an instance of a Person object whose name has been changed from its default, I only want a new class which inherits the Person class and has an attribute 'name' with the new value. Basically using Python as a text-based storage format instead of something like XML. Thoughts on this would be great for me if it doesn't hijack the thread ;) I know there are quite a few who have done this already.

Cheers, - Rafe -- http://mail.python.org/mailman/listinfo/python-list
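As an illustrative sketch only (describe_module() is an invented name, and this is not code from the thread), the inspect-based pass described above might look roughly like this: regenerate import lines from module attributes and dump the source of functions and classes actually defined in the module.

import inspect, types

def describe_module(module):
    lines = []
    for name, value in inspect.getmembers(module):
        if name.startswith("__"):
            continue
        if isinstance(value, types.ModuleType):
            # Recreate "import x" or "import x as y" from the binding.
            if name == value.__name__:
                lines.append("import %s" % name)
            else:
                lines.append("import %s as %s" % (value.__name__, name))
        elif inspect.isfunction(value) or inspect.isclass(value):
            # Keep only things actually defined in this module; this skips
            # names dragged in by "from x import *".
            if inspect.getmodule(value) is module:
                lines.append(inspect.getsource(value))
        else:
            lines.append("%s = %r" % (name, value))
    return "\n".join(lines)

# usage: print describe_module(some_module)

The last branch is the weak spot: repr() only round-trips for simple values, which is exactly the "Python as a text-based storage format" limitation mentioned above.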
Re: Passing an object through COM which acts like str but isn't
Forgive me if I mangle any terminology here, but please correct me if I do...

I have an object which acts exactly like a string as long as I stay in Python land. However, I am using the object in Softimage|XSI, a 3D application on Windows. I'm getting variant errors when trying to use my instances as I would a string.

XSI was created while (briefly) owned by Microsoft, so knowledge of COM with Excel, or anything else, should be applicable, I should think. I should also say I am a COM novice and still learning the depths of Python.

Here is an example...

class StrLike(object):
    def __init__(self, s): self.__data = s
    def __repr__(self): return repr(self.__data)
    def __cmp__(self, string): return cmp(self.__data, string)
    def __contains__(self, char): return char in self.__data

    __data = ""
    def __Set(self, value): self.__data = value
    def __Get(self): return self.__data
    data = property(fget = __Get,
                    fset = __Set,
                    fdel = None,
                    doc = "string-like example")

>>> s = StrLike("test")
>>> s
'test'
>>> if s == "test": print "cmp works"
cmp works

Now if I try to pass this as I would a string, roughly like so...

>>> s = StrLike("test")
>>> Application.AnObject.attribute = "test"   # works fine
>>> Application.AnObject.attribute = s
ERROR : Traceback (most recent call last):
  File "
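The traceback above is cut off in the archive, but for what it's worth: as far as I know, the COM bridge has to turn whatever it is handed into a VARIANT, and it only knows how to do that for genuine basic types (and COM objects), not arbitrary Python wrappers. Two workarounds that are often suggested are sketched below; this is my own guess at a fix, not anything confirmed in the thread, and the Name class and is_valid() method are invented:

class Name(str):
    """A real str subclass, so the COM layer should see an ordinary string."""
    def is_valid(self):
        # Example of extra behaviour layered on top of str.
        return bool(self) and not self[0].isdigit()

n = Name("test")
print n == "test", n.is_valid()    # -> True True

# Inside XSI (hypothetical, mirroring the snippet above):
# Application.AnObject.attribute = n         # a str subclass marshals as a string
# Application.AnObject.attribute = str(s)    # or convert the wrapper explicitly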
Re: Passing an object through COM which acts like str but isn't
The previous message was posted prematurely. Please ignore it. (I hit enter with the wrong focus, I guess... no confirmation or edit available? This was my first post.) - Rafe -- http://mail.python.org/mailman/listinfo/python-list
Passing an object through COM which acts like str but isn't
Forgive me if I mangle any terminology here, but please correct me if I do... I have an object which acts exactly like a string as long as I stay in Python land. However, I am using the object in Softimage|XSI a 3D application on Windows. It was created while (briefly) owned by Microsoft, so knowledge of COM with excel or anything else should be applicable I should think. I should also say I am a COM novice and still learning Python (there are few that aren't learning though I suppose). Here is an example: class Name(object): def __init__(self, s): self.__data = s def __repr__(self): return repr(self.__data) def __cmp__(self, string): return cmp(self.__data, string) def __contains__(self, char): return char in self.__data __data = "Test" __doc = "Test" def __Set(self, value): self.__data = value def __Get(self): return self.__data data = property(fget = __Get, fset = __Set, fdel = None, doc = "string-like example") It also uses some new-style class Property -- http://mail.python.org/mailman/listinfo/python-list