Re: logging module -- better timestamp accuracy on Windows
> For example, are you assuming that your clock() call in logging is
> the very first call made?

Yes, we were making that assumption (the time.clock() call happens when
our log module is imported), which was true in our code, but I can see
why it's not a good thing to assume in general.

> Also, IIUC the resolution of clock() is < 1 usec, but as logging only
> prints to the nearest msec, won't you lose much of the benefit of the
> increased resolution? ...
> Or are you saying that the times should be formatted/printed to
> microsecond accuracy?

No, millisecond is fine. The 0.56 ms example I gave in response to Ross
was a really bad example -- it's more like when it's 5.6 ms that it's a
problem for us, because the request timer says 5.6 ms, but the log
timestamps within and at the end of that request are identical.

Anyway, as sturlamolden mentioned, time.clock() has long-term accuracy
issues (it gets out of sync with time.time()), so that's not really a
good solution either. So the way it is is non-ideal, but that's more a
fact of life due to the Windows time functions than anything else. It's
not trivial to solve, so we're going to leave it for now, and just use
time.clock() to time individual pieces of code when we need more
accuracy...

-Ben
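Concretely, the fallback is a little stopwatch along these lines -- an
illustrative sketch, not our production code (filter_stuff() is just a
stand-in for whatever we want to time):

import time
from contextlib import contextmanager

@contextmanager
def stopwatch(label):
    # time.clock() has high resolution on Windows, so it's fine for
    # timing short, individual pieces of code like this.
    start = time.clock()
    yield
    elapsed_ms = (time.clock() - start) * 1000.0
    print('%s took %.2f ms' % (label, elapsed_ms))

# Usage:
# with stopwatch('filtering'):
#     filter_stuff()   # hypothetical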
Re: logging module -- better timestamp accuracy on Windows
> AFAIK, the Windows performance counter has long-term accuracy issues,
> so neither is perfect. Preferably we should have a timer with the
> long-term accuracy of time.time and the short-term accuracy of
> time.clock.

Thanks for the tip -- yes, I hadn't thought about that, but you're
right: QueryPerformanceCounter (and hence time.clock) veers away from
the system time, and it's non-trivial to fix. See also:

http://msdn.microsoft.com/en-us/magazine/cc163996.aspx
http://social.msdn.microsoft.com/forums/en-US/windowsgeneraldevelopmentissues/thread/a8b2c286-1133-4827-97be-61e27687ff5d

-Ben
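One rough way to approximate the "long-term accuracy of time.time,
short-term accuracy of time.clock" timer suggested above is to anchor
time.clock() to time.time() and re-anchor whenever they drift too far
apart. An untested sketch (the 0.5 s drift threshold is arbitrary):

import time

class HybridTimer(object):
    """Rough hybrid clock: time.time() anchor plus time.clock() offsets.

    Re-anchors whenever the two sources disagree by more than max_drift
    seconds, so long-term drift stays bounded while short intervals keep
    time.clock()'s resolution.
    """
    def __init__(self, max_drift=0.5):
        self.max_drift = max_drift
        self._anchor_wall = time.time()
        self._anchor_clock = time.clock()

    def now(self):
        t = self._anchor_wall + (time.clock() - self._anchor_clock)
        if abs(t - time.time()) > self.max_drift:
            # Drifted too far from the system clock; re-anchor.
            self._anchor_wall = time.time()
            self._anchor_clock = time.clock()
            t = self._anchor_wall
        return t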
Re: logging module -- better timestamp accuracy on Windows
> A simpler solution would be to calculate the time it takes to handle
> the request using time.clock() and include it in the log message.
> Something like:

Thanks, Ross. Actually, we are doing exactly that already -- it's how we
noticed the timestamp issue in the first place. However, that doesn't
help when we have multiple logged events that we want to calculate time
deltas between, such as:

2011-02-15T10:11:12.123 Starting request
2011-02-15T10:11:12.123 Doing stuff
2011-02-15T10:11:12.123 Filtering stuff
2011-02-15T10:11:12.123 Rendering template
2011-02-15T10:11:12.123 Request complete, took 0.56 ms

It seems to me that the logging module should use a millisecond-accurate
timestamp (time.clock) on Windows, just like the "timeit" module does.

-Ben
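For reference, the pattern we're already using is roughly this -- an
illustrative sketch, with handle_request() standing in for our real
request handler:

import logging
import time

logging.basicConfig(format='%(asctime)s %(message)s', level=logging.INFO)

def timed_request(environ):
    start = time.clock()
    response = handle_request(environ)   # hypothetical: our real handler
    elapsed_ms = (time.clock() - start) * 1000.0
    logging.info('Request complete, took %.2f ms', elapsed_ms)
    return response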
logging module -- better timestamp accuracy on Windows
The Python logging module calls time.time() in LogRecord.__init__ to
fetch the timestamp of the log record. However, time.time() isn't
particularly accurate on Windows. We're logging the start and end of
requests in our web server, which can be only milliseconds apart; the
log timestamps often show up as identical, even though time.clock() is
telling us several milliseconds have actually elapsed.

The fix is to use time.clock() if running on win32 (like "timeit" does).
Here's how I've improved the accuracy for us:

import logging
import sys
import time

if sys.platform == 'win32':
    # Running on win32, time.clock() is much more accurate than
    # time.time(), so use it for LogRecord timestamps

    # Get the initial time and call time.clock() once to "start" it
    _start_time = time.time()
    time.clock()

    def _formatTimeAccurate(self, record, datefmt):
        # This is a bit nasty, as it modifies record.created and
        # record.msecs, but apart from monkey-patching
        # Formatter.__init__, how else do we do it?
        accurate_time = _start_time + time.clock()
        record.created = time.localtime(accurate_time)
        record.msecs = (accurate_time - int(accurate_time)) * 1000
        return time.strftime(datefmt, record.created)

    # Override logging.Formatter's formatTime() so all logging calls
    # go through this
    logging.Formatter.formatTime = _formatTimeAccurate

This works, but as you can see, it's a bit hacky. Is there a better way
to fix it? (I'd like the fix to affect all loggers, including the root
logger.)

I'm somewhat surprised that no one else has run into this before. Maybe
I'm the only one who uses logging heavily under Windows ... :-)

Thanks,
Ben.
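An alternative I considered (untested sketch) is a Formatter subclass
rather than a monkey-patch, but then it only affects handlers you
explicitly give the formatter to, so it doesn't automatically cover the
root logger and every other logger the way the patch above does:

import logging
import sys
import time

if sys.platform == 'win32':
    _start_time = time.time()
    time.clock()          # "start" the high-resolution counter

class AccurateFormatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        if sys.platform == 'win32':
            # Overwrite the time.time()-based timestamp with a
            # time.clock()-based one before formatting.
            accurate = _start_time + time.clock()
            record.created = accurate
            record.msecs = (accurate - int(accurate)) * 1000
        return logging.Formatter.formatTime(self, record, datefmt)

handler = logging.StreamHandler()
handler.setFormatter(AccurateFormatter('%(asctime)s %(message)s'))
logging.getLogger().addHandler(handler)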
Re: Is there a way to get __thismodule__?
Wow -- thanks, guys. And who said Python only gives you one way to do
things. :-) Metaclasses, globals(), and __subclasses__().

Thanks, Duncan, for the __subclasses__ tip -- I didn't know about that.

I'd totally overlooked globals(). It's exactly what I was looking for --
thanks, Peter. And I like your is_true_subclass() helper function too.

I must say, I found it a bit weird that the first argument to
issubclass() *has* to be a class. I would have thought
issubclass(42, MyClass) would simply return False, because 42 is
definitely not a subclass of MyClass. But I guess "explicit is better
than implicit", and being implicit here might mask bugs.

globals() feels like the "right" way to do what I want -- fairly simple
and direct. Metaclasses are cool, but probably a touch too complicated
for this. And __subclasses__() seems a bit under-documented and perhaps
implementation-defined.

Cheers,
Ben.
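For the record, the globals() approach I ended up with looks roughly
like this (my paraphrase of Peter's suggestion, and my guess at what his
is_true_subclass() helper does -- not his exact code; Message is the
base class from my original post):

def is_true_subclass(obj, cls):
    # True only for classes that derive from cls, excluding cls itself
    return isinstance(obj, type) and issubclass(obj, cls) and obj is not cls

nmap = {}
for name, attr in list(globals().items()):
    if is_true_subclass(attr, Message):
        nmap[attr.number] = attr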
Re: Is there a way to get __thismodule__?
Replying to myself here, after discovering more. :-)

> Is there a way to get __thismodule__ in Python?

It looks like __thismodule__ is just sys.modules[__name__]. Neat.

Hmmm ... does sys.modules always already contain the
currently-being-loaded module? Or is this a hack that only happens to
work? (It does; I've tested it now.) Just wondering, because the Python
docs say that sys.modules is "a dictionary that maps module names to
modules which have *already been loaded*."

> if isinstance(attr, Message):
>     nmap[attr.number] = attr

Oops, this was untested code. I actually meant issubclass (or something
similar) here, not isinstance.

Cheers,
Ben.
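In other words, something like this (a lightly tested sketch that
mirrors the example from my original post, with the
isinstance/issubclass mix-up fixed; it assumes the Message base class
from that post):

import sys

this_module = sys.modules[__name__]

nmap = {}
for name in dir(this_module):
    attr = getattr(this_module, name)
    # Only register actual subclasses of Message, not Message itself
    if isinstance(attr, type) and issubclass(attr, Message) and attr is not Message:
        nmap[attr.number] = attr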
Is there a way to get __thismodule__?
Is there a way to get __thismodule__ in Python? That is, the current
module you're in. Or isn't that known until the end of the module?

For instance, if I'm writing a module of message types/classes, like so:

class SetupMessage(Message):
    number = 1

class ResetMessage(Message):
    number = 2

class OtherMessage(Message):
    number = 255

nmap = {  # maps message numbers to message classes
    1: SetupMessage,
    2: ResetMessage,
    255: OtherMessage,
}

Or something similar. But adding each message class manually to the dict
at the end feels like repeating myself, and is error-prone. It'd be nice
if I could just create the dict automatically, something like so:

nmap = {}
for name in dir(__thismodule__):
    attr = getattr(__thismodule__, name)
    if isinstance(attr, Message):
        nmap[attr.number] = attr

Or something similar. Any ideas? (A friend suggested class decorators,
which is a good idea, except that they're not here until Python 2.6 or
Python 3000.)

Cheers,
Ben.
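(For anyone curious, the class decorator version my friend had in mind
would presumably look something like this in 2.6+ -- my guess, not his
code:)

nmap = {}

def message(cls):
    """Class decorator that registers a Message subclass by its number."""
    nmap[cls.number] = cls
    return cls

@message
class SetupMessage(Message):
    number = 1

@message
class ResetMessage(Message):
    number = 2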
Re: Double underscores -- ugly?
> Has anyone thought about alternatives? Is there a previous discussion
> on this I can look up?

Okay, I just emailed the BDFL and asked if he could tell me the origin
of the double underscore syntax for __special__ methods, and what he
said I'm pretty sure he won't mind me posting here:

> [Guido van Rossum said:]
> The specific naming convention was borrowed from the C standard, which
> reserves names like __FILE__ and __LINE__. The reason for needing a
> convention for "system-assigned" names was that I didn't want users to
> be surprised by the system giving a special meaning to methods or
> variables they had defined without intending that special meaning,
> while at the same time not wanting to introduce a physically separate
> namespace (e.g. a separate dict) for system names. I have no regrets.

After that and this thread, I'm pretty good with it, I guess. :-)

Cheers,
Ben.
Re: Double underscores -- ugly?
> > Then again, what's stopping us just using a single leading
> > underscore? Nobody calls their own private methods _init or _add
>
> You must be looking at different code from the rest of us. A single
> leading underscore on the name *is* the convention for "this attribute
> is not part of the external interface", which is about as "private" as
> Python normally gets.

Yeah, I understand that. I meant -- and I could be wrong -- that I
haven't seen people creating a method called "init" with a single
leading underscore. It's too similar to "__init__", so they would use
some other name. In other words, defining _init to mean what __init__
now means probably wouldn't cause name conflicts.

-Ben
Re: Double underscores -- ugly?
> My editor actually renders [underscores] as miniature chess pieces.
> The bartender said she runs a pre-execution step that searches and
> replaces a double-colon with the underscores.

Heh, that makes me think. Great use for character encodings! Just like
Tim Hatch's "pybraces", we could use any character combo we liked and
just implement a character encoding for it. See:

http://timhatch.com/projects/pybraces/

Just kidding.

Seriously, though, I give in. There's no perfect solution, and it's
probably just a matter of getting over it. Though I wouldn't mind
Raymond Hettinger's suggestion of using single underscores as in _init_.
Only half as ugly. :-)

Then again, what's stopping us just using a single leading underscore?
Nobody calls their own private methods _init or _add ... and that's only
1/4 the ugliness, which is getting pretty good.

-Ben
Re: Double underscores -- ugly?
> [Terry Jan Reedy]
> No, the reserved special names are supposed to be ugly ;-) -- or at
> least to stand out. However, since special methods are almost always
> called indirectly by syntax and not directly, only the class writer or
> reader, but not users, generally see them.

Fair enough, but my problem is that class writers are users too. :-)

-Ben
Double underscores -- ugly?
Hi guys,

I've been using Python for some time now, and am very impressed with its
lack of red tape and its clean syntax -- both probably due to the BDFL's
ability to know when to say "no". Most of the things that "got me"
initially have been addressed in recent versions of Python, or are being
addressed in Python 3000. But it looks like the double underscores are
staying as is. This is probably a good thing unless there are better
alternatives, but ...

Is it just me that thinks "__init__" is rather ugly? Not to mention
"if __name__ == '__main__': ..."? I realise that double underscores make
the language conceptually cleaner in many ways (because fancy syntax and
operator overloading are just handled by methods), but they don't *look*
nice.

A solution could be as simple as syntactic sugar that converted to
double underscores behind the scenes. A couple of ideas that come to my
mind (though these have their problems too):

def ~init(self):                # shows it's special, but too like a C++ destructor
def +init(self):                # a bit too additive :-)
defop add(self, other):         # or this, equivalent to "def __add__"
def operator add(self, other):  # new keyword, and a bit wordy

Has anyone thought about alternatives? Is there a previous discussion on
this I can look up?

Cheers,
Ben.