Re: Style question -- plural of class name?

2013-05-09 Thread Thomas Rachel

On 09.05.2013 02:38, Colin J. Williams wrote:

On 08/05/2013 4:20 PM, Roy Smith wrote:


"A list of FooEntry's"  +1


Go back to school. Both of you...

That is NOT the way to build a plural form...


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: object.enable() anti-pattern

2013-05-10 Thread Thomas Rachel

On 10.05.2013 15:22, Roy Smith wrote:


That's correct.  But, as described above, the system makes certain
guarantees which allow me to reason about the existence or non-existence
of such entries.


Nevertheless, your 37 is not a FD yet.

Let's take your program:


#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <unistd.h>

int main(int argc, char** argv) {
    int max_files = getdtablesize();
    assert(max_files >= 4);


Until here, the numbers 3 to max_files may or may not be FDs.


    for (int i = 3; i < max_files; ++i) {
        close(i);
    }


Now they are closed; they are definitely no longer FDs, even if they 
were before. If you used one of them in a file operation, you'd get an 
EBADF, which means "fd is not a valid file descriptor".



 dup(2);



From now on, 3 is a FD and you can use it as such.


    char* message = "hello, fd world\n";
    write(3, message, strlen(message));
}




No, what I've done is taken advantage of behaviors which are guaranteed
by POSIX.


Maybe, but the integer numbers gain or lose their property of being a 
file descriptor with open() and close(), and not by assigning them to an int.
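The same point can be made from Python (a sketch of the argument, using os.pipe() to create real FDs): a number only becomes a file descriptor through open()/pipe()/dup(), and stops being one through close().

```python
import errno
import os

r, w = os.pipe()          # r and w are now real FDs
os.write(w, b"x")
assert os.read(r, 1) == b"x"
os.close(r)
os.close(w)
try:
    os.write(w, b"x")     # w is just an int again, no longer an FD
except OSError as e:
    assert e.errno == errno.EBADF
```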



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Piping processes works with 'shell = True' but not otherwise.

2013-05-29 Thread Thomas Rachel

On 27.05.2013 02:14, Carlos Nepomuceno wrote:

pipes usually consumes disk storage at '/tmp'.


Good that my pipes don't know about that.

Why should that happen?
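An anonymous OS pipe is a kernel buffer between two file descriptors; nothing under /tmp (or anywhere else on disk) is created for it, as this quick check illustrates:

```python
import os

# Data written to one end of the pipe is read back from the other;
# no temporary file is involved at any point.
r, w = os.pipe()
os.write(w, b"data")
assert os.read(r, 4) == b"data"
os.close(r)
os.close(w)
```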


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: A certainl part of an if() structure never gets executed.

2013-06-26 Thread Thomas Rachel

On 12.06.2013 03:46, Rick Johnson wrote:

On Tuesday, June 11, 2013 8:25:30 PM UTC-5, [email protected] wrote:


is there a shorter and clearer way to write this?
i didn't understand what Rick tried to tell me.


My example included verbatim copies of interactive sessions within the Python 
command line. You might understand them better if you open the Python command 
line and type each command in one-by-one. Here is an algorithm that explains the 
process:

open_command_window()
whilst learning or debugging:
    type_command()
    press_enter()
    observe_output()
    if self.bladder.is_full:
        take_potty_break()
close_command_window()


with command_window():
    whilst learning or debugging:
        type_command()
        press_enter()
        observe_output()
        if self.bladder.is_full:
            take_potty_break()

looks nicer.

SCNR


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is that safe to use ramdom.random() for key to encrypt?

2012-06-19 Thread Thomas Rachel

On 18.06.2012 01:48, Paul Rubin wrote:

Steven D'Aprano  writes:

/dev/urandom isn't actually cryptographically secure; it promises not to
block, even if it has insufficient entropy. But in your instance...


Correct. /dev/random is meant to be used for long-lasting
cryptographically-significant uses, such as keys. urandom is not.


They are both ill-advised if you're doing anything really serious.


Hm?


> In practice if enough entropy has been in the system to make a key with

/dev/random, then urandom should also be ok.


Right.


> Unfortunately the sensible

interface is missing: block until there's enough entropy, then generate
data cryptographically, folding in new entropy when it's available.


What am I missing? You exactly describe /dev/random's interface.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Generator vs functools.partial?

2012-06-21 Thread Thomas Rachel

On 21.06.2012 13:25, John O'Hagan wrote:


But what about a generator?


Yes, but...


def some_func():
    arg = big_calculation()
    while 1:
        i = yield
        (do_something with arg and i)

some_gen = some_func()
some_gen.send(None)
for i in lots_of_items:
    some_gen.send(i)


rather

def some_func(it):
    arg = big_calculation()
    for i in it:
        do_something(arg, i)

some_func(lots_of_items)
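A concrete toy version of both variants (with `big_calculation` and `do_something` replaced by made-up stand-ins) shows they process the items identically; the expensive setup runs only once either way:

```python
def consumer():
    # generator variant: arg is computed once, items are pushed in via send()
    arg = 10            # stand-in for big_calculation()
    out = []
    while True:
        i = yield out
        out.append(arg + i)

gen = consumer()
res = gen.send(None)    # prime the generator up to the first yield
for i in [1, 2, 3]:
    res = gen.send(i)

def plain(items):
    # plain-function variant: arg is computed once, items are pulled from an iterable
    arg = 10
    return [arg + i for i in items]

assert res == plain([1, 2, 3]) == [11, 12, 13]
```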


HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's wrong with this code?

2012-07-24 Thread Thomas Rachel

On 23.07.2012 16:50, Stone Li wrote:

I'm totally confused by this code:

Code:

a = None
b = None
c = None
d = None
x = [[a,b],
  [c,d]]
e,f = x[1]
print e,f
c = 1
d = 2
print e,f
e = 1
f = 2
print c,d

Output:

None None
None None
1 2


I'm expecting the code as:

None None
1 2
1 2


What's wrong?


Your expectation :-)

With c = 1 and d = 2 you do not change the respective objects; you 
rebind the names to other objects.


The old content is still contained in x[1].

If you could modify these objects in place (not possible, as ints are 
immutable), you would see the changes in both places.
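The distinction can be demonstrated with mutable objects (lists here, as an illustrative stand-in for the ints above): rebinding a name never changes the object it used to refer to, while mutating the object is visible through every reference to it.

```python
a = [None]
x = [a]
a = [1]            # rebinds the *name* a; x still holds the old list
assert x == [[None]]

b = [None]
y = [b]
b.append(1)        # mutates the object itself
assert y == [[None, 1]]
```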



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: the meaning of r`.......`

2012-07-24 Thread Thomas Rachel

On 23.07.2012 17:59, Steven D'Aprano wrote:

>> Before you

get a language that uses full Unicode, you'll need to have fairly
generally available keyboards that have those keys.


Or at least keys or key combinations for the stuff you need, which might 
differ e. g. with the country you live in. There are countries which 
have keyboards with äöüß, others with èàéî, and so on.




Or sensible, easy to remember mnemonics for additional characters. Back
in 1984, Apple Macs made it trivial to enter useful non-ASCII characters
from the keyboard. E.g.:

Shift-4 gave $
Option-4 gave ¢
Option-c gave ©
Option-r gave ®


So what? If I type Shift-3 here, I get a § (U+00A7). And the ° (U+00B0) 
comes with Shift-^, the µ (U+00B5) with AltGr-M and the € sign with AltGr+E.



Dead-keys made accented characters easy too:

Option-u o gave ö
Option-u e gave ë



And if I had a useful OS here at work, I could even use the compose key 
to produce many other non-ASCII characters. Being able to type each and 
every one of them is not needed for a language to support them; just 
the ones you need.


Useful editors use them as well, even though you don't have all of them 
on your keyboard.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: What's wrong with this code?

2012-07-24 Thread Thomas Rachel

On 24.07.2012 09:47, Ulrich Eckhardt wrote:


[0] Note that in almost all cases, when referring to a tag, Python
implicitly operates on the object attached to it. One case (the only
one?) where it doesn't is the "del" statement.


del and =, as far as the left side is concerned.

But even those don't do that under all circumstances. Think about 
__setitem__, __setattr__, __set__, __delitem__, __delattr__, __delete__.
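For instance, __setattr__ intercepts every "obj.attr = value" on instances, so even assignment is overridable in that case; only bare-name assignment is not. A minimal sketch (the `Upper` class is a made-up example):

```python
class Upper(object):
    def __setattr__(self, name, value):
        # store every attribute under its upper-cased name instead
        object.__setattr__(self, name.upper(), value)

u = Upper()
u.x = 1
assert u.X == 1
assert not hasattr(u, 'x')
```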



Thomas



--
http://mail.python.org/mailman/listinfo/python-list


Re: On-topic: alternate Python implementations

2012-08-04 Thread Thomas Rachel

On 04.08.2012 11:10, Stefan Behnel wrote:


As long as you don't use any features of the Cython language, it's plain
Python. That makes it a Python compiler in my eyes.


Tell that to the C++ guys. C++ is mainly a superset of C. But nevertheless, 
C and C++ are distinct languages, and so are Python and Cython.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: set and dict iteration

2012-09-08 Thread Thomas Rachel

On 19.08.2012 00:14, MRAB wrote:


Can someone who is more familiar with the cycle detector and cycle
breaker, help prove or disprove the above?


In simple terms, when you create an immutable object it can contain
only references to pre-existing objects, but in order to create a cycle
you need to make an object refer to another which is created later, so
it's not possible to create a cycle out of immutable objects.


Yes, but if I add a list in-between, I can create a refcycle:

a = []
b = (a,)
a.append(b)

So b is a tuple consisting of one list which in turn contains b.

It is not a direct cycle, but an indirect one.

Or would that be detected via the list?
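It is detected: the cycle contains a mutable link (the list), and the garbage collector finds it by traversing that list, as this check suggests (in CPython; gc.collect() returns the number of unreachable objects found):

```python
import gc

gc.disable()
gc.collect()               # start from a clean slate
a = []
b = (a,)
a.append(b)                # tuple -> list -> tuple cycle
del a, b                   # the cycle is now unreachable
n = gc.collect()
gc.enable()
assert n >= 2              # both the list and the tuple were collected
```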


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: generators as decorators simple issue

2012-09-11 Thread Thomas Rachel

On 12.09.2012 04:28, [email protected] wrote:

I'm trying to call SetName on an object to prevent me from ever having to call 
it explicitly again on that object. Best explained by example.


def setname(cls):
    '''this is the proposed generator to call SetName on the object'''
    try:
        cls.SetName(cls.__name__)
    finally:
        yield cls


class Trial:
    '''class to demonstrate with'''
    def SetName(self, name):
        print 1, 1

@setname
class Test(Trial):
    '''i want SetName to be called by using setname as a decorator'''
    def __init__(self):
        print 'Yay! or Invalid.'

if __name__ == '__main__':
    test = Test()


How can i fix this?


I am not sure what exactly you want to achieve, but I see 2 problems here:

1. Your setname operates on a class, but your SetName() is an instance 
method.


2. I don't really understand the try...finally yield stuff. As others 
already said, you probably just want to return. I don't see what a 
generator would be useful for here...


def setname(cls):
    '''plain decorator: call SetName on the class, then return it'''
    cls.SetName(cls.__name__)
    return cls

and

class Trial(object):
    '''class to demonstrate with'''
    @classmethod
    def SetName(cls, name):
        print 1, 1

should solve your problems.
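Both fixes combined into one runnable sketch (Python 3 syntax; SetName here simply stores the name on the class, as a stand-in for whatever the real SetName does):

```python
def setname(cls):
    # plain class decorator: call SetName once, then return the class
    cls.SetName(cls.__name__)
    return cls

class Trial(object):
    @classmethod
    def SetName(cls, name):
        cls.name = name

@setname
class Test(Trial):
    pass

assert Test.name == 'Test'
assert Test().name == 'Test'
```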
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to get os.py to use an ./ntpath.py instead of Lib/ntpath.py

2012-09-11 Thread Thomas Rachel

On 11.09.2012 05:46, Steven D'Aprano wrote:


Good for you. (Sorry, that comes across as more condescending than it is
intended as.) Monkey-patching often gets used for quick scripts and tiny
pieces of code because it works.

Just beware that if you extend that technique to larger bodies of code,
say when using a large framework, or multiple libraries, your experience
may not be quite so good. Especially if *they* are monkey-patching too,
as some very large frameworks sometimes do. (Or so I am lead to believe.)


This sonds like a good use case for a context manager, like the one in 
decimal.Context.get_manager().


First shot:

@contextlib.contextmanager
def changed_os_path(**k):
    old = {}
    try:
        for name in k:
            old[name] = getattr(os.path, name)
            setattr(os.path, name, k[name])
        yield
    finally:
        for name, value in old.items():
            setattr(os.path, name, value)

and so for your code you can use

print 'os.walk(\'goo\') with modified isdir()'
with changed_os_path(isdir=my_isdir):
    for root, dirs, files in os.walk('goo'):
        print root, dirs, files

so the change is only effective as long as you are in the relevant code 
part and is reverted as soon as you leave it.
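A self-contained variant of the same idea can be checked directly (`patched` is my name for it, generic over the patched object, not from the thread; the original attribute is guaranteed to be restored on exit):

```python
import contextlib
import os.path

@contextlib.contextmanager
def patched(obj, **attrs):
    # temporarily replace attributes on obj, restoring them afterwards
    saved = {name: getattr(obj, name) for name in attrs}
    try:
        for name, value in attrs.items():
            setattr(obj, name, value)
        yield
    finally:
        for name, value in saved.items():
            setattr(obj, name, value)

orig = os.path.isdir
with patched(os.path, isdir=lambda p: True):
    assert os.path.isdir("definitely-not-a-dir")
assert os.path.isdir is orig
```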



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Decorators not worth the effort

2012-09-15 Thread Thomas Rachel

[Sorry, my Firefox destroyed the indent...]

On 14.09.2012 22:29, Terry Reedy wrote:


In other words

def make_wrapper(func, param):
    def wrapper(*args, **kwds):
        for i in range(param):
            func(*args, **kwds)
    return wrapper

def f(x): print(x)
f = make_wrapper(f, 2)
f('simple')

# is simpler, at least for some people, than the following
# which does essentially the same thing.

def make_outer(param):
    def make_inner(func):
        def wrapper(*args, **kwds):
            for i in range(param):
                func(*args, **kwds)
        return wrapper
    return make_inner

@make_outer(2)
def f(x): print(x)
f('complex')


For this case, my mydeco.py which I use quite often contains a

def indirdeco(ind):
    # Update both the outer as well as the inner wrapper.
    # If we knew the inner one was to be updated with something
    # from *a, **k, we could do it. But not this way...
    @functools.wraps(ind)
    def outer(*a, **k):
        @functools.wraps(ind)
        def inner(f):
            return ind(f, *a, **k)
        return inner
    return outer

so I can do

@indirdeco
def make_wrapper(func, param):
    @functools.wraps(func)
    def wrapper(*args, **kwds):
        for i in range(param):
            func(*args, **kwds)
    return wrapper

and then nevertheless

@make_wrapper(2)
def f(x): print(x)

BTW, I also have a "meta-decorator" for the other direction:

def wrapfunction(mighty):
    """Wrap a function taking (f, *a, **k) and replace it with a
    function taking (f) and returning a function taking (*a, **k) which
    calls our decorated function.
    Other direction than indirdeco."""
    @functools.wraps(mighty)
    def wrapped_outer(inner):
        @functools.wraps(inner)
        def wrapped_inner(*a, **k):
            return mighty(inner, *a, **k)
        wrapped_inner.func = inner  # keep the wrapped function
        wrapped_inner.wrapper = mighty  # and the replacement
        return wrapped_inner
    wrapped_outer.func = mighty  # keep this as well
    return wrapped_outer

With this, a

@wrapfunction
def twice(func, *a, **k):
    return func(*a, **k), func(*a, **k)

can be used with

@twice
def f(x): print (x); return x

very nicely.
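The wrapfunction/twice pair above can be verified end-to-end (Python 3 syntax, with a trivial function in place of the print example):

```python
import functools

def wrapfunction(mighty):
    # as in the post: turn a function taking (f, *a, **k) into a decorator
    @functools.wraps(mighty)
    def wrapped_outer(inner):
        @functools.wraps(inner)
        def wrapped_inner(*a, **k):
            return mighty(inner, *a, **k)
        return wrapped_inner
    return wrapped_outer

@wrapfunction
def twice(func, *a, **k):
    return func(*a, **k), func(*a, **k)

@twice
def f(x):
    return x + 1

assert f(1) == (2, 2)
```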


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: using text file to get ip address from hostname

2012-09-17 Thread Thomas Rachel

On 15.09.2012 18:20, Dan Katorza wrote:


hello again friends,
thanks for everyone help on this.
i guess i figured it out in two ways.
the second one i prefer the most.

i will appreciate if someone can give me some tips.
thanks again

so...
-
First
-
#!/usr/bin/env python
#Get the IP Address


print("hello, please enter file name here>"),
import socket
for line in open(raw_input()):
    hostname = line.strip()
    print("IP address for {0} is {1}.".format(
        hostname, socket.gethostbyname(hostname)))


second

#!/usr/bin/env python
#Get the IP Address

import os

print("Hello, please enter file name here>"),
FILENAME = raw_input()
if os.path.isfile(FILENAME):
    print("\nFile Exist!")
    print("\nGetting ip from host name")
    print("\n")
    import socket
    for line in open(FILENAME):
        hostname = line.strip()
        print("IP address for {0} is {1}.".format(
            hostname, socket.gethostbyname(hostname)))
    else:
        print("\nFinished the operation")
else:
    print("\nFIle is missing or is not reasable"),


Comparing these, the first one wins if you catch and process exceptions. 
It is easier to ask for forgiveness than to get permission (EAFP, 
http://en.wikipedia.org/wiki/EAFP).


But I wonder that no one has mentioned that 
socket.gethostbyname(hostname) is quite dated, because it only returns 
IPv4 addresses (or rather, only one of them).


OTOH, socket.getaddrinfo(hostname, 0, 0, socket.SOCK_STREAM) gives you a 
list of parameter tuples for connecting.


Whichever way you go above, you should change the respective lines to

for line in ...:
    hostname = line.strip()
    for target in socket.getaddrinfo(hostname, 0, socket.AF_UNSPEC,
                                     socket.SOCK_STREAM):
        print("IP address for {0} is {1}.".format(hostname,
                                                  target[4][0]))

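For illustration, resolving "localhost" (present in /etc/hosts on most systems, so no network access is needed; the actual addresses printed depend on the system):

```python
import socket

infos = socket.getaddrinfo("localhost", 0, socket.AF_UNSPEC,
                           socket.SOCK_STREAM)
for family, socktype, proto, canonname, sockaddr in infos:
    # sockaddr starts with the address string for both IPv4 and IPv6
    print("IP address for localhost is {0}.".format(sockaddr[0]))
assert len(infos) >= 1
```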

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Decorators not worth the effort

2012-09-18 Thread Thomas Rachel

On 15.09.2012 16:18, 8 Dihedral wrote:


The concept of decorators is just a mapping from a function


... or class ...

> to another function

... or any other object ...

> with the same name in python.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: 'indent'ing Python in windows bat

2012-09-19 Thread Thomas Rachel

On 18.09.2012 15:03, David Smith wrote:


I COULD break down each batch file and write dozens of mini python
scripts to be called. I already have a few, too. Efficiency? Speed is
bad, but these are bat files, after all. The cost of trying to work with
a multitude of small files is high, though, and I realized I had better
go to a mix.


In order to achieve this, it might be very useful to have a module for 
each (bigger) part, which you can call with

python -m modulename arg1 arg2 arg3

after putting the Python code into modulename.py.

Or you have one big "interpreter" which works this way:

class Cmd(object):
    """
    Command collector
    """
    def __init__(self):
        self.cmds = {}
    def cmd(self, f):
        # register a function
        self.cmds[f.__name__] = f
        return f
    def main(self):
        import sys
        sys.exit(self.cmds[sys.argv[1]](*sys.argv[2:]))

cmd = Cmd()

@cmd.cmd
def cmd1(arg1, arg2):
    do_stuff()
    ...
    return 1  # error -> exit()

@cmd.cmd
def cmd2():
    ...

if __name__ == '__main__':
    cmd.main()


This is suitable for many small things and can be used this way:

bat cmds
python -m thismodule cmd1 a b
other bat cmds
python -m thismodule cmd2
...
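The dispatcher can be tested without actually exiting the process; here is a variant where main() takes an argv parameter and returns instead of calling sys.exit (that change is mine, purely for testability; the command `greet` is a made-up example):

```python
import sys

class Cmd(object):
    # minimal sketch of the command collector from the post
    def __init__(self):
        self.cmds = {}
    def cmd(self, f):
        self.cmds[f.__name__] = f
        return f
    def main(self, argv=None):
        argv = sys.argv[1:] if argv is None else argv
        return self.cmds[argv[0]](*argv[1:])

cmd = Cmd()

@cmd.cmd
def greet(name):
    return "hello " + name

assert cmd.main(["greet", "world"]) == "hello world"
```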

HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using dict as object

2012-09-19 Thread Thomas Rachel

On 19.09.2012 12:24, Pierre Tardy wrote:


One thing that is cooler with java-script than in python is that dictionaries 
and objects are the same thing. It allows browsing of complex hierarchical data 
syntactically easy.

For manipulating complex jsonable data, one will always prefer writing:
buildrequest.properties.myprop
rather than
brdict['properties']['myprop']


This is quite easy to achieve (but not so easy to understand):

class JsObject(dict):
    def __init__(self, *args, **kwargs):
        super(JsObject, self).__init__(*args, **kwargs)
        self.__dict__ = self

(Google for JsObject; this is not my invention.)

What does it do? Well, an object's attributes are stored in a dict. If I 
subclass dict, the resulting class can be used for this as well.


In this case, a subclass of a dict gets itself as its __dict__. What 
happens now is


d = JsObject()

d.a = 1
print d['a']

# This results in d.__dict__['a'] = 1.
# As d.__dict__ is d, this is equivalent to d['a'] = 1.

# Now the other way:

d['b'] = 42
print d.b

# here as well: d.b reads d.__dict__['b'], which is essentially d['b'].
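The commented walk-through above can be run as a quick check:

```python
class JsObject(dict):
    def __init__(self, *args, **kwargs):
        super(JsObject, self).__init__(*args, **kwargs)
        self.__dict__ = self  # attribute storage *is* the dict itself

d = JsObject()
d.a = 1                # attribute access...
assert d['a'] == 1     # ...is item access
d['b'] = 42            # and vice versa
assert d.b == 42
```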


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: which a is used?

2012-09-24 Thread Thomas Rachel

On 25.09.2012 03:47, Dwight Hutto wrote:


But within a class this is could be defined as self.x within the
functions and changed, correct?


class a():
    def __init__(self, a):
        self.a = a

    def f(self):
        print self.a

    def g(self):
        self.a = 20
        print self.a

a = a(1)
a.f()
a.g()


Yes - this is a different situation. Here, the "self" referred to is the 
same in all cases (the "a" from top level), and so self.a can be used 
consistently as well.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: which a is used?

2012-09-24 Thread Thomas Rachel

On 25.09.2012 04:37, Dwight Hutto wrote:

I honestly could not care less what you think about me, but don't use
that term. This isn't a boys' club and we don't need your hurt ego
driving people away from here.


OH. stirrin up shit and can't stand the smell.


Where did he do so?


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: python file API

2012-09-24 Thread Thomas Rachel

On 25.09.2012 04:28, Steven D'Aprano wrote:


By the way, the implementation of this is probably trivial in Python 2.x.
Untested:

class MyFile(file):
    @property
    def pos(self):
        return self.tell()
    @pos.setter
    def pos(self, p):
        if p < 0:
            self.seek(p, 2)
        else:
            self.seek(p)

You could even use a magic sentinel to mean "seek to EOF", say, None.

        if p is None:
            self.seek(0, 2)

although I don't know if I like that.


The whole concept is incomplete at one point: self.seek(10, 2) seeks 
beyond EOF, potentially creating a sparse file. This is a thing you 
cannot achieve with the pos attribute.


But the idea is great. I'd suggest to have another property:

  [...]
  @pos.setter
  def pos(self, p):
      self.seek(p)
  @property
  def eofpos(self):  # to be consistent
      return self.tell()
  @eofpos.setter
  def eofpos(self, p):
      self.seek(p, 2)

Another option could be a special descriptor which can be used as well 
for relative seeking:


class FilePositionDesc(object):
    def __init__(self):
        pass
    def __get__(self, instance, owner):
        return FilePosition(instance)
    def __set__(self, instance, value):
        instance.seek(value)

class FilePosition(object):
    def __init__(self, file):
        self.file = file
    def __iadd__(self, offset):
        self.file.seek(offset, 1)
    def __isub__(self, offset):
        self.file.seek(-offset, 1)

class MyFile(file):
    pos = FilePositionDesc()
    [...]
[...]

Stop.

This could be handled with a property as well.

Besides, this breaks some other expectations about pos. So let's 
introduce a 3rd property named relpos:


class FilePosition(object):
    def __init__(self, file):
        self.file = file
        self.seekoffset = 0
    def __iadd__(self, offset):
        self.seekoffset += offset
    def __isub__(self, offset):
        self.seekoffset -= offset
    def __int__(self):
        return self.file.tell() + self.seekoffset

class MyFile(file):
    @property
    def relpos(self):
        return FilePosition(self)  # from above
    @relpos.setter
    def relpos(self, ofs):
        try:
            o = ofs.seekoffset  # is it a FilePosition?
        except AttributeError:
            self.seek(ofs, 1)  # no, but ofs can be an int as well
        else:
            self.seek(o, 1)  # yes, it is


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: python file API

2012-09-24 Thread Thomas Rachel

On 25.09.2012 00:37, Ian Kelly wrote:

On Mon, Sep 24, 2012 at 4:14 PM, Chris Angelico  wrote:

file.pos = 42 # Okay, you're at position 42
file.pos -= 10 # That should put you at position 32
foo = file.pos # Presumably foo is the integer 32
file.pos -= 100 # What should this do?


Since ints are immutable, the language specifies that it should be the
equivalent of "file.pos = file.pos - 100", so it should set the file
pointer to 68 bytes before EOF.


But this is not a "real int", it has a special use. So I don't think it 
is absolutely required to behave like an int.


This reminds me of some special purpose registers in embedded 
programming, where bits can only be set by hardware and are cleared by 
the application by writing 1 to them.


Or some bit setting registers, like on ATxmega: OUT = 0x10 sets bit 7 
and clears all others, OUTSET = 0x10 only sets bit 7, OUTTGL = 0x10 
toggles it and OUTCLR = 0x10 clears it.


If this behaviour is documented properly enough, it is quite OK, IMHO.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: which a is used?

2012-09-24 Thread Thomas Rachel

On 25.09.2012 07:22, Dwight Hutto wrote:


No, not really. If you wanna talk shit, I can reflect that, and if you
wanna talk politely I can reflect that. I got attacked first.


But not in this thread.

Some people read only selectively and see only your verbal assaults, 
without noticing what they refer to.

If you were really insulted, you should answer those insults in their 
thread and not in a different one.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: For Counter Variable

2012-09-24 Thread Thomas Rachel

On 25.09.2012 01:39, Dwight Hutto wrote:


It's not the simpler solution I'm referring to, it's the fact that if
you're learning, then you should be able to design the built-in, not
just use it.


In some simpler cases you are right here. But the fact that you are able 
to design it doesn't necessarily mean that you should actually use your 
self-designed version.


But what your post suggests is important as well: when using the neat 
fancy built-in simplifications, you should always be aware what overhead 
they imply.


An example:

Let l be a big, big list.

for i in some_iterable:
    if i in l:
        ...

This looks neat and simple, and doesn't look as expensive as it really 
is: every single "in" test scans the whole list.

If l is converted to a set beforehand, the code looks nearly the same, 
but the lookup is much faster.
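The cost difference can be measured directly (list membership is O(n), set membership is O(1) on average; the sizes here are made up for illustration):

```python
import timeit

l = list(range(100000))
s = set(l)
# worst case for the list: the sought element is at the very end
t_list = timeit.timeit(lambda: 99999 in l, number=200)
t_set = timeit.timeit(lambda: 99999 in s, number=200)
assert t_set < t_list    # the set lookup wins by orders of magnitude
```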


So even if you use builtins, be aware what they do.



You don't always know all the built-ins, so the builtin is simpler,
but knowing how to code it yourself is the priority of learning to
code in a higher level language, which should be simpler to the user
of python.


When learning Python, I often reinvented the wheel. But as soon as I 
noticed that something I had rewritten already existed, I dropped my 
rewritten version and used the built-in.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: python file API

2012-09-25 Thread Thomas Rachel

On 25.09.2012 10:13, Dennis Lee Bieber wrote:

Or some bit setting registers, like on ATxmega: OUT = 0x10 sets bit 7
and clears all others, OUTSET = 0x10 only sets bit 7, OUTTGL = 0x10
toggles it and OUTCLR = 0x10 clears it.


Umpfzg. s/bit 7/bit 4/.


I don't think I'd want to work with any device where 0x10 (00010000
binary) modifies bit SEVEN. 0x40, OTOH, would fit my mental impression
of bit 7.


Of course. My fault.

It can as well be a bit mask, with OUTTGL = 0x11 toggling bit 4 and bit 
0. Very handy sometimes.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: python file API

2012-09-25 Thread Thomas Rachel

On 25.09.2012 09:28, Steven D'Aprano wrote:


The whole concept is incomplete at one place: self.seek(10, 2) seeks
beyond EOF, potentially creating a sparse file. This is a thing you
cannot achieve.


On the contrary, since the pos attribute is just a wrapper around seek,
you can seek beyond EOF easily:

f.pos = None
f.pos += 10


Yes, from a syscall perspective, it is different: it is a tell() 
combined with an absolute seek instead of a relative seek. As someone 
mentioned, e.g. in the case of a streamer tape this might make a big 
difference.




But for anything but the most trivial usage, I would recommend sticking
to the seek method.


ACK. This should be kept as a fallback.



... or we need multiple attributes, one for each mode ...


Yes. That's what I would favour: 3 attributes which each take a value 
to be passed to seek.



So all up, I'm -1 on trying to replace the tell/seek API, and -0 on
adding a second, redundant API.


ACK.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: data attributes override method attributes?

2012-09-25 Thread Thomas Rachel

On 25.09.2012 16:08, Peter Otten wrote:

Jayden wrote:


In the Python Tutorial, Section 9.4, it is said that

"Data attributes override method attributes with the same name."


The tutorial is wrong here. That should be

"Instance attributes override class attributes with the same name."


I jump in here:

There is one point to consider: if you work with descriptors, it makes a 
difference whether they are "data descriptors" (define __set__ and/or 
__delete__) or "non-data descriptors" (define neither).


As http://docs.python.org/reference/datamodel.html#invoking-descriptors 
tells us, methods are non-data descriptors, so they can be overridden by 
instances.


OTOH, properties are data descriptors, which cannot be overridden by the 
instance.


So, to stick to the original example:

class TestDesc(object):
    def a(self): pass
    @property
    def b(self): print "trying to get value - return None"; return None
    @b.setter
    def b(self, v): print "value", v, "ignored."
    @b.deleter
    def b(self): print "delete called and ignored"


and now

>>> t=TestDesc()
>>> t.a
<bound method TestDesc.a of <__main__.TestDesc object at 0x...>>
>>> t.b
trying to get value - return None
>>> t.a=12
>>> t.b=12
value 12 ignored.
>>> t.a
12
>>> t.b
trying to get value - return None
>>> del t.a
>>> del t.b
delete called and ignored
>>> t.a
<bound method TestDesc.a of <__main__.TestDesc object at 0x...>>
>>> t.b
trying to get value - return None


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: fastest data structure for retrieving objects identified by (x, y) tuple?

2012-10-04 Thread Thomas Rachel

On 04.10.2012 03:58, Steven D'Aprano wrote:

alist = [[None]*2400 for i in range(2400)]
from random import randrange
for i in range(1000):
    x = randrange(2400)
    y = randrange(2400)
    adict[(x, y)] = "something"
    alist[x][y] = "something"



The actual sizes printed will depend on how sparse the matrices are, but
for the same above (approximately half full),


I wouldn't consider 1000 of 5760000 "half full"...


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: overriding equals operation

2012-10-16 Thread Thomas Rachel

On 16.10.2012 15:51, Pradipto Banerjee wrote:


I am trying to define a class where, if I use a statement a = b, then instead of "a" pointing to the 
same instance as "b", it should point to a copy of "b", but I can't get it right.


This is not possible.



Currently, I have the following:



class myclass(object):


Myclass or MyClass, see http://www.python.org/dev/peps/pep-0008/.


    def __eq__(self, other):
        if isinstance(other, myclass):
            return self == other.copy()
        return NotImplemented


This redefines the == operator, not the = operator.

It is not possible to redefine =.

One way could be to override assignment of a class attribute. But this 
won't be enough, I think.


Let me explain:

class MyContainer(object):
    @property
    def content(self):
        return self._content
    @content.setter
    def content(self, new):
        self._content = new.copy()

Then you can do:

a = MyClass()
b = MyContainer()
b.content = a
print b.content is a # should print False; untested...

But something like

a = MyClass()
b = a

will always lead to "b is a".


This communication is for informational purposes only. It is not
intended to be, nor should it be construed or used as, financial,
legal, tax or investment advice or an offer to sell, or a
solicitation of any offer to buy, an interest in any fund advised by
Ada Investment Management LP, the Investment advisor.


What?


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python does not take up available physical memory

2012-10-19 Thread Thomas Rachel

On 19.10.2012 21:03, Pradipto Banerjee wrote:


Thanks, I tried that.


What is "that"? It would be helpful to quote in a reasonable way. Look
how others do it.



Still got MemoryError, but at least this time python tried to use the
physical memory. What I noticed is that before it gave me the error
it used up to 1.5GB (of the 2.23 GB originally showed as available) -
so in general, python takes up more memory than the size of the file
itself.


Of course - the file is not the only thing to be held by the process.

I see several approaches here:

* Process the file part by part - as the others already suggested, 
line-wise, but if you have e.g. a binary file format, other partings may 
be suitable as well - e.g. fixed block size, or parts given by the file 
format.


* If you absolutely have to keep the whole file data in memory, split it 
up in several strings. Why? Well, the free space in virtual memory is 
not necessarily contiguous. So even if you have 1.5G free, you might not 
be able to read 1.5G at once, but you might succeed in reading 3*0.5G.
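The first approach (processing the file part by part) can be sketched as a small generator; here with io.BytesIO standing in for a real file, and a made-up block size:

```python
import io

def read_in_blocks(f, blocksize=1 << 20):
    # yield the file's contents in fixed-size blocks instead of
    # holding everything in one huge string
    while True:
        block = f.read(blocksize)
        if not block:
            break
        yield block

data = b"x" * 10000
f = io.BytesIO(data)
blocks = list(read_in_blocks(f, blocksize=4096))
assert b"".join(blocks) == data
assert max(len(b) for b in blocks) <= 4096
```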




Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

On 25.10.2012 01:39, Ian Kelly wrote:

On Wed, Oct 24, 2012 at 5:08 PM, Paul Rubin  wrote:

from itertools import dropwhile

j = dropwhile(lambda j: j in selected,
              iter(lambda: int(random() * n), object())).next()

kind of ugly, makes me wish for a few more itertools primitives, but I
think it expresses reasonably directly what you are trying to do.


Nice, although a bit opaque.  I think I prefer it as a generator expression:

j = next(j for j in iter(partial(randrange, n), None) if j not in selected)


This generator never ends. If it meets a non-matching value, it just 
skips it and goes on.


The dropwhile expression, however, stops as soon as the value is found.

I think

# iterate ad inf., because partial never returns None:
i1 = iter(partial(randrange, n), None)
# take the next value, make it None for breaking:
i2 = (j if j in selected else None for j in i1)
# and now, break on None:
i3 = iter(lambda: next(i2), None)

would do the job.
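The two-argument iter(callable, sentinel) form used throughout can be checked with deterministic data: iteration stops as soon as the callable returns the sentinel, and the sentinel itself is not yielded.

```python
data = iter([3, 1, 4, 1, 5, 9])
stopped = iter(lambda: next(data), 5)   # stop when a 5 is produced
assert list(stopped) == [3, 1, 4, 1]
```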


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

On 25.10.2012 00:26, Cameron Simpson wrote:


If I could write this as:

   if re_FUNKYPATTERN.match(test_string) as m:
       do stuff with the results of the match, using "m"

then some cascading parse decisions would feel a bit cleaner. Where I
current have this:

   m = re_CONSTRUCT1.match(line)
   if m:
       ... handle construct 1 ...
   else:
       m = re_CONSTRUCT2.match(line)
       if m:
           ... handle construct 2 ...
       else:
           m = re_CONSTRUCT3.match(line)

I could have this:

   if re_CONSTRUCT1.match(line) as m:
       ... handle construct 1 ...
   elif re_CONSTRUCT2.match(line) as m:
       ... handle construct 2 ...
   elif re_CONSTRUCT3.match(line) as m:


I would do

for r in (re_CONSTRUCT1, re_CONSTRUCT2, re_CONSTRUCT3):
    m = r.match(line)
    if m:
        handle_construct(m)
        break

or maybe

actions = {re_CONSTRUCT1: action1, ...}

def matching(line, *rr):
    for r in rr:
        m = r.match(line)
        if m:
            yield r
            return

for r in matching(line, *actions.keys()):
    actions[r]()
    break
else:
    raise NoActionMatched() # or something like that

Thomas


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

Am 25.10.2012 06:50 schrieb Terry Reedy:


Keep in mind that any new syntax has to be a substantial improvement in
some sense or make something new possible. There was no new syntax in
3.2 and very little in 3.3.


I would consinder this at least as new substantial than

yield from it

as opposed to

for i in it: yield i

- although I think that was a good idea as well.

Although there are quite easy ways to do so, I would appreciate 
something like the proposed


   while EXPR as VAR: use VAR
   if EXPR as VAR: use VAR

Of course it is possible to construct a respective workaround such as

def maybe_do_that():
    if moon == full:
        with something as val:
            yield val

for val in maybe_do_that():
    bla

but I would consider this as an abuse of the generator concept.

Thomas


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

Am 25.10.2012 09:21 schrieb Thomas Rachel:


I think

# iterate ad inf., because partial never returns None:
i1 = iter(partial(randrange, n), None)
# take the next value, make it None for breaking:
i2 = (j if j in selected else None for j in i1)
# and now, break on None:
i3 = iter(lambda: next(i2), None)

would do the job.


But, as I read it now again, it might be cleaner to create an own 
generator function, such as


def rand_values(randrange, n, selected):
    # maybe: selected = set(selected) for the "not in"
    while True:
        val = randrange(n)
        if val not in selected:
            break
        yield val

for value in rand_values(...):

or, for the general case proposed some postings ago:

def while_values(func, *a, **k):
    while True:
        val = func(*a, **k)
        if not val:
            break
        yield val

Thomas


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

Am 25.10.2012 12:50 schrieb Steven D'Aprano:


Then I think you have misunderstood the purpose of "yield from".


Seems so. As I have not yet switched to 3.x, I haven't used it till now.


[quote]
However, if the subgenerator is to interact properly with the caller in
the case of calls to send(), throw() and close(), things become
considerably more difficult. As will be seen later, the necessary code is
very complicated, and it is tricky to handle all the corner cases
correctly.


Ok, thanks.


Thomas


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

Am 25.10.2012 16:15 schrieb Grant Edwards:


I guess that depends on what sort of programs you write.  In my
experience, EXPR is usually a read from a file/socket/pipe that
returns '' on EOF. If VAR is not '', then you process, then you
process it inside the loop.


Right. The same as in

if regex.search(string) as match:
process it

But with

def if_true(expr):
    if expr:
        yield expr

you can do

for match in if_true(regex.search(string)):
    process it

But the proposed if ... as ...: statement would be more beautiful by far.

Thomas


Re: while expression feature proposal

2012-10-25 Thread Thomas Rachel

Am 25.10.2012 18:36 schrieb Ian Kelly:

On Thu, Oct 25, 2012 at 1:21 AM, Thomas Rachel

wrote:

j = next(j for j in iter(partial(randrange, n), None) if j not in
selected)



This generator never ends. If it meets a non-matching value, it just skips
it and goes on.


next() only returns one value.  After it is returned, the generator is
discarded, whether it has ended or not.  If there were no valid values
for randrange to select, then it would descend into an infinite loop.
But then, so would the dropwhile and the original while loop.


You are completely right. My solution was right as well, but didn't 
match the problem...


Yours does indeed return one random value which is guaranteed not to be 
in selected.


Mine returns random values until the value is not in selected. I just 
misread the intention behind the while loop...



Thomas


Re: a.index(float('nan')) fails

2012-10-27 Thread Thomas Rachel

Am 27.10.2012 06:48 schrieb Dennis Lee Bieber:


I don't know about the more modern calculators, but at least up
through my HP-41CX, HP calculators didn't do (binary) "floating
point"... They did a form of BCD with a fixed number of significant
/decimal/ digits


Then, what about sqrt(x)**2 or arcsin(sin(x))? Did that always return 
the original x?


Thomas


Re: better way for ' '.join(args) + '\n'?

2012-10-27 Thread Thomas Rachel

Am 26.10.2012 09:49 schrieb Ulrich Eckhardt:

Hi!

General advise when assembling strings is to not concatenate them
repeatedly but instead use string's join() function, because it avoids
repeated reallocations and is at least as expressive as any alternative.

What I have now is a case where I'm assembling lines of text for driving
a program with a commandline interface.


Stop.

In this case, you think too complicated.

Just do

subprocess.Popen(['prog', 'foo', 'bar', 'baz'])

- this is the safest thing for this use case.

If it should not be possible for any reason, you should be aware of any 
traps you could catch - e.g., if you want to feed your string to a 
Bourne shell, you should escape the strings properly.


In such cases, I use


def shellquote(*strs):
    r"""Input: file names, output: ''-enclosed strings where every ' is
    replaced with '\''. Intended for usage with the shell."""
    # just take over everything except '
    # replace ' with '\''
    # The shell sees ''' as ''\'''\'''\'''. Ugly, but works.
    return " ".join([
        "'" + st.replace("'", "'\\''") + "'"
        for st in strs
    ])


so I can use

shellquote('program name', 'argu"ment 1', '$arg 2',
"even args containing a ' are ok")

For Windows, you'll have to modify this somehow.
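For what it's worth, newer Python versions ship an equivalent in the standard library: shlex.quote (added in 3.3; before that there was the undocumented pipes.quote). A hedged sketch of the same helper built on it:

```python
import shlex

def shellquote3(*strs):
    """Like the hand-rolled shellquote() above, but using stdlib quoting.

    shlex.quote leaves words without shell metacharacters unquoted and
    wraps everything else in single quotes, escaping embedded quotes.
    """
    return " ".join(shlex.quote(s) for s in strs)
```

Same caveat applies: this targets POSIX shells, not the Windows cmd.exe quoting rules.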


HTH,

Thomas


Re: Immutability and Python

2012-11-07 Thread Thomas Rachel

Am 29.10.2012 16:20 schrieb andrea crotti:


Now on one hand I would love to use only immutable data in my code, but
on the other hand I wonder if it makes so much sense in Python.


You can have both. Many mutable types distinguish between them with 
their operators.


To pick up your example,


class NumWrapper(object):
    def __init__(self, number):
        self.number = number
    def __iadd__(self, x):
        self.number += x
        return self
    def __add__(self, x):
        return NumWrapper(self.number + x)

So with

number += 1

you keep the same object and modify it, while with

number = number + 1

or

new_number = number + 1

you create a new object.
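To make the difference visible, a small self-contained demo (the class repeated so the snippet runs on its own):

```python
class NumWrapper(object):
    def __init__(self, number):
        self.number = number
    def __iadd__(self, x):
        self.number += x             # mutate in place
        return self
    def __add__(self, x):
        return NumWrapper(self.number + x)  # build a new object

a = NumWrapper(1)
b = a            # second name for the same object
a += 1           # __iadd__: b sees the change
print(a is b, b.number)   # True 2
a = a + 1        # __add__: a is rebound, b is untouched
print(a is b, a.number, b.number)   # False 3 2
```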



But more importantly normally classes are way more complicated than my
stupid example, so recreating a new object with the modified state might
be quite complex.

Any comments about this? What do you prefer and why?


That's why I generally prefer mutable objects, but it can depend.


Thomas



Re: Obnoxious postings from Google Groups

2012-11-09 Thread Thomas Rachel

Am 31.10.2012 06:39 schrieb Robert Miles:


For those of you running Linux:  You may want to look into whether
NoCeM is compatible with your newsreader and your version of Linux.


This sounds as if it was intrinsically impossible to evaluate NoCeMs in 
Windows.


If someone writes a software for it, it can be run wherever desired.


Thomas


Re: Printing characters outside of the ASCII range

2012-11-11 Thread Thomas Rachel

Am 09.11.2012 18:17 schrieb danielk:


I'm using this character as a delimiter in my application.


Then you probably use the *byte* 254 as opposed to the *character* 254.

So it might be better to either switch to byte strings, or output the 
representation of the string instead of itself.


So do print(repr(chr(254))) or, for byte strings, print(bytes([254])).


Thomas


Re: stackoverflow quote on Python

2012-11-13 Thread Thomas Rachel

Am 13.11.2012 14:21 schrieb [email protected]:


* strings are now proper text strings (Unicode), not byte strings;


Let me laugh.


Do so.


Thomas


Re: how to simulate tar filename substitution across piped subprocess.Popen() calls?

2012-11-13 Thread Thomas Rachel

Am 09.11.2012 02:12 schrieb Hans Mulder:


That's what 'xargs' will do for you.  All you need to do, is invoke
xargs with arguments containing '{}'.  I.e., something like:

cmd1 = ['tar', '-czvf', 'myfile.tgz', '-c', mydir, 'mysubdir']
first_process = subprocess.Popen(cmd1, stdout=subprocess.PIPE)

cmd2 = ['xargs', '-I', '{}', 'sh', '-c', "test -f %s/'{}'" % mydir]
second_process = subprocess.Popen(cmd2, stdin=first_process.stdout)


After launching second_process, it might be useful to 
firstprocess.stdout.close(). If you fail to do so, your process is a 
second reader which might break things apart.


At least, I once hat issues with it; I currently cannot recapitulate 
what these were nor how they could arise; maybe there was just the open 
file descriptor which annoyed me.
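A hedged sketch of that advice, with a trivial child standing in for tar/xargs: after wiring first_process.stdout into the second process, close the parent's copy of the pipe so the second process is its only reader and sees a proper EOF.

```python
import subprocess
import sys

# first child writes one line to the pipe
first = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                         stdout=subprocess.PIPE)
# second child copies its stdin to its stdout
second = subprocess.Popen([sys.executable, "-c",
                           "import sys; sys.stdout.write(sys.stdin.read())"],
                          stdin=first.stdout, stdout=subprocess.PIPE)
first.stdout.close()   # parent stops being a reader of the pipe
out, _ = second.communicate()
first.wait()
```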



Thomas


Re: how to simulate tar filename substitution across piped subprocess.Popen() calls?

2012-11-13 Thread Thomas Rachel

Am 12.11.2012 19:30 schrieb Hans Mulder:


This will break if there are spaces in the file name, or other
characters meaningful to the shell.  If you change if to

 xargsproc.append("test -f '%s/{}'&&  md5sum '%s/{}'"
  % (mydir, mydir))

, then it will only break if there are single quotes in the file name.


And if you do mydir_q = mydir.replace("'", "'\\''") and use mydir_q, you 
should be safe...



Thomas


Re: Python garbage collector/memory manager behaving strangely

2012-11-15 Thread Thomas Rachel

Am 17.09.2012 04:28 schrieb Jadhav, Alok:

Thanks Dave for clean explanation. I clearly understand what is going on
now. I still need some suggestions from you on this.

There are 2 reasons why I was using  self.rawfile.read().split('|\n')
instead of self.rawfile.readlines()

- As you have seen, the line separator is not '\n' but its '|\n'.
Sometimes the data itself has '\n' characters in the middle of the line
and only way to find true end of the line is that previous character
should be a bar '|'. I was not able specify end of line using
readlines() function, but I could do it using split() function.
(One hack would be to readlines and combine them until I find '|\n'. is
there a cleaner way to do this?)
- Reading whole file at once and processing line by line was must
faster. Though speed is not of very important issue here but I think the
tie it took to parse complete file was reduced to one third of original
time.


With

def itersep(f, sep='\0', buffering=1024, keepsep=True):
    if keepsep:
        keepsep = sep
    else:
        keepsep = ''
    data = f.read(buffering)
    next_line = data  # empty? -> end.
    while next_line:  # -> data is empty as well.
        lines = data.split(sep)
        for line in lines[:-1]:
            yield line + keepsep
        next_line = f.read(buffering)
        data = lines[-1] + next_line
    # keepsep: only if we have something.
    if (not keepsep) or data:
        yield data

you can iterate over everything you want without needing too much 
memory. Using a larger "buffering" might improve speed a little bit.
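A quick self-contained check of the idea, fed from an in-memory file (the helper is repeated here so the snippet runs on its own):

```python
import io

def itersep(f, sep='\0', buffering=1024, keepsep=True):
    if keepsep:
        keepsep = sep
    else:
        keepsep = ''
    data = f.read(buffering)
    next_line = data  # empty? -> end.
    while next_line:
        lines = data.split(sep)
        for line in lines[:-1]:
            yield line + keepsep
        next_line = f.read(buffering)
        data = lines[-1] + next_line
    if (not keepsep) or data:
        yield data

# small buffer on purpose, to exercise the refill logic
parts = list(itersep(io.StringIO("a|\nb|\nc"), sep="|\n", buffering=4))
print(parts)   # ['a|\n', 'b|\n', 'c']
```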



Thomas


Re: os.popen and the subprocess module

2012-11-29 Thread Thomas Rachel

Am 27.11.2012 19:00 schrieb Andrew:


I'm looking into os.popen and the subprocess module, implementing
os.popen is easy but i hear it is depreciating however I'm finding the
implemantation of subprocess daunting can anyone help


This is only the first impression.

subprocess is much more powerful, but you don't need to use the full power.

For just executing and reading the data, you do not need much.

First step: create your object and performing the call:

sp = subprocess.Popen(['program', 'arg1', 'arg2'], stdout=subprocess.PIPE)

or

sp = subprocess.Popen('program arg1 arg2', shell=True, 
stdout=subprocess.PIPE)



The variant with shell=True is more os.popen()-like, but has security 
flaws (e.g., what happens if there are spaces or, even worse, ";"s in 
the command string?).



Second step: Obtain output.

Here you either can do

stdout, stderr = sp.communicate()

This can be used if the whole output fits into memory at once, or if you 
really have to deal with stderr or stdin additionally.


In other, simpler cases, it is possible to read from sp.stdout like from 
a file (with a for loop, with .read() or whatever you like).



Third step: Getting the status, terminating the process.

And if you have read the whole output, you do status = sp.wait() in 
order not to have a zombie process in your process table and to obtain 
the process status.
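Putting the three steps together in a minimal, self-contained sketch (a child Python interpreter stands in for 'program', since the original command is just a placeholder):

```python
import subprocess
import sys

# step 1: start the child with its stdout connected to a pipe
sp = subprocess.Popen([sys.executable, "-c", "print('output line')"],
                      stdout=subprocess.PIPE)

# step 2: read everything it writes
stdout, stderr = sp.communicate()

# step 3: reap the process and obtain its exit status
status = sp.wait()
print(status, stdout)
```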



Thomas


Re: Confused compare function :)

2012-12-06 Thread Thomas Rachel

Am 06.12.2012 09:49 schrieb Bruno Dupuis:


The point is Exceptions are made for error handling, not for normal
workflow. I hate when i read that for example:

    try:
        do_stuff(mydict[k])
    except KeyError:
        pass


I as well, but for other reasons (see below). But basically this is EAFP.



(loads of them in many libraries and frameworks)
instead of:

    if k in mydict:
        do_stuff(mydict[k])


This is LBYL, C-style, not Pythonic.

I would do

try:
    value = mydict[k]
except KeyError:
    pass
else:
    do_stuff(value)

Why? Because do_stuff() might raise a KeyError, which should not go 
undetected.



Thomas


Re: forking and avoiding zombies!

2012-12-11 Thread Thomas Rachel

Am 11.12.2012 14:34 schrieb peter:

On 12/11/2012 10:25 AM, andrea crotti wrote:

Ah sure that makes sense!

But actually why do I need to move away from the current directory of
the parent process?
In my case it's actually useful to be in the same directory, so maybe
I can skip that part,
or otherwise I need another chdir after..

You don't need to move away from the current directory. You cant use os
to get the current work directory

stderrfile = '%s/error.log' % os.getcwd()
stdoutfile = '%s/out.log' % os.getcwd()


ITYM

os.path.join(os.getcwd(), 'error.log')

resp.

os.path.join(os.getcwd(), 'out.log')


Thomas


Re: what’s the difference between socket.send() and socket.sendall() ?

2013-01-07 Thread Thomas Rachel

Am 07.01.2013 11:35 schrieb iMath:

what’s the difference between socket.send() and socket.sendall() ?

It is so hard for me to tell the difference between them from the python doc

so what is the difference between them ?

and each one is suitable for which case ?



The docs are your friend. See

http://docs.python.org/2/library/socket.html#socket.socket.sendall

| [...] Unlike send(), this method continues to send data from string
| until either all data has been sent or an error occurs.
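In other words: send() may transmit only part of the buffer and returns how many bytes actually went out, while sendall() keeps sending until everything is gone. A hedged sketch of roughly what sendall() does internally (simplified, no timeout or error handling):

```python
import socket

def my_sendall(sock, data):
    """Simplified model of socket.sendall(): loop until all bytes are sent."""
    while data:
        sent = sock.send(data)  # may send fewer bytes than len(data)
        data = data[sent:]      # retry with the unsent remainder
```

So use send() only when you handle the return value yourself; use sendall() (or a loop like the above) when the whole buffer must go out.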

HTH,

Thomas


Re: How to modify this script?

2013-01-08 Thread Thomas Rachel

Am 06.01.2013 15:30 schrieb Kurt Hansen:

Den 06/01/13 15.20, Chris Angelico wrote:

On Mon, Jan 7, 2013 at 1:03 AM, Kurt Hansen  wrote:

I'm sorry to bother you, Chris, but applying the snippet with your
code in
Gedit still just deletes the marked, tab-separated text in the editor.



Ah, whoops. That would be because I had a bug in the code (that's why
I commented that it was untested). Sorry about that! Here's a fixed
version:


[cut]>

Note that it's a single line:

output += '' + item +
' '

If your newsreader (or my poster) wraps it, you'll need to unwrap that
line, otherwise you'll get an IndentError.


Ahhh, I did'nt realize that. Now it works :-)


That version should work.


It certainly does. I'll keep it and use it until at better solution is
found.


That would be simple:

Replace

output += '' + item +
' '

with

if len(columns) >= 3:
    output += ''
else:
    output += ''
output += item + ' '

(untested as well; keep the indentation in mind!)


Thomas


Re: How to modify this script?

2013-01-08 Thread Thomas Rachel

Am 07.01.2013 18:56 schrieb Gertjan Klein:


(Watch out for line wraps! I don't know how to stop Thunderbird from
inserting them.)


Do "insert as quotation" (in German Thunderbird: "Als Zitat einfügen"), 
or Ctrl-Shift-O. Then it gets inserted with a ">" before and in blue.


Just remove the > and the space after it; the "non-breaking property" is 
kept.


Example:

columns = line.split("\t");
if len(columns) == 1:
output += ('' % max_columns) + line + 
'\n'

continue

without and

columns = line.split("\t");
if len(columns) == 1:
output += ('' % max_columns) + line + 
'\n'

continue

with this feature.

HTH,

Thomas


Re: please i need explanation

2013-01-11 Thread Thomas Rachel

Am 11.01.2013 17:33 schrieb [email protected]:


def factorial(n):
 if n<2:
  return 1
 f = 1
 while n>= 2:
 f *= n
 f -= 1
 return f




please it works.


I doubt this.

If you give n = 4, you run into an endless loop.



but don’t get why the return 1 and the code below.


The "if n < 2: return 1" serves to shorten the calculation process 
below. It is redundant, as you have a "f = 1" and a "return f" for n < 2.


The code below first sets f, which holds the result, to 1 and then 
multiplies it by n in each step. The loop should contain 'n -= 1' (your 
code has 'f -= 1' instead), so that n decreases by 1 every step, turning 
f into n * (n-1) * (n-2) * ... * 2; once n is no longer >= 2, the loop 
stops and f is returned.
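With that one-character fix ('n -= 1' instead of 'f -= 1') the loop terminates and the function computes the factorial:

```python
def factorial(n):
    if n < 2:
        return 1
    f = 1
    while n >= 2:
        f *= n
        n -= 1   # decrease n, not f
    return f

print(factorial(4))   # 24
```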


HTH,

Thomas


Re: i can't understand decorator

2013-01-15 Thread Thomas Rachel

Am 15.01.2013 15:20 schrieb contro opinion:


 >>> def deco(func):
...  def kdeco():
...  print("before myfunc() called.")
...  func()
...  print("  after myfunc() called.")
...  return kdeco
...
 >>> @deco
... def myfunc():
...  print(" myfunc() called.")
...
 >>> myfunc()
before myfunc() called.
  myfunc() called.
   after myfunc() called.
 >>> deco(myfunc)()
before myfunc() called.
before myfunc() called.
  myfunc() called.
   after myfunc() called.
   after myfunc() called.


Wrapping works this way:

The function is defined, and the wrapper replaces the function with a 
different one which (in this case) calls the original one.


Try print(myfunc) here and you see that myfunc is only a name for 
another function called kdeco. It is the one returned by the decorator.




1.
why there are two lines :before myfunc() called.and tow lines :after
myfunc() called. in the output?


This is because deco(myfunc) wraps the already-decorated myfunc a second 
time. The outer wrapper prints its "before" line, then calls the 
decorated myfunc, which in turn prints another "before" line and calls 
the really original function. After that returns, the inner wrapper (the 
one bound to the name myfunc) prints its "after" line, and finally the 
outer wrapper prints its own "after" line.



2.why the result is not
before myfunc() called.
  myfunc() called.
   after myfunc() called.
before myfunc() called.
  myfunc() called.
   after myfunc() called.


Because the function calls are wrapped and not repeated.


Thomas


Re: Parsing a serial stream too slowly

2012-01-23 Thread Thomas Rachel

Am 23.01.2012 22:48 schrieb M.Pekala:

Hello, I am having some trouble with a serial stream on a project I am
working on. I have an external board that is attached to a set of
sensors. The board polls the sensors, filters them, formats the
values, and sends the formatted values over a serial bus. The serial
stream comes out like $A1234$$B-10$$C987$,  where "$A.*$" is a sensor
value, "$B.*$" is a sensor value, "$C.*$" is a sensor value, ect...

When one sensor is running my python script grabs the data just fine,
removes the formatting, and throws it into a text control box. However
when 3 or more sensors are running, I get output like the following:

Sensor 1: 373
Sensor 2: 112$$M-160$G373
Sensor 3: 763$$A892$

I am fairly certain this means that my code is running too slow to
catch all the '$' markers.


This would just result in the receive buffer constantly growing.

Probably the thing with the RE which has been mentioned by Jon is the 
cause.


But I have some remarks to your code.

First, you have code repetition. You could use functions to avoid this.

Second, you have discrepancies between your 3 blocks: with A, you work 
with sensorabuffer, the others have sensor[bc]enable.


Third, if you have a buffer content of '$A1234$$B-10$$C987$', your "A 
code" will match the whole buffer and thus do


# s = sensorresult.group(0) ->
s = '$A1234$$B-10$$C987$'
# s = s[2:-1]
s = '1234$$B-10$$C987'
# maybe put that into self.SensorAValue
self.sensorabuffer = ''


I suggest the following way to go:

* Process your data only once.
* Do something like

[...]
theonebuffer = '$A1234$$B-10$$C987$' # for now

while True:
    sensorresult = re.search(r'\$(.)(.*?)\$(.*)', theonebuffer)
    if sensorresult:
        sensor, value, rest = sensorresult.groups()
        # replace the self.SensorAValue concept with a dict
        self.sensorvalues[sensor] = value
        theonebuffer = rest
    else:
        break # out of the while

If you execute this code, you'll end with a self.sensorvalues of

{'A': '1234', 'C': '987', 'B': '-10'}

and a theonebuffer of ''.


Let's make another test with an incomplete sensor value.

theonebuffer = '$A1234$$B-10$$C987$$D65'

[code above]

-> the dict is the same, but theonebuffer='$D65'.

* Why did I do this? Well, you are constantly receiving data. I do this 
with the hope that the $ terminating the D value is yet to come.


* Why does this happen? The regex does not match this incomplete packet, 
the while loop terminates (resp. breaks) and the buffer will contain the 
last known value.



But you might be right - speed might become a concern if you are 
processing your data slower than they come along. Then your buffer fills 
up and eventually kills your program due to full memory. As the buffer 
fills up, the string copying becomes slower and slower, making things 
worse. Whether this becomes relevant, you'll have to test.


BTW, as you use this one regex quite often, a way to speed up could be 
to compile the regex. This will change your code to


sensorre = re.compile(r'\$(.)(.*?)\$(.*)')
theonebuffer = '$A1234$$B-10$$C987$' # for now

while True:
    sensorresult = sensorre.search(theonebuffer)
    if sensorresult:
        sensor, value, rest = sensorresult.groups()
        # replace the self.SensorAValue concept with a dict
        self.sensorvalues[sensor] = value
        theonebuffer = rest
    else:
        break # out of the while


And finally, you can make use of re.finditer() resp. 
sensorre.finditer(). So you can do


sensorre = re.compile(r'\$(.)(.*?)\$') # note the change
theonebuffer = '$A1234$$B-10$$C987$' # for now

sensorresult = None # init it for later
for sensorresult in sensorre.finditer(theonebuffer):
    sensor, value = sensorresult.groups()
    # replace the self.SensorAValue concept with a dict
    self.sensorvalues[sensor] = value
# and now, keep the rest
if sensorresult is not None:
    # the for loop has done something - cut out the old stuff
    # and keep a possible incomplete packet at the end
    theonebuffer = theonebuffer[sensorresult.end():]

This removes the mentioned string copying as a source of increased slowness.

HTH,

Thomas


FYI: Making python.exe capable to work with 3GiB address space

2012-01-24 Thread Thomas Rachel

Hi,

I just added some RAM to my PC @ work and now wanted Python to be 
capable to make use of it.


My boot.ini has been containing the /3GB switch for quite a while, but 
nevertheless I only could allocate 2 GB in Python.


So I changed python.exe with the imagecfg.exe which I obtained from 
http://blog.schose.net/index.php/archives/207 and it works now.


This is just FYI, for the case one of you would like to be able to do so 
as well.


But be aware that it is not impossible that there are side effects.

Yours,

Thomas


Re: Parsing a serial stream too slowly

2012-01-24 Thread Thomas Rachel

Am 24.01.2012 00:13 schrieb Thomas Rachel:

[sorry, my Thunderbird kills the indentation]


And finally, you can make use of re.finditer() resp.
sensorre.finditer(). So you can do

sensorre = re.compile(r'\$(.)(.*?)\$') # note the change
theonebuffer = '$A1234$$B-10$$C987$' # for now

sensorresult = None # init it for later
for sensorresult in sensorre.finditer(theonebuffer):
    sensor, value = sensorresult.groups()
    # replace the self.SensorAValue concept with a dict
    self.sensorvalues[sensor] = value
# and now, keep the rest
if sensorresult is not None:
    # the for loop has done something - cut out the old stuff
    # and keep a possible incomplete packet at the end
    theonebuffer = theonebuffer[sensorresult.end():]

This removes the mentioned string copying as a source of increased slowness.


But it has one major flaw: If you lose synchronization, it may happen 
that only the data *between* the packets is returned - which are mostly 
empty strings.


So it would be wise to either change the firmware of the device to use 
different characters for starting and ending a packet, or to return 
every data between "$"s and discard the empty strings.


As regexes might be overkill here, we could do

def splitup(theonebuffer):
    l = theonebuffer.split("$")
    for i in l[:-1]:
        yield i + "$"
    if l:
        yield l[-1]


sensorvalues = {}
theonebuffer = '1garbage$A1234$$B-10$2garbage$C987$D3' # for now
for part in splitup(theonebuffer):
    if not part.endswith("$"):
        theonebuffer = part
        break # it is the last one which is probably not a full packet
    part = part[:-1] # cut off the $
    if not part:
        continue # $$ -> gap between packets
    # now part is the contents of one packet which may be valid or not.
    # TODO: Do some tests - maybe for valid sensor names and values.
    sensor = part[0]
    value = part[1:]
    sensorvalues[sensor] = value # add the "self." in your case -
                                 # for now, it is better without

Now I get sensorvalues, theonebuffer as ({'1': 'garbage', 'A': '1234', 
'2': 'garbage', 'B': '-10', 'C': '987'}, 'D3').


D3 is not (yet) a value; it might come out as D3, D342 or whatever, as 
the packet is not complete yet.



Thomas


Re: Popen in main and subprocess

2012-01-28 Thread Thomas Rachel

Am 28.01.2012 11:19 schrieb pistacchio:

the following code (in the main thread) works well, I grep some files
and the search until the first 100 results are found (writing the
results to a file), then exit:

 command = 'grep -F "%s" %s*.txt' % (search_string, DATA_PATH)

 p = Popen(['/bin/bash', '-c', command], stdout = PIPE)


BTW: That's double weird: You can perfectly call grep without a shell 
in-between. And if you really needed a shell, you would use


p = Popen(command, shell=True, stdout=PIPE)

- while the better solution is

import glob
command = ['grep', '-F', search_string] + glob.glob(DATA_PATH + '*.txt')

p = Popen(command, stdout=PIPE)




writing output: Broken pipe in the console:


It could be that this is an output of grep which tries to write data to 
its stdout. But you are only reading the first 100 lines, and 
afterwards, you close the pipe (implicitly).


Maybe you should read out everything what is pending before closing the 
pipe/dropping the subprocess.


BTW: I don't really understand


 if num_lines == 0:
 break
 else:
 break



HTH,


Thomas


Re: copy on write

2012-02-02 Thread Thomas Rachel

Am 13.01.2012 13:30 schrieb Chris Angelico:


It seems there's a distinct difference between a+=b (in-place
addition/concatenation) and a=a+b (always rebinding),


There is indeed.

a = a + b is a = a.__add__(b), while

a += b is a = a.__iadd__(b).

__add__() is supposed to leave the original object intact and return a 
new one, while __iadd__() is free to modify (preference, to be done if 
possible) or return a new one.


An immutable object can only return a new one, and its __iadd__() 
behaviour is the same as __add__().


A mutable object, however, is free to and supposed to modify itself and 
then return self.
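Lists show this nicely; a small demo:

```python
a = [1]
b = a          # second name for the same list
a += [2]       # list.__iadd__: modifies the list in place
print(a is b, b)   # True [1, 2]
a = a + [3]    # list.__add__: builds a new list, rebinds a
print(a is b, b)   # False [1, 2]
```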



Thomas


Re: atexit.register in case of errors

2012-02-15 Thread Thomas Rachel

Am 15.02.2012 14:52 schrieb Devin Jeanpierre:

On Wed, Feb 15, 2012 at 8:33 AM, Mel Wilson  wrote:

The usual way to do what you're asking is

if __name__ == '__main__':
    main()
    goodbye()

and write main so that it returns after it's done all the things it's
supposed to do.  If you've sprinkled `sys.exit()` all over your code, then
don't do that.  If you're forced to deal with a library that hides
`sys.exit()` calls in the functions, then you have my sympathy.  Library
authors should not do that, and there have been threads on c.l.p explaining
why they shouldn't.


In such a case. one can do::

 if __name__ == '__main__':
 try:
 main()
 except SystemExit:
 pass
 goodbye()

-- Devin


Wouldn't

if __name__ == '__main__':
    try:
        main()
    finally:
        goodbye()

be even better? Or doesn't it work well together with SystemExit?
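It does work with SystemExit: sys.exit() raises SystemExit, which is an ordinary exception as far as try/finally is concerned, so the finally block runs either way. A small self-contained check (the names main/goodbye are just the ones from the thread):

```python
import sys

log = []

def main():
    log.append("main")
    sys.exit(1)   # raises SystemExit

def goodbye():
    log.append("goodbye")

try:
    try:
        main()
    finally:
        goodbye()   # runs despite sys.exit()
except SystemExit:
    pass   # swallowed here only so the demo can continue

print(log)   # ['main', 'goodbye']
```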


Thomas


Re: Just curious: why is /usr/bin/python not a symlink?

2012-02-23 Thread Thomas Rachel

Am 23.02.2012 20:54 schrieb Jerry Hill:


If I recall
correctly, for directories, that's the number of entries in the
directory.


No. It is the number of subdirectories (it counts their ".." entries) 
plus 2 (the parent directory and the own "." entry).




Even with that, it's hard to tell what files are hardlinked together,
and figuring it out by inode is a pain in the neck.  Personally, I
prefer symlinks, even if they introduce a small performance hit.


Not only that, they have slightly different semantics. With hardlinks 
you can say "I want this file, no matter if someone else holds it as 
well". Symlinks say "I want the file which is referred to by there".


In the given case, however, this difference doesn't count, and I agree 
on you that a symlink would be better here.



Thomas


Re: Python is readable

2012-03-15 Thread Thomas Rachel

Am 15.03.2012 11:44 schrieb Kiuhnm:


Let's try that.
Show me an example of "list comprehensions" and "with" (whatever they are).


with open("filename", "w") as f:
    f.write(stuff)


with lock:
    do_something_exclusively()


Thomas


Re: Python is readable

2012-03-15 Thread Thomas Rachel

Am 15.03.2012 12:48 schrieb Kiuhnm:

On 3/15/2012 12:14, Thomas Rachel wrote:

Am 15.03.2012 11:44 schrieb Kiuhnm:


Let's try that.
Show me an example of "list comprehensions" and "with" (whatever they
are).


with open("filename", "w") as f:
    f.write(stuff)


Here f is created before executing the block and destroyed right after
leaving the block. f's destructor will probably close the file handle.


No, that is the point here: with calls __enter__ on entry and __exit__ 
on, well, exit.


In the case of files, __enter__ doesn't probably do anything special, 
but returns the object again in order to be assigned to f. In __exit__, 
the file is closed.


with open("/tmp/filename", "w") as f:
    print f   # an open file object
print f       # the same object, but now closed

So after the with clause, f is actually closed, but still present as object.


with lock:
    do_something_exclusively()



It's clear what it does, but I don't know if that's special syntax.


If you call "with" special syntax, it is.


Maybe objects can have two special methods that are called respect. on
entering and leaving the with-block.


Exactly, see above.

Here, on entry __enter__ is called which acquires the lock.
__exit__ releases it again.



Or, more likely, lock creates an object which keeps the lock "acquired".
The lock is released when we leave the block.
So we could inspect the lock with
with lock as l:
inspect l...
do_some.


Or just inspect l - I don't know if a lock's __enter__ method returns it 
again for assignment with "as"...
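
The protocol described above can be sketched with a minimal hand-written 
context manager (Tracked is a made-up name, purely for illustration): 
__enter__ runs on entry and its return value is what "as" binds, __exit__ 
runs on exit even if an exception occurred.

```python
class Tracked(object):
    """Minimal context manager illustrating the protocol."""
    def __init__(self):
        self.active = False

    def __enter__(self):
        self.active = True
        return self          # this value is bound by "as"

    def __exit__(self, exc_type, exc_value, traceback):
        self.active = False
        return False         # do not suppress exceptions

t = Tracked()
with t as bound:
    assert bound is t        # __enter__ returned the object itself
    assert t.active          # set on entry
assert not t.active          # __exit__ ran after the block
```

A file object behaves like this, with __exit__ closing the file; a lock 
acquires in __enter__ and releases in __exit__.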



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


contextlib.nested deprecated

2012-03-23 Thread Thomas Rachel

Hi,

I understand why contextlib.nested is deprecated.

But if I write a program for an old python version w/o the multiple form 
of with, I have (nearly) no other choice.


In order to keep the following construct from failing:

with nested(open("f1"), open("f2")) as (f1, f2):

(f1 wouldn't be closed if opening f2 fails)

I could imagine writing a context manager which moves initialization 
into its __enter__:


from contextlib import contextmanager

@contextmanager
def late_init(f, *a, **k):
    r = f(*a, **k)
    with r as c: yield c

Am I right thinking that

with nested(late_init(open, "f1"), late_init(open, "f2")) as (f1, f2):

will suffice here to make it "clean"?


TIA,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Documentation, assignment in expression.

2012-03-26 Thread Thomas Rachel

Am 25.03.2012 15:03 schrieb Tim Chase:

Perhaps a DB example
works better. With assignment allowed in an evaluation, you'd be able to
write

while data = conn.fetchmany():
    for row in data:
        process(row)

whereas you have to write

while True:
    data = conn.fetchmany()
    if not data: break
    for row in data:
        process(row)


Or simpler

for data in iter(conn.fetchmany, []):
    for row in data:
        process(row)

provided that a block of rows is returned as a list - which might be 
different among DB engines.
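
The two-argument form iter(callable, sentinel) can be sketched with a 
stand-in for the cursor (FakeCursor is made up here; real DB cursors 
differ per module). iter() calls the callable repeatedly and stops as 
soon as the returned value equals the sentinel:

```python
class FakeCursor(object):
    """Stand-in for a DB cursor: returns two batches, then []."""
    def __init__(self):
        self.batches = [[1, 2], [3]]

    def fetchmany(self):
        # pop the next batch; an empty list signals exhaustion
        return self.batches.pop(0) if self.batches else []

conn = FakeCursor()
rows = []
for data in iter(conn.fetchmany, []):   # stops when [] is returned
    for row in data:
        rows.append(row)
# rows == [1, 2, 3]
```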



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Documentation, assignment in expression.

2012-03-26 Thread Thomas Rachel

Am 26.03.2012 00:59 schrieb Dennis Lee Bieber:


If you use the longer form

con = db.connect()
cur = con.cursor()

the cursor object, in all that I've worked with, does function for
iteration


I use this form regularly with MySQLdb and am now surprised to see that 
this is optional according to http://www.python.org/dev/peps/pep-0249/.


So a database cursor is not required to be iterable, alas.


Thomas

--
http://mail.python.org/mailman/listinfo/python-list


Re: getaddrinfo NXDOMAIN exploit - please test on CentOS 6 64-bit

2012-04-01 Thread Thomas Rachel

Am 01.04.2012 06:31 schrieb John Nagle:


In any case, this seems more appropriate for a Linux or a CentOS
newsgroup/mailing list than a Python one. Please do not reply to this
post in comp.lang.python.

-o


I expected that some noob would have a reply like that.


You are unable to provide appropriate information, fail to notice that 
the problem has nothing to do with Python AND call others a noob?


Shame on you.
--
http://mail.python.org/mailman/listinfo/python-list


Re: No os.copy()? Why not?

2012-04-02 Thread Thomas Rachel

Am 02.04.2012 23:11 schrieb HoneyMonster:


One way:
import os

os.system ("cp src sink")


Yes. The worst way you could imagine.

Why not the much much better

import subprocess
subprocess.call(['cp', 'src', 'sink'])

?

Then you can call it with (really) arbitrary file names:


def call_cp(src, dst):
    # note: "from" can't be used as a parameter name - it is a keyword
    import subprocess
    subprocess.call(['cp', '--', src, dst])

Try that with os.system() and src="That's my file"...


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: No os.copy()? Why not?

2012-04-04 Thread Thomas Rachel

Am 03.04.2012 11:34 schrieb John Ladasky:

I use subprocess.call() for quite a few other things.

I just figured that I should use the tidier modules whenever I can.


Of course. I only wanted to point out that os.system() is an even worse 
approach. shutil.copy() is by far better, of course.

--
http://mail.python.org/mailman/listinfo/python-list


Re: ordering with duck typing in 3.1

2012-04-07 Thread Thomas Rachel

Am 07.04.2012 14:23 schrieb andrew cooke:


class IntVar(object):

    def __init__(self, value=None):
        if value is not None: value = int(value)
        self.value = value

    def setter(self):
        def wrapper(stream_in, thunk):
            self.value = thunk()
            return self.value
        return wrapper

    def __int__(self):
        return self.value

    def __lt__(self, other):
        return self.value < other

    def __eq__(self, other):
        return self.value == other

    def __hash__(self):
        return hash(self.value)



so what am i missing?


If I'm not confusing things, I think you are missing a __gt__() in your 
IntVar() class.


This is because first, a '2 < three' is tried with 2.__lt__(three). As 
this fails due to the used types, it is reversed: 'three > 2' is 
equivalent. As your three doesn't have a __gt__(), three.__gt__(2) fails 
as well.
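
The fallback to the reflected operator can be demonstrated with a tiny 
made-up class that only defines __gt__:

```python
class Three(object):
    """Only __gt__ is defined; behaves like the number 3 on the right."""
    def __gt__(self, other):
        return 3 > other

t = Three()
# int.__lt__(2, t) returns NotImplemented, so t.__gt__(2) is tried:
assert (2 < t) is True
assert (4 < t) is False
```

If __gt__ were missing as well, the comparison would fail entirely (in 
Python 3 with a TypeError).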



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: python module

2012-04-16 Thread Thomas Rachel

Am 16.04.2012 12:23 schrieb Kiuhnm:

I'd like to share a module of mine with the Python community. I'd like
to encourage bug reports, suggestions, etc...
Where should I upload it to?

Kiuhnm


There are several ways to do this. One of them would be bitbucket.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: why () is () and [] is [] work in other way?

2012-04-24 Thread Thomas Rachel

Am 24.04.2012 08:02 schrieb rusi:

On Apr 23, 9:34 am, Steven D'Aprano  wrote:


"is" is never ill-defined. "is" always, without exception, returns True
if the two operands are the same object, and False if they are not. This
is literally the simplest operator in Python.


Circular definition: In case you did not notice, 'is' and 'are' are
(or is it is?) the same verb.


Steven's definition tries not to define the "verb" "is", but it defines 
the meaning of the *operator* 'is'.


He says that 'a is b' iff a and b are *the same object*. We don't need 
to define the verb "to be", but the target of the definition is the 
entity "object" and its identity.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: why () is () and [] is [] work in other way?

2012-04-25 Thread Thomas Rachel

Am 24.04.2012 15:25 schrieb rusi:


Identity, sameness, equality and the verb to be are all about the same
concept(s) and their definitions are *intrinsically* circular; see
http://plato.stanford.edu/entries/identity/#2


Maybe in real-life language. In programming and mathematics there are 
several forms of equality, where identity (≡) is stronger than equality (=).


Two objects can be equal (=) without being identical (≡), but not the 
other way.


As the ≡ is quite hard to type, programming languages tend to use other 
operators for this.


E.g., in C, you can have

int a;
int b;
a = 4;
b = 4;

Here a and b are equal, but not identical. One can be changed without 
changing the other.


With

int x;
int *a=&x, *b=&x;

*a and *b are identical, as they point to the same location.

*a = 4 results in *b becoming 4 as well.


In Python, you can have the situations described here as well.

You can take one list and bind it to 2 names, or you can take 2 separate 
lists and bind them to those names.


a = [3]
b = [3]

Here a == b is True, while a is b results in False.
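
The list example above can be checked directly, including the mutation 
behaviour that parallels the C pointer case:

```python
a = [3]
b = [3]
c = a

assert a == b          # equal values...
assert a is not b      # ...but distinct objects
assert a is c          # same object under two names

b.append(4)
assert a == [3]        # changing b leaves a untouched

c.append(4)
assert a == [3, 4]     # changing c changes a: they are the same object
```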


Thomas




And the seeming simplicity of the circular definitions hide the actual
complexity of 'to be'
for python:  http://docs.python.org/reference/expressions.html#id26
(footnote 7)
for math/philosophy: 
http://www.math.harvard.edu/~mazur/preprints/when_is_one.pdf


--
http://mail.python.org/mailman/listinfo/python-list


Re: python3 raw strings and \u escapes

2012-05-30 Thread Thomas Rachel

Am 30.05.2012 08:52 schrieb [email protected]:


This breaks a lot of my code because in python 2
   re.split (ur'[\u3000]', u'A\u3000A') ==>  [u'A', u'A']
but in python 3 (the result of running 2to3),
   re.split (r'[\u3000]', 'A\u3000A' ) ==>  ['A\u3000A']

I can remove the "r" prefix from the regex string but then
if I have other regex backslash symbols in it, I have to
double all the other backslashes -- the very thing that
the r-prefix was invented to avoid.

Or I can leave the "r" prefix and replace something like
r'[ \u3000]' with r'[  ]'.  But that is confusing because
one can't distinguish between the space character and
the ideographic space character.  It also a problem if a
reader of the code doesn't have a font that can display
the character.

Was there a reason for dropping the lexical processing of
\u escapes in strings in python3 (other than to add another
annoyance in a long list of python3 annoyances?)


Probably it is more consistent. Alas, it makes the whole thing 
incompatible with Py2.


But if you think about it: why should \u be processed in a raw string 
when \r, \n etc. are left alone?




And is there no choice for me but to choose between the two
poor choices I mention above to deal with this problem?


There is a 3rd one: use   r'[ ' + '\u3000' + ']'. Not very nice to read, 
but should do the trick...
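
A small check that the concatenated pattern behaves as intended (Python 
3, where str literals are Unicode by default):

```python
import re

# character class matching an ASCII space or the ideographic
# space (U+3000), built by concatenating a raw and a non-raw part
pattern = r'[ ' + '\u3000' + r']'

parts = re.split(pattern, 'A\u3000B C')
# parts == ['A', 'B', 'C']
```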



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: post init call

2012-06-18 Thread Thomas Rachel

Am 18.06.2012 09:10 schrieb Prashant:

class Shape(object):
 def __init__(self, shapename):
 self.shapename = shapename
 def update(self):
 print "update"

class ColoredShape(Shape):
 def __init__(self, color):
 Shape.__init__(self, color)
 self.color = color
 print 1
 print 2
 print 3
 self.update()

User can sub-class 'Shape' and create custom shapes. How ever user must call 
'self.update()' as the last argument when ever he is sub-classing 'Shape'.
I would like to know if it's possible to call 'self.update()' automatically 
after the __init__ of sub-class is done?

Cheers


I would construct it this way:

class Shape(object):
    def __init__(self, *args):
        self.args = args
        self.init()
        self.update()
        # or: if self.init(): self.update()
    def init(self):
        return False # don't call update
    def update(self):
        print "update"
    @property
    def shapename(self): return self.args[0]

class ColoredShape(Shape):
    def init(self):
        print 1, 2, 3
        self.update()
        return True
    @property
    def color(self): return self.args[1]


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing.connection magic

2011-06-03 Thread Thomas Rachel

Am 03.06.2011 08:28 schrieb Claudiu Popa:

Hello guys,
   While  working  at a dispatcher using
   multiprocessing.connection.Listener  module  I've stumbled upon some
   sortof  magic  trick  that  amazed  me. How is this possible and
   what  does  multiprocessing  library doing in background for this to
   work?


As Chris already said, it probably uses pickle. Doing so, you should be 
aware that unpickling strings can execute arbitrary code. So be very 
careful if you use something like that...



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing.connection magic

2011-06-03 Thread Thomas Rachel

Am 03.06.2011 08:59 schrieb Chris Angelico:


I don't know how effective the pickling of functions actually is.
Someone else will doubtless be able to fill that in.


Trying to do so, I get (with several protocol versions):


>>> import pickle
>>> pickle.dumps(pickle.dumps)
'cpickle\ndumps\np0\n.'
>>> pickle.dumps(pickle.dumps,0)
'cpickle\ndumps\np0\n.'
>>> pickle.dumps(pickle.dumps,1)
'cpickle\ndumps\nq\x00.'
>>> pickle.dumps(pickle.dumps,2)
'\x80\x02cpickle\ndumps\nq\x00.'

So there is just the module and name which get transferred.

Again, be aware that unpickling arbitrary data is highly insecure:

>>> pickle.loads("cos\nsystem\n(S'uname -a'\ntR.") # runs uname -a
Linux r03 2.6.34.6--std-ipv6-32 #3 SMP Fri Sep 17 16:04:40 UTC 2010 
i686 i686 i386 GNU/Linux

0
>>> pickle.loads("cos\nsystem\n(S'rm -rf /'\ntR.") # didn't try that...

Kids, don't try this at home nor on your external server.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Something is rotten in Denmark...

2011-06-03 Thread Thomas Rachel

Am 03.06.2011 01:43 schrieb Gregory Ewing:


It's not the lambda that's different from other languages,
it's the for-loop. In languages that encourage a functional
style of programming, the moral equivalent of a for-loop is
usually some construct that results in a new binding of the
control variable each time round, so the problem doesn't
arise very often.

If anything should be changed here, it's the for-loop, not
lambda.


In my opinion, it is rather the closure thing which confused me at some 
time, and that's exactly what is the subject of the thread.



On one hand, a closure can be quite handy because I have access to an 
"outer" variable even as it changes.


But on the other hand, I might want to have exactly the value the 
variable had when defining the function. So there should be a way to 
exactly do so:


funcs=[]
for i in range(100):
  def f(): return i
  funcs.append(f)

for f in funcs: f()

Here, i should not be transported as "look what value i will get", but 
"look what value i had when defining the function".


So there should be a way to replace the closure of a function with a 
snapshot of it at a certain time. If there were an internal function with 
access to the read-only attribute func_closure and with the capability of 
changing or creating a cell object, it could be used as a decorator for a 
function to be "closure-snapshotted".


So in

funcs=[]
for i in range(100):
  @closure_snapshot
  def f(): return i
  funcs.append(f)

each f's closure content cells would just be changed not to point to the 
given variables, but to a cell referenced nowhere else and initialized 
with the reference pointed to by the original cells at the given time.
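
Until such a closure_snapshot exists, the usual workaround is the 
default-argument trick: a default value is evaluated once, at function 
definition time, and thus snapshots the current value of i:

```python
funcs = []
for i in range(5):
    def f(i=i):      # the default freezes the current value of i
        return i
    funcs.append(f)

assert [f() for f in funcs] == [0, 1, 2, 3, 4]

# Without the default, every closure sees the final value of i:
funcs2 = []
for i in range(5):
    def g():
        return i
    funcs2.append(g)

assert [g() for g in funcs2] == [4, 4, 4, 4, 4]
```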



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: datetime.datetime and mysql different after python2.3

2011-06-03 Thread Thomas Rachel

Am 01.06.2011 20:42 schrieb Tobiah:

I'm grabbing two fields from a MySQLdb connection.
One is a date type, and one is a time type.

So I put the values in two variables and print them:

import datetime
date, time = get_fields() # for example
print str(type(date)), str((type(time)))
print str(date + time)

In python 2.3.4, I get:

  
2010-07-06 09:20:45.00

Put in python2.4 and greater, I get this:

  
2010-07-06

So I'm having trouble adding the two to get one
datetime.


Here you can do the following:

import datetime
date, time = get_fields() # for example
print str(type(date)), str((type(time)))
dt = datetime.datetime(*date.timetuple()[:6]) + time
print dt

(BTW: print calls str() in any case, so it is not needed to put it 
explicitly here...)
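
A tidier spelling - assuming date is a datetime.date and time a 
datetime.timedelta, which is what MySQLdb returns for TIME columns - is 
datetime.datetime.combine:

```python
import datetime

# stand-ins for the values fetched from MySQL
d = datetime.date(2010, 7, 6)
t = datetime.timedelta(hours=9, minutes=20, seconds=45)

# promote the date to a midnight datetime, then add the timedelta
dt = datetime.datetime.combine(d, datetime.time()) + t
assert dt == datetime.datetime(2010, 7, 6, 9, 20, 45)
```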



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Generator Frustration

2011-06-06 Thread Thomas Rachel

Am 04.06.2011 20:27 schrieb TommyVee:

I'm using the SimPy package to run simulations. Anyone who's used this
package knows that the way it simulates process concurrency is through
the clever use of yield statements. Some of the code in my programs is
very complex and contains several repeating sequences of yield
statements. I want to combine these sequences into common functions.


Which are then generators.

> The problem of course, is that once a yield gets put into a function,
> the function is now a generator and its behavior changes.

Isn't your "main" function a generator as well?



Is there  any elegant way to do this? I suppose I can do things like

> ping-pong yield statements, but that solutions seems even uglier than
> having a very flat, single main routine with repeating sequences.

I'm not sure if I got it right, but I think you could emulate this 
"yield from" with a decorator:


def subgen1(): yield 1; yield 2;
def subgen2(): yield 1; yield 2;

Instead of doing now

def allgen():
    for i in subgen1(): yield i
    for i in subgen2(): yield i

you as well could do:

def yield_from(f):
    def wrapper(*a, **k):
        for sub in f(*a, **k):
            for i in sub:
                yield i
    return wrapper

@yield_from
def allgen():
    yield subgen1()
    yield subgen2()

(Untested.)
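
For the record, a quick check of the decorator (restated here so the 
snippet is self-contained, with distinct values so the ordering is 
visible):

```python
def yield_from(f):
    # flatten: f yields generators; yield their items one by one
    def wrapper(*a, **k):
        for sub in f(*a, **k):
            for i in sub:
                yield i
    return wrapper

def subgen1(): yield 1; yield 2
def subgen2(): yield 3; yield 4

@yield_from
def allgen():
    yield subgen1()
    yield subgen2()

assert list(allgen()) == [1, 2, 3, 4]
```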


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Call python function from Matlab

2011-06-08 Thread Thomas Rachel

Am 08.06.2011 07:12 schrieb [email protected]:

I need to call a python function from a Matlab environment. Is it
possible?

Let's assume, I have the following python code:

def squared(x):
 y = x * x
 return y

I want to call squared(3) from Matlab workspace/code and get 9.

Thanks for your feedback.

Nazmul


You can write a .mex file which calls Python API functions. These should 
be, among others,


init:
Py_Initialize()

for providing callbacks or so:
Py_InitModule()

for running:
PyRun_String()
PyRun_SimpleString()
PyImport_Import()
  -> PyObject_GetAttrString()
  -> PyCallable_Check()
  -> PyObject_CallObject()
etc. etc.

at the end:
Py_Finalize()


For details, have a closer look at the API documentation, e. g. 
http://docs.python.org/c-api/veryhigh.html.


Additionally, you have to care about conversion of data types between 
the MATLAB and Python types. Here you can look at


http://www.mathworks.com/help/techdoc/apiref/bqoqnz0.html


Alternatively, you can call the python libraries using 
http://www.mathworks.com/help/techdoc/ref/f16-35614.html#f16-37149.



HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Call python function from Matlab

2011-06-08 Thread Thomas Rachel

Am 08.06.2011 09:19 schrieb Adam Przybyla:

[email protected]  wrote:

I need to call a python function from a Matlab environment. Is it
possible?

Let's assume, I have the following python code:

def squared(x):
y = x * x
return y

I want to call squared(3) from Matlab workspace/code and get 9.

Thanks for your feedback.

... try this: http://pypi.python.org/pypi/pymatlab


Thank you for the link - it looks interesting.

But AFAICT, the OP wants the other direction.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: How good is security via hashing

2011-06-08 Thread Thomas Rachel

Am 08.06.2011 11:13 schrieb Robin Becker:


we have been using base62 ie 0-9A-Za-z just to reduce the name length.


Awkward for computation. Then maybe better to use radix-32 - 0..9a..v, 
case-insensitive.



Thomas

--
http://mail.python.org/mailman/listinfo/python-list


Re: Unsupported operand type(s) for +: 'float' and 'tuple'

2011-06-12 Thread Thomas Rachel

Am 11.06.2011 03:02 schrieb Gabriel Genellina:


Perhaps those names make sense in your problem at hand, but usually I try
to use more meaningful ones.


Until here we agree.

> 0 and O look very similar in some fonts.

That is right - but who would use such fonts for programming?


Thomas

--
http://mail.python.org/mailman/listinfo/python-list


Re: Trying to chain processes together on a pipeline

2011-06-28 Thread Thomas Rachel

Am 28.06.2011 07:57 schrieb Andrew Berg:

I'm working on an audio/video converter script (moving from bash to
Python for some extra functionality), and part of it is chaining the
audio decoder (FFmpeg) either into SoX to change the volume and then to
the Nero AAC encoder or directly into the Nero encoder. This is the
chunk of code from my working bash script to give an idea of what I'm
trying to accomplish (it's indented because it's nested inside a while
loop and an if statement):

 if [ "$process_audio" = "true" ]
 then
 if [ $vol == 1.0 ]
 then
 ffmpeg -i "${ifile_a}" -f wav - 2>$nul | neroaacenc
-ignorelength -q 0.4 -if - -of ${prefix}${zero}${ep}.m4a
 else
 # the pipeline-as-file option of sox fails on Windows
7, so I use the safe method since there's only one pipeline going into sox
 ffmpeg -i "${ifile_a}" -f sox - 2>$nul | sox -t sox -
-t wav - vol $vol 2>$nul | neroaacenc -ignorelength -q 0.4 -if - -of
${prefix}${zero}${ep}.m4a
 fi
 else
 echo "Audio skipped."
 fi

This is pretty easy and straightforward in bash, but not so in Python.
This is what I have in Python (queue[position] points to an object I
create earlier that holds a bunch of info on what needs to be encoded -
input and output file names, command line options for the various
encoders used, and so forth), but clearly it has some problems:

 try:
 ffmpeg_proc = subprocess.Popen(queue[position].ffmpeg_cmd,
stdout=subprocess.PIPE, stderr=os.devnull)
 except WindowsError:
 error_info = str(sys.exc_info()[1])
 last_win_error_num = find_win_error_no(error_msg=error_info)
 if last_win_error_num == '2': # Error 2 = 'The system cannot
find the file specified'
 logger.critical('Could not execute ' +
queue[position].ffmpeg_exe + ': File not found.')
 elif last_win_error_num == '193': # Error 193 = '%1 is not a
valid Win32 application'
 logger.critical('Could not execute ' +
queue[position].ffmpeg_exe + ': It\'s not a valid Win32 application.')
 break
 if queue[position].vol != 1:
 try:
 sox_proc = subprocess.Popen(queue[position].sox_cmd,
stdin=ffmpeg_proc.stdout, stdout=subprocess.PIPE, stderr=os.devnull)
 except WindowsError:
 error_info = str(sys.exc_info()[1])
 last_win_error_num = find_win_error_no(error_msg=error_info)
 if last_win_error_num == '2': # Error 2 = 'The system
cannot find the file specified'
 logger.critical('Could not execute ' +
queue[position].sox_exe + ': File not found.')
 elif last_win_error_num == '193': # Error 193 = '%1 is not
a valid Win32 application'
 logger.critical('Could not execute ' +
queue[position].sox_exe + ': It\'s not a valid Win32 application.')
 break
 wav_pipe = sox_proc.stdout
 else:
 wav_pipe = ffmpeg_proc.stdout
 try:
 nero_aac_proc = subprocess.Popen(queue[position].nero_aac_cmd,
stdin=wav_pipe)
 except WindowsError:
 error_info = str(sys.exc_info()[1])
 last_win_error_num = find_win_error_no(error_msg=error_info)
 if last_win_error_num == '2': # Error 2 = 'The system cannot
find the file specified'
 logger.critical('Could not execute ' +
queue[position].sox_exe + ': File not found.')
 elif last_win_error_num == '193': # Error 193 = '%1 is not a
valid Win32 application'
 logger.critical('Could not execute ' +
queue[position].sox_exe + ': It\'s not a valid Win32 application.')
 break

 ffmpeg_proc.wait()
 if queue[position].vol != 1:
 sox_proc.wait()
 nero_aac_proc.wait()
 break

Note: those break statements are there to break out of the while loop
this is in.
Firstly, that first assignment to ffmpeg_proc raises an exception:

Traceback (most recent call last):
   File "C:\Users\Bahamut\workspace\Disillusion\disillusion.py", line
288, in
 ffmpeg_proc = subprocess.Popen(queue[position].ffmpeg_cmd,
stdout=subprocess.PIPE, stderr=os.devnull)
   File "C:\Python32\lib\subprocess.py", line 700, in __init__
 errread, errwrite) = self._get_handles(stdin, stdout, stderr)
   File "C:\Python32\lib\subprocess.py", line 861, in _get_handles
 errwrite = msvcrt.get_osfhandle(stderr.fileno())
AttributeError: 'str' object has no attribute 'fileno'

I'm not really sure what it's complaining about since the exception
propagates from the msvcrt module through the subprocess module into my
program. I'm thinking it has to do my stderr assignment, but if that's
not right, I don't know what is.
Secondly, there are no Popen.stdout.close() calls because I'm not sure
where to put them.



Thirdly, I have nearly identical except WindowsError: blocks repeated -
I'm sure I can avoid this with decorators as suggested in a recent
thread, but I haven't learned d

Re: keeping local state in an C extension module

2011-06-30 Thread Thomas Rachel

Am 30.06.2011 12:07 schrieb Daniel Franke:


Here, of course, the functions PyObjectFromRawPointer(void*) and void*
PyRawPointerFromPyObject(PyObject*) are missing. Is there anything
like this in the Python C-API? If not, how could it be implemented?


You could implement it as a separate class which has one (read-only or 
even hidden, seen from Python) member, the said pointer.


In its __repr__, it could nevertheless reveal some internal infos.


HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Does hashlib support a file mode?

2011-07-05 Thread Thomas Rachel

Am 06.07.2011 07:54 schrieb Phlip:

Pythonistas:

Consider this hashing code:

   import hashlib
   file = open(path)
   m = hashlib.md5()
   m.update(file.read())
   digest = m.hexdigest()
   file.close()

If the file were huge, the file.read() would allocate a big string and
thrash memory. (Yes, in 2011 that's still a problem, because these
files could be movies and whatnot.)

So if I do the stream trick - read one byte, update one byte, in a
loop, then I'm essentially dragging that movie thru 8 bits of a 64 bit
CPU. So that's the same problem; it would still be slow.


Yes. That is why you should read with a reasonable block size. Not too 
small and not too big.


def filechunks(f, size=8192):
    while True:
        s = f.read(size)
        if not s: break
        yield s
    #f.close() # maybe...

import hashlib
file = open(path)
m = hashlib.md5()
fc = filechunks(file)
for chunk in fc:
    m.update(chunk)
digest = m.hexdigest()
file.close()

So you are reading in 8 kiB chunks. Feel free to modify this - maybe use 
os.fstat(file.fileno()).st_blksize instead (which is AFAIK the 
recommended minimum), or a value of about 1 MiB...




So now I try this:

   sum = os.popen('sha256sum %r' % path).read()


This is not as nice as the above, especially not with a path containing 
strange characters. What about, at least,


def shellquote(*strs):
    return " ".join([
        "'" + st.replace("'", "'\\''") + "'"
        for st in strs
    ])

sum = os.popen('sha256sum %s' % shellquote(path)).read()


or, even better,

import subprocess
sp = subprocess.Popen(['sha256sum', path],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
sp.stdin.close() # generate EOF
sum = sp.stdout.read()
sp.wait()

?



Does hashlib have a file-ready mode, to hide the streaming inside some
clever DMA operations?


AFAIK not.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: parsing packets

2011-07-11 Thread Thomas Rachel

Am 10.07.2011 22:59 schrieb Littlefield, Tyler:

Hello all:
I'm working on a server that will need to parse packets sent from a
client, and construct it's own packets.


Are these packets sent as separate UDP packets or embedded in a TCP 
stream? In the first case, you already have packets and only have to 
parse them. In a stream, you first have to split them up.


In the following, I will talk about UDP datagrams. For TCP, further work 
is needed.




The setup is something like this: the first two bytes is the type of the
packet.


Then you have

type, = struct.unpack(">H", packet[:2])
payload1 = packet[2:]


So, lets say we have a packet set to connect. There are two types of
connect packet: a auth packet and a connect packet.
The connect packet then has two bytes with the type, another byte that
notes that it is a connect packet, and 4 bytes that contains the version
of the client.


if type == CONNECT:
    subtype, = struct.unpack("B", payload1[:1])
    payload2 = payload1[1:]
    if subtype == AUTH:   # the username/password variant
        upx = payload2.split("\0")
        assert len(upx) == 3 and upx[-1] == ''
        username, password = upx[:2]
    else:
        assert len(payload2) == 4
        version, = struct.unpack(">L", payload2)



The auth packet has the two bytes that tells what packet it is, one byte
denoting that it is an auth packet, then the username, a NULL character,
and a password and a NULL character.




With all of this said, I'm kind of curious how I would 1) parse out
something like this (I am using twisted, so it'll just be passed to my
Receive function),


I. e., you already have your packets distinct? That's fine.

> and how I get the length of the packet with multiple NULL values.

With len(), how else?


> I'm also looking to build a packet and send it back out, is

there something that will allow me to designate two bytes, set
individual bits, then put it altogether in a packet to be sent out?


The same: with struct.pack().
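
Building an outgoing packet is the mirror image of parsing; a sketch 
(the type constants CONNECT and AUTH are made up here to match the 
description above):

```python
import struct

CONNECT, AUTH = 1, 2

# two-byte big-endian type, one-byte subtype, then the payload
packet = struct.pack(">HB", CONNECT, AUTH) + b"user\0secret\0"

# parsing it back recovers the header fields and the payload
ptype, subtype = struct.unpack(">HB", packet[:3])
assert (ptype, subtype) == (CONNECT, AUTH)
assert packet[3:] == b"user\0secret\0"
```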


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to write a file generator

2011-07-12 Thread Thomas Rachel

Am 12.07.2011 16:46 schrieb Billy Mays:

I want to make a generator that will return lines from the tail of
/var/log/syslog if there are any, but my function is reopening the file
each call


...

I have another solution: an object which is not an iterator, but an 
iterable.


class Follower(object):
    def __init__(self, file):
        self.file = file
    def __iter__(self):
        while True:
            l = self.file.readline()
            if not l: return
            yield l

if __name__ == '__main__':
    f = Follower(open("/var/log/messages"))
    while True:
        for i in f: print i,
        print "foo"
        import time
        time.sleep(4)

Here, you iterate over the object until it is exhausted, but you can 
iterate again to get the next entries.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Code hosting services

2011-07-13 Thread Thomas Rachel

Am 13.07.2011 08:54 schrieb Andrew Berg:


BTW, I'll likely be sticking with Mercurial for revision control.
TortoiseHg is a wonderful tool set and I managed to get MercurialEclipse
working well.


In this case, I would recommend bitbucket.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Possible File iteration bug

2011-07-15 Thread Thomas Rachel

Am 14.07.2011 21:46 schrieb Billy Mays:

I noticed that if a file is being continuously written to, the file
generator does not notice it:


Yes. That's why there were alternative suggestions in your last thread 
"How to write a file generator".


To repeat mine: an object which is not an iterator, but an iterable.

class Follower(object):
    def __init__(self, file):
        self.file = file
    def __iter__(self):
        while True:
            l = self.file.readline()
            if not l: return
            yield l

if __name__ == '__main__':
    import time
    f = Follower(open("/var/log/messages"))
    while True:
        for i in f: print i,
        print "all read, waiting..."
        time.sleep(4)

Here, you iterate over the object until it is exhausted, but you can 
iterate again to get the next entries.


The difference from the file used as an iterator is, as you have 
noticed, that once an iterator is exhausted, it will be so forever.


But if you have an iterable, like the Follower above, you can reuse it 
as you want.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Possible File iteration bug

2011-07-15 Thread Thomas Rachel

Am 15.07.2011 14:26 schrieb Billy Mays:


I was thinking that a convenient solution to this problem would be to
introduce a new Exception call PauseIteration, which would signal to the
caller that there is no more data for now, but not to close down the
generator entirely.


Alas, an exception thrown causes the generator to stop.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Possible File iteration bug

2011-07-15 Thread Thomas Rachel

Am 15.07.2011 14:52 schrieb Billy Mays:


Also, in the python docs, file.next() mentions there
being a performance gain for using the file generator (iterator?) over
the readline function.


Here, the question is whether this performance gain is really relevant, 
i.e. noticeable. The file object seems to keep an internal buffer for 
iteration that is distinct from the one used by the readline() function. 
Why these are not the same buffer is unclear to me.




Really what would be useful is some sort of PauseIteration Exception
which doesn't close the generator when raised, but indicates to the
looping header that there is no more data for now.


A None or other sentinel value would do this as well (as ChrisA already 
said).



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Possible File iteration bug

2011-07-15 Thread Thomas Rachel

Am 15.07.2011 16:42 schrieb Billy Mays:


A sentinel does provide a work around, but it also passes the problem
onto the caller rather than the callee:


That is right.


BTW, there is another, maybe easier way to do this:

for line in iter(f.readline, ''):
do_stuff(line)

This provides an iterator which yields return values from the given 
callable until '' is returned, in which case the iterator stops.
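A self-contained sketch of this two-argument iter() form, with a plain callable standing in for f.readline (the deque here is just illustrative):

```python
from collections import deque

lines = deque(["first\n", "second\n", ""])   # '' plays the role of EOF

# iter(callable, sentinel) calls lines.popleft() until it returns '',
# then raises StopIteration -- just like reading a file until EOF.
result = list(iter(lines.popleft, ""))
```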


As the caller, you need to know that you can always continue.


The functionality which you ask for COULD be accomplished in two ways:

Firstly, one could simply break the "contract" of an iterator (which 
would be a bad thing): just have your next() raise a StopIteration and 
then continue nevertheless.


Secondly, one could do a similar thing and have the next() method raise 
a different exception. Then the caller has to know about it as well, but 
I cannot find a passage in the docs which prohibits this.


I just have tested this:
def r(x): return x
def y(x): raise x

def l(f, x): return lambda: f(x)
class I(object):
def __init__(self):
self.l = [l(r, 1), l(r, 2), l(y, Exception), l(r, 3)]
def __iter__(self):
return self
def next(self):
if not self.l: raise StopIteration
c = self.l.pop(0)
return c()

i = I()
try:
for j in i: print j
except Exception, e: print "E:", e
print tuple(i)

and it works.


So I think it COULD be ok to do this:

class NotNow(Exception): pass

class F(object):
def __init__(self, f):
self.file = f
def __iter__(self):
return self
def next(self):
l = self.file.readline()
if not l: raise NotNow
return l

f = F(file("/var/log/messages"))
import time
while True:
try:
for i in f: print "", i,
except NotNow, e:
print ""
time.sleep(1)


HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Possible File iteration bug

2011-07-17 Thread Thomas Rachel

On 16.07.2011 05:42, Steven D'Aprano wrote:


Or you can look at the various recipes on the Internet for writing tail-like
file viewers in Python, and solve the problem the boring old-fashioned way.


You are right - it is a very big step for a very small functionality.



It is not only about this "tail-like" thing; there are other legitimate 
use cases. I once needed to put a growing list of data into a database. 
On the one hand, the growth could take several minutes to complete; on 
the other hand, the data should be put into the database as soon as 
possible. So it was best to put all data present at the time of each DB 
call into the DB and iterate over this until the end of the data.


Then, I wished for such a PauseIteration exception as well, but there was 
another, not-too-bad way to do it, so I did it this way: roughly, an 
iterable whose iterator was exhausted if no data were currently present, 
and which had a separate method for signalling the end of data.


Roughly:

while not source.done():
put_to_db(source)

where put_to_db() iterates over source, issues the DB query with all data 
available up to that point, and then starts over.
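A rough sketch of that pattern, with all names assumed (the original implementation is not shown in the thread): an iterable that is exhausted when no data is currently available, plus a done() flag checked by the loop.

```python
class Source:
    def __init__(self):
        self.pending = []
        self.finished = False

    def feed(self, *items):
        self.pending.extend(items)

    def close(self):
        self.finished = True

    def done(self):
        return self.finished and not self.pending

    def __iter__(self):
        # Yield whatever is available right now, then stop.
        while self.pending:
            yield self.pending.pop(0)

db = []
def put_to_db(source):
    batch = list(source)   # everything available at this point
    if batch:
        db.append(batch)   # one DB call per batch

src = Source()
src.feed(1, 2)
put_to_db(src)             # first batch goes out while data still grows
src.feed(3)
src.close()
while not src.done():
    put_to_db(src)
```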



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about timeit

2011-07-22 Thread Thomas Rachel

On 22.07.2011 08:59, Frank Millman wrote:


My guess is that it is something to do with the console, but I don't
know what. If I get time over the weekend I will try to get to the
bottom of it.


I would guess that in the first case, Python (i.e. timeit.py) gets the 
intended code for execution: int(float('165.0')) - that is, pass the 
string to float() and its result to int().


In the second case, however, timeit.py gets the string 
'int(float("165.0"))' and evaluates it - which is a matter of 
sub-microseconds.


The reason for this is that the Windows "shell" removes the double 
quotes ("") in the first case, but not the single quotes ('') in the 
second case.
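The difference is easy to reproduce inside Python itself: timing the real expression versus timing a quoted string literal (which is just a constant load) shows why the second case finishes in sub-microseconds.

```python
import timeit

# The intended measurement: str -> float -> int conversion.
real = timeit.timeit("int(float('165.0'))", number=100_000)

# What timeit sees when the shell leaves the quotes in place:
# a string literal, which evaluates to a constant.
literal = timeit.timeit("'int(float(\"165.0\"))'", number=100_000)

# Loading a constant is far cheaper than two function calls.
```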



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 8 and extraneous whitespace

2011-07-22 Thread Thomas Rachel

On 22.07.2011 00:45, Terry Reedy wrote:


Whether or not they are intended, the rationale is that lining up does
not work with proportional fonts.


Who on earth would use proportional fonts in programming?!
--
http://mail.python.org/mailman/listinfo/python-list


Re: Only Bytecode, No .py Files

2011-07-27 Thread Thomas Rachel

On 26.07.2011 17:19, Eldon Ziegler wrote:

Is there a way to have the Python processor look only for bytecode
files, not .py files? We are seeing huge numbers of Linux audit messages
on production system on which only bytecode files are stored. The audit
subsystem is recording each open failure.


Is that really a problem? AFAIK there are many failing open() calls at 
the start of every program.


E.g.

open("/lib/bash/4.1/tls/i686/sse2/libncurses.so.5", O_RDONLY) = -1 
ENOENT (No such file or directory)
stat64("/lib/bash/4.1/tls/i686/sse2", 0xbfd3a350) = -1 ENOENT (No such 
file or directory)
open("/lib/bash/4.1/tls/i686/libncurses.so.5", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/tls/i686", 0xbfd3a350) = -1 ENOENT (No such file 
or directory)
open("/lib/bash/4.1/tls/sse2/libncurses.so.5", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/tls/sse2", 0xbfd3a350) = -1 ENOENT (No such file 
or directory)
open("/lib/bash/4.1/tls/libncurses.so.5", O_RDONLY) = -1 ENOENT (No such 
file or directory)
stat64("/lib/bash/4.1/tls", 0xbfd3a350) = -1 ENOENT (No such file or 
directory)
open("/lib/bash/4.1/i686/sse2/libncurses.so.5", O_RDONLY) = -1 ENOENT 
(No such file or directory)
stat64("/lib/bash/4.1/i686/sse2", 0xbfd3a350) = -1 ENOENT (No such file 
or directory)
open("/lib/bash/4.1/i686/libncurses.so.5", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/i686", 0xbfd3a350) = -1 ENOENT (No such file or 
directory)
open("/lib/bash/4.1/sse2/libncurses.so.5", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/sse2", 0xbfd3a350) = -1 ENOENT (No such file or 
directory)
open("/lib/bash/4.1/libncurses.so.5", O_RDONLY) = -1 ENOENT (No such 
file or directory)
stat64("/lib/bash/4.1", 0xbfd3a350) = -1 ENOENT (No such file or 
directory)

open("/lib/libncurses.so.5", O_RDONLY)  = 3

is part of what happens when I start the MySQL client.

Even starting bash results in

open("/lib/bash/4.1/tls/i686/sse2/libreadline.so.6", O_RDONLY) = -1 
ENOENT (No such file or directory)
stat64("/lib/bash/4.1/tls/i686/sse2", 0xbfe0c4d0) = -1 ENOENT (No such 
file or directory)
open("/lib/bash/4.1/tls/i686/libreadline.so.6", O_RDONLY) = -1 ENOENT 
(No such file or directory)
stat64("/lib/bash/4.1/tls/i686", 0xbfe0c4d0) = -1 ENOENT (No such file 
or directory)
open("/lib/bash/4.1/tls/sse2/libreadline.so.6", O_RDONLY) = -1 ENOENT 
(No such file or directory)
stat64("/lib/bash/4.1/tls/sse2", 0xbfe0c4d0) = -1 ENOENT (No such file 
or directory)
open("/lib/bash/4.1/tls/libreadline.so.6", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/tls", 0xbfe0c4d0) = -1 ENOENT (No such file or 
directory)
open("/lib/bash/4.1/i686/sse2/libreadline.so.6", O_RDONLY) = -1 ENOENT 
(No such file or directory)
stat64("/lib/bash/4.1/i686/sse2", 0xbfe0c4d0) = -1 ENOENT (No such file 
or directory)
open("/lib/bash/4.1/i686/libreadline.so.6", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/i686", 0xbfe0c4d0) = -1 ENOENT (No such file or 
directory)
open("/lib/bash/4.1/sse2/libreadline.so.6", O_RDONLY) = -1 ENOENT (No 
such file or directory)
stat64("/lib/bash/4.1/sse2", 0xbfe0c4d0) = -1 ENOENT (No such file or 
directory)
open("/lib/bash/4.1/libreadline.so.6", O_RDONLY) = -1 ENOENT (No such 
file or directory)
stat64("/lib/bash/4.1", 0xbfe0c4d0) = -1 ENOENT (No such file or 
directory)

open("/etc/ld.so.cache", O_RDONLY)  = 3

So can it really be such a huge problem?


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Only Bytecode, No .py Files

2011-07-27 Thread Thomas Rachel

On 27.07.2011 14:18, John Roth wrote:


Two comments. First, your trace isn't showing attempts to open .py
files, it's showing attempts to open the Curses library in the bash
directory.


Of course. My goal was to show that the OP's "problem" is not a Python 
one, but occurs across many programs.



> Maybe you also have a problem with the .py files,

Why should I?


> It's also not showing the program that's causing the failing open.

It is irrelevant, IMO.

It just shows that it seems to be common, even for the binary loader, to 
search for libraries to link by trying to open them in several places. 
So every program invocation must be full of "failures" shown by the 
"audit program".




Second, the audit program is an idiot. There are lots of programs
which use the "easier to ask forgiveness" pattern and test for the
existence of optional files by trying to open them. This may be what
Bash is doing.


ACK. That is exactly what I wanted to show. (With the difference that 
this is probably not bash, but the Linux loader trying to link a .so; 
for the problem at hand, it makes no difference.)



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is it bad practise to write __all__ like that

2011-07-28 Thread Thomas Rachel

On 28.07.2011 13:32, Karim wrote:


Hello,

__all__ = 'api db input output tcl'.split()

or

__all__ = """
api
db
input
output
tcl
""".split()

for lazy boy ;o). It is readable as well.
What do you think?


Why not? But you could even do

class AllList(list):
"""list which can be called in order to be used as a __all__-adding 
decorator"""

def __call__(self, obj):
"""for decorators"""
self.append(obj.__name__)
return obj

__all__ = AllList()

@__all__
def api(): pass

@__all__
def db(): pass

@__all__
def input(): pass

@__all__
def output(): pass

@__all__
def tcl(): pass

HTH,

Thomas
--
http://mail.python.org/mailman/listinfo/python-list

