win32: emulating select() on pipes

2008-03-17 Thread gangesmaster
hi

i'm trying to figure out if a pipe on win32 has data for me to read.
this is the code i've come up with:

def poll(self, timeout, interval = 0.2):
    """a poor man's version of select() on win32"""
    from win32pipe import PeekNamedPipe
    from msvcrt import get_osfhandle

    handle = get_osfhandle(self.fileno())
    if timeout is None:
        timeout = sys.maxint
    length = 0
    tmax = time.time() + timeout
    while length == 0 and time.time() < tmax:
        length = PeekNamedPipe(handle, 0)[2]
        if length:
            break           # don't sleep once data has arrived
        time.sleep(interval)
    return length != 0

does anyone know of a better way to tell if data is available on a
pipe? something that blocks until data is available or the timeout
elapses, and returns True if there's something for me to read, or
False otherwise.
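factored out of the win32 specifics, the pattern above is just a
busy-wait with a deadline; a sketch, with a hypothetical `has_data`
callable standing in for the PeekNamedPipe check:

```python
import time

def poll(has_data, timeout, interval=0.2):
    """block until has_data() reports something or the timeout elapses."""
    deadline = time.time() + timeout
    while True:
        if has_data():          # check *before* sleeping, to cut latency
            return True
        if time.time() >= deadline:
            return False
        time.sleep(interval)
```

anything that can cheaply answer "is there data right now?" can be
plugged in as `has_data`.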


-tomer
-- 
http://mail.python.org/mailman/listinfo/python-list


abusing exceptions for continuations

2007-12-10 Thread gangesmaster
i've had this strange idea of using the exception's traceback (which
holds the stack frame) to enable functional continuations: raise some
special exception which is caught by a reactor/scheduler/framework,
which can later revive the computation by restoring the frame.

i'm thinking of reusing the generator implementation (with some
minimal support on the C side)

has this been tried before? what were the results?
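for comparison, the generator machinery alluded to above already gives
resumable frames without any C-side work; a minimal trampoline sketch
(all names made up for illustration):

```python
def task(name, steps):
    # a resumable "continuation": each yield suspends the frame
    for i in range(steps):
        yield (name, i)

def scheduler(tasks):
    # round-robin reactor: revive each suspended frame in turn
    order = []
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            order.append(next(t))
            queue.append(t)     # not done yet; revive it later
        except StopIteration:
            pass                # frame finished
    return order

print(scheduler([task("a", 2), task("b", 1)]))
# → [('a', 0), ('b', 0), ('a', 1)]
```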


thanks,
-tomer
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: free variables /cell objects question

2007-01-25 Thread gangesmaster
[Steven]
> My solution is, don't try to have one function do too much. Making a list
> of foos should be a separate operation from making a single foo:

that's exactly what my second example does (as well as my production
code)

[Paul]
> But it does work as expected, if your expectations are based on what
> closures actually do.

yet i find what closures actually do to be logically wrong.
moreover, it means the frame object must be kept alive for no reason...
or in my case, two frame objects per foo-function.

> The Python idiom is:
...
> def foo(n=n):

besides being ugly, the def f(n=n) idiom is very bad,
programmatically speaking. what if the user chooses to be a smartass
and calls with n = 7? or what if the function's signature is
meaningful? (as it is in my case)
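to make the signature objection concrete, here is the idiom with the
"smartass" call (a sketch; the function body is assumed):

```python
def make_foos(names):
    funcs = []
    for n in names:
        def foo(n=n):                  # the default freezes the loop variable...
            return "my name is " + n
        funcs.append(foo)
    return funcs

foos = make_foos(["hello", "world"])
print(foos[0]())            # → my name is hello
print(foos[0]("smartass"))  # → my name is smartass -- the signature leaks
```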

anyway, this talk is not going anywhere.
thanks for the info, and i'll see how i manage to optimize my code
from here.

-tomer

On Jan 25, 4:51 pm, Steven D'Aprano
<[EMAIL PROTECTED]> wrote:
> On Thu, 25 Jan 2007 04:29:35 -0800, Paul Rubin wrote:
> > "gangesmaster" <[EMAIL PROTECTED]> writes:
> >> what i see as a bug is this code not working as expected:
>
> >> >>> def make_foos(names):
> >> ...     funcs = []
> >> ...     for n in names:
> >> ...         def foo():
> >> ...             print "my name is", n
> >> ...         funcs.append(foo)
> >> ...     return funcs
>
> > But it does work as expected, if your expectations are based on what
> > closures actually do.
>
> >> i have to create yet another closure, make_foo, so that the name
> >> is correctly bound to the object, rather than the frame's slot:
>
> > The Python idiom is:
>
> >    def make_foos(names):
> >        funcs = []
> >        for n in names:
> >            def foo(n=n):
> >                print "my name is", n
> >            funcs.append(foo)
> >        return funcs
>
> > The n=n in the "def foo" creates the internal binding that you need.
>
> Hmmm... I thought that the introduction of nested scopes removed the need
> for that idiom. It's an ugly idiom; the less I see it the happier I am.
>
> And I worry that it will bite you on the backside if your "n=n" is a
> mutable value.
>
> My solution is, don't try to have one function do too much. Making a list
> of foos should be a separate operation from making a single foo:
>
> >>> def makefoo(name):
> ...     def foo():
> ...         return "my name is " + name
> ...     return foo
> ...
> >>> makefoo("fred")()
> 'my name is fred'
> >>> def makefoos(names):
> ...     foos = []
> ...     for name in names:
> ...         foos.append(makefoo(name))
> ...     return foos
> ...
> >>> L = makefoos(["fred", "wilma"])
> >>> L[0]()
> 'my name is fred'
> >>> L[1]()
> 'my name is wilma'
>
> That makes it easier to do unit testing too: you can test your makefoo
> function independently of your makefoos function, if that's important.
>
> If you absolutely have to have everything in one function:
>
> >>> def makefoos(names):
> ...     def makefoo(name):
> ...         def foo():
> ...             return "my name is " + name
> ...         return foo
> ...     L = []
> ...     for name in names:
> ...         L.append(makefoo(name))
> ...     return L
> ...
> >>> L = makefoos(["betty", "barney"])
> >>> L[0]()
> 'my name is betty'
> >>> L[1]()
> 'my name is barney'
>
> Best of all, now I don't have to argue as to which binding behaviour is
> more correct for closures!!! *wink*
> 
> --
> Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: free variables /cell objects question

2007-01-25 Thread gangesmaster
no, it has nothing to do with "i" being global.

>>> tuple(lambda: i for i in range(10))[0]()
9
>>> tuple(lambda: i for i in range(10))[1]()
9

what i see as a bug is this code not working as expected:

>>> def make_foos(names):
...     funcs = []
...     for n in names:
...         def foo():
...             print "my name is", n
...         funcs.append(foo)
...     return funcs
...
>>> foos = make_foos(["hello", "world", "spam", "bacon"])
>>> foos[0]()
my name is bacon
>>> foos[1]()
my name is bacon
>>> foos[2]()
my name is bacon
>>>

i have to create yet another closure, make_foo, so that the name
is correctly bound to the object, rather than the frame's slot:

>>> def make_foo(name):
...     def foo():
...         print "my name is", name
...     return foo
...
>>> def make_foos(names):
...     return [make_foo(n) for n in names]
...
>>> foos = make_foos(["hello", "world", "spam", "bacon"])
>>> foos[0]()
my name is hello
>>> foos[1]()
my name is world
>>> foos[2]()
my name is spam


-tomer

On Jan 24, 2:46 am, "Terry Reedy" <[EMAIL PROTECTED]> wrote:
> "gangesmaster" <[EMAIL PROTECTED]> wrote in messagenews:[EMAIL PROTECTED]
> | so this is why [lambda: i for i in range(10)] will always return 9.
>
> No, it returns a list of 10 identical functions which each return the
> current (when executed) global (module) variable i. Except for names,
> 'lambda:i' abbreviates 'def f(): return i'.
>
> >>> a=[lambda: i for i in range(10)]
> >>> i=42
> >>> for j in range(10): print a[j]()
> 42
> 42
> 42
> 42
> 42
> 42
> 42
> 42
> 42
> 42
>
> >>> for i in range(10): print a[i]()
> 0
> 1
> 2
> 3
> 4
> 5
> 6
> 7
> 8
> 9
>
> >>> del i
> >>> for j in range(10): print a[j]()
> Traceback (most recent call last):
>   File "", line 1, in -toplevel-
>     for j in range(10): print a[j]()
>   File "", line 1, in <lambda>
>     a=[lambda: i for i in range(10)]
> NameError: global name 'i' is not defined
>
> | imho that's a bug, not a feature.
>
> The developers now think it a mistake to let the list comp variable 'leak'
> into the global scope.  It leads to the sort of confusion that you
> repeated.  In Py3, the leak will be plugged, so one will get an exception,
> as in the last example, unless i (or whatever) is defined outside the list
> comp.
> 
> Terry Jan Reedy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: free variables /cell objects question

2007-01-23 Thread gangesmaster
ugliness :)

so this is why [lambda: i for i in range(10)] will always return 9.
imho that's a bug, not a feature.


thanks.
-tomer

Duncan Booth wrote:
> "gangesmaster" <[EMAIL PROTECTED]> wrote:
>
> > what problem does the cell object solve?
>
> The closure represents the variable, not the object. So if x is rebound to
> a different object your inner function g() will now access the new object.
> If the object itself was passed to MAKE_CLOSURE then g would only ever see
> the value of x from the instant when g was defined.
>
> >>> def f(x):
> ...     def g():
> ...         print "x is", x
> ...     g()
> ...     x += 1
> ...     g()
> ...
> >>> f(1)
> x is 1
> x is 2

-- 
http://mail.python.org/mailman/listinfo/python-list


free variables /cell objects question

2007-01-23 Thread gangesmaster
why does CPython wrap the free variables of closure
functions in cell objects?
why can't it just pass the object itself?

>>> def f(x):
...     def g():
...         return x + 2
...     return g
...
>>> g5 = f(5)
>>> from dis import dis
>>> dis(g5)
  3   0 LOAD_DEREF   0 (x)
      3 LOAD_CONST   1 (2)
      6 BINARY_ADD
      7 RETURN_VALUE
>>> dis(f)
  2   0 LOAD_CLOSURE 0 (x)
      3 BUILD_TUPLE  1
      6 LOAD_CONST   1 (<code object g at ...>)
      9 MAKE_CLOSURE 0
     12 STORE_FAST   1 (g)
  4  15 LOAD_FAST    1 (g)
     18 RETURN_VALUE
>>>

i don't see why dereferencing is needed. why not just pass
the object itself to the MAKE_CLOSURE? i.e.

LOAD_FAST    0 (x)
LOAD_CONST   1 (the code object)
MAKE_CLOSURE 0

what problem does the cell object solve? 
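the cell buys exactly one thing: rebinding the variable in the outer
frame stays visible to the closure. a small demonstration (`__closure__`
per current CPython; it is spelled `func_closure` in the py2 of this
post):

```python
def f():
    def g():
        return x            # free variable, read through the shared cell
    x = 1
    first = g()
    x = 2                   # rebind the *variable*; the cell is updated
    second = g()
    # if the object itself had been captured, second would still be 1
    return first, second, g.__closure__[0].cell_contents

print(f())  # → (1, 2, 2)
```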


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


inline metaclasses

2006-07-03 Thread gangesmaster
just something i thought looked nice and wanted to share with the rest
of you:

>>> class x(object):
...     def __metaclass__(name, bases, dict):
...         print "hello"
...         return type(name, bases, dict)
...
hello
>>>

instead of defining a separate metaclass function/class, you can do
it inline. isn't that cool?


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


suggestion: adding weakattr to stdlib

2006-07-03 Thread gangesmaster
three-liner:
reposted from python-dev for more feedback. it suggests to add
the weakattr class to the standard weakref.py module.
comments are welcome.

[ http://article.gmane.org/gmane.comp.python.devel/81875 ]

From: tomer filiba  gmail.com>
Subject: weakattr
Newsgroups: gmane.comp.python.devel
Date: 2006-07-01 13:49:46 GMT (2 days, 3 hours and 12 minutes ago)

weakattr (weak attributes) are attributes that are weakly referenced
by their containing object. they are very useful for cyclic references --
an object that holds a reference to itself.

when a cyclic reference is found by the GC, the memory may be
freed, but __del__ is not called, because it's impossible to tell which
__del__ to call first. this is an awkward asymmetry with no clean
solution: most such objects provide a "close" or "dispose" method
that must be called explicitly.

weakattrs solve this problem by providing a "magical" attribute
that "disappears" when its value is no longer strongly-referenced.

you can find the code, as well as some examples, on this link
http://sebulba.wikispaces.com/recipe+weakattr

since the stdlib already comes with weakref.py, which provides
higher level concepts over the builtin _weakref module, i'd like to
make weakattr a part of it.

it's only ~20 lines of code, and imho saves the trouble of explicitly
releasing the resource of un__del__able objects.

i think it's useful. here's a snippet:

>>> from weakref import weakattr
>>>
>>> class blah(object):
...     yada = weakattr()
...
>>> o1 = blah()
>>> o2 = blah()
>>> o1.yada = o2
>>> o2.yada = o1

o1.yada is a *weakref* to o2, so that when o2 is no longer
strongly-referenced...
>>> del o2
o1.yada "magically" disappears as well.
>>> o1.yada
... AttributeError(...)

since the programmer explicitly defined "yada" as a weakattr, he/she
knows it might "disappear". it might look awkward at first, but that's
exactly the *desired* behavior (otherwise we'd just use regular
strong attributes).

another thing to note is that weakattrs are likely to be gone only
when the object's __del__ is already invoked, so the only code that
needs to take such precautions is __del__ (which already has some
constraints)

for example:

>>> class blah(object):
...     me = weakattr()
...
...     def __init__(self):
...         self.me = self
...
...     def something(self):
...         # we can rest assured "me" exists at this stage
...         print self.me
...
...     def __del__(self):
...         # by the time __del__ is called, "me" is removed
...         print "me exists?", hasattr(self, "me")
...
>>> b = blah()
>>> b.me
<__main__.blah object at 0x00C0EC10>
>>> b.something()
<__main__.blah object at 0x00C0EC10>
>>> del b
>>> import gc
>>> gc.collect()
me exists? False
0
>>>
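the recipe itself lives on the wiki link above; purely for
illustration, a descriptor along these lines can be sketched in a few
lines (details and names assumed here, not the actual recipe):

```python
import gc
import weakref

class weakattr(object):
    """descriptor whose stored value is only weakly referenced (sketch)."""
    def __set_name__(self, owner, name):   # py3.6+; the py2 recipe differs
        self.key = "_weak_" + name
    def __set__(self, obj, value):
        obj.__dict__[self.key] = weakref.ref(value)
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        ref = obj.__dict__.get(self.key)
        target = ref() if ref is not None else None
        if target is None:
            raise AttributeError("weakly-referenced value has gone away")
        return target

class blah(object):
    yada = weakattr()

o1, o2 = blah(), blah()
o1.yada = o2
assert o1.yada is o2
del o2                      # last strong reference gone...
gc.collect()
try:
    o1.yada                 # ...so the weakattr has "disappeared"
except AttributeError:
    print("yada is gone")
```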



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


windows and socket.dup

2006-06-23 Thread gangesmaster
what uses do you have for socket.dup()? on *nixes it makes sense
to dup() the socket before forking, but how can that be useful
on windows?



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


crashing the interpreter in two lines

2006-06-03 Thread gangesmaster
the following (random) code crashes my interpreter
(python 2.4.3/winxp):

from types import CodeType as code
exec code(0, 5, 8, 0, "hello moshe", (), (), (), "", "", 0, "")

i would expect the interpreter to do some verification, at least for
sanity (valid opcodes, correct stack size, etc.), before executing
arbitrary code... after all, it was the BDFL who said """I'm not
saying it's uncrashable. I'm saying that if you crash it, it's a
bug unless proven harebrained."""
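for contrast, code objects manufactured by compile() are the supported
route, and the interpreter trusts them without verification:

```python
# the supported way to build a code object: compile() emits well-formed
# bytecode, so exec'ing it can't crash like the hand-rolled one above
code_obj = compile("x = 6 * 7", "<generated>", "exec")
namespace = {}
exec(code_obj, namespace)
print(namespace["x"])  # → 42
```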


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3102 for review and comment

2006-05-24 Thread gangesmaster
None is not currently a keyword

-- 
http://mail.python.org/mailman/listinfo/python-list


proposal: disambiguating type

2006-05-21 Thread gangesmaster
typing "help(type)" gives the following documentation:
>>> help(type)
Help on class type in module __builtin__:
class type(object)
 |  type(object) -> the object's type
 |  type(name, bases, dict) -> a new type

"type" behaves both as a function, that reports the type of an object,
and as a factory type for creating types, as used mainly with
metaclasses.

calling the constructor of types, like lists, etc., is expected to
create a new instance of that type -- list() is a factory for lists,
dict() is a factory for dicts, etc.

but type() breaks this assumption. it behaves like a factory when
called with 3 params, but as a function when called with one param.
i find this overloading quite ugly and unnecessary.
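the two behaviours side by side (py3-style output; the post's py2
prints `<type 'int'>` instead of `<class 'int'>`):

```python
# one argument: type() acts as a function reporting the object's type
assert type(1) is int

# three arguments: type() acts as a factory manufacturing a new type
C = type("C", (object,), {"greet": lambda self: "hi"})
assert isinstance(C, type)
assert C().greet() == "hi"
```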

more over, it can cause abominations like
>>> class x(type):
...     pass
...
>>> x(1)

or
>>> list.__class__(1)
<type 'int'>

i suggest splitting this overloaded meaning into two separate builtins:
* type(name, bases, dict) - a factory for types
* typeof(obj) - returns the type of the object

this way, "type" retains its meaning as the base class for all types,
and as a factory for types, while typeof() reports the object's type.
it's also more intuitive that typeof(1) returns, well, the *type of*
the object 1.

no new keywords are needed, and code is always allowed to
override builtin functions, so i don't expect
backward-compatibility issues.

proposed schedule:
* 2.6 introduces typeof(), but type() with one argument retains its
old meaning
* 2.7 deprecates the usage of type() with a single argument
* 2.8 makes type() a factory only, with typeof() replacing type() with
a single argument

comments are welcome.


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP-xxx: Unification of for statement and list-comp syntax

2006-05-21 Thread gangesmaster
> Today you can achieve the same effect (but not necessarily with the same
> performance) with:
>
> for node in (x for x in tree if x.haschildren()):
> 

true, but it has different semantic meanings


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP-xxx: Unification of for statement and list-comp syntax

2006-05-21 Thread gangesmaster
i wanted to suggest this myself. +1


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


sock2

2006-05-19 Thread gangesmaster
sock2 is an attempt to improve python's socket module with a more
pythonic version (options are properties, protocols are classes,
etc. etc.)

you can get it here (including a small demo)
http://iostack.wikispaces.com/download

i would like to receive comments/bug reports, to improve it.
just reply to this post with your comments.
it's part of the iostack library, which is not released yet.


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


bitstream

2006-05-19 Thread gangesmaster
does anyone have a good bit-stream reader and writer?
(before i go and write my own)

i.e.

f = open(..)
b = BitStream(f)
b.write("10010010")
b.read(5) # 10010

or something like that?
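before anyone points at a library, a minimal pure-python sketch
matching the API above (MSB-first bit order assumed; details made up):

```python
import io

class BitStream(object):
    """wrap a binary file object and read/write bits as '0'/'1' strings."""
    def __init__(self, fileobj):
        self.f = fileobj
        self.rbits = []                      # bits buffered for reading
        self.wbits = []                      # bits buffered for writing

    def write(self, bits):
        self.wbits.extend(int(b) for b in bits)
        while len(self.wbits) >= 8:          # emit whole bytes only
            byte = 0
            for b in self.wbits[:8]:
                byte = (byte << 1) | b
            del self.wbits[:8]
            self.f.write(bytes([byte]))

    def flush(self):
        if self.wbits:                       # zero-pad the last partial byte
            self.write("0" * (8 - len(self.wbits)))

    def read(self, n):
        while len(self.rbits) < n:
            data = self.f.read(1)
            if not data:
                break
            self.rbits.extend((data[0] >> i) & 1 for i in range(7, -1, -1))
        out, self.rbits = self.rbits[:n], self.rbits[n:]
        return "".join(map(str, out))

f = io.BytesIO()
BitStream(f).write("10010010")
f.seek(0)
print(BitStream(f).read(5))  # → 10010
```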


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: combining a C# GUI with Python code?

2006-05-19 Thread gangesmaster
see http://interpython.sourceforge.net

-- 
http://mail.python.org/mailman/listinfo/python-list


released: RPyC 2.60

2006-05-19 Thread gangesmaster
Remote Python Call (RPyC) 2.60 has been released. this release introduces
delivering objects, reducing memory consumption with __slots__, and
several other new/improved helper functions. see the release notes and
changelog (on the site) for more info.

home: 
http://rpyc.wikispaces.com


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


released: RPyC 2.55

2006-05-07 Thread gangesmaster
Remote Python Call (RPyC) - transparent and symmetrical python RPC and
distributed computing library

download and info: http://rpyc.wikispaces.com
full changelog: http://rpyc.wikispaces.com/changelog
release notes: http://rpyc.wikispaces.com/release+notes

major changes:
* added isinstance and issubclass for remote objects
* moved to tlslite for authentication and encryption
* added server discovery (using UDP broadcasts)
* refactoring 


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


released: RPyC 2.50A

2006-04-25 Thread gangesmaster
Remote Python Call 2.50 release-candidate

http://rpyc.wikispaces.com


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Announce: Construct has moved

2006-04-24 Thread gangesmaster
Construct, the "parsing made fun" library, has moved from its
sourceforge home to wikispaces:

http://pyconstruct.wikispaces.com

(the sf page redirects there)



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: threads and sys.exit()

2006-04-24 Thread gangesmaster
that's not a question of design. i just want a child thread to kill
the process, in a platform-agnostic way.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: threads and sys.exit()

2006-04-24 Thread gangesmaster
i can't make the main thread daemonic. the situation is this:
* the main thread starts a thread
* the new thread does sys.exit()
* the new thread dies, but the process remains
i can do os.kill(os.getpid()), or TerminateProcess(-1) but that's not
what i want


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: threads and sys.exit()

2006-04-24 Thread gangesmaster
(i forgot to say it didn't work)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: threads and sys.exit()

2006-04-24 Thread gangesmaster
>>> import threading
>>> t=threading.Thread(target=sys.exit)
>>> t.setDaemon(True)
>>> t.start()
>>>

?

-- 
http://mail.python.org/mailman/listinfo/python-list


threads and sys.exit()

2006-04-24 Thread gangesmaster
calling sys.exit() from a thread does nothing... the thread dies, but
the interpreter remains. i guess the interpreter just catches and
ignores the SystemExit exception...

does anybody know of a way to overcome this limitation? 


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Announce: RPyC's wiki!

2006-04-21 Thread gangesmaster
[for people who missed my previous posts]

"""RPyC is a transparent, symmetrical python library for
distributed-computing. Pronounced "are-pie-see", it began as an RPC
library (hence the name), but grew into something much more
comprehensive with many use cases. It basically works by giving you
full control over a remote slave-interpreter (a separate process),
which performs operations on your behalf."""



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Announce: RPyC's wiki!

2006-04-21 Thread gangesmaster
the RPyC's project page has moved to
http://rpyc.wikispaces.com

the old site (http://rpyc.sourceforge.net) redirects there now. because
it's the official site, i chose to limit changes to members only.

it's so much easier to maintain the wiki than the crappy HTML pages at
sourceforge :)
anyway, the new site contains much more info and is organized better.

i'm currently working on:
* more documentation on the wiki
* version 2.50 - fix a bug with multiple threads using the same
connection.

version 2.46 is ready, but as it provides nothing critical, i don't
find a reason to release it now. you'll need to wait for 2.50
(somewhere in May/June i guess)



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Property In Python

2006-04-21 Thread gangesmaster
class person(object):
    def _get_age(self):
        return self.__age
    age = property(_get_age) # a read-only property

    def _get_name(self):
        return self.__name
    def _set_name(self, value):
        self.__name = value
    name = property(_get_name, _set_name)
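the snippet, completed with an __init__ so it runs end to end (the
constructor is assumed; the post leaves it out):

```python
class person(object):
    def __init__(self, name, age):
        self.__name = name
        self.__age = age

    def _get_age(self):
        return self.__age
    age = property(_get_age)               # read-only

    def _get_name(self):
        return self.__name
    def _set_name(self, value):
        self.__name = value
    name = property(_get_name, _set_name)  # read/write

p = person("fred", 30)
assert p.age == 30
p.name = "wilma"
assert p.name == "wilma"
try:
    p.age = 31                             # no setter was given
except AttributeError:
    print("age is read-only")
```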

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How protect proprietary Python code? (bytecode obfuscation?, what better?)

2006-04-18 Thread gangesmaster
okay, i got the name wrong. i wasn't trying to provide production-level
code, just a snippet. the function you want is
PyRun_SimpleString(const char *command)

#include <Python.h>

char secret_code[] = "print 'moshe'";

int main(void)
{
    int rc;
    Py_Initialize();      /* the interpreter must be set up first */
    rc = PyRun_SimpleString(secret_code);
    Py_Finalize();
    return rc;
}

and you need to link with python24.lib or whatever the import library is
for your platform.



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


a flattening operator?

2006-04-17 Thread gangesmaster
as we all know, * (asterisk) can be used to "inline" or "flatten" a
tuple into an argument list, i.e.:

def f(a, b, c):
    ...

x = (1, 2, 3)
f(*x)

so... mainly for symmetry's sake, why not make a "flattening" operator
that also works outside the context of function calls? for example:

a = (1,2,3)
b = (4,5)
c = (*a, *b) # ==> (1,2,3,4,5)

yeah, a + b would also give you the same result, but it could be used
like format-strings, for "templating" tuples, i.e.

c = (*a, 7, 8, *b)

i used to have a concrete use-case for this feature some time ago, but
i can't recall it now. sorry. still, the main argument is symmetry:
it's syntactic sugar, but it can be useful sometimes, so why limit it
to function calls only?

allowing it to be a generic operator would make things like this
possible:

f(*args, 7) # an implied last argument, 7, is always passed to the function

today you have to do

f(*(args + (7,)))

which is quite ugly.

and if you have two sequences, one being a list and the other being a
tuple, e.g.
x = [1,2]
y = (3,4)

you can't just x+y them. in order to concat them you'd have to use
"casting" like
f(*(tuple(x) + y))

instead of
f(*x, *y)

isn't the latter more elegant?

just an idea. i'm sure people could come up with more creative
use-cases for a standard "flattening operator". but even without the
creative use cases -- isn't symmetry strong enough an argument? why are
function calls more important than ordinary expressions?

and the zen proves my point:
(*) Beautiful is better than ugly --> f(*(args + (7,))) is ugly
(*) Flat is better than nested --> fewer parentheses
(*) Sparse is better than dense --> less noise
(*) Readability counts --> again, less noise
(*) Special cases aren't special enough to break the rules --> then why
are function calls so special?

the flattening operator would work on any sequence (anything with
__iter__ or __next__), not just tuples and lists. one very useful
feature i can think of is "expanding" generators, i.e.:

print xrange(10) # ==> xrange(10)
print *xrange(10) # ==> (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

i mean, python already supports this half-way:
>>> def f(*args):
...     print args
...
>>> f(*xrange(10))
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

so... why can't i just do "print *xrange(10)" directly? defining a
function just to expand a generator? well, i could use
"list(xrange(10))" to expand it, but it's less intuitive. the other way
is list-comprehension, [x for x in xrange(10)], but isn't *xrange(10)
more to-the-point?

also, "There should be one-- and preferably only one --obvious way to
do it"... so which one?
(*) list(xrange(10))
(*) [x for x in xrange(10)]
(*) [].extend(xrange(10))
(*) f(*xrange(10))

they all expand generators, but which is the preferable way?
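all four spellings can be checked to agree (range here; xrange in the
py2 of the post). note that the `[].extend(...)` spelling returns None,
so it is written with a named list below:

```python
def gen():
    return (i for i in range(10))   # stand-in generator to expand

a = list(gen())                     # spelling 1
b = [x for x in gen()]              # spelling 2
c = []
c.extend(gen())                     # spelling 3 (extend mutates, returns None)
def f(*args):
    return list(args)
d = f(*gen())                       # spelling 4

assert a == b == c == d == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```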

and imagine this:

f(*xrange(10), 7)

this time you can't do *(xrange(10) + (7,)) as generators do not
support addition... you'd have to do *(tuple(xrange(10)) + (7,)) which
is getting quite long already.

so as you can see, there are many inconsistencies between function-call
expressions and ordinary expressions, which impose artificial limitations
on the language. after all, the code is already in there to support
function-call expressions. all it takes is adding support for ordinary
expressions.

what do you think? should i bring it up to python-dev?


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How protect proprietary Python code? (bytecode obfuscation?, what better?)

2006-04-17 Thread gangesmaster
well, you can do something silly: create a c file into which you embed
your code, i.e.,

#include <Python.h>

char code[] = "print 'hello moshe'";

void main(...)
{
    Py_ExecString(code);
}

then you can compile the C file into an object file, and use regular
obfuscators/anti-debuggers. of course people who really want to get the
source will be able to do so, but it will take more time. and isn't that
the big idea of using obfuscation?

but anyway, it's stupid. why be a dick? those who *really* want to get
to the source will be able to, no matter what you use. after all, the
code is executing on their CPU, and if the CPU can execute it, so
can really enthused men. and those who don't want to use your product,
don't care anyway if you provide the source or not. so share.


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 359: The "make" Statement

2006-04-16 Thread gangesmaster
?

i really liked it


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 359: The "make" Statement

2006-04-13 Thread gangesmaster
"make type" is uber leet

-- 
http://mail.python.org/mailman/listinfo/python-list


Announce: Construct's wiki!

2006-04-13 Thread gangesmaster
finally, i opened a wiki for Construct, the "parsing made fun" library.

the project's page: http://pyconstruct.sourceforge.net/
the project's wiki: http://pyconstruct.wikispaces.com/ (anyone can
edit)

so now we have one place where people can share constructs,
questions-and-answers, patches, documentation and more. enjoy.


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pondering about the essence of types in python

2006-03-26 Thread gangesmaster
i don't think it's possible to create proxy classes, but even if i
did, calling remote methods with a `self` that is not an instance of
the remote class would blow up.


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pondering about the essence of types in python

2006-03-25 Thread gangesmaster
i was talking about python...

-- 
http://mail.python.org/mailman/listinfo/python-list


pondering about the essence of types in python

2006-03-25 Thread gangesmaster
let's start with a question:

==
>>> class z(object):
...     def __init__(self):
...         self.blah = 5
...
>>> class x(object):
...     def __init__(self):
...         z.__init__(self)
...
>>> y = x()
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 3, in __init__
TypeError: unbound method __init__() must be called with z instance as
first argument (got x instance instead)
==

and the question is -- WHY?

what is a type? generally speaking, if everything were an object, the
type only defines the MRO (method resolution order) for that object.
x.y first looks at the instance, then the class, then the parent
classes, etc. (this was changed a little in python2.3 to something more
complicated, but it's basically the same).

you can see the mro like this:
==
>>> class x(object): pass
>>> class y(x): pass
>>> class z(y): pass
>>> a = z()
>>> print a.__class__.mro()
[<class '__main__.z'>, <class '__main__.y'>, <class '__main__.x'>, <type 'object'>]
==

after all, if we stay out of builtin types, all python objects are
dicts, which support chain lookup according to the mro. and a method is
just a function that takes the instance as a first argument. so why is
all this type hassle necessary?

if we've taken it that far already, then let's really go over the edge.
I WANT TO DERIVE FROM INSTANCES. not only types.

why? i'm the developer of rpyc (http://rpyc.sf.net), and i got a
request from someone to add support for deriving from remote types. the
concrete example he gave was quite silly, but after i thought about it
a little, i said why not try?

a little intro about rpyc: it gives you proxies (instances) to remote
objects, which can be instances, functions, or classes. and that user
wanted to do something like this:

class my_frame(conn.modules.wx.Frame):
...

so the client actually creates the graphics on the server. not very
usable, but why not? all it means is, when he does "my_frame.xyz",
python should add the remote type to the mro chain. not too bizarre.

but __mro__ is a readonly attribute, and deriving from instances is
impossible (conn.modules.wx.Frame is a PROXY to the class)...

and again -- WHY? these all look like INTENTIONAL limitations. someone
went around and added type checks (which are NOT pythonic) into the
cPython implementation. argh. why do that?

so i thought -- let's be nasty. i created a function that creates a
class that wraps an instance. very ugly. small children and people with
heart problems should close their eyes.


def derive_from(obj):
    class cls(object):
        def __getattr__(self, name):
            return getattr(obj, name)
    return cls

class my_frame(derive_from(conn.modules.wx.Frame)):
    ...



the actual implementation is quite more complex, but that shows the
concept.
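the wrapper can be exercised without rpyc by standing a plain local
class in for the proxy (everything here is made up for illustration):

```python
class FakeRemote(object):           # stand-in for a proxied wx.Frame
    def title(self):
        return "a frame"

def derive_from(obj):
    # wrap an *instance* so attribute lookups fall through to it
    class cls(object):
        def __getattr__(self, name):
            return getattr(obj, name)
    return cls

class my_frame(derive_from(FakeRemote())):
    pass

print(my_frame().title())  # → a frame
```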
so now i'm experimenting with that little shit. but then i came to the
problem that methods check the type of the first argument... ARGH. don't
check types. DON'T. the whole point of duck typing is that you DON'T
CHECK THE TYPES. you just work with objects, and instead of TypeError
you'd get AttributeError, which is much better.  AAARRGGGHH.

python is EVIL at the low level. the high-level is just fine, but when
you try to go under the hood... you better go with an exorcist.



-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why use special config formats?

2006-03-11 Thread gangesmaster
>> Why is the first uglier than the second?

YES, THAT'S THE POINT. PYTHON CAN BE USED JUST LIKE A CONFIG FILE.

and if your users did
timeout = "300"
instead of
timeout = 300

then either your config parser must be uber-smart and all-knowing,
checking the types of key-value pairs, or your server will crash. either
way is bad, and i prefer crash-on-use to
know-everything-and-check-at-the-parser-level.
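the crash-on-use behaviour is easy to see when the config file *is*
python (file contents assumed for illustration):

```python
# a "config file" that is just python source, with the mistyped value
config_text = 'timeout = "300"\nhost = "localhost"\n'

ns = {}
exec(config_text, ns)               # no special parser needed

# the bad value survives loading and only bites where it is used
try:
    ns["timeout"] + 1               # str + int
except TypeError:
    print("crash-on-use, as preferred above")
```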



good night,
-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why use special config formats?

2006-03-11 Thread gangesmaster
> Huh? You think a competent sys admin can't learn enough Python to hack
> your pickled file?
>
> Binary configs only keep out legitimate users who don't have the time or
> ability to learn how to hack the binary format. Black hats and power users
> will break your binary format and hack them anyway.

then you don't know what pickle is. pickle code is NOT python bytecode.
it's a bytecode they made in order to represent objects. you cannot
"exploit" it in the sense of running arbitrary code, unless you find
a bug in the pickle module. and that's less likely than finding a bug
in the parser of the silly config file formats you use.

i'm not hiding the configuration in "binary files", that's not the
point. pickle is just more secure by definition.

aah. you all are too stupid.


-tomer

-- 
http://mail.python.org/mailman/listinfo/python-list