Re: Large LCD/Plasma TV Output

2006-08-30 Thread simonwittber
[EMAIL PROTECTED] wrote:
 I'm soon going to be starting on a little program that needs to output
 tabular information to a large LCD or Plasma screen. Python is, of
 course, my preferred language.

 My first instinct is PyGame, which I have used for programming on a PC
 monitor before.

If all you want to do is use a normal video card output, pygame will
do just fine.

If you want to try some fancy effects (alpha blending, 3D transitions),
have a look at PyOpenGL, or another OpenGL-based library. There are
lots of links on pygame.org.
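
For the simple case, a minimal fullscreen sketch (the resolution here
is an assumption; match it to the panel's native mode):

import pygame

pygame.init()
screen = pygame.display.set_mode((1366, 768), pygame.FULLSCREEN)
font = pygame.font.Font(None, 64)

running = True
while running:
    for event in pygame.event.get():
        if event.type in (pygame.QUIT, pygame.KEYDOWN):
            running = False
    screen.fill((0, 0, 0))
    text = font.render("Hello, big screen", True, (255, 255, 255))
    screen.blit(text, (50, 50))
    pygame.display.flip()

pygame.quit()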


-- Simon Wittber - http://www.entitycrisis.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Queue limitations?

2006-03-15 Thread simonwittber
[EMAIL PROTECTED] wrote:
  the queue holds references to the images, not the images themselves,
 so the size should be completely irrelevant. I use one instance of
  imageQueue.

 hmmm.. true. And it also fails when I use PIL Image objects instead of
 arrays. Any idea why compressing the string helps?


Compressing into a string converts the image into a string type, which
is immutable. When you uncompress it, you've got a copy of the image,
rather than a reference to the original image. This might give you a
hint as to what is going wrong.
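
A tiny stand-in example (hypothetical, not the original poster's code)
showing why queueing a reference is not the same as queueing a copy:

import Queue

q = Queue.Queue()
image = [0, 0, 0, 0]     # stands in for a mutable image buffer
q.put(image)             # the queue stores a reference, not a copy
image[0] = 255           # the producer keeps writing to the same buffer
print q.get()            # [255, 0, 0, 0], the queued "image" changed too

q.put(str(image))        # an immutable snapshot, like the compressed string
image[1] = 255
print q.get()            # the snapshot is unaffected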

-Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: would it be feasable to write python DJing software

2006-02-03 Thread simonwittber
Levi Campbell wrote:
 Any and all mixing would probably happen in some sort of multimedia
 library written in C (it would be both clumsy to program and slow to
 execute if the calculations of raw samples/bytes were done in python) so
 there shouldn't be a noticeable performance hit.

Actually, manipulating and mixing audio samples can be both fast and
elegant, in Python, if you use Numeric or a similar library.
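
For instance, a minimal mixing sketch; I've used numpy (Numeric's
modern successor) here, and the sample rate and levels are assumptions:

import numpy

def mix(a, b):
    # average two blocks of 16-bit samples without int16 overflow
    return ((a.astype(numpy.int32) + b) // 2).astype(numpy.int16)

tone = (numpy.sin(numpy.linspace(0.0, 2 * numpy.pi * 440, 44100))
        * 0.5 * 32767).astype(numpy.int16)   # one second of 440Hz at 44.1kHz
silence = numpy.zeros(44100, dtype=numpy.int16)
out = mix(tone, silence)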

-Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another method of lazy, cached evaluation.

2006-01-18 Thread simonwittber
Hmm, good ideas.

I've made some refinements and posted to the cookbook. The refinements
allow for multiple function arguments and keywords.

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/466315

-Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Another method of lazy, cached evaluation.

2006-01-17 Thread simonwittber
Recently, I needed to provide a number of game sprite classes with
references to assorted images.

I needed a class to:
- allow instances to specify which image files they need.
- ensure that a name should always refer to the same image.
- only load the image if it is actually used.

After some messing about, I came up with this:


class LazyBag(object):
    def __init__(self, function):
        self.__dict__['function'] = function
        self.__dict__['arguments'] = {}

    def __setattr__(self, key, args):
        try:
            existing_value = self.arguments[key]
        except KeyError:
            existing_value = args
        if existing_value != args:
            raise ValueError("Attribute must retain its initial value, "
                             "which is %s." % str(self.arguments[key]))
        self.arguments[key] = args

    def __getattr__(self, key):
        args = self.arguments[key]
        r = self.__dict__[key] = self.function(*args)
        del self.arguments[key]
        return r


This class lets me do something like this:

cache = LazyBag(Image.open)

cache.pic_1 = "data/pic_1.png"
cache.pic_2 = "data/pic_2.png"


Now, when the pic_1 and pic_2 attributes are accessed, they will return
an Image instance, which is something different from what they were
initially assigned. Is this kind of behavior bad form? Likewise, what
do people think about raising an exception during an assignment
operation?

Is there a correct name for this sort of class? 'LazyBag' doesn't sound
right...


-Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: nanothreads: Want to use them from within wxPython app

2005-12-11 Thread simonwittber
F. GEIGER wrote:
 I've def'ed a handler for EVT_IDLE in the app's main frame. There I'd like
 to call the nanothreads' __iter__ method, somehow.

 When I copy the __iter__ method into a, say, runOnce() method and call the
 next() method of the generator returned by runOnce(), it works. But I can't
 get at the __iter__ method, which is already there and therefore should be
 used instead of messing up nanothreads with changes of mine.

 Any hint welcome

The latest version of nanothreads is now in the fibranet package, which
you can download from the cheeseshop:

http://cheeseshop.python.org/pypi/FibraNet

To iterate nanothreads from wx, I call the nanothreads.poll() function
from the EVT_IDLE handler, making sure that I call event.RequestMore()
from within the handler, to iterate nanothreads as fast as possible.
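
A minimal sketch of that arrangement (the frame and handler names are
mine, not from the thread):

import wx
import nanothreads  # from the FibraNet package

class MainFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "nanothreads demo")
        self.Bind(wx.EVT_IDLE, self.on_idle)

    def on_idle(self, event):
        nanothreads.poll()    # one scheduling pass over all nanothreads
        event.RequestMore()   # ask wx for more idle events immediately

app = wx.PySimpleApp()
MainFrame().Show()
app.MainLoop()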

HTH, Simon Wittber.

-- 
http://mail.python.org/mailman/listinfo/python-list


NanoThreads 11

2005-10-05 Thread simonwittber
NanoThreads v11

NanoThreads allows the programmer to simulate concurrent processing
using generators as tasks, which are registered with a scheduler.

While the scheduler is running, a NanoThread can be:
 - paused
 - resumed
 - ended (terminate and call all registered exit functions)
 - killed (terminate and do not call any registered exit functions)
 - preempted to the top of the execution queue

New in v11:

A NanoThread task can now yield control using 'yield
nanothreads.UNBLOCK', which performs the next iteration of the task in
a separate, OS-level thread. This allows the scheduler to keep running
other tasks while the nanothread is, for example, performing blocking
IO or calling some time-consuming, CPU-bound function.
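
A sketch of how a task might use this (the install/loop calls below are
placeholders for the scheduler API; check the FibraNet docs for the
real registration calls):

import time
import nanothreads

def slow_blocking_call():
    time.sleep(1.0)              # stands in for blocking IO

def worker():
    yield None                   # an ordinary cooperative step
    yield nanothreads.UNBLOCK    # next step runs in an OS-level thread...
    slow_blocking_call()         # ...so this cannot stall other tasks
    yield None                   # back under the scheduler's control

nanothreads.install(worker())
nanothreads.loop()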

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations.html


Re: 1 Million users.. I can't Scale!!

2005-09-29 Thread simonwittber
yoda wrote:
 I'm considering moving to stackless python so that I can make use of
 continuations so that I can open a massive number of connections to the
 gateway and pump the messages out to each user simultaneously. (I'm
 thinking of 1 connection per user).

This won't help if your gateway works synchronously. You need to
determine what your gateway can do. If it works asynchronously,
determine the max bandwidth it can handle, then determine how many
messages you can fit into 4 seconds of that bandwidth. That should
provide you with a number of connections you can safely open and still
receive acceptable response times.
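
For example (every figure below is made up, purely for illustration):

bandwidth = 2 * 1024 * 1024 / 8   # gateway throughput: 2Mbit/s, in bytes/sec
msg_size = 160                    # bytes per message
window = 4                        # seconds of acceptable latency
print "messages per window:", (bandwidth * window) / msg_size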

 My questions therefore are:
 1)Should I switch to stackless python or should I carry out experiments
 with mutlithreading the application?

You will build a more scalable solution if you create a multi process
system. This will let you deploy across multiple servers, rather than
CPUs. Multithreading and multiprocessing will only help you if your
application is IO bound.

If your application is CPU bound, multiprocessing and multithreading
will likely hurt your performance. You will have to build a parallel
processing application which will work across different machines. This
is easier than it sounds, as Python has a great selection of IPC
mechanisms to choose from.

 2)What architectural suggestions can you give me?

Multithreading will introduce extra complexity and overhead. I've
always ended up regretting any use of multithreading which I have
tried. Avoid it if you can.

 3)Has anyone encountered such a situation before? How did you deal with
 it?

Profile each section or stage of the operation. Find the bottlenecks,
and reduce them whichever way you can. Check out ping times. Use gigabit
or better. Remove as many switches and other hops as you can between
machines which talk to each other.

Cache content, reuse it if you can. Pregenerate content, and stick it
in a cache. Cache cache cache! :-)

 4)Lastly, and probably most controversial: Is python the right language
 for this? I really don't want to switch to Lisp, Icon or Erlang as yet.

Absolutely. Python will let you easily implement higher level
algorithms to cope with larger problems.

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Silly function call lookup stuff?

2005-09-28 Thread simonwittber
 I guess because the function name may be re-bound between loop
 iterations. Are there good applications of this? I don't know.

I have iterator like objects which dynamically rebind the .next method
in order to different things. When I want a potentially infinite
iterator to stop, I rebind its .next method to another method which
raises StopIteration. This saves lots of messy if/elif state checking
in the .next method, which I _need_ to be very fast.


 Yuk I'd hate that. I think it would be extremely rare.


I use it all the time. Dynamically rebinding methods is a nice way to
change and maintain state between calls.
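
A stripped-down sketch of the pattern (my illustration; note it relies
on calling .next() directly, since for-loops on new-style classes look
up .next on the type, not the instance):

class Counter(object):
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self.next = self._counting       # start in the 'counting' state

    def _counting(self):
        self.count += 1
        if self.count >= self.limit:
            self.next = self._stopped    # rebind: switch state permanently
        return self.count

    def _stopped(self):
        raise StopIteration

c = Counter(3)
print c.next(), c.next(), c.next()       # 1 2 3; a fourth call raises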

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Alternatives to Stackless Python?

2005-09-22 Thread simonwittber
 I found LGT http://lgt.berlios.de/ but it didn't seem as if the
 NanoThreads module had the same capabilities as stackless.

What specific capabilities of Stackless are you looking for, that are
missing from NanoThreads?


Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python game coding

2005-09-20 Thread simonwittber
This article describes a system very similar to my own.

<shameless plug>
The LGT library (http://developer.berlios.de/projects/lgt) provides a
simple, highly tuned 'microthread' implementation using generators. It
is called NanoThreads. It allows a microthread to be paused, resumed,
and killed, but not pickled.

The eventnet module facilitates event-driven programming using a global
dispatcher. It provides a Handler class which functions in a similar
fashion to the Actor described in the article.

We used the above recently, to compete in the pyweek game competition
(http://mechanicalcat.net/tech/PyWeek/1/) under the moniker TeamXerian.
Our boring (but glitzy) game used nanothreads to move and animate 100
critters at a frame-independent rate. Each critter had a thread
controlling movement and frame swapping.

I also created an XML scene loader, which allowed designers on the team
to create a timelined sequence of events (like a movie script), which
controlled sound and image elements using pre-programmed movement,
rotation, scaling and fading style actions.

If you want to take a peek, you can download a windows exe (17MB) from
here:
http://metaplay.dyndns.org:82/~xerian/Quido.zip
or the source (95k) from here:
http://metaplay.dyndns.org:82/~xerian/Quido_src_only.zip
</shameless plug>

So, forget 'game scripting' in Python: write the whole darn lot in
Python!

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


generator object, next method

2005-09-08 Thread simonwittber
>>> gen = iterator()
>>> gen.next
<method-wrapper object at 0x009D1B70>
>>> gen.next
<method-wrapper object at 0x009D1BB0>
>>> gen.next
<method-wrapper object at 0x009D1B70>
>>> gen.next
<method-wrapper object at 0x009D1BB0>
>>> gen.next is gen.next
False


What is behind this apparently strange behaviour? (The .next method
seems to alternately bind to two different objects)

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: generator object, next method

2005-09-08 Thread simonwittber

 Why is that?  I thought gen.next is a callable and gen.next() actually
 advances the iterator.  Why shouldn't gen.next always be the same object?

That is, in essence, my question.

Executing the below script, rather than typing at a console, probably
clarifies things a little.

Sw.

#---
def iterator():
yield None

gen = iterator()

# gen.next is bound to x, and therefore should not be GC'd?
x = gen.next
y = gen.next
print x
print y
print gen.next
#---
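
For what it's worth, my reading of it: each attribute access builds a
brand-new method-wrapper object, and wrappers that are no longer
referenced are freed immediately, so their addresses get recycled.
That is why the ids alternate between two values at the console:

#---
def iterator():
    yield None

gen = iterator()
a = gen.next
b = gen.next
print a is b        # False: two distinct, live wrapper objects
print a() is None   # True: calling either wrapper advances the generator
#---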

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Magic Optimisation

2005-09-05 Thread simonwittber

Paul McGuire wrote:
 I still think there are savings to be had by looping inside the
 try-except block, which avoids many setup/teardown exception handling
 steps.  This is not so pretty in another way (repeated while on
 check()), but I would be interested in your timings w.r.t. your current
 code.

Your suggested optimisation worked nicely. It shaved 0.02 seconds from
a loop over 10000 iterators, and about 0.002 seconds from a loop over
1000 iterators.
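
For reference, a sketch of the variant as I understood it (my
reconstruction, not Paul's exact code); the try/except is entered once
per batch of iterations instead of once per task:

def loop(self):
    self_pool_popleft = self.pool.popleft
    self_pool_append = self.pool.append
    check = self.pool.__len__
    while check() > 0:
        try:
            while check() > 0:
                task = self_pool_popleft()
                task.next()
                self_pool_append(task)
        except StopIteration:
            self.call_exit_funcs(task)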

-- 
http://mail.python.org/mailman/listinfo/python-list


Magic Optimisation

2005-09-04 Thread simonwittber
Hello People.

I have a very tight inner loop (in a game app, so every millisecond
counts) which I have optimised below:

def loop(self):
    self_pool = self.pool
    self_call_exit_funcs = self.call_exit_funcs
    self_pool_popleft = self.pool.popleft
    self_pool_append = self.pool.append
    check = self.pool.__len__
    while check() > 0:
        task = self_pool_popleft()
        try:
            task.next()
        except StopIteration:
            self_call_exit_funcs(task)
            return
        self_pool_append(task)

This style of optimisation has shaved _seconds_ from my iteration
cycle, esp. when I have many registered tasks, so it is very important
to me.

However, it is very ugly. Does anyone have any tips on how I could get
this optimisation to occur magically, via a decorator perhaps?

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Magic Optimisation

2005-09-04 Thread simonwittber
I guess it is hard to see what the code is doing without a complete
example.

The StopIteration is actually raised by task.next(), at which point
task is removed from the list of generators (self.pool). So the
StopIteration can be raised at any time.

The specific optimisation I am after, which will clean up the code a
lot, is a way to auto-magically create self_attribute local variables
from self.attribute instance variables.

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Magic Optimisation

2005-09-04 Thread simonwittber
Yes. It slows down the loop when there are only a few iterators in the
pool, and speeds it up when there are > 2000.

My use case involves < 1000 iterators, so psyco is not much help. It
doesn't solve the magic creation of locals from instance vars either.

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Magic Optimisation

2005-09-04 Thread simonwittber
 def loop(self):
     self_pool = self.pool
     self_call_exit_funcs = self.call_exit_funcs
     self_pool_popleft = self.pool.popleft
     self_pool_append = self.pool.append
     check = self.pool.__len__
     while check() > 0:
         task = self_pool_popleft()
         try:
             task.next()
         except StopIteration:
             self_call_exit_funcs(task)
             return
         self_pool_append(task)

Stupid me. The 'return' statement above should be 'continue'. Sorry for
the confusion.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Magic Optimisation

2005-09-04 Thread simonwittber
Psyco actually slowed down the code dramatically.
I've fixed up the code (replaced the erroneous return statement) and
uploaded the code for people to examine:

The test code is here: http://metaplay.dyndns.org:82/~xerian/fibres.txt

These are the run times (in seconds) of the test file.

without psyco:

0.00294482
0.03261255
0.06714886
0.87395510

with psyco.full():

0.00446651
0.05012258
0.15308657
11.23493663

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: keylogger in Python

2005-07-31 Thread simonwittber
Michael Hoffman wrote:
 I think this is going to be much harder than you think, and I imagine
 this will only end in frustration for you. You will not be able to do it
 well with just Python. I would recommend a different fun project.


Actually, it's pretty easy, using the pyHook and Python win32 modules.
I use code like this to detect when the user is away from the keyboard.

import pyHook
import pythoncom

def OnKeyboardEvent(event):
    print event.Ascii
    return True  # pass the event on to other hook handlers

hm = pyHook.HookManager()
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()

while True:
    pythoncom.PumpMessages()


Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


automatically assigning names to indexes

2005-07-12 Thread simonwittber
I know it's been done before, but I'm hacking away on a simple Vector
class.

class Vector(tuple):
    def __add__(self, b):
        return Vector([x+y for x,y in zip(self, b)])
    def __sub__(self, b):
        return Vector([x-y for x,y in zip(self, b)])
    def __div__(self, b):
        return Vector([x/b for x in self])
    def __mul__(self, b):
        return Vector([x*b for x in self])

I like it because it is simple, and can work with vectors of any
size...

However, I'd like to add attribute access (magically), so I can do
this:

v = Vector((1,2,3))
print v.x
print v.y
print v.z

as well as:

print v[0]
print v[1]
print v[2]

Has anyone got any ideas on how this might be done?


Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: automatically assigning names to indexes

2005-07-12 Thread simonwittber
>>> class Vector(tuple):
...     x = property(lambda self: self[0])
...     y = property(lambda self: self[1])
...     z = property(lambda self: self[2])
...
>>> Vector("abc")
('a', 'b', 'c')
>>> Vector("abc").z
'c'
>>> Vector("abc")[2]
'c'


Aha! You have simultaneously proposed a neat solution, and shown me a
bug in my class! (It shouldn't accept strings)

Thanks.

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: automatically assigning names to indexes

2005-07-12 Thread simonwittber

 And what should happen for vectors of size != 3 ? I don't think that a
 general purpose vector class should allow it; a Vector3D subclass would
 be more natural for this.

That's the kind of 'magic' idea I'm looking for. I think a unified
Vector class for all vector sizes is a worthy goal!
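
One sketch that keeps a single class for all sizes (my own experiment;
short vectors simply raise IndexError for the names they lack):

class Vector(tuple):
    def __add__(self, b):
        return Vector([x+y for x, y in zip(self, b)])
    # ... other operators as before ...

# generate named properties; the default argument pins each index
for i, name in enumerate("xyzw"):
    setattr(Vector, name, property(lambda self, i=i: self[i]))

v = Vector((1, 2, 3))
print v.x, v.y, v.z    # 1 2 3; v.w would raise IndexError here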

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-07-04 Thread simonwittber
Ok, I've attached the proto PEP below.

Comments on the proto PEP and the implementation are appreciated.

Sw.



Title: Secure, standard serialization of simple Python types.

Abstract

This PEP suggests the addition of a module to the standard library,
which provides a serialization class for simple Python types.


Copyright

This document is placed in the public domain.


Motivation

The standard library currently provides two modules which are used
for object serialization. Pickle is not secure by its very nature,
and the marshal module is clearly marked as being not secure in the
documentation. The marshal module does not guarantee compatibility
between Python versions. The proposed module will only serialize
simple built-in Python types, and provide compatibility across
Python versions.

See RFE 467384 (on SourceForge) for more discussion on the above
issues.


Specification

The proposed module should use the same API as the marshal module.

dump(value, file)
    # serialize value, and write to an open file object
load(file)
    # read data from a file object, unserialize and return an object
dumps(value)
    # return the string that would be written to the file by dump
loads(string)
    # unserialize the string and return an object
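
A hypothetical round-trip under this API (using the gherkin module
named in the reference implementation below):

import gherkin

data = [1, 2.0, 3L, "four", (5, 6), {"seven": 8}]
s = gherkin.dumps(data)
assert gherkin.loads(s) == data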


Reference Implementation

http://metaplay.dyndns.org:82/~simon/gherkin.py.txt


Rationale

The marshal documentation explicitly states that it is unsuitable
for unmarshalling untrusted data. It also explicitly states that
the format is not compatible across Python versions.

Pickle is compatible across versions, but also unsafe for loading
untrusted data. Exploits demonstrating pickle vulnerability exist.

xmlrpclib provides serialization functions, but is unsuitable when
serializing large data structures, or when high performance is a
requirement. If performance is an issue, a C-based accelerator
module can be installed. If size is an issue, gzip can be used,
however, this creates a mutually exclusive size/performance
trade-off.

Other existing formats, such as JSON and Bencode (BitTorrent), do
not handle some marginally complex Python structures and/or all
the simple Python types.

Time and space efficiency, and security do not have to be mutually
exclusive features of a serializer. Python does not provide, in the
standard library, a serializer which can work safely with untrusted
data which is time and space efficient. The proposed gherkin module
goes some way to achieving this. The format is simple enough to
easily write interoperable implementations across platforms.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-06-19 Thread simonwittber
 I think you should implement it as a C extension and/or write a PEP.
 This has been an unfilled need in Python for a while (SF RFE 467384).

I've submitted a proto-PEP to python-dev. It's coming up against many
of the same objections as the RFE.

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-06-19 Thread simonwittber

 See also bug# 471893 where jhylton suggests a PEP.  Something really
 ought to be done about this.

I know this, you know this... I don't understand why the suggestion is
meeting so much resistance. This is something I needed for a real-world
system which moves lots of data around to untrusted clients. Surely
other people have had similar needs? Pickle and xmlrpclib simply are
not up to the task, but perhaps Joe Programmer is content to use
pickle and not care about the security issues.

Oh well. I'm not sure what I can say to make the case any clearer...


Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-06-19 Thread simonwittber
If anyone is interested, I've implemented a faster and more
space-efficient gherkin with a few bug fixes.

http://developer.berlios.de/project/showfiles.php?group_id=2847

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: my golf game needs gui

2005-06-09 Thread simonwittber
For simple 2D graphics, your best option is pygame.

http://pygame.org/

If you need assistance, join the pygame mailing list, where you should
find someone to help you out.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-06-01 Thread simonwittber
 I can't reproduce your large times for marshal.dumps.  Could you
 post your test code?


Certainly:

import sencode
import marshal
import time

value = [r for r in xrange(100)] + \
        [{1:2, 3:4, 5:6}, {"simon": "wittber"}]

t = time.clock()
x = marshal.dumps(value)
print "marshal enc T:", time.clock() - t

t = time.clock()
x = marshal.loads(x)
print "marshal dec T:", time.clock() - t

t = time.clock()
x = sencode.dumps(value)
print "sencode enc T:", time.clock() - t
t = time.clock()
x = sencode.loads(x)
print "sencode dec T:", time.clock() - t

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-06-01 Thread simonwittber

Andrew Dalke wrote:
 This is with Python 2.3; the stock one provided by Apple
 for my Mac.

Ahh that is the difference. I'm running Python 2.4. I've checked my
benchmarks on a friends machine, also in Python 2.4, and received the
same results as my machine.

 I expected the numbers to be like this because the marshal
 code is used to make and read the .pyc files and is supposed
 to be pretty fast.

It would appear that the new version 1 format introduced in Python 2.4
is much slower than version 0, when using the dumps function.

Thanks for your feedback Andrew!

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


pickle alternative

2005-05-31 Thread simonwittber
I've written a simple module which serializes these Python types:

IntType, TupleType, StringType, FloatType, LongType, ListType, DictType

It's available for perusal here:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/415503

It appears to work faster than pickle; however, the decode process is
much slower (5x) than the encode process. Has anyone got any tips on
ways I might speed this up?


Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-05-31 Thread simonwittber
 For simple data types consider marshal as an alternative to pickle.

From the marshal documentation:
Warning: The marshal module is not intended to be secure against
erroneous or maliciously constructed data. Never unmarshal data
received from an untrusted or unauthenticated source.

 BTW, your code won't work on 64 bit machines.

Any idea how this might be solved? The number of bytes used has to be
consistent across platforms. I guess this means I cannot use the struct
module?
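
(Checking the struct documentation, it seems struct is still usable: a
format string with an explicit byte-order prefix such as "<" uses the
standard sizes, which are the same on every platform.)

import struct

packed = struct.pack("<i", 42)   # "<" forces standard size: always 4 bytes
print len(packed)                # 4, on 32- and 64-bit machines alike
print struct.unpack("<i", packed)[0]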

 There's no need to compute str(long) twice -- for large longs
 it takes a lot of work to convert to base 10.  For that matter,
 it's faster to convert to hex, and the hex form is more compact.

Thanks for the tip.
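
The compactness is easy to check:

n = 10 ** 50
print len("%x" % n), len(str(n))    # 42 vs. 51 characters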

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pickle alternative

2005-05-31 Thread simonwittber


 Ahh, I had forgotten that.  Though I can't recall what an attack
 might be, I think it's because the C code hasn't been fully vetted
 for unexpected error conditions.

I tried out the marshal module anyway.

marshal can serialize small structures very quickly. However, using the
test value below:

value = [r for r in xrange(100)] + \
        [{1:2, 3:4, 5:6}, {"simon": "wittber"}]

marshal took 7.90 seconds to serialize it into a 561-character string.
Decoding took 0.08 seconds.

The aforementioned recipe took 2.53 seconds to serialize it into a
587-character string. Decoding took 5.16 seconds, which is much longer
than marshal!!

Sw.

-- 
http://mail.python.org/mailman/listinfo/python-list