Re: [Python-Dev] PyPI comments and ratings, *really*?

2009-11-12 Thread Michael Sparks
On Fri, Nov 13, 2009 at 12:44 AM, Ben Finney ben+pyt...@benfinney.id.au wrote:
> Martin v. Löwis mar...@v.loewis.de writes:
>
>>> Why can't we just disable it until we can come up with a better
>>> system that finds a balance between the rights of maintainers, and
>>> those of the user?
>
>> Because I want to wait for the outcome of the poll first.
>
> There's a problem with the poll's placement: on the front page of the
> PyPI website.
>
> I only know about the poll because you said there was one, and I went
> hunting for it. The front page of PyPI is not one I ever visit, as a
> package maintainer; I'll visit the pages for the packages I maintain, or
> make a specific search of packages I'm looking for.
>
> So, the poll's audience is limited to those who visit the front page
> (which is hardly ever necessary for package maintainers), and those who
> already know it exists (e.g. through this discussion thread). You'll be
> missing the opinions of those maintainers who, like the OP of this
> thread, only discovered the behaviour much later.

This poll is only visible if you're logged into PyPI. This strikes me
as a mistake. I went looking for a poll and didn't see it.

I only found the poll by accident, by wondering idly what might change
if I hit the "login using OpenID" button. So you can only vote in the
poll if you a) get told about it, and b) realise you need to create an
account and log in in order to vote. I realise there are good reasons
for that, but I think it's a mistake. (There's no guidance that you
need to log in to see the poll, for example.)


Michael.


Re: [Python-Dev] People want CPAN :-)

2009-11-09 Thread Michael Sparks
[ I'm posting this comment in reply to seeing this thread:
* http://thread.gmane.org/gmane.comp.python.distutils.devel/11359
Which has been reposted around - and I've read that thread. I lurk on
this list, in case anything comes up that I'd hope to be able to say
something useful to. I don't know if this will be, but that's my
reason for posting. If this is the wrong place, my apologies, I don't
sub to distutils-sig :-/ ]

On Sat, Nov 7, 2009 at 2:30 PM, sstein...@gmail.com sstein...@gmail.com wrote:
> On Nov 7, 2009, at 3:20 AM, Ben Finney wrote:
>
>> Guido van Rossum gu...@python.org writes:
>>
>>> On Fri, Nov 6, 2009 at 2:52 PM, David Lyon david.l...@preisshare.net
>>> wrote:

[ lots of snippage ]
...
> All in all, I think this could be a big leap forward for the Python
> distribution ecosystem whether or not we eventually write the PyPan I wished
> for as a new Perl refugee.

Speaking as someone who left the perl world for the python world, many
years ago now, primarily due to working on one project, the thing I
really miss about Perl is CPAN. It's not the fact that you know you do
"perl Makefile.PL && make && make test && make install". Nor the fact
that it's trivial to set up a skeleton package setup that makes that
work for you. It's not the fact that there's an installer that can
download & track dependencies.

The thing that makes the difference, IMHO, is two points:
* In a language whose core ethos is "There's more than one way to do
it", packaging is the one place where there is one, and only one,
obvious way to do it. (Oddly, with python this is flipped - for
packaging, do I as a random project use distutils? pip? setuptools?
distribute? virtualenv?)
* It has a managed namespace - or perhaps better, a co-ordinated namespace.

CPAN may have lots of ills, and bad aspects about it (I've never
really trusted the auto installer due to seeing one too many people
having their perl installation as a whole upgraded due to a bug that
was squashed 6-8 years ago), but these two points are pretty much
killer.

All the other aspects - auto download, upload, dependency tracking,
auto doc extraction for the website, etc. - really follow from the
managed namespace. I realise that various efforts like
easy_install & distribute & friends make that sort of step implicitly -
since there can only be one http://pypi.python.org/pypi/flibble .
But it's not quite the same - due to externally hosted packages.

For more detail about this aspect:
   * http://www.cpan.org/modules/04pause.html#namespace

I'm really mentioning this because I didn't see it listed, and I
really think that it's very easy to underestimate this aspect of CPAN.

IMHO, it is what matters the most about CPAN. The fact that they've
nabbed the CTAN idea of having an archive network for storing,
mirroring and grabbing stuff from is by comparison /almost/ irrelevant.
The namespace is the sort of thing that leads to things like DBI::DBD
being simple to use, because of the encouragement to talk and share a
namespace.

The biggest issue with this is retrofitting this to an existing world.

Personal opinion, I hope it's useful, and going back into lurk mode (I
hope :). If this annoys you, please just ignore it.


Michael.


Re: [Python-Dev] Backport new float repr to Python 2.7?

2009-10-11 Thread Michael Sparks
On Sunday 11 October 2009 21:00:41 Glyph Lefkowitz wrote:
> with all the
> dependency-migration issues 3.x could definitely use some carrots.
...
> everybody's favorite bugaboo, multicore parallelism.

I know it's the umpteen-thousandth time it's been discussed, but
removal of the GIL in 3.x would probably be pretty big carrots for
some. I know the arguments about performance hits on single-core
systems etc., and the simplifications the GIL brings, but given that
every entry-level machine these days is multicore, is it time to
reconsider some of those points? Not here perhaps - python-ideas or
c.l.p - but if bigger carrots are wanted... Just saying. (As time goes
on, the lack of a GIL in IronPython makes it more attractive for
multicore work.)

Not suggesting this happens, but just noting it would probably be a big carrot.


Michael.
-- 
http://yeoldeclue.com/blog
http://twitter.com/kamaelian
http://www.kamaelia.org/Home


Re: [Python-Dev] Backport new float repr to Python 2.7?

2009-10-11 Thread Michael Sparks
On Sun, Oct 11, 2009 at 10:41 PM, Antoine Pitrou solip...@pitrou.net wrote:
> Michael Sparks sparks.m at gmail.com writes:
>
>> I know it's the umpteen-thousandth time it's been discussed, but
>> removal of the GIL in 3.x would probably be pretty big carrots for
>> some. I know the arguments [...]
>
> Not before someone produces a patch anyway. It is certainly not as easy as you
> seem to think it is.

You misunderstand me, I think. I don't think it's easy; based on all
the discussion I've seen over the years, and the various attempts that
have been made, I think it's a Hard problem. I was just mentioning
that if someone wanted a serious carrot, that would probably be it.
I'll be quiet about it now though, since I *do* understand how
contentious an issue it is.


Michael.


Re: [Python-Dev] [Python-3000] PEP 30XZ: Simplified Parsing

2007-05-03 Thread Michael Sparks
On Thursday 03 May 2007 15:40, Stephen J. Turnbull wrote:
> Teaching Python-based extraction tools about it isn't hard, just make
> sure that you slurp in the whole argument, and eval it.

We generate our component documentation based on going through the AST
generated by compiler.ast, finding doc strings (and other strings in
other known/expected locations), and then formatting them using docutils.

Eval'ing the file isn't always going to work due to imports relying on
libraries that may need to be installed. (This is especially the case with 
Kamaelia because we tend to wrap libraries for usage as components in a 
convenient way) 

We've also specifically moved away from importing the file or eval'ing things 
because of this issue. It makes it easier to have docs built on a random 
machine with not too much installed on it.
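
For illustration, a minimal sketch of the parse-only idea (the helper name
and file path here are hypothetical; the real tool walks more locations than
this):

import compiler
from compiler import ast

def docstrings(filename):
    # Parse only - nothing is imported or executed, so missing
    # third-party libraries don't matter.
    tree = compiler.parseFile(filename)
    if tree.doc:
        yield "<module>", tree.doc
    pending = [(node, "") for node in tree.node.nodes]
    while pending:
        node, prefix = pending.pop(0)
        if isinstance(node, (ast.Class, ast.Function)):
            if node.doc:
                yield prefix + node.name, node.doc
            pending.extend([(child, prefix + node.name + ".")
                            for child in node.code.nodes])

for name, doc in docstrings("Axon/Component.py"):   # hypothetical path
    print name, "-", doc.splitlines()[0]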

You could special-case "12345" + "67890" as a compile-time construct and
jiggle things such that by the time it came out of the parser it looked like
"1234567890", but I don't see what that has to gain over the current form
(which doesn't look like an expression). I also think that's a rather nasty
version.

On the flip side if we're eval'ing an expression to get a docstring, there 
would be great temptation to extend that to be a doc-object - eg using 
dictionaries, etc as well for more specific docs. Is that wise? I don't 
know :)


Michael.
--
Kamaelia project lead
http://kamaelia.sourceforge.net/Home


Re: [Python-Dev] Pythonic concurrency

2005-10-10 Thread Michael Sparks
On Monday 10 Oct 2005 15:45, Donovan Baarda wrote:
> Sounds like yet another reason to avoid threading and use processes
> instead... effort spent on threading based message passing
> implementations could instead be spent on inter-process messaging.

I can't let that pass (even if our threaded component has a couple of warts
at the moment).

# Blocking thread example (uses raw_input) to single threaded pygame
# display ticker. (The display is rate limited to 8 words per second at
# most since it was designed for subtitles)
#
from Axon.ThreadedComponent import threadedcomponent
from Kamaelia.Util.PipelineComponent import pipeline
from Kamaelia.UI.Pygame.Ticker import Ticker

class ConsoleReader(threadedcomponent):
    def __init__(self, prompt=">>> "):
        super(ConsoleReader, self).__init__()
        self.prompt = prompt

    def run(self): # implementation wart, should be main
        while 1:
            line = raw_input(self.prompt)
            line = line + "\n"
            self.outqueues["outbox"].put(line)  # implementation wart, should be self.send(line, "outbox")

pipeline(
  ConsoleReader(),
  Ticker() # Single threaded pygame based text ticker
).run()

There are other ways with other systems to achieve the same goal.

Inter-process based messaging can be done in various ways. The API though
can look pretty much the same. (There's obviously some implications of
crossing process boundaries though, but that's for the system composer
to deal with, not the components).

Regards,


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Pythonic concurrency

2005-10-08 Thread Michael Sparks
On Saturday 08 October 2005 04:05, Josiah Carlson wrote:
[ simplistic, informal benchmark of a test-optimised version of the system,
  based on bouncing, scaling, rotating sprites around the screen. ]
> Single process?  Multi-process single machine?  Multiprocess multiple
> machine?

Single process, single CPU, not very recent machine (a 600MHz Crusoe-based
machine). That machine wasn't hardware accelerated though, so it was only
able to handle several dozen sprites before slowing down. The slowdown was
due to the hardware not being able to keep up with pygame's drawing requests,
though, rather than the framework.

> I'm just offering the above as example benchmarks (you certainly don't
> need to do them to satisfy me, but I'll be doing those when my tuple
> space implementation is closer to being done).

I'll note them as things worth doing - they look like reasonable and
interesting benchmarks. (I can think of a few modifications I might make
though. For example, in 3 you say "fastest". I might have that as a 3b; 3a
could be "simplest to use/read" or "most likely to pick". Obviously there's
a good chance that's not the fastest. It could be optimised under the hood
I suppose, but that wouldn't be the point of the test.)

>> [ Network controlled Networked Audio Mixing Matrix ]
> I imagine you are using a C library/extension of some sort to do the
> mixing...perhaps numarray, Numeric, ...

Nope, just plain old python (I'm now using a 1.6GHz Centrino machine
though). My mixing function is particularly naive as well. To me that says
more about python than my code. I did consider using pyrex to wrap (or
write) an optimised version, but there didn't seem to be any need for it
last week. (Though for a non-prototype something faster would be
nice :).
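
(For flavour, a guess at the shape of such a naive pure-python mix -
illustrative only, not the actual Kamaelia code: sum corresponding 16-bit
signed samples and clip.)

import struct

def mix(frames_a, frames_b):
    # Naively mix two equal-length buffers of 16-bit signed little-endian PCM.
    n = len(frames_a) // 2                      # number of 16-bit samples
    a = struct.unpack("<%dh" % n, frames_a)
    b = struct.unpack("<%dh" % n, frames_b)
    mixed = [max(-32768, min(32767, x + y))     # clip rather than wrap on overflow
             for x, y in zip(a, b)]
    return struct.pack("<%dh" % n, *mixed)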

I'll save responding to the Linda things until I have a chance to read in detail
what you've written. It sounds very promising though - having multiple
approaches to different styles of concurrency that work nicely with each
other safely is always a positive thing IMO.

Thanks for the suggestions and best regards,


Michael.
--
Though we are not now that which in days of old moved heaven and earth, 
   that which we are, we are: one equal temper of heroic hearts made 
 weak by time and fate but strong in will to strive, to seek, 
  to find and not to yield -- Ulysses, Tennyson


Re: [Python-Dev] Pythonic concurrency

2005-10-07 Thread Michael Sparks
 of benchmark you'd like 
to see. I'm more interested in benchmarks that actually mean something,
rather than "X is better than Y", though.

Summarising them: no benchmarks, yet. If you're after speed, I'm certain
you can find that elsewhere. If you're after an easy way of dealing with a
concurrent problem, that's where we're starting from - and then optimising.
We're very open to suggestions for improvement, both on usability/learnability
and on keeping doors open to performance. I'd hate to have to rewrite
everything in another language later simply due to poor design decisions.

[ Network controlled Networked Audio Mixing Matrix ]
> Very neat.  How much data?  What kind of throughput?  What kinds of
> latencies?

For the test system we tested with 3 raw PCM audio data streams. That's
3 x 44.1kHz, 16-bit stereo - which is around 4.2Mbit/s of data from the
network being processed in realtime and output back to the network at
1.4Mbit/s. So, not huge numbers, but not insignificant amounts of data
either. I suppose one thing I can take more time with now is to look at
the specific latency of the mixer. It didn't *appear* to be large, however
(there appeared to be similar latency in the system with or without the
mixer).
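
(For anyone checking the arithmetic behind those figures:)

bits_per_second = 44100 * 16 * 2     # sample rate * bit depth * channels
print bits_per_second / 1e6          # ~1.41 Mbit/s: one stream, i.e. the mixed output
print 3 * bits_per_second / 1e6      # ~4.23 Mbit/s: the three input streams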

[[ The aim of the rapid prototyping session was to see what could be done
  rather than to measure the results. The total time taken for coding the
  mixing matrix was 2.5 days: about half a day spent on finding an issue we
  had with network resends regarding non-blocking sockets, and a day with me
  totally misunderstanding how mixing raw audio byte streams works. The
  backplane was written during that 3-day period. The control protocol for
  switching on/off mixes and querying the system, though, was ~1.5 hours
  from start to finish, including testing. To experiment with what dataflow
  architecture might work, I knocked up a command-line controlled dynamic
  graph viewer (add nodes, link nodes, delete nodes) in about 5 minutes and
  then experimented with what the system would look like if done naively.
  The usefulness of the backplane idea became clear here because we wanted
  to allow multiple mixers. ]]

A more interesting effect we found was dealing with mouse movement in pygame,
where we found that *huge* numbers of messages being sent one at a time and
processed one at a time (with yields after each) became a huge bottleneck.

It made more sense to batch the events and pass them on to client surfaces.
(If that makes no sense: we allow pygame components to act as if they have
control of the display by giving them a surface from a pygame display service.
This acts essentially as a simplistic window manager. That means pygame events
need to be passed through quickly and cleanly.)
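
(A minimal sketch of the batching idea - the names are illustrative, not the
real Kamaelia pygame service code:)

import pygame

class EventBatcher(object):
    # Forward pending pygame events as one batched message per frame.
    def __init__(self, send):
        self.send = send                   # e.g. a component's send method

    def tick(self):
        events = pygame.event.get()        # drain everything pending at once
        if events:
            self.send(events, "outbox")    # one hand-off, not one per event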

The reason I like using pygame for these things is because a) it's relatively
raw and fast, and b) games are another often /naturally/ concurrent system.
Also, it normally allows senses beyond reading numbers/graphs to kick in when
evaluating changes: "that looks better/worse", "there's something wrong
there".

> I have two recent posts about the performance and features of a (hacked
> together) tuple space system

Great :-) I'll have a dig around.

> The only thing that it is missing is a prioritization mechanism (fifo,
> numeric priority, etc.), which would get us a job scheduling kernel. Not
> bad for a message passing/tuple space/IPC library.

Sounds interesting. I'll try and find some time to have a look and a play.
FWIW, we're also missing a prioritisation mechanism right now, though
currently I have Simon Wittber's latest release of Nanothreads on my stack
of things to look at. I do have a soft spot for Linda-type approaches
though :-)

Best Regards,


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Pythonic concurrency

2005-10-07 Thread Michael Sparks
On Friday 07 October 2005 23:26, Bruce Eckel wrote:
>> I think the ease of use issue opens up far greater possibilities if you
>> include multiprocessing
...
> That's what I'm looking for.

In which case that's an area we need to push our work into sooner rather
than later. After all, the PS3 and CELL arrive next year, and Sun already
has some interesting stuff shipping. I'd like to use that kit effectively,
and more importantly make using it effectively available to colleagues
sooner rather than later. That really means multiprocess now, not later.

BTW, I hope it's clear that I'm not saying concurrency is easy per se (noting
your previous post ;-), but rather that it /should/ be made as simple as is
humanly possible.

Thanks!


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Pythonic concurrency

2005-10-07 Thread Michael Sparks
On Thursday 06 October 2005 21:06, Bruce Eckel wrote:
> So yes indeed, this is quite high on my list to research. Looks like
> people there have been doing some interesting work.

> Right now I'm just trying to cast a net, so that people can put in
> ideas, for when the Java book is done and I can spend more time on it.

Thanks for your kind words. Hopefully it's of use!

:-)


Michael.


Re: [Python-Dev] Pythonic concurrency

2005-10-06 Thread Michael Sparks
Hi Bruce,


On Thursday 06 October 2005 18:12, Bruce Eckel wrote:
> Although I hope our conversation isn't done, as he suggests!
...
> At some point when more ideas have been thrown about (and TIJ4 is
> done) I hope to summarize what we've talked about in an article.

I don't know if you saw my previous post[1] to python-dev on this topic, but 
Kamaelia is specifically aimed at making concurrency simple and easy to use. 
Initially we were focussed on using scheduled generators for co-operative 
CSP-style (but with buffers) concurrency.
   [1] http://tinyurl.com/dfnah, http://tinyurl.com/e4jfq

We've tested the system so far on 2 relatively inexperienced programmers
(as well as experienced ones, but the more interesting group is novices).
The one who hadn't done much programming at all (a little bit of VB,
pre-university) actually fared better IMO. This is probably because
concurrency became part of his standard toolbox of approaches.

I've placed the slides I've produced for Euro OSCON on Kamaelia here:
   * http://cerenity.org/KamaeliaEuroOSCON2005.pdf

The corrected URL for the whitepaper based on work now 6 months old (we've 
come quite a way since then!) is here:
   * http://www.bbc.co.uk/rd/pubs/whp/whp113.shtml

Consider a simple server for sending text (generated by a user typing into
the server) to multiple clients connecting to it. This is a naturally
concurrent problem in various ways (user interaction, splitting, listening
for connections, serving connections, etc). Why is that interesting to us?
It's effectively a microcosm of how subtitling works. (I work at the BBC.)

In Kamaelia this looks like this:

=== start ===
class ConsoleReader(threadedcomponent):
    def run(self):
        while 1:
            line = raw_input(">>> ")
            line = line + "\n"
            self.outqueues["outbox"].put(line)

Backplane("subtitles").activate()

pipeline(
    ConsoleReader(),
    publishTo("subtitles"),
).activate()

def subtitles_protocol():
    return subscribeTo("subtitles")

SimpleServer(subtitles_protocol, 5000).run()
=== end ===

The ConsoleReader is threaded to allow the use of the naive way of
reading from the input, whereas the server, backplane (a named splitter
component in practice), pipelines, publishing, subscribing, splitting,
etc. are all single-threaded co-operative concurrency.

A possible client for this text service might be:

pipeline(
    TCPClient("subtitles.rd.bbc.co.uk", 5000),
    Ticker(),
).run()

(Though that would be a bit bare, even if it does use pygame :)

The entire system is based around communicating generators, but we also
have threads for blocking operations. (Though the entire network subsystem
is non-blocking)

What I'd be interested in is hearing how our system doesn't match the
goals of the hypothetical concurrency system you'd like to see (if it
doesn't). The main reason I'm interested in hearing this is that the
goals you listed are ones we want to achieve. If you don't think our system
matches them (we don't have process migration as yet, so that's one area),
I'd be interested in hearing what areas you think are deficient.

However, the way we're beginning to refer to the project is to refer to
just the component aspect rather than the concurrency - for one simple
reason - we're getting to the stage where we can ignore /most/ concurrency
issues (not all).

If you have any time for feedback, it'd be appreciated. If you don't, I hope
it's useful food for thought!

Best Regards,


Michael
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Pythonic concurrency

2005-10-06 Thread Michael Sparks
On Thursday 06 October 2005 23:15, Josiah Carlson wrote:
[... 6 specific use cases ...]
> If Kamaelia is able to handle all of the above mechanisms in both a
> blocking and non-blocking fashion, then I would guess it has the basic
> requirements for most concurrent applications.

It can. I can easily knock up examples for each if required :-)

That said, a more interesting example implemented this week (as part of
a rapid prototyping project to look at collaborative community radio)
is a networked audio mixing matrix. That allows multiple sources of
audio to be mixed and sent on to multiple destinations, which may be
duplicate mixes of each other, but may also select different mixes. The
same system also includes point-to-point communications for network
control of the mix.

That application covers (I /think/) 1, 2, 3, 4 and 6 on your list of
things as I understand what you mean. 5 is fairly trivial though. (The
largest bottleneck in writing it was my personal misunderstanding of
how to actually mix 16-bit signed audio :-)

Regarding blocking & non-blocking: links can be marked as synchronous, which
forces blocking-style behaviour. Since we're generally using generators, we
can't block for real, which is why we throw an exception there. However,
threaded components can & do block. The reason for this is that the
architecture was inspired by noting the similarities between asynchronous
hardware systems/languages and network systems.
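
(Roughly, and illustratively - these aren't Axon's actual classes: a
size-limited synchronous link raises for generator components, which cannot
truly block, while a threaded component can simply block on a Queue.)

import Queue                        # Python 2 standard library

class noSpaceInBox(Exception):      # hypothetical name for this sketch
    pass

class SyncBox(object):
    # A size-limited box: generator components catch the exception and retry.
    def __init__(self, size=1):
        self.size, self.items = size, []
    def put(self, item):
        if len(self.items) >= self.size:
            raise noSpaceInBox()    # can't block inside a generator, so raise
        self.items.append(item)

threaded_box = Queue.Queue(maxsize=1)   # a threaded component just blocks here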

> into my tuple space implementation before it is released.

I'd be interested in hearing more about that BTW. One thing we've found is
that, much as organic systems have a neural system for communications between
things (hence Axon :-), you also need the equivalent of a hormonal system.
In the unix shell world, IMO the environment acts as that for pipelines, and
similarly that's why we have an assistant system (which has key/value lookup
facilities).

It's a less obvious requirement, but a useful one nonetheless, so I don't
really see a message passing style as excluding a Linda approach - since
they're orthogonal approaches.

Best Regards,


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Active Objects in Python

2005-10-01 Thread Michael Sparks
On Friday 30 September 2005 22:13, Michael Sparks (home address) wrote:
> I wrote a white paper based on my Python UK talk, which is here:
>     * http://www.bbc.co.uk/rd/pubs/whp/whp11.shtml

Oops that URL isn't right. It should be:
   * http://www.bbc.co.uk/rd/pubs/whp/whp113.shtml

Sorry! (Thanks to LD 'Gus' Landis for pointing that out!)

Regards,


Michael.
--
Though we are not now that which in days of old moved heaven and earth, 
   that which we are, we are: one equal temper of heroic hearts made 
 weak by time and fate but strong in will to strive, to seek, 
  to find and not to yield -- Ulysses, Tennyson


Re: [Python-Dev] Pythonic concurrency - cooperative MT

2005-10-01 Thread Michael Sparks
 by the threads
abstraction (more at the end).

> Pseudo or cooperative concurrency is not the same as true
> asynchronous concurrency.

Correct. I've had discussions with a colleague at work who wants to work
on the underlying formal semantics of our system for verification purposes,
and he pointed out that the core assumption with a pure generator approach
is that it effectively serialises the application, which may hide problems
that would surface in a truly parallel approach (e.g. one only using
processes, for a CSP-like system).

However, that statement had an underlying assumption: that the system would
be a pure generator system. As soon as you involve multiple systems using
network connections, threads (since we have threaded components as well),
and processes (which have always been on the cards; all our desktop machines
are dual processor and it just seems a waste to use only one), then the
system goes truly asynchronous.

As a result we (at least :-) have thought about these problems along the way.

> you have to deal with the way that they
> might access the same piece of data at the same time.

We do. We have both an underlying approach to deal with this and a
metaphor that encourages correct usage. The underlying mechanism is based
on explicit hand off of data between asynchronous activities. Once you have
handed off a piece of data, you no longer own it and can no longer change it.
If you are handed a piece of data you can do anything you like with it, for
as long as you like until you hand it off or throw it away.

The metaphor of old-fashioned paper based inboxes (or in-trays) and outboxes
(or out-trays) conceptually reinforces this idea - naturally encouraging
safer programming styles.

This means that we only ever have a single reader and single writer for any
item of data, which eliminates whole swathes of concurrency issues - whether
you're pseudo-concurrent (ie threads[*], generators) or truly concurrent
(processes on multiple processors).
   [*] Still only 1 CPU really.
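
(A minimal sketch of that hand-off rule - not Axon's real implementation:
each box has exactly one writer and one reader, so no locks are needed.)

class Component(object):
    def __init__(self):
        self.inboxes = {"inbox": []}
        self.outboxes = {"outbox": []}

    def send(self, data, boxname="outbox"):
        self.outboxes[boxname].append(data)   # hand-off: we no longer own data

    def recv(self, boxname="inbox"):
        return self.inboxes[boxname].pop(0)   # now we are the sole owner

def deliver(src, dst):
    # The linkage: move handed-off items from src's outbox to dst's inbox.
    while src.outboxes["outbox"]:
        dst.inboxes["inbox"].append(src.outboxes["outbox"].pop(0))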

Effectively there is no global data. If there is any global data (since we
do have a global address space we tend to think of as similar to a Linda
tuple space), then it has a single owner. Others may read it, but only one
may write to it. Because this is python, this is enforced by convention.
(But the use is discouraged and rarely needed.)

The use of generators effectively also hides the local variables from
accidental external modification. Which is a secondary layer of protection.

> If you do that then you might not have to lock access to your data
> structures at all.

We don't have to lock data structures at all - this is because we have
explicit hand off of data. If we hand off between processes, we do this
via Queues that handle the locking issues for us.

> To implement that explicitly, you would need an
> asynchronous version of all the functions that may block on
> resources (e.g. file open, socket write, etc.)

Or you can create a generator that handles reading from a file and hands
the data on to the next component explicitly. The file reader is given
CPU time by the scheduler. This can seem odd unless you've done any shell
programming, in which case the idea should be obvious:

echo `ls *py | while read i; do wc -l $i | cut -d " " -f1; done` | sed -e 's/ /+/g' | bc
(yes I know there's better ways of doing this :)

So all in all, I'd say yes, generators aren't really concurrent, but they
*are* a very good way (IMHO) of dealing with concurrency in a single thread,
and they map cleanly to a thread/process based approach if you're careful
in designing your approach early on.
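
(A minimal sketch of that style - not Kamaelia's actual scheduler: a
round-robin loop giving each generator a slice of CPU time, with explicit
hand-off through a shared buffer. The filename is hypothetical.)

def fileReader(filename, out):
    for line in open(filename):
        out.append(line)            # explicit hand-off of the data
        yield 1                     # hand control back to the scheduler
    out.append(None)                # sentinel: no more data

def printer(queue):
    while True:
        while queue:
            item = queue.pop(0)
            if item is None:
                return              # done - drop out of the scheduler
            print item,
        yield 1

def schedule(generators):
    while generators:
        for g in generators[:]:
            try:
                g.next()            # give this generator a slice of time
            except StopIteration:
                generators.remove(g)

buffer = []
schedule([fileReader("example.py", buffer), printer(buffer)])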

If you think I'm talking a load of sphericals (for all I know it's possible
I am, though I hope I'm not :-) , please look at our tutorial first, then
at our howto for building components [*] and tell me what we're
doing wrong. I'd really like to know so we can make the system better,
easier for newbies (and hence everyone else), and more trustable.
   [*] http://tinyurl.com/dp8n7

(This really feels like this more of a comp.lang.python discussion really
though, because AFAICT, python already has everything we need for this.
I might revisit that thought when we've looked at shared memory issues
though. IMHO though that would be largely stuff for the standard library.)

Best Regards,


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Active Objects in Python

2005-09-30 Thread Michael Sparks (home address)
 made, but mainly in the hope it's of use to others
(either directly or for cherry picking ideas).

Best Regards,


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.




Re: [Python-Dev] Information request; Keywords: compiler compiler, EBNF, python, ISO 14977

2005-07-22 Thread Michael Sparks
On Friday 22 Jul 2005 15:33, Christine C Moran wrote:
> Apart from the SimpleParse module, are there other good parser generators
> out there for python in python that accept EBNF style grammars?
> Preferably ISO 14977 EBNF.

This is off-topic for python-dev.  Please post to comp.lang.python
instead.


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Adding Python-Native Threads

2005-06-30 Thread Michael Sparks
On Sunday 26 Jun 2005 12:04, Adam Olsen wrote:
> On 6/26/05, Ronald Oussoren [EMAIL PROTECTED] wrote:
>> Have a look at stackless python. http://www.stackless.com/
> On 6/26/05, Florian Schulze [EMAIL PROTECTED] wrote:
>> Also look at greenlets, they are in the py lib http://codespeak.net/py
> While internally Stackless and Greenlets may (or may not) share a lot
> with my proposed python-native threads, they differ greatly in their
> intent and the resulting interface they expose.

Indeed - greenlets allow you to build the functionality you propose without
having to change the language.

> For example, with Greenlets you would use the .switch() method of a
> specific greenlet instance to switch to it, and with my python-native
> threads you would use the global idle() function which would decide
> for itself which thread to switch to.

This is easy enough to build using greenlets today. I tried writing an
experimental version of our generator scheduling system using greenlets
rather than generators and found it to work very nicely. I'd suggest that if
you really want this functionality (I can understand why) you revisit
greenlets - they probably do what you want.
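
(To sketch what I mean - assuming the greenlet API of greenlet(run, parent)
and .switch(), with illustrative names: a global idle() that round-robins a
run queue.)

from greenlet import greenlet

run_queue = []

def scheduler():
    while run_queue:
        g = run_queue.pop(0)
        if not g.dead:
            run_queue.append(g)     # round-robin: back of the queue
            g.switch()

sched = greenlet(scheduler)

def idle():
    # The global idle(): hand control back to the scheduler greenlet.
    sched.switch()

def spawn(func, *args):
    # parent=sched, so a finished greenlet returns control to the scheduler.
    run_queue.append(greenlet(lambda: func(*args), parent=sched))

def worker(name, count):
    for i in range(count):
        print name, i
        idle()

spawn(worker, "A", 3)
spawn(worker, "B", 3)
sched.switch()                      # run everything; returns when the queue drains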

Mainly replying to say -1,

Best Regards,


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.


Re: [Python-Dev] Withdrawn PEP 288 and thoughts on PEP 342

2005-06-17 Thread Michael Sparks
At 08:24 PM 6/16/2005 -0400, Raymond Hettinger wrote:
> As a further benefit, using attributes was a natural approach because
> that same technique has long been used with classes (so no new syntax
> was needed and the learning curve was zero).

On Friday 17 Jun 2005 02:53, Phillip J. Eby wrote:
> Ugh.  Having actually emulated co-routines using generators, I have to tell
> you that I don't find generator attributes natural for this at all;
> returning a value or error (via PEP 343's throw()) from a yield expression
> as in PEP 342 is just what I've been wanting.

We've been essentially emulating co-routines using generators embedded
in a class to give us the equivalent of generator attributes. We've found
this very natural for system composition. (Essentially it's a CSP-type
system, though with an aim of ease of use.)

I've written up my talk from ACCU/Python UK this year, and it's available
here: http://www.bbc.co.uk/rd/pubs/whp/whp113.shtml

I'll also be talking about it at Europython later this month.

At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
> Someone should really come up with some realistic coroutine examples
> written using PEP 342 (with or without "continue EXPR").

On Friday 17 Jun 2005 05:07:22, Phillip J. Eby wrote:
> How's this?
>
>     def echo(sock):
>         while True:
>             try:
>                 data = yield nonblocking_read(sock)
>                 yield nonblocking_write(sock, data)
> ... snip ...

For comparison, our version of this would be:

from Axon.Component import component
from Kamaelia.SimpleServerComponent import SimpleServer

class Echo(component):
    def mainBody(self):
        while True:
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                self.send(data, "outbox")
            yield 1

SimpleServer(protocol=Echo, port=1501).run()


For more interesting pipelines we have:

pipeline(TCPClient("127.0.0.1", 1500),
         VorbisDecode(),
         AOAudioPlaybackAdaptor()
).run()

Which works in the same way as a Unix pipeline. I haven't yet written the
pipegraph or similar component that would allow this:

graph(A=SingleServer("0.0.0.0", 1500),
      B=Echo(),
      layout = { "A:outbox": "B:inbox", "B:outbox": "A:inbox" })

(Still undecided on the API for that really; currently the above is a lot
more verbose :-)

By contrast I really can't see how passing attributes in via .next() helps 
this approach in any way (Not that that's a problem for us :).

I CAN see though that it helps if you're taking the generator composition
approach, as with twisted.flow (though I'll defer a good example of that to
someone else since, although I've been asked for a comparison in the past, I
don't think I'm sufficiently twisted to do so!).


Michael.
-- 
Michael Sparks, Senior R&D Engineer, Digital Media Group
[EMAIL PROTECTED], http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This e-mail may contain personal views which are not the views of the BBC.