Re: Extending dict (dict's) to allow for multidimensional dictionary

2011-03-07 Thread TomF

On 2011-03-05 12:05:43 -0800, Paul Rubin said:


Ravi ra.ravi@gmail.com writes:

I can extend dictionary to allow for my own special look-up
tables. However now I want to be able to define a multidimensional
dictionary which supports look-up like this:

d[1]['abc'][40] = 'dummy'


Why do that anyway?  You can use a tuple as a subscript:

   d[1,'abc',40] = 'dummy'


Depends on the use cases.  If you only want to access d by a full 
tuple, that's fine.  If you want to access all the entries below d[1], 
it's a problem.  And if you want to access entities randomly by an 
arbitrary dimension, you probably want a database (as others have 
pointed out).
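
If the nested d[1]['abc'][40] access really is what's wanted (so that
everything under d[1] stays reachable as a group), one rough, untested
sketch is an auto-vivifying dictionary built on collections.defaultdict:

from collections import defaultdict

def autodict():
    return defaultdict(autodict)

d = autodict()
d[1]['abc'][40] = 'dummy'
print d[1]['abc'][40]    # dummy
print dict(d[1])         # everything stored under d[1]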


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


examples of realistic multiprocessing usage?

2011-01-16 Thread TomF
I'm trying to multiprocess my python code to take advantage of multiple 
cores.  I've read the module docs for threading and multiprocessing, 
and I've done some web searches.  All the examples I've found are too 
simple: the processes take simple inputs and compute a simple value.  
My problem involves lots of processes, complex data structures, and 
potentially lots of results.  It doesn't map cleanly into a Queue, 
Pool, Manager or Listener/Client example from the python docs.


Instead of explaining my problem and asking for design suggestions, 
I'll ask: is there a compendium of realistic Python multiprocessing 
examples somewhere?  Or an open source project to look at?


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Not clear about the dot notation

2011-01-16 Thread TomF

On 2011-01-16 11:59:11 -0800, Zeynel said:


What does vote.vote refer to in this snippet?

def txn():
    quote = Quote.get_by_id(quote_id)
    vote = Vote.get_by_key_name(key_names = user.email(), parent = quote)
    if vote is None:
        vote = Vote(key_name = user.email(), parent = quote)
    if vote.vote == newvote:
        return
    quote.votesum = quote.votesum - vote.vote + newvote
    vote.vote = newvote

from here: http://code.google.com/appengine/articles/overheard.html


vote refers to the Vote instance.
vote.vote refers to the instance variable in that instance:
      vote: The value of 1 for like, -1 for dislike.

Confusing choice of names, in my opinion.
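
Stripped down to a sketch (names made up to mirror the article's), the
pattern is just:

class Vote(object):
    def __init__(self):
        self.vote = 0      # instance variable: 1 for like, -1 for dislike

vote = Vote()              # vote is the instance
vote.vote = 1              # vote.vote is the attribute stored on that instance
print vote.vote            # 1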

-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Not clear about the dot notation

2011-01-16 Thread TomF

On 2011-01-16 12:44:35 -0800, Zeynel said:


On Jan 16, 3:24 pm, TomF tomf.sess...@gmail.com wrote:


vote refers to the Vote instance.


So he must have instantiated it previously, like

vote = Vote()

is this correct?


Yes.



So I have a model

class Item(db.Model):
    title = db.StringProperty()
    url = db.StringProperty()
    date = db.DateTimeProperty(auto_now_add=True)
    author = db.UserProperty()

and to write to the database I do

item = Item()
item.title = self.request.get("title")
item.url = self.request.get("url")
item.author = users.get_current_user()
item.put()
self.redirect("/newest")

so his vote.vote is like my item.url ?


I believe so.  Though you're now talking about an extension to db.Model 
which looks like it's doing a lot more behind the scenes than a simple 
variable access.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: examples of realistic multiprocessing usage?

2011-01-16 Thread TomF

On 2011-01-16 19:16:15 -0800, Dan Stromberg said:


On Sun, Jan 16, 2011 at 11:05 AM, TomF tomf.sess...@gmail.com wrote:

I'm trying to multiprocess my python code to take advantage of multiple
cores.  I've read the module docs for threading and multiprocessing, and
I've done some web searches.  All the examples I've found are too simple:
the processes take simple inputs and compute a simple value.  My problem
involves lots of processes, complex data structures, and potentially lots of
results.  It doesn't map cleanly into a Queue, Pool, Manager or
Listener/Client example from the python docs.

Instead of explaining my problem and asking for design suggestions, I'll
ask: is there a compendium of realistic Python multiprocessing examples
somewhere?  Or an open source project to look at?


I'm unaware of a big archive of projects that use multiprocessing, but
maybe one of the free code search engines could help with that.

It sounds like you're planning to use mutable shared state, which is
generally best avoided if at all possible in concurrent programming,
because mutable shared state tends to slow things down quite a bit.

I'm trying to avoid mutable shared state since I've read the cautions 
against it.  I think it's possible for each worker to compute changes 
and return them to the parent (and have the parent coordinate all 
changes) without too much overhead.  So far it looks like 
multiprocessing.Pool.apply_async is the best match to what I want.


One difficulty is that there is a queue of work to be done and a queue 
of results to be incorporated back into the parent; there is no 
one-to-one correspondence between the two.  It's not obvious to me how 
to coordinate the queues in a natural way to avoid deadlock or 
starvation.




But if you must have mutable shared state that's more complex than a
basic scalar or homogeneous array, I believe the multiprocessing
module would have you use a server process manager.


I've looked into Manager but I don't really understand the trade-offs.
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: examples of realistic multiprocessing usage?

2011-01-16 Thread TomF

On 2011-01-16 20:57:41 -0800, Adam Skutt said:


On Jan 16, 11:39 pm, TomF tomf.sess...@gmail.com wrote:

One difficulty is that there is a queue of work to be done and a queue
of results to be incorporated back into the parent; there is no
one-to-one correspondence between the two.  It's not obvious to me how
to coordinate the queues in a natural way to avoid deadlock or
starvation.



Depends on what you are doing.  If you can enqueue all the jobs before
waiting for your results, then two queues are adequate.  The first
queue is jobs to be accomplished, the second queue is the results.
The items you put on the result queue have both the result and some
sort of id so the results can be ordered after the fact.  Your parent
thread of execution (thread hereafter) then:

1. Adds jobs to the queue
2. Blocks until all the results are returned.  Given that you
suggested that there isn't a 1:1 correspondence between jobs and
results, have the queue support a message saying, 'Job X is done'.
You're finished when all jobs send such a message.
3. Sorts the results into the desired order.
4. Acts on them.

If you cannot enqueue all the jobs before waiting for the results, I
suggest turning the problem into a pipeline, such that the thread
submitting the jobs and the thread acting on the results are
different: submitter -> job processor -> results processor.
Adam


Thanks for your reply.  I can enqueue all the jobs before waiting for 
the results, it's just that I want the parent to process the results as 
they come back.  I don't want the parent to block until all results are 
returned.  I was hoping the Pool module had a test for whether all 
processes were done, but I guess it isn't hard to keep track of that 
myself.
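
For the record, here's roughly what I'm picturing -- an untested sketch
using apply_async with a callback and a simple counter of outstanding
jobs, with a made-up worker standing in for the real work:

import multiprocessing

def worker(job):
    # stand-in for the real work; must be a picklable top-level function
    return job, job * job

def main():
    pool = multiprocessing.Pool()
    pending = [0]                       # outstanding job count

    def incorporate(result):            # runs back in the parent process
        job_id, value = result
        # ... merge value into the parent's data structures here ...
        pending[0] -= 1

    for job in range(100):
        pending[0] += 1
        pool.apply_async(worker, (job,), callback=incorporate)

    pool.close()
    pool.join()                         # by now every callback has run
    assert pending[0] == 0

if __name__ == '__main__':
    main()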


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: True lists in python?

2010-12-19 Thread TomF

On 2010-12-18 22:18:07 -0800, Dmitry Groshev said:


Is there any way to use true lists (with O(c) insertion/deletion and
O(n) search) in python? For example, to make things like reversing
part of the list run in constant time.


I assume you mean a C extension that implements doubly linked lists 
(reversing part of a list is only constant time if the list is 
doubly-linked).  I'm not aware of one.


A longer answer is that many high level languages (Python, Perl, Ruby) 
don't bother implementing simple linked lists because they're not very 
useful.  Instead they use hybrid data structures that can operate as 
lists and arrays with flexibility and acceptable costs.  And if you 
need greater speed you usually go to special purpose arrays (for 
constant time access) rather than lists.
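
For what it's worth, one such hybrid in the standard library is
collections.deque, which gives O(1) appends and pops at either end
(though not O(1) insertion in the middle, and reversing is still O(n)).
A quick sketch:

from collections import deque

d = deque([1, 2, 3, 4])
d.appendleft(0)      # O(1) at the left end
d.pop()              # O(1) at the right end
d.rotate(2)          # shift elements around the ring
print list(d)        # [2, 3, 0, 1]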


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Comparisons of incompatible types

2010-12-07 Thread TomF

On 2010-12-07 16:09:17 -0800, Mark Wooding said:


Carl Banks pavlovevide...@gmail.com writes:


I think that feeling the need to sort non-homogenous lists is
indicative of bad design.


Here's a reason you might want to.

You're given an object, and you want to compute a hash of it.  (Maybe
you want to see whether someone else's object is the same as yours, but
don't want to disclose the actual object, say.)  To hash it, you'll need
to serialize it somehow.  But here's a problem: objects like
dictionaries and sets don't impose an ordering on their elements.  For
example, the set { 1, 'two' } is the same as the set { 'two', 1 } -- but
iterating the two might well yield the elements in a different order.
(The internal details of a hash table tend to reflect the history of
operations on the hash table as well as its current contents.)

The obvious answer is to apply a canonical ordering to unordered objects
like sets and dictionaries.  A set can be serialized with its elements
in ascending order; a dictionary can be serialized as key/value pairs
with the keys in ascending order.  But to do this, you need an
(arbitrary, total) order on all objects which might be set elements or
dictionary keys.  The order also needs to be dependent only on the
objects' serializable values, and not on any incidental facts such as
memory addresses or whatever.


I have no argument that there might be an extra-logical use for such an 
ordering which you might find convenient.  This is the point you're 
making.  sort() and sorted() both take a cmp argument for this sort of 
thing.   My complaint is with Python adopting nonsensical semantics 
("shoe" < 7) to accommodate it.
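
If I needed such an ordering I'd rather spell it out, e.g. with a key
function (a rough sketch: order by type name, then by repr -- arbitrary,
but at least explicit):

def canon_key(obj):
    return (type(obj).__name__, repr(obj))

mixed = [1, 'two', (3, 4), None]
print sorted(mixed, key=canon_key)
# [None, 1, 'two', (3, 4)]  -- NoneType < int < str < tuple, alphabetically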


By analogy, I often find it convenient to have division by zero return 
0 to the caller for use in calculations.  But if Python defined 0/0==0 
I'd consider it broken.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Comparisons of incompatible types

2010-12-06 Thread TomF

I'm aggravated by this behavior in python:

x = "4"
print x < 7    # prints False

The issue, of course, is comparisons of incompatible types.  In most 
languages this throws an error (in Perl the types are converted 
silently).   In Python this comparison fails silently.  The 
documentation says: "objects of different types *always* compare 
unequal, and are ordered consistently but arbitrarily."


I can't imagine why this design decision was made.  I've been bitten by 
this several times (reading data from a file and not converting the 
numbers before comparison).  Can I get this to throw an error instead 
of failing silently?
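
(The stop-gap I've been using is a checked-comparison helper -- a rough
sketch, and it only helps where I remember to call it:)

def strict_lt(a, b):
    # refuse to order values unless both or neither are strings
    if isinstance(a, basestring) != isinstance(b, basestring):
        raise TypeError("can't order %r and %r" % (a, b))
    return a < b

print strict_lt(4, 7)      # True
# strict_lt("4", 7)        # raises TypeError instead of returning False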


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Comparisons of incompatible types

2010-12-06 Thread TomF


On 2010-12-06 09:04:00 -0800, Peter Otten said:


TomF wrote:


I'm aggravated by this behavior in python:

x = "4"
print x < 7    # prints False

The issue, of course, is comparisons of incompatible types.  In most
languages this throws an error (in Perl the types are converted
silently).   In Python this comparison fails silently.  The
documentation says: "objects of different types *always* compare
unequal, and are ordered consistently but arbitrarily."

I can't imagine why this design decision was made.  I've been bitten by
this several times (reading data from a file and not converting the
numbers before comparison).  Can I get this to throw an error instead
of failing silently?


This change would break a lot of code, so it could not be made within the
2.x series. However:

Python 3.1.1+ (r311:74480, Nov  2 2009, 15:45:00)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> "4" < 7
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: str() < int()


Thanks.  I was hoping there was something I could do for 2.x but I 
suppose this will have to do.


But I'm mystified by your statement, "this change would break a lot of 
code".  Given that the semantics are virtually random, how could code 
depend on this?


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Request for comments on a design

2010-10-23 Thread TomF
I have a program that manipulates lots of very large indices, which I 
implement as bit vectors (via the bitarray module).   These are too 
large to keep all of them in memory so I have to come up with a way to 
cache and load them from disk as necessary.  I've been reading about 
weak references and it looks like they may be what I want.


My idea is to use a WeakValueDictionary to hold references to these 
bitarrays, so Python can decide when to garbage collect them.  I then 
keep a key-value database of them (via bsddb) on disk and load them 
when necessary.  The basic idea for accessing one of these indexes is:


_idx_to_bitvector_dict = weakref.WeakValueDictionary()

def retrieve_index(idx):
    if idx in _idx_to_bitvector_dict and _idx_to_bitvector_dict[idx] is not None:
        return _idx_to_bitvector_dict[idx]
    else:  # it's been gc'd
        bv_str = bitvector_from_db[idx]    # Load from bsddb
        bv = cPickle.loads(bv_str)         # Deserialize the string
        _idx_to_bitvector_dict[idx] = bv   # Re-initialize the weak dict element
        return bv

Hopefully that's not too confusing.  Comments on this approach?  I'm 
wondering whether the weakref stuff isn't duplicating some of the 
caching that bsddb might be doing.


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Request for comments on a design

2010-10-23 Thread TomF

On 2010-10-23 01:50:53 -0700, Peter Otten said:


TomF wrote:


I have a program that manipulates lots of very large indices, which I
implement as bit vectors (via the bitarray module).   These are too
large to keep all of them in memory so I have to come up with a way to
cache and load them from disk as necessary.  I've been reading about
weak references and it looks like they may be what I want.

My idea is to use a WeakValueDictionary to hold references to these
bitarrays, so Python can decide when to garbage collect them.  I then
keep a key-value database of them (via bsddb) on disk and load them
when necessary.  The basic idea for accessing one of these indexes is:

_idx_to_bitvector_dict = weakref.WeakValueDictionary()


In a well written script that cache will be almost empty. You should compare
the weakref approach against a least-recently-used caching strategy. In
newer Pythons you can use collections.OrderedDict to implement an LRU cache
or use the functools.lru_cache decorator.


I don't know what your first sentence means, but thanks for pointers to 
the LRU stuff.  Maintaining my own LRU cache might be a better way to 
go.  At least I'll have more control.
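
Something along these lines is what I'm picturing -- a bare-bones sketch
assuming 2.7's collections.OrderedDict, with the bsddb/cPickle loading
stubbed out as load_func:

import collections

class LRUCache(object):
    def __init__(self, load_func, maxsize=128):
        self.load_func = load_func             # e.g. loads and unpickles from bsddb
        self.maxsize = maxsize
        self._cache = collections.OrderedDict()

    def __getitem__(self, key):
        if key in self._cache:
            value = self._cache.pop(key)         # move to the most-recent end
        else:
            value = self.load_func(key)
            if len(self._cache) >= self.maxsize:
                self._cache.popitem(last=False)  # evict the least recently used
        self._cache[key] = value
        return value

# usage: bitvectors = LRUCache(load_bitvector_from_db, maxsize=64)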


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Deferring a function call

2010-10-19 Thread TomF

Thanks for the ideas, everyone.

functools.partial and lambda expressions seem like a more pythonic way 
of doing what I want.  I don't know whether they're actually more 
efficient or better, but at least they eliminate the need to carry args 
around separately.
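
E.g., roughly (an untested sketch, with a heap of (time, callable) pairs):

import heapq
from functools import partial

events = []                          # heap of (time, action) pairs

def schedule(time, fn, *args, **kwargs):
    heapq.heappush(events, (time, partial(fn, *args, **kwargs)))

def run_until(now):
    while events and events[0][0] <= now:
        time, action = heapq.heappop(events)
        action()                     # fn called with the args frozen earlier

def f(a, b, c):
    print "firing:", a, b, c

schedule(10.0, f, 1, 2, c=3)
run_until(20.0)                      # prints: firing: 1 2 3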


I'd forgotten Python has a sched module in its standard library.  It 
may be overkill for what I want to do but I'll take a look.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Deferring a function call

2010-10-18 Thread TomF
I'm writing a simple simulator, and I want to schedule an action to 
occur at a later time.  Basically, at some later point I want to call a 
function f(a, b, c).  But the values of a, b and c are determined at 
the current time.


One way to do this is to keep a list of entries of the form [[TIME, 
FN, ARGS]...] and at simulated time TIME do: apply(FN, ARGS)
Aside from the fact that apply is deprecated, it seems like there 
should be a cleaner (possibly more Pythonic) way to do this.   Ideas?


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: please help explain this result

2010-10-17 Thread TomF

On 2010-10-17 10:21:36 -0700, Paul Kölle said:

On 17.10.2010 13:48, Steven D'Aprano wrote:

On Sun, 17 Oct 2010 03:58:21 -0700, Yingjie Lan wrote:


Hi,

I played with an example related to namespaces/scoping. The result is a
little confusing:


[snip example of UnboundLocalError]

Python's scoping rules are such that if you assign to a variable inside a
function, it is treated as a local. In your function, you do this:

def f():
a = a + 1

Since a is treated as a local, when you enter the function the local a is
unbound -- it does not have a value. So the right hand side fails, since
local a does not exist, and you get an UnboundLocalError. You are trying
to get the value of local a when it doesn't have a value.


Steven's explanation is correct.  In your example below you're altering 
portions of a global data structure, not reassigning a global variable. 
Put another way, there is a significant difference between:

   a = 7
and:
   a['x'] = 7

Only the first reassigns a global variable.
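
In other words (a small sketch):

a = 7
d = {}

def rebind():
    global a          # without this, 'a = 8' creates a new local name
    a = 8

def mutate():
    d['x'] = 7        # no 'global' needed: we only modify the object d refers to

rebind(); mutate()
print a, d            # 8 {'x': 7}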

-Tom


Oh really? Can you explain the following?

>>> a = {}
>>> def foo():
...     a['a'] = 'lowercase a'
...     print a.keys()
...
>>> foo()
['a']
>>> a
{'a': 'lowercase a'}
>>> def bar():
...     a['b'] = a['a'].replace('a', 'b')
...
>>> bar()
>>> a
{'a': 'lowercase a', 'b': 'lowercbse b'}
 

cheers
  Paul



--
http://mail.python.org/mailman/listinfo/python-list


Re: Global variables for python applications

2010-05-20 Thread TomF



On 2010-05-19 07:34:37 -0700, Steven D'Aprano said:


# Untested.
def verbose_print(arg, level, verbosity=1):
    if level <= verbosity:
        print arg

def my_function(arg):
    my_print(arg, level=2)
    return arg.upper()

if __name__ == '__main__':
    if '--verbose' in sys.argv:
        my_print = functools.partial(verbose_print, verbosity=2)
    elif '--quiet' in sys.argv:
        my_print = functools.partial(verbose_print, verbosity=0)

    my_function("hello world")


Note that although there is no verbosity global setting, every function
that calls my_print will do the right thing (unless I've got the test
backwards...), and if a function needs to override the implicit verbosity
setting, it can just call verbose_print directly.



--
http://mail.python.org/mailman/listinfo/python-list


Re: Global variables for python applications

2010-05-19 Thread TomF

On 2010-05-16 12:27:21 -0700, christian schulze said:


On 16 Mai, 20:20, James Mills prolo...@shortcircuit.net.au wrote:

On Mon, May 17, 2010 at 4:00 AM, Krister Svanlund

krister.svanl...@gmail.com wrote:

On Sun, May 16, 2010 at 7:50 PM, AON LAZIO aonla...@gmail.com wrote:

   How can I set up global variables for an entire python application?
Like I can call and set these variables in any .py file.
   Think of it as a global variable in a single .py file but this is for the
entire application.



First: Do NOT use global variables, it is bad practice and will
eventually give you loads of s**t.



But if you want to create global variables in python I do believe it
is possible to specify them in a .py file and then simply import it as
a module in your application. If you change one value in a module the
change will be available in all places you imported that module in.


The only place global variables are considered somewhat acceptable
is as constants in a module shared as a static value.

Anything else should be an object that you share. Don't get into the
habit of using global variables!

--james


Exactly! Python's OOP is awesome. Use it. Global vars used as anything
but constants is bad practice. It isn't that much work to implement
that.


Let's say you have a bunch of globals, one of which is a verbose flag.  
If I understand the difference, using a module gbls.py:

# in gbls.py
verbose = False
# elsewhere:
import gbls
gbls.verbose = True

Using a class:

# In the main module:
class gbls(object):
    def __init__(self, verbose=False):
        self.verbose = verbose

my_globals = gbls.gbls(verbose=True)
...
some_function(my_globals, ...)


If this is what you have in mind, I'm not really seeing how one is good 
practice and the other is bad.  The OOP method is more verbose (no pun 
intended) and I don't see how I'm any less likely to shoot myself in 
the foot with it.


-Tom


--
http://mail.python.org/mailman/listinfo/python-list


Re: Django as exemplary design

2010-05-06 Thread TomF

On 2010-05-06 18:20:02 -0700, Trent Nelson said:

I'm interested in improving my python design by studying a large,
well-designed codebase.


I'll tell you one of the best ways to improve your Python code: attend
one of Raymond Hettinger's Code Clinic workshops at a Python conference
and put some of your work up on the projector for 20+ developers to
rip apart, line by line ;-)  You'll pick up more in 30 minutes than you
ever thought possible.


I don't doubt it.  But I'm not really interested in line-level (micro) 
code issues at the moment. Not that my code couldn't stand being 
improved, but I'm more interested in seeing how medium/large OO python 
systems are designed.  If I could get this from a book I would, but I 
suspect I need to study real code.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Django as exemplary design

2010-05-04 Thread TomF

Thanks to everyone for their comments.

On 2010-05-04 07:11:08 -0700, alex23 said:

TomF tomf.sess...@gmail.com wrote:

I'm interested in improving my python design by studying a large,
well-designed codebase.  Someone (not a python programmer) suggested
Django.  I realize that Django is popular, but can someone comment on
whether its code is well-designed and worth studying?


Here's a viewpoint that says no: 
http://mockit.blogspot.com/2010/04/mess-djangos-in.html


There's a lot of good counterpoint in the comments too.


I read most of the discussion.  Yep, there is a LOT of disagreement 
about the code quality.  I guess I'll dig in and see whether I can 
learn anything.


(I also think there's value to be gained in studying _bad_ code,
too...)


True, although whether that's time well spent is another question.

Regards,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Django as exemplary design

2010-05-03 Thread TomF
I'm interested in improving my python design by studying a large, 
well-designed codebase.  Someone (not a python programmer) suggested 
Django.  I realize that Django is popular, but can someone comment on 
whether its code is well-designed and worth studying?


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: getting a string as the return value from a system command

2010-04-18 Thread TomF

On 2010-04-16 12:06:13 -0700, Catherine Moroney said:


Hello,

I want to call a system command (such as uname) that returns a string,
and then store that output in a string variable in my python program.

What is the recommended/most-concise way of doing this?

I could always create a temporary file, call the subprocess.Popen
module with the temporary file as the stdout argument, and then
re-open that temporary file and read in its contents.  This seems
to be awfully long way of doing this, and I was wondering about 
alternate ways of accomplishing this task.


In pseudocode, I would like to be able to do something like:
hostinfo = subprocess.Popen("uname -srvi") and have hostinfo
be a string containing the result of issuing the uname command.


Here is the way I do it:

import os
hostinfo = os.popen("uname -srvi").readline().strip()

(I add a strip() call to get rid of the trailing newline.)

os.popen has been replaced by the subprocess module, so I suppose the 
new preferred method is:


from subprocess import Popen, PIPE
hostinfo = Popen(["uname", "-srvi"], stdout=PIPE).communicate()[0].strip()


Looks ugly to me, but there we are.
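
(Where subprocess.check_output is available -- it's new in 2.7 -- it
trims this down a bit:)

from subprocess import check_output
hostinfo = check_output(["uname", "-srvi"]).strip()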

-Tom

--
http://mail.python.org/mailman/listinfo/python-list


distutils examples?

2010-04-16 Thread TomF
I'm packaging up a program with distutils and I've run into problems 
trying to get setup.py right.  It's not a standalone package; it's a 
script plus modules, data files and documentation.  I've been over the 
distutils documentation but I'm having trouble getting the package_data 
and data_files correct.


Is there a repository of distutils examples somewhere that I can look at?
The documentation is not particularly instructive for me.

Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: (a==b) ? 'Yes' : 'No'

2010-03-31 Thread TomF

On 2010-03-31 00:57:51 -0700, Peter Otten __pete...@web.de said:

Pierre Quentel wrote:


I'm surprised nobody proposed a solution with itertools ;-)


next(itertools.takewhile(lambda _: a == b, ['Yes']), 'No')

You spoke too soon :)


I salute you, sir, for upholding the standards of this group.

-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: sum for sequences?

2010-03-25 Thread TomF
On 2010-03-24 14:07:24 -0700, Steven D'Aprano 
ste...@remove.this.cybersource.com.au said:

On Wed, 24 Mar 2010 15:29:07 +, kj wrote:


Is there a sequence-oriented equivalent to the sum built-in?  E.g.:

seq_sum(((1, 2), (5, 6))) --> (1, 2) + (5, 6) --> (1, 2, 5, 6)

?


Yes, sum.

help(sum) is your friend.


You might not want to be so glib.  The sum doc sure doesn't sound like 
it should work on lists.


   Returns the sum of a sequence of numbers (NOT strings) plus the value
   of parameter 'start' (which defaults to 0).
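
(It does work if you pass an explicit start value, though -- a quick check:)

print sum(((1, 2), (5, 6)), ())    # (1, 2, 5, 6)
print sum([[1, 2], [5, 6]], [])    # [1, 2, 5, 6] -- but quadratic for many lists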

-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: to pass self or not to pass self

2010-03-15 Thread TomF

On 2010-03-15 09:39:50 -0700, lallous elias.bachaal...@gmail.com said:


Hello,

Learning Python from the help file and online resources can leave one
with many gaps. Can someone comment on the following:

# -
class X:
    T = 1

    def f1(self, arg):
        print "f1, arg=%d" % arg
    def f2(self, arg):
        print "f2, arg=%d" % arg
    def f3(self, arg):
        print "f3, arg=%d" % arg

    # this:
    F = f2
    # versus this:
    func_tbl = { 1: f1, 2: f2, 3: f3 }

    def test1(self, n, arg):
        # why passing 'self' is needed?
        return self.func_tbl[n](self, arg)

    def test2(self):
        f = self.f1
        f(6)

        f = self.F
        # why passing self is not needed?
        f(87)

# -
x = X()

x.test1(1, 5)
print '--'
x.test2()

Why in test1() when it uses the class variable func_tbl we still need
to pass self, but in test2() we don't ?

What is the difference between the reference in 'F' and 'func_tbl' ?


I recommend putting print statements into your code like this:

    def test1(self, n, arg):
        print "In test1, I'm calling a %s" % self.func_tbl[n]
        return self.func_tbl[n](self, arg)

    def test2(self):
        f = self.f1
        print "Now in test2, I'm calling a %s" % f
        f(6)


Bottom line: You're calling different things.  Your func_tbl is a dict 
of functions, not methods.
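
My gloss on why: F = f2 is found through the class, so self.F comes back
as a bound method with self already attached, while func_tbl just hands
back the plain function objects stored in the dict.  A tiny sketch:

class X:
    def f1(self, arg):
        print "f1, arg=%d" % arg

    F = f1
    func_tbl = {1: f1}

x = X()
print x.F             # <bound method X.f1 of ...>  (self already bound)
print x.func_tbl[1]   # <function f1 at ...>        (plain function from the dict)
x.F(6)                # self supplied automatically
x.func_tbl[1](x, 6)   # self must be passed by hand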


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: cpan for python?

2010-03-03 Thread TomF

On 2010-03-02 19:59:01 -0800, Lie Ryan lie.1...@gmail.com said:


On 03/03/2010 09:47 AM, TomF wrote:

On 2010-03-02 13:14:50 -0800, R Fritz rfr...@u.washington.edu said:


On 2010-02-28 06:31:56 -0800, sstein...@gmail.com said:


On Feb 28, 2010, at 9:28 AM, Someone Something wrote:


Is there something like cpan for python? I like python's syntax, but
I use perl because of cpan and the tremendous modules that it has.  --


Please search the mailing list archives.

This subject has been discussed to absolute death.


But somehow the question is not in the FAQ, though the answer is. See:

http://www.python.org/doc/faq/library/#how-do-i-find-a-module-or-application-to-perform-task-x



There is also a program called cpan, distributed with Perl.  It is used for
searching, downloading, installing and testing modules from the CPAN
repository.  It's far more extensive than setuptools.  AFAIK the python
community has developed nothing like it.


python has easy_install


easy_install is part of setuptools.  As I said, nothing like cpan.

-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: cpan for python?

2010-03-02 Thread TomF

On 2010-03-02 13:14:50 -0800, R Fritz rfr...@u.washington.edu said:


On 2010-02-28 06:31:56 -0800, sstein...@gmail.com said:


On Feb 28, 2010, at 9:28 AM, Someone Something wrote:

Is there something like cpan for python? I like python's syntax, but 
I use perl because of cpan and the tremendous modules that it has.  --


Please search the mailing list archives.

This subject has been discussed to absolute death.


But somehow the question is not in the FAQ, though the answer is. See:
  
http://www.python.org/doc/faq/library/#how-do-i-find-a-module-or-application-to-perform-task-x


There is also a program called cpan, distributed with Perl.  It is used for 
searching, downloading, installing and testing modules from the CPAN 
repository.  It's far more extensive than setuptools.  AFAIK the python 
community has developed nothing like it.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: formatting a number as percentage

2010-02-21 Thread TomF

On 2010-02-21 09:53:45 -0800, vsoler vicente.so...@gmail.com said:

I'm trying to print .7 as 70%
I've tried:

print format(.7,'%%')
.7.format('%%')

but neither works. I don't know what the syntax is...



print "Grade is {0:%}".format(.87)

Grade is 87.00%

or if you want to suppress those trailing zeroes:

print "Grade is {0:.0%}".format(.87)

Grade is 87%

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is there a simple way to find the list index to the max value?

2010-02-16 Thread TomF

On 2010-02-16 11:44:45 -0800, a...@pythoncraft.com (Aahz) said:

In article 4b7a91b1.6030...@lonetwin.net, steve  st...@lonetwin.net wrote:

On 02/16/2010 05:49 PM, W. eWatson wrote:


See Subject. a = [1,4,9,3]. Find max, 9, then index to it, 2.


The most obvious would be a.index(max(a)). Is that what you wanted ?


The disadvantage of that is that it's O(2N) instead of O(N).


I don't think you understand order notation.   There's no such thing as O(2N).

To answer the original question, how about:
max(enumerate(l), key=lambda x: x[1])[0]
As to whether this is faster than index(max()), you'd have to time it.
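
Something like this would settle it (a rough timeit sketch; the numbers
will obviously depend on the data):

import timeit

setup = "import random; a = [random.random() for _ in range(10000)]"
print timeit.timeit("a.index(max(a))", setup=setup, number=1000)
print timeit.timeit("max(enumerate(a), key=lambda x: x[1])[0]",
                    setup=setup, number=1000)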

-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python system with exemplary organization/coding style

2009-05-15 Thread TomF

On 2009-05-14 16:18:07 -0700, CTO debat...@gmail.com said:


On May 14, 7:01 pm, TomF tomf.sess...@gmail.com wrote:

I'm looking for a medium-sized Python system with very good coding
style and good code organization, so I can learn from it.  I'm reading
various books on Python with advice on such things but I'd prefer to
see a real system.

By medium-sized I mean 5-20 classes, 5-20 files, etc; a code base that
has some complexity but isn't overwhelming.

Thanks,
-Tom


I'd recommend screenlets. Good documentation and pretty good style,
and they have some external dependencies so you can see how that
operates, and of course you can see what your code is and isn't doing
pretty quickly.

URL: http://www.screenlets.org


Thanks, this does look pretty good.  Too bad it's graphics-oriented, but 
I suppose I can filter out the graphics dependencies.


-Tom



--
http://mail.python.org/mailman/listinfo/python-list


Python system with exemplary organization/coding style

2009-05-14 Thread TomF
I'm looking for a medium-sized Python system with very good coding 
style and good code organization, so I can learn from it.  I'm reading 
various books on Python with advice on such things but I'd prefer to 
see a real system.


By medium-sized I mean 5-20 classes, 5-20 files, etc; a code base that 
has some complexity but isn't overwhelming.


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple way of handling errors

2009-05-07 Thread TomF

On 2009-05-07 01:01:57 -0700, Peter Otten __pete...@web.de said:


TomF wrote:


As a relative newcomer to Python, I like it a lot but I'm dismayed at
the difficulty of handling simple errors.  In Perl if you want to
anticipate a file-not-found error you can simply do:

open($file) or die("open($file): $!");

and you get an intelligible error message.  In Python, to get the same
thing it appears you need at least:

try:
    f = open(file)
except IOError, err:
    print "open(%s): got %s" % (file, err.strerror)
    exit(-1)

Is there a simpler interface or idiom for handling such errors?  I
appreciate that Python's exception handling is much more sophisticated
but often I don't need it.

-Tom


While you are making the transition you could write

from perl_idioms import open_or_die

f = open_or_die("does-not-exist")


with the perl_idioms module looking like

import sys

def open_or_die(*args):
    try:
        return open(*args)
    except IOError, e:
        sys.exit(e)

Peter


Thanks.  Rolling my own error module for common errors may be the best 
way to go.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Simple way of handling errors

2009-05-06 Thread TomF
As a relative newcomer to Python, I like it a lot but I'm dismayed at 
the difficulty of handling simple errors.  In Perl if you want to 
anticipate a file-not-found error you can simply do:


open($file) or die("open($file): $!");

and you get an intelligible error message.  In Python, to get the same 
thing it appears you need at least:


try:
    f = open(file)
except IOError, err:
    print "open(%s): got %s" % (file, err.strerror)
    exit(-1)

Is there a simpler interface or idiom for handling such errors?  I 
appreciate that Python's exception handling is much more sophisticated 
but often I don't need it.


-Tom

--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple way of handling errors

2009-05-06 Thread TomF
On 2009-05-06 19:41:29 -0700, Steven D'Aprano 
ste...@remove.this.cybersource.com.au said:



On Wed, 06 May 2009 16:40:19 -0700, TomF wrote:


As a relative newcomer to Python, I like it a lot but I'm dismayed at
the difficulty of handling simple errors.  In Perl if you want to
anticipate a file-not-found error you can simply do:

open($file) or die("open($file): $!");

and you get an intelligible error message.  In Python, to get the same
thing it appears you need at least:

try:
    f = open(file)
except IOError, err:
    print "open(%s): got %s" % (file, err.strerror)
    exit(-1)



Functions never fail silently in Python. (At least built-in functions
never fail silently. Functions you write yourself can do anything you
want.)


Well, yes, I'm aware that if you don't handle errors Python barfs out a 
backtrace.



If it fails, you get both a straight-forward error message and a useful
traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: [Errno 2] No such file or directory: 'foomanchu'

The only reason you would bother going to the time and effort of catching
the error, printing your own error message, and then exiting, is if you
explicitly want to hide the traceback from the user.


Well, to me, exposing the user to such raw backtraces is 
unprofessional, which is why I try to catch user-caused errors.  But I 
suppose I have an answer to my question.
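
(What I usually end up doing is catching the expected errors once, at
the top level, rather than wrapping every call site -- a rough sketch:)

import sys

def main():
    f = open(sys.argv[1])      # may raise IOError
    # ... the rest of the program ...

if __name__ == '__main__':
    try:
        main()
    except IOError, err:
        sys.exit("error: %s" % err)    # short message, no traceback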


Thanks,
-Tom

--
http://mail.python.org/mailman/listinfo/python-list