Help with Python+Bazaar+Launchpad

2008-04-21 Thread TimeHorse
Hello,

I am trying to create a branch of the bzr mirror for the current
Python trunk (2.6) development so I can finish my work on Issue 2636.  I
am not a core developer but am trying to create this branch so it can
be reviewed by a core developer I am working with.  Because I develop
on multiple machines, I want to set up a central repository for my
branch database and would like to use Launchpad to host it.  Python's
bzr archive is mirrored on launchpad via the bzr address lp:python, so
that should be the parent branch.  I can create a branch on 1 machine
locally, but I cannot upload (push) that branch onto launchpad, which
is preventing me from doing my development because I don't have access
to all machines at all times.  I need to have one shared branch
between all my development platforms.  So, I have tried and failed at
all the following:

1) Click the create branch button on the launchpad interface; this
creates an empty branch which cannot be populated.

2) Branch from lp:python to a local machine, branch from that, and
then try to upload to Launchpad.  But that makes my branch the child
of a child of the mainline trunk, so merging is too complicated.

3) Branch directly onto launchpad via bzr branch lp:python bzr+ssh://
name@bazaar.launchpad.net/~name/python/branch-name.  This
creates a NON-empty branch on Launchpad but I cannot check it out or
pull it.  Also, it is not created tree-less (--no-trees), which is
how it should be created.

4) I have tried to use my first branch from step 2 (from lp:python to
my local disk) to push an instance directly onto Launchpad, but this
creates an empty branch too, and as an empty branch it cannot be
checked out or pulled.

I know the type of Bazaar setup I want is the type specified in
Chapter 5 of the User Guide: decentralized, multi-platform, single- or
multiple-user.  I just can't figure out how to do that with a branch
from python.  Chapter 5 talks about setting up a new database with
init-repo and pushing new content, but I want to take a branch of an
existing database and push it to the public launchpad server.  I just
can't for the life of me figure out how to do it.  I have bzr 1.3 and
1.3.1 and neither have succeeded.

Any help would be greatly appreciated as I've totally lost an entire
weekend of development which I could have used to complete item 1 of
my issue and run gprof over the new engine.  I really need help with
all this difficult administrative stuff so I can get back to
development and get things done in time for the June beta.  PLEASE
HELP!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Opposite of repr() (kind of)

2008-04-21 Thread TimeHorse
On Apr 21, 7:05 am, Guillermo [EMAIL PROTECTED] wrote:
 Hi there,

 How can I turn a string into a callable object/function?

 I have a = 'len', and I want to do: if callable(eval(a)): print
 callable, but that doesn't quite work the way I want. :)

 Regards,

 Guillermo

What version of Python are you using?  I just tried
callable(eval('len')) on Python 2.5.1 and got True, and eval('len')
returns <built-in function len>.
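
For reference, a quick check of that behaviour (any recent Python
should agree):

```python
a = 'len'
f = eval(a)              # eval looks the name up in the current scope
assert callable(f)       # True -- f is the built-in len function
assert f([1, 2, 3]) == 3
print(repr(f))           # -> <built-in function len>
```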
-- 
http://mail.python.org/mailman/listinfo/python-list



Re: Adding Priority Scheduling feature to the subprocess

2008-02-25 Thread TimeHorse
On Feb 22, 4:30 am, Nick Craig-Wood [EMAIL PROTECTED] wrote:
 Interestingly enough this was changed in recent linux kernels.
 Process levels in Linux kernels are logarithmic now, whereas before
 they weren't (but I wouldn't like to say exactly what!).

Wow!  That's a VERY good point.  I ran a similar test on Windows with
the 'start' command which is similar to nice but you need to specify
the Priority Class by name, e.g.

start /REALTIME python.exe bench1.py

Now, in a standard operating system you'd expect some variance between
runs, and I did find that.  So I wrote a script to compute the Mode
(but not the Standard Deviation, as I didn't have time) for each
Priority Class, chosen at random each run, accumulating the running
value for each one.  Now, when I read the results, I really wished
I'd computed the Chi**2 to calculate the Standard Deviation, because
the results all appeared very close to one another, as if the
Priority Class had very little overall effect.  In fact, I would be
willing to guess that, say, NORMAL and ABOVENORMAL lie within one
Standard Deviation of one another!

That having been said, the tests all ran in about 10 seconds so it may
be that the process was too simple to show any statistical results.  I
know for instance that running ffmpeg as NORMAL or REALTIME makes a
sizable difference.

So, I concede the Unified Priority may indeed be dead in the water,
but I am thinking of giving it one last go with the following
suggestion:

0.0 == Zero-Page (Windows, e.g. 0) / +20 (Unix)
1.0 == Normal (Foreground) Priority (Windows, e.g. 9) / 0 (Unix)
MAX_PRIORITY == Realtime / Time Critical (Windows, e.g. 31) / -20 (Unix)

With the value of MAX_PRIORITY TBD.  Now, 0.0 would still represent
(relatively) 0% CPU usage, but now 1.0 would represent 100% of
'Normal' priority.  I would still map 0.0 - 1.0 linearly over the
scale corresponding to the given operating system (0 to 9, Windows;
+20 to 0, Unix), but higher priorities would correspond to > 1.0
values.

The idea here is that most users will only want to lower priority, not
raise it, so this makes lowering pretty intuitive.  As for the linear
mapping, I would leave a note in the documentation that although the
scale is linear, the operating system may not behave linearly, and
that the user should consult their OS's documentation to determine
specific behavior.  This is similar to the documentation of the file
time-stamps in os.stat, since their granularity differs based on OS.
Most users, I should think, would just want to make their spawn
slower and use the scale to determine how much in a relative fashion,
rather than expecting hard-and-fast numbers for the actual process
slowdown.

Higher-than-Normal priorities may, OTOH, be a bit harder to deal with.
It strikes me that maybe the best approach is to make MAX_PRIORITY
operating-system dependent, specifically 31 - 9 + 1.0 = +23.0 for
Windows and |-20 - 0| + 1.0 = +21.0 for Unix.  This way, again, the
priorities map linearly, and in this case 1:1.  I think most users
would choose a high priority relative to MAX_PRIORITY, or just
choose a small increment above 1.0 to add a little boost.

Of course, the 2 biggest problems with this approach are, IMHO: a) the
below-Normal scale is a percentage, but the above-Normal scale is
additive.  However, there is no simple definition of MAX_PRIORITY, so
I think using the OS's definition is natural.  b) This use of the
priority scale may be confusing to Unix users, since 1.0 now
represents Normal and +21, not -20, represents Max Priority.
However, the definition of MAX_PRIORITY would be irrelevant to the
definitions of setPriority and getPriority, since each would, in my
proposal, compute for p > 1.0:

Windows: 9 + int((p - 1) / (MAX_PRIORITY - 1) * 22 + .5)
Unix: -int((p - 1) / (MAX_PRIORITY - 1) * 20 + .5)
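
In Python terms, a sketch of the whole mapping might look like the
following (the p <= 1.0 branch is the linear map described above; the
MAX_PRIORITY defaults are the per-OS values proposed here, and the
function names are purely illustrative):

```python
def to_windows(p, max_priority=23.0):
    # 0.0..1.0 maps linearly onto Windows priorities 0..9;
    # 1.0..MAX_PRIORITY maps onto 9..31 via the formula above.
    if p <= 1.0:
        return int(p * 9 + 0.5)
    return 9 + int((p - 1) / (max_priority - 1) * 22 + 0.5)

def to_unix(p, max_priority=21.0):
    # 0.0..1.0 maps linearly onto nice values +20..0;
    # 1.0..MAX_PRIORITY maps onto 0..-20 via the formula above.
    if p <= 1.0:
        return int((1.0 - p) * 20 + 0.5)
    return -int((p - 1) / (max_priority - 1) * 20 + 0.5)
```

So to_windows(1.0) == 9 (Normal), to_windows(23.0) == 31 (Time
Critical), and to_unix(21.0) == -20.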

Anyway, that's how I'd propose to do the nitty-gritty.  But, more than
anything, I think the subprocess 'priority' methods should use a
priority scheme that is easy to explain.  And by that, I propose:

1.0 represents normal priority, 100%.  Any priority less than 1
represents a below normal priority, down to 0.0, the lowest possible
priority or 0%.  Any priority above 1.0 represents an above normal
priority, with MAX_PRIORITY being the highest priority level available
for a given os.

Granted, it's not much simpler than 0 is normal, -20 is highest and
+20 is lowest, except insofar as it avoids the non-intuitive notion
that a lower priority number represents a higher priority.
Certainly, we could conform all systems to the +20.0 to -20.0
floating-point system, but I prefer not to bias the methods, and I
honestly feel percentage is more intuitive.

So, how does that sound to people?  Is that more palatable?

Thanks again for all the input!

Jeffrey.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding Priority Scheduling feature to the subprocess

2008-02-21 Thread TimeHorse
On Feb 20, 10:15 pm, Terry Reedy [EMAIL PROTECTED] wrote:
 | Because UNIX uses priorities between +20 and -20 and Windows, via
 | Process and Thread priorities, allows settings between 0 and 31, a
 | uniform setting for each system should be derived.  This would be
 | accomplished by giving process priority in terms of a floating-point
 | value between 0.0 and 1.0 for lowest and highest possible priority,
 | respectively.

 I would rather that the feature use the -20 to 20 priorities and map that
 appropriately to the Windows range on Windows.

The problem as I see it is that -20 to +20 is only just over 5 bits of
precision and I can easily imagine an OS with many more than just 5
bits to specify a process priority.  Of course, the os.getpriority and
os.setpriority, being specific to UNIX, WOULD use the -20 to +20
scale, it's just the generic subprocess that would not.  But for a
generic priority, I like floating point because it gives 52 bits of
precision on most platforms.  This would allow for the most
flexibility.  Also, 0.0 to 1.0 is in some ways more intuitive to new
programmers because it can be modeled as ~0% CPU usage vs. ~100% CPU
usage, theoretically.  Users not familiar with UNIX might, OTOH, be
confused by the idea that a lower priority number constitutes a
higher priority.

Of course, the scale used for p in Popen(...).setPriority(p) is really
not an important issue to me as long as it makes sense in the context
of priorities.  Given that os.setpriority and Popen(...).setPriority
have virtually the same name, it would probably be better to rename
the latter to something a bit less prone to confusion.  Alternatively,
it would not be unreasonable to design setPriority (and getPriority
correspondingly) such that under UNIX it takes 1 parameter, -20 to +20
and under Windows it takes 2 parameters, second one optional, where
the Windows API priorities are directly passed to it (for getPriority,
Windows would return a Tuple pair corresponding to Priority Class and
Main Thread Priority).  However, I personally prefer a unified
definition for subprocess.py's Priority since there already is or will
be direct os-level methods to accomplish the same thing in the os-
native scale.

Anyway, thanks for the input and I will make a note of it in the PEP.
Other than the generic priority ranges, do you see any other issues
with my proposal?

Jeffrey.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding Priority Scheduling feature to the subprocess

2008-02-21 Thread TimeHorse
On Feb 21, 1:17 pm, Dennis Lee Bieber [EMAIL PROTECTED] wrote:
         Why imagine... AmigaOS ran -128..+127 (though in practice, one never
 went above +20 as the most time critical system processes ran at that
 level; User programs ran at 0, the Workbench [desktop] ran at +1... I
 think file system ran +5 and disk device handlers ran +10)

         On the other side, VMS ran 0..31, with 16..31 being fixed realtime
 priorities, and 0..15 being variable (they don't drop below the process
 base, but can and are boosted higher the longer they are preempted)

Thanks for the info!  Actually, many pieces of the WinNT kernel were
based on VMS, interestingly enough (go back to the NT 3.5 days and you
can see this), and of course Windows 2000, XP and Vista are all
derived from NT.  So I guess one long-time holdover from the VMS days
of NT is just as you say; it's still true in Windows: processes run
0..15 for anything but realtime, even if the thread is boosted to
Time Critical, but in realtime the priorities go from 16..31.

Anyway, on the one hand, AmigaOS support, where -128 -> p = 0.0 and
+127 -> p = 1.0, would be a good example of why simply using a
41-point UNIX scale is deficient in representing all possible
priorities.  But apart from the support-AmigaOS argument, you bring up
another issue which may be dangerous: are priorities linear in
nature?  For instance, a with-focus Windows app running Normal/Normal
runs at priority 9, p ~= 0.29 (and likely the same for VMS), whereas
UNIX and Amiga have 0 for normal processes, p ~= 0.50.  In many ways,
Normal being the epoch makes sense, but this clearly cannot be done
on a linear scale.  Perhaps I should modify the PEP so that instead
of having the generic priorities go from 0.0 to 1.0, they go from
-1.0 to +1.0, with 0.0 considered normal priority.  Then the negative
and positive regions can each be treated linearly, but not
necessarily with the same spacing, since on Windows -1.0 to 0.0 spans
priorities 0 to 9 and 0.0 to +1.0 spans priorities 10 to 31.  And
then, since +20 is the highest AmigaOS priority in practice, yet the
scale goes up to +127, that would mean that from p ~= +0.16 to
p = +1.0 you get obscenely high priorities which do not seem
practical.
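
To make the piecewise-linear idea concrete on Windows (purely an
illustration of the revised -1.0..+1.0 proposal, with a different
slope on each side of 0.0):

```python
def to_windows(p):
    # -1.0..0.0 maps linearly onto priorities 0..9 (below Normal);
    # 0.0..+1.0 maps linearly onto priorities 9..31 (above Normal).
    if p <= 0.0:
        return int(round(9 * (p + 1.0)))
    return int(round(9 + 22 * p))
```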

I will need to think about this problem some more.  I'd hate to think
there might be a priority scale based on a Normal Distribution!  I'm
already going bald; I can't afford to lose any more hair!  :)

Anyway, thanks for info Dennis; you've given me quite a bit to think
about!

Jeffrey.
-- 
http://mail.python.org/mailman/listinfo/python-list


Consistent mapping from frame to function

2007-10-27 Thread TimeHorse
Is there a consistent way to map a frame object to a function / method
object?  Methods usually hang off of parameter 1, and functions
usually in the global or local scope.  But lambda functions and
decorators are not so easily mapped.  You can get access to the
executing code object from the frame, but I don't see a consistent way
to map it to the function.  Any suggestions?
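
One heuristic I'm aware of (a sketch, not a complete answer: it
relies on the function object still being alive, and it cannot
distinguish two functions that share one code object):

```python
import gc
import inspect
import types

def func_from_frame(frame):
    # A frame carries its code object; walk the GC's referrers of
    # that code object looking for a function whose __code__ is the
    # very same object.
    code = frame.f_code
    for ref in gc.get_referrers(code):
        if isinstance(ref, types.FunctionType) and ref.__code__ is code:
            return ref
    return None

def sample():
    return func_from_frame(inspect.currentframe())

assert sample() is sample  # the frame maps back to its function
```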

-- 
http://mail.python.org/mailman/listinfo/python-list


New module for method level access modifiers

2007-10-23 Thread TimeHorse
I have started work on a new module that would allow the decoration of
class methods to restrict access based on calling context.
Specifically, I have created 3 decorators named public, private and
protected.

These modifiers work as follows:

Private: A private method can only be called from a method defined in
the same class as the called method.

Protected: A protected method can only be called from a method
defined in the same class as, or a class derived from, the called
method's class.

Public: No access restrictions (essentially a no-op).

Programmers with a C++ or Java background will be familiar with these
concepts from those languages; these decorators attempt to emulate
their behavior.
Bugs:

1)  These decorators will not tolerate other decorators because they
examine the call stack in order to determine the caller's frame.  A
second decorator, either before or after the access decorator, will
insert a stack frame, which the current version of these decorators
cannot handle.  At a minimum, interoperability would require other
decorators to set their wrapper function's __name__ and __doc__ and
to update its dictionary.

2)  As noted, staticmethod and classmethod cannot be handled by the
access decorators, not only because they are decorators themselves,
but because the current access decorators require access to an
instance of the class (self) in order to do method resolution.
classmethod support could probably be added without too much
difficulty but staticmethods, because they have no self or cls, would
be difficult.

3)  Friend classes, as defined in C++.  These would probably be defined
as a class-level list of classes that may have private/protected
access to the given class's internals.  This should not be too hard to
add.

4)  Decorators for member variables -- these decorators can already be
applied to get_* and set_* methods for properties.  Overriding
__getattr__ may be a better solution for attribute access, however.

Source available at: http://starship.python.net/crew/timehorse/access.py

Jeffrey

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: starship.python.net is down

2007-04-08 Thread TimeHorse
On Feb 26, 4:46 pm, Tom Bryan [EMAIL PROTECTED] wrote:
 Yes.  Unfortunately, there may be a hardware problem.  Stefan, the admin

Any word from the ISP what the hardware problem might be, Tom?

Jeffrey.

-- 
http://mail.python.org/mailman/listinfo/python-list


RFC: Assignment as expression (pre-PEP)

2007-04-05 Thread TimeHorse
I would like to gauge interest in the following proposal:

Problem:

Assignment statements cannot be used as expressions.

Performing a list of mutually exclusive checks that require data
processing can cause excessive tabification.  For example, consider
the following Python snippet...

temp = my_re1.match(exp)
if temp:
  # do something with temp
else:
  temp = my_re2.match(exp)
  if temp:
    # do something with temp
  else:
    temp = my_re3.match(exp)
    if temp:
      # do something with temp
    else:
      temp = my_re4.match(exp)
      # etc.

Even with 2-space tabification, after about 20 matches, the
indentation will take up half an 80-column terminal screen.
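
For illustration, the proposed := spelling happens to coincide with
the assignment expression Python itself later grew in 3.8 (PEP 572),
so the flattening can be shown directly; my_re1 and my_re2 here are
stand-in compiled patterns:

```python
import re

my_re1 = re.compile(r'\d+')
my_re2 = re.compile(r'[a-z]+')

def handle(exp):
    # Each assignment happens inline in the condition, so the
    # cascade stays a flat if/elif chain with no extra nesting.
    if temp := my_re1.match(exp):
        return ('digits', temp.group())
    elif temp := my_re2.match(exp):
        return ('letters', temp.group())
    return (None, exp)
```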

Details:

Python distinguishes between an assignment statement and an equality
expression.  This is to force disambiguation of assignment and
comparison so that a statement like:

if x = 3:

Will raise an exception rather than allowing the programmer to
accidentally overwrite x.  Likewise,

x == 3

Will either return True, False or raise a NameError exception, which
can alert the author of any potential coding mistakes since if x = 3
(assignment) was meant, assignment being a statement returns nothing
(though it may raise an exception depending on the underlying
assignment function called).

Because this forced disambiguation is a guiding virtue of the python
language, it would NOT be wise to change these semantics of the
language.

Proposal:

Add a new assignment-expression operator to disambiguate it completely
from existing operators.

Although any number of glyphs could be used for such a new operator, I
here propose using pascal/gnu make-like assignment.  Specifically,

let:

x = 3

Be a statement that returns nothing;

let:

x == 3

Be an expression that, when x is a valid, in-scope name, returns True
or False;

let:

x := 3

Be an expression that first assigns the value (3) to x, then returns
x.

Thus...

if x = 3:
  # Raise an exception
  pass

if x == 3:
  # Execute IFF x has a value equivalent to 3
  pass

if x := 3:
  # Executes based on the value of x after assignment; since x will
  # be 3, which is non-zero and thus true, this always executes
  pass

Additional:

Since python allows in-place operator assignment, (e.g. +=, *=, etc.),
allow for these forms again by prefixing each diglyph with a colon
(:), forming a triglyph.

E.g.

if x :+= 3:
  # Executes IFF, after adding 3 to x, x represents a non-zero number.
  pass

Also note, that although the colon operator is used to denote the
beginning of a programme block, it should be easily distinguished from
the usage of : to denote a diglyph or triglyph assignment expression
as well as the trinary conditional expression.  This is because
firstly, the statement(s) following a colon (:) in existing python
should never begin with an assignment operator.  I.e.,

if x: = y

is currently not valid python.  Any attempt at interpreting the
meaning of such an expression in the current implementation of python
is likely to fail.  Secondly, the diglyph and triglyph expressions do
not contain spaces, further disambiguating them from existing python.

Alternative proposals for diglyph and triglyph representations for
assignment expressions are welcome.

Implementation details:

When the python interpreter parser encounters a diglyph or triglyph
beginning with a colon (:) and ending with an equals sign (=), perform
the assignment specified by glyph[1:] and then return the value of the
variable(s) on the left-hand side of the expression.  The assignment
function called would be based on standard python lookup rules for the
corresponding glyph[1:] operation (the glyph without the leading
colon).

Opposition:

Adding any new operator to python could be considered code bloat.

Using a colon in this way could still be ambiguous.

Adding the ability to read triglyph operators in the python
interpreter parser would require too big a code revision.

Usage is too obscure.

Using an assignment expression would lead to multiple conceptual
instructions for a single python statement (e.g. first an assignment,
then an if based on the assignment would mean two operations for a
single if statement.)

Comments:

[Please comment]

Jeffrey.

-- 
http://mail.python.org/mailman/listinfo/python-list