Travis E. Oliphant wrote:
M.-A. Lemburg wrote:
Travis E. Oliphant wrote:
M.-A. Lemburg wrote:
Travis E. Oliphant wrote:
PEP: unassigned
Title: Adding data-type objects to the standard library
Attributes
kind
Mike Krell wrote:
class S(str):
    def __str__(self): return "S.__str__"
class U(unicode):
    def __str__(self): return "U.__str__"
print str(S())
print str(U())
This script prints:
S.__str__
U.__str__
Yes, but print U() prints nothing, and the explicit str() should not
be
Ronald Oussoren wrote:
Patch http://www.python.org/sf/1580674 fixes readlink's behaviour w.r.t.
Unicode strings: without this patch this function uses the system
default encoding instead of the filesystem encoding to convert Unicode
objects to plain strings. Like os.listdir, os.readlink will
Josiah Carlson wrote:
Fredrik Lundh [EMAIL PROTECTED] wrote:
Josiah Carlson wrote:
Presumably with this library you have created, you have also written a
fast object encoder/decoder (like marshal or pickle). If it isn't any
faster than cPickle or marshal, then users may bypass the module
Larry Hastings wrote:
Fredrik Lundh wrote:
[EMAIL PROTECTED] wrote:
MAL's pybench would probably be better for this presuming it does some
addition with string operands.
or stringbench.
I ran 'em, and they are strangely consistent with pystone.
With concat, stringbench is
Fredrik Lundh wrote:
Tim Lesher wrote:
1. Does this seem like a reasonable addition to the standard library?
I cannot remember ever doing this, or seeing anyone except Perforce
doing this, and it'll only save you a few lines of code every other year
or so, so my answer is definitely no.
Georg Brandl wrote:
[EMAIL PROTECTED] wrote:
Georg [ Bug http://python.org/sf/1541585 ]
Georg This seems to be handled like a security issue by linux
Georg distributors, it's also a news item on security related pages.
Georg Should a security advisory be written and official
Kristján V. Jónsson wrote:
Hello All.
I just added patch 1552880 to sourceforge. It is a patch for 2.6 (and 2.5)
which allows unicode paths in sys.path and uses the unicode file api on
windows.
This is tried and tested on 2.5, and backported to 2.3 and is currently
running on clients in
Martin v. Löwis wrote:
Neal Norwitz schrieb:
I was playing around with a little patch to avoid that penalty. It
doesn't take any additional memory, just a handful of bits we aren't
using. :-)
There are common schemes that allow constant-time issubclass tests,
although they do require more
Guido van Rossum wrote:
On 8/15/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
The distutils version number should be changed back to a static
string literal.
It's currently setup to get its version number
from the Python version running it which pretty much defeats
the whole purpose of having
Martin v. Löwis wrote:
Guido van Rossum schrieb:
I think it must be rolled back, at least as long as
distutils is officially listed as a package that needs to support
older versions of Python, which pretty much implies that it's okay to
extract it from the 2.5 release and distribute it
Martin v. Löwis wrote:
M.-A. Lemburg schrieb:
I find it important to maintain distutils compatibility with
a few Python versions back. Even if I can't volunteer to
maintain distutils, like Martin suggested, due to lack of time,
I don't really see the requirement to use the latest and greatest
On 8/15/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
Martin v. Löwis wrote:
Guido van Rossum schrieb:
I think it must be rolled back, at least as long as
distutils is officially listed as a package that needs to support
older versions of Python, which pretty much implies that it's okay
Martin v. Löwis wrote:
M.-A. Lemburg schrieb:
It's either an official feature, with somebody maintaining it,
or people should expect to break it anytime.
I'll let you know when things break - is that good enough ?
That can't be an official policy; you seem to define breaks
as breaks in my
M.-A. Lemburg wrote:
M.-A. Lemburg wrote:
Guido van Rossum wrote:
Marc-Andre, how's the patch coming along?
I'm working on it.
Since we only want equal compares to generate the warning,
I have to add a rich compare function to Unicode objects.
Here's an initial version:
http
Armin Rigo wrote:
Hi,
On Thu, Aug 10, 2006 at 02:36:16PM -0700, Guido van Rossum wrote:
On Thu, Aug 10, 2006 at 09:11:42PM +0200, Martin v. Löwis wrote:
I'm in favour of having this __eq__ just return False. I don't think
the warning is necessary, (...)
+1
Can you explain why you believe
Guido van Rossum wrote:
Marc-Andre, how's the patch coming along?
I'm working on it.
Since we only want equal compares to generate the warning,
I have to add a rich compare function to Unicode objects.
--
Marc-Andre Lemburg
eGenix.com
Professional Python Services directly from the Source
M.-A. Lemburg wrote:
Guido van Rossum wrote:
Marc-Andre, how's the patch coming along?
I'm working on it.
Since we only want equal compares to generate the warning,
I have to add a rich compare function to Unicode objects.
Here's an initial version:
http://sourceforge.net/tracker
Guido van Rossum wrote:
I've been happily ignoring python-dev for the last three weeks or so,
and Neal just pointed me to some thorny issues that are close to
resolution but not quite yet resolved, yet need to be before beta 3 on
August 18 (Friday next week).
Here's my take on the
Guido van Rossum wrote:
On 8/10/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
I'd suggest that we still inform the programmers of the problem
by issuing a warning (which they can then silence at will),
maybe a new PyExc_UnicodeWarning.
Hmm... Here's an idea... How about we change unicode-vs
Michael Chermside wrote:
How about we change unicode-vs-str __eq__ to
issue a warning (and return False) instead of raising
UnicodeException?
[... Marc-Andre Lemburg agrees ...]
Great! Now we need someone to volunteer to write a patch (which should
include doc and NEWS updates) in time
Guido van Rossum wrote:
On 8/10/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
Guido van Rossum wrote:
Hmm... Here's an idea... How about we change unicode-vs-str __eq__ to
issue a warning (and return False) instead of raising
UnicodeException? That won't break much code (it's unlikely
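The behavior being proposed here (warn and return False instead of raising) can be sketched in today's Python with a hypothetical wrapper class; ByteText is illustrative only, not the actual patch:

```python
import warnings

class ByteText:
    """Hypothetical stand-in for a byte string compared against unicode."""
    def __init__(self, data):
        self.data = data

    def __eq__(self, other):
        if isinstance(other, str):
            # Warn instead of raising, and treat the two as unequal.
            warnings.warn("comparing undecodable bytes to text",
                          UnicodeWarning, stacklevel=2)
            return False
        return isinstance(other, ByteText) and self.data == other.data

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = ByteText(b"m\xe1s") == "m\xe1s"

print(result, caught[0].category.__name__)
```

UnicodeWarning did in fact become a built-in warning category in Python 2.5, so catching or silencing it works exactly like any other warning filter.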
Martin v. Löwis wrote:
M.-A. Lemburg schrieb:
Python just doesn't know the encoding of the 8-bit string, so can't
make any assumptions on it. As result, it raises an exception to inform
the programmer.
Oh, Python does make an assumption what the encoding is: it assumes
it is the system
Martin v. Löwis wrote:
M.-A. Lemburg schrieb:
Failure to decode a string doesn't imply inequality.
If the failure is "these bytes don't have a meaningful character
interpretation", then the bytes are *clearly* not equal to
some character string.
It implies
that the programmer needs to step
Armin Rigo wrote:
Hi,
On Thu, Aug 03, 2006 at 07:53:11PM +0200, M.-A. Lemburg wrote:
I though I'd heard (from Guido here or on the py3k list) that it was only
1 &lt; u'abc' that would raise an exception, and that 1 == u'abc' would still
evaluate to False. Did I misunderstand?
Could
Neal Norwitz wrote:
I believe all 3 outstanding issues (and solutions!) could use some
more discussion. All bugs/patches blocking release are set to
priority 9.
http://python.org/sf/1530559 - struct rejecting floats (patch pending)
Ralf Schmitt wrote:
Does python 2.4 catch any exception when comparing keys (which are not
basestrings) in dictionaries?
Yes. It does so for all equality compares that need to be done
as part of the hash collision algorithm (not only w/r to strings
and Unicode, but in general).
This was
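For contrast, a minimal Python 3 sketch (BadKey is hypothetical) showing that such an exception during a colliding key comparison now propagates out of the lookup instead of being swallowed:

```python
class BadKey:
    """Hypothetical key whose equality test raises."""
    def __hash__(self):
        return hash(1)          # collide with the existing key 1

    def __eq__(self, other):
        raise RuntimeError("comparison failed")

d = {1: "value"}
try:
    d[BadKey()]                 # the hash collision forces an __eq__ call
    outcome = "no error"
except RuntimeError as exc:
    outcome = str(exc)

print(outcome)
```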
Greg Ewing wrote:
M.-A. Lemburg wrote:
If a string
is not ASCII and thus causes the exception, there's not a lot you
can say, since you don't know the encoding of the string.
That's one way of looking at it.
Another is that any string containing chars &gt; 127 is not
text at all
Ralf Schmitt wrote:
M.-A. Lemburg wrote:
Ralf Schmitt wrote:
Does python 2.4 catch any exception when comparing keys (which are not
basestrings) in dictionaries?
Yes. It does so for all equality compares that need to be done
as part of the hash collision algorithm (not only w/r to strings
Terry Reedy wrote:
Michael Hudson [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Michael Chermside [EMAIL PROTECTED] writes:
I'm changing the subject line because I want to convince everyone that
the problem being discussed in the unicode hell thread has nothing
to do with
Ralf Schmitt wrote:
Ralf Schmitt wrote:
Still trying to port our software. here's another thing I noticed:
d = {}
d[u'm\xe1s'] = 1
d['m\xe1s'] = 1
print d
With python 2.4 I can add those two keys to the dictionary and get:
$ python2.4 t2.py
{u'm\xe1s': 1, 'm\xe1s': 1}
With python 2.5
Ralf Schmitt wrote:
Still trying to port our software. here's another thing I noticed:
d = {}
d[u'm\xe1s'] = 1
d['m\xe1s'] = 1
print d
With python 2.5 I get:
$ python2.5 t2.py
Traceback (most recent call last):
File "t2.py", line 3, in &lt;module&gt;
d['m\xe1s'] = 1
UnicodeDecodeError:
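For reference, Python 3 later resolved this by making byte strings and text distinct types that never compare equal, so both keys simply coexist; a small sketch of the resulting behavior:

```python
# Python 3: bytes and str never compare equal, so no decode is attempted
# during the dict lookup and both keys live in the dict side by side.
d = {}
d["m\xe1s".encode("latin-1")] = 1   # byte-string key, b'm\xe1s'
d["m\xe1s"] = 1                      # text key, u'm\xe1s'
print(len(d))
```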
John J Lee wrote:
On Thu, 3 Aug 2006, M.-A. Lemburg wrote:
[...]
It's actually a good preparation for Py3k where 1 == u'abc' will
(likely) also raise an exception.
I though I'd heard (from Guido here or on the py3k list) that it was only
1 &lt; u'abc' that would raise an exception, and that 1
Jim Jewett wrote:
http://mail.python.org/pipermail/python-dev/2006-August/067934.html
M.-A. Lemburg mal at egenix.com
Ralf Schmitt wrote:
Still trying to port our software. here's another thing I noticed:
d = {}
d[u'm\xe1s'] = 1
d['m\xe1s'] = 1
print d
(a 2-element dictionary
Greg Ewing wrote:
M.-A. Lemburg wrote:
You often have a need for controlled rounding when doing
financial calculations
You should NOT be using binary floats for money
in the first place.
Believe me: you have to if you want to do more
advanced calculus related to pricing and risk
Michael Chermside wrote:
Marc-Andre Lemburg writes:
You often have a need for controlled rounding when doing
financial calculations or [other reason snipped]
Hmm. Not in the banks _I_ have worked at! We *never* use binary
floating point for money. The decimal class is fairly useful in
that
Greg Ewing wrote:
Raymond Hettinger wrote:
I think this would harm more than it would help. It's more confusing to
have several rounding-thingies to choose from than it is to have an
explicit two-step.
But is it more confusing enough to be worth forcing
everyone to pay two function calls
Martin v. Löwis wrote:
Anthony Baxter schrieb:
In any case, I bumped the version number to 2.5, according to the
policy discussed in
Could this not simply use the Python version number directly, instead?
See the prior discussion at
Greg Ewing wrote:
M.-A. Lemburg wrote:
I suppose you don't know about the optional argument
to round that lets you round up to a certain decimal ?!
Yes, I know about, but I rarely if ever use it.
Rounding a binary float to a number of decimal
places seems a fundamentally ill-considered
James Y Knight wrote:
On Jul 18, 2006, at 1:54 PM, Martin v. Löwis wrote:
Mihai Ibanescu wrote:
To follow up on my own email: it looks like, even though in some
locale
"INFO".lower() != "info"
u"INFO".lower() == u"info" (at least in the Turkish locale).
Is that guaranteed, at least for now
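The observation can be verified directly: Python's str case mappings follow the locale-independent Unicode defaults, never the Turkish I/ı tailoring (a small sketch):

```python
# Default Unicode mapping: the dotted I/i pair, regardless of locale.
assert "INFO".lower() == "info"

# The Turkish-specific tailoring would lowercase I to dotless ı (U+0131);
# str.lower() never applies it, and dotless ı uppercases to plain I.
assert "\u0131".upper() == "I"
print("locale-independent mappings confirmed")
```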
Martin v. Löwis wrote:
M.-A. Lemburg wrote:
The Unicode database OTOH *defines* the upper/lower case mapping in
a locale independent way, so the mappings are guaranteed
to always produce the same results on all platforms.
Actually, that isn't the full truth; see UAX#21, which is now
James Y Knight wrote:
On Jul 15, 2006, at 3:15 PM, M.-A. Lemburg wrote:
Note that it also helps setting the default encoding
to 'unknown'. That way you disable the coercion of strings
to Unicode and all the places where this implicit conversion
takes place crop up, allowing you to take proper
Georg Brandl wrote:
M.-A. Lemburg wrote:
A nice side-effect would be that you could easily use the
same approach to replace the often used default-argument-hack,
e.g.
def fraction(x, int=int, float=float):
    return float(x) - int(x)
This would then read:
def fraction(x):
    const int
Reading on in the thread it seems that there's agreement
on using static instead of const, to s/const/static
:-)
M.-A. Lemburg wrote:
Georg Brandl wrote:
M.-A. Lemburg wrote:
A nice side-effect would be that you could easily use the
same approach to replace the often used default-argument-hack
Phillip J. Eby wrote:
Maybe the real answer is to have a const declaration, not necessarily the
way that Fredrik suggested, but a way to pre-declare constants e.g.:
const FOO = 27
And then require case expressions to be either literals or constants. The
constants need not be
Guido van Rossum wrote:
Without const declarations none of this can work and
the at-function-definition-time freezing is the best, because most
predictable, approach IMO.
If you like this approach best, then how about using the same
approach as we have for function default argument values:
This discussion appears to repeat everything we already have
in the PEP 275:
http://www.python.org/dev/peps/pep-0275/
FWIW, below is a real-life use case that would
benefit from a switch statement or an optimization of the
existing if-elif-else case. It's the unpickler inner loop
of an XML
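The optimization PEP 275 targets is often done by hand today as a dict dispatch table; a minimal sketch with hypothetical tag handlers (not the actual XML unpickler code):

```python
def load_int(raw):
    return int(raw)

def load_float(raw):
    return float(raw)

def load_str(raw):
    return raw

# One hash lookup replaces a linear if tag == 'int': ... elif ... chain.
HANDLERS = {"int": load_int, "float": load_float, "str": load_str}

def load_value(tag, raw):
    try:
        handler = HANDLERS[tag]
    except KeyError:
        raise ValueError("unknown tag: %r" % tag)
    return handler(raw)

print(load_value("int", "42"))
```

The table is built once at import time, which is essentially the "freeze at function-definition time" semantics discussed above, just spelled explicitly.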
Thomas Heller wrote:
It should be noted that I once started to convert the import machinery
to be fully unicode aware. As far as I can tell, a *lot* has to be changed
to make this work.
I started with refactoring Python/import.c, but nobody responded to the
question
whether such a
Neal Norwitz wrote:
On 6/16/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
Fredrik Lundh wrote:
what's the beta 1 status ? fixing this should be trivial, but I
don't have any
cycles to spare today.
Good question. PEP 356 says beta 1 was planned two days
ago...
http://www.python.org/dev
Raymond Hettinger wrote:
The optimisation of the if-elif case would then simply be to say that the
compiler can recognise if-elif chains like the one above where the RHS
of the comparisons are all hashable literals and collapse them to switch
statements.
Right (constants are usually
Greg Ewing wrote:
M.-A. Lemburg wrote:
My personal favorite is making the compiler
smarter to detect the mentioned if-elif-else scheme
and generate code which uses a lookup table for
implementing fast branching.
But then the values need to be actual compile-time
constants, precluding
Georg Brandl wrote:
In string_replace, there is
if (PyString_Check(from)) {
    /* Can this be made a '!check' after the Unicode check? */
}
#ifdef Py_USING_UNICODE
if (PyUnicode_Check(from))
    return PyUnicode_Replace((PyObject *)self,
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
Since replace() only works on string objects, it appears
as if a temporary string object would have to be created.
However, this would involve an unnecessary allocation
and copy process... it appears as if the refactoring
during the NFS sprint left
Nick Coghlan wrote:
M.-A. Lemburg wrote:
The standard
if ...
elif ...
elif ...
else:
...
scheme already provides the above logic. There's really
no need to invent yet another syntax to write such
constructs, IMHO.
It's a DRY thing.
Exactly, though not in the sense you
Thomas Lee wrote:
On Mon, Jun 12, 2006 at 11:33:49PM +0200, Michael Walter wrote:
Maybe switch became a keyword with the patch..
Regards,
Michael
That's correct.
On 6/12/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
Could you upload your patch to SourceForge ? Then I could add
Michael Walter wrote:
Maybe switch became a keyword with the patch..
Ah, right. Good catch :-)
Regards,
Michael
On 6/12/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
Thomas Lee wrote:
Hi all,
As the subject of this e-mail says, the attached patch adds a switch
statement to the Python
M.-A. Lemburg wrote:
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
You can download a current
I just tried to upgrade Tools/pybench/ to my latest version,
so I imported pybench-2.0 into the externals/ tree and then
tried copying over the new version into the Tools/pybench/
trunk.
Unfortunately the final copy didn't actually replace the files in
Tools/pybench/ but instead created a
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
Here's the command I used:
svn copy svn+pythonssh://[EMAIL PROTECTED]/external/pybench-2.0 \
svn+pythonssh://[EMAIL PROTECTED]/python/trunk/Tools/pybench
Am I missing some final slash in the copy command or is there
a different
FYI: I've just checked in pybench 2.0 under Tools/pybench/.
Please give it a go and let me know whether the new
calibration strategy and default timers result in
better repeatability of the benchmark results.
I've tested the release extensively on Windows and Linux
and found that the test times
Thomas Lee wrote:
Hi all,
As the subject of this e-mail says, the attached patch adds a switch
statement to the Python language.
However, I've been reading through PEP 275 and it seems that the PEP
calls for a new opcode - SWITCH - to be added to support the new
construct.
I got a bit
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
The results were produced by pybench 2.0 and use time.time
on Linux, plus a different calibration strategy. As a result
these timings are a lot more repeatable than with pybench 1.3
and I've confirmed the timings using several runs to make sure
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
You can download a current snapshot from:
http://www.egenix.com/files/python/pybench-2.0-2006-06-09.zip
believe it or not, but this hangs on my machine, under 2.5 trunk. and
it hangs hard; neither control-c, break, nor the task manager manages
Nick Coghlan wrote:
M.-A. Lemburg wrote:
Still, here's the timeit.py measurement of the PythonFunctionCall
test (note that I've scaled down the test in terms of number
of rounds for timeit.py):
Python 2.5 as of last night:
10 loops, best of 3: 21.9 msec per loop
10 loops, best of 3: 21.8
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
sigh I put the headings for the timeit.py output on the
wrong blocks. Thanks for pointing this out.
so how do you explain the Try/Except results, where timeit and pybench
seem to agree?
The pybench results match those of timeit.py on my test
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
The pybench results match those of timeit.py on my test machine
in both cases. I just mixed up the headers when I wrote the email.
on a line by line basis ?
No idea what you mean ? I posted the corrected version after Nick
told me about
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
The pybench results match those of timeit.py on my test machine
in both cases.
but they don't match the timeit results on similar machines, nor do they
reflect what was done at the sprint.
Huh ? They do show the speedups you achieved
Thomas Wouters wrote:
On 6/8/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
All this on AMD64, Linux2.6, gcc3.3.
FWIW, my AMD64, linux 2.6, gcc 4.0 machine reports 29.0-29.5 usec for 2.5,
30.0-31.0 for 2.4 and 30.5-31.5 for 2.3, using the code you attached. In
other words, 2.5 is definitely
Thomas Wouters wrote:
On 6/8/06, M.-A. Lemburg [EMAIL PROTECTED] wrote:
Perhaps it's a new feature in gcc 4.0 that makes the slow-down I see
turn into a speedup :-)
It seems so. I tested with gcc 2.95, 3.3 and 4.0 on FreeBSD 4.10 (only
machine I had available with those gcc versions
Steve Holden wrote:
M.-A. Lemburg wrote:
[...]
Overall, time.clock() on Windows and time.time() on Linux appear
to give the best repeatability of tests, so I'll make those the
defaults in pybench 2.0.
In short: Tim wins, I lose.
Was a nice experiment, though ;-)
Perhaps so, but it would
Some more interesting results from comparing Python 2.4 (other) against
the current SVN snapshot (this):
Test names                minimum run-time        average run-time
                          this    other   diff    this    other   diff
M.-A. Lemburg wrote:
Some more interesting results from comparing Python 2.4 (other) against
the current SVN snapshot (this):
Here's the list again, this time without wrapping (sigh):
Test names                minimum run-time        average run-time
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
I just had an idea: if we could get each test to run
inside a single time slice assigned by the OS scheduler,
then we could benefit from the better resolution of the
hardware timers while still keeping the noise to a
minimum.
I suppose this could
FWIW, these are my findings on the various timing strategies:
* Windows:
time.time()
- not usable; I get timings with an error interval of roughly 30%
GetProcessTimes()
- not usable; I get timings with an error interval of up to 100%
with differences in steps of 15.626ms
M.-A. Lemburg wrote:
FWIW, these are my findings on the various timing strategies:
Correction (due to a bug in my pybench dev version):
* Windows:
time.time()
- not usable; I get timings with an error interval of roughly 30%
GetProcessTimes()
- not usable; I get timings
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
Seriously, I've been using and running pybench for years
and even though tweaks to the interpreter do sometimes
result in speedups or slow-downs where you wouldn't expect
them (due to the interpreter using the Python objects),
they are reproducable
Fredrik Lundh wrote:
M.-A. Lemburg wrote:
Of course, but then changes to try-except logic can interfere
with the performance of setting up method calls. This is what
pybench then uncovers.
I think the only thing PyBench has uncovered is that you're convinced that
it's
always right
Andrew Dalke wrote:
M.-A. Lemburg:
The approach pybench is using is as follows:
...
The calibration step is run multiple times and is used
to calculate an average test overhead time.
One of the changes that occured during the sprint was to change this
algorithm
to use the best time
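The best-time strategy described here can be sketched with time.perf_counter (a later stdlib addition; the helper name is illustrative):

```python
import time

def best_of(func, repeats=5, loops=10000):
    """Minimum per-call time over several runs: the minimum is the figure
    least inflated by scheduler preemption and timer noise, which is why
    the sprint switched pybench from the average to the best time."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(loops):
            func()
        timings.append((time.perf_counter() - start) / loops)
    return min(timings)

noop_cost = best_of(lambda: None)
print("per-call overhead: %.1e s" % noop_cost)
```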
M.-A. Lemburg wrote:
That's why the timers being used by pybench will become a
parameter that you can then select to adapt pybench to
the OS you're running pybench on.
Wasn't that decision a consequence of the problems found during
the sprint?
It's a consequence of a discussion I had
[EMAIL PROTECTED] wrote:
(This is more appropriate for comp.lang.python/[EMAIL PROTECTED])
Niko After reading through recent Python mail regarding dictionaries
Niko and exceptions, I wondered, what is the current state of the art
Niko in Python benchmarks?
Pybench was recently
[EMAIL PROTECTED] wrote:
MAL Could you please forward such questions to me ?
I suppose, though what question were you referring to?
Not sure - I thought you knew ;-)
I was referring to
Fredrik's thread about stringbench vs pybench for string/unicode tests,
which I thought was posted
[EMAIL PROTECTED] wrote:
MAL I'm aware of that thread, but Fredrik only posted some vague
MAL comment to the checkins list, saying that they couldn't use
MAL pybench. I asked for some more details, but he didn't get back to
MAL me.
I'm pretty sure I saw him (or maybe Andrew
Martin v. Löwis wrote:
M.-A. Lemburg wrote:
Well, the strings and integers count twice: once in the module
namespace and once in the errorcode dictionary.
That shouldn't be the case: the strings are interned (as they
are identifier-like), so you have the very same string object
in both
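The sharing Martin describes is easy to observe in CPython (a small sketch; ERROR_ACCESS_DENIED is just an example name):

```python
import sys

# Explicit interning: both names end up bound to one shared object.
a = sys.intern("ERROR_ACCESS_DENIED")
b = sys.intern("ERROR_ACCESS_DENIED")

# Identifier-like string constants within the same code object are
# deduplicated and interned automatically by the CPython compiler too.
c, d = "EINTR", "EINTR"
print(a is b, c is d)
```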
Martin v. Löwis wrote:
M.-A. Lemburg wrote:
I was leaving those out already - only the codes named 'ERROR_*'
get included (see attached parser and generator).
Right. One might debate whether DNS_INFO_AXFR_COMPLETE (9751L)
or WSAEACCES (10013L) should be included as well.
The WSA codes
Martin v. Löwis wrote:
M.-A. Lemburg wrote:
What do you think ?
I think the size could be further reduced by restricting the set of
error codes. For example, if the COM error codes are left out, I only
get a Python file with 60k source size (although the bytecode size
is then 130k). I'm
M.-A. Lemburg wrote:
Martin v. Löwis wrote:
M.-A. Lemburg wrote:
BTW, and intended as offer for compromise, should we instead
add the Win32 codes to the errno module (or a new winerrno
module) ?! I can write a parser that takes winerror.h and
generates the module code.
Instead won't help
Martin v. Löwis wrote:
M.-A. Lemburg wrote:
BTW, and intended as offer for compromise, should we instead
add the Win32 codes to the errno module (or a new winerrno
module) ?! I can write a parser that takes winerror.h and
generates the module code.
Instead won't help: the breakage
Martin v. Löwis wrote:
I'm saying that changing WindowsError to include set errno
to DOS error codes would *also* be an incompatible change.
Since code catching IOError will also see any WindowsError
exception, I'd opt for making .errno always return the
DOS/BSD error codes and have
Phillip J. Eby wrote:
At 09:54 PM 4/27/2006 +0200, M.-A. Lemburg wrote:
Note that I was talking about the .pth file being
writable, not the directory.
Please stop this vague, handwaving FUD. You have yet to explain how this
situation is supposed to arise. Is there some platform on which
M.-A. Lemburg wrote:
No, I'm talking about a format which has the same if not
more benefits as what you're trying to achieve with the
.egg file approach, but without all the magic and hacks.
It's not like this wouldn't be possible to achieve.
That may or may not be true. Perhaps if you had
Phillip J. Eby wrote:
At 06:47 PM 4/27/2006 +0200, M.-A. Lemburg wrote:
Just read that you are hijacking site.py for setuptools'
"just works" purposes.
"hijacking" isn't the word I'd use; "wrapping" is what it actually
does. The standard site.py is executed, there is just some pre- and
post
Guido van Rossum wrote:
On 4/26/06, Barry Warsaw [EMAIL PROTECTED] wrote:
On Wed, 2006-04-26 at 10:16 -0700, Guido van Rossum wrote:
So I have a very simple proposal: keep the __init__.py requirement for
top-level packages, but drop it for subpackages. This should be a
small change. I'm
Guido van Rossum wrote:
On 4/26/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
At 10:16 AM 4/26/2006 -0700, Guido van Rossum wrote:
So I have a very simple proposal: keep the __init__.py requirement for
top-level packages, but drop it for subpackages.
Note that many tools exist which have grown
Guido van Rossum wrote:
On 4/20/06, Martin v. Löwis [EMAIL PROTECTED] wrote:
1. don't load packages out of .zip files. It's not that bad if
software on the user's disk occupies multiple files, as long as
there is a convenient way to get rid of them at once.
Many problems go away if
Phillip J. Eby wrote:
What I'm opposed to in making setuptools install things the distutils way
by default is that there is no easy path to clean upgrade or installation
in the absence of a system packaging tool like RPM or deb or
what-have-you. I am not opposed to doing the classic style
M.-A. Lemburg wrote:
Tim Peters wrote:
[M.-A. Lemburg]
I could contribute pybench to the Tools/ directory if that
makes a difference:
+1. It's frequently used and nice work. Besides, then we could
easily fiddle the tests to make Python look better ;-)
That's a good argument :-)
Note
M.-A. Lemburg wrote:
M.-A. Lemburg wrote:
Tim Peters wrote:
[M.-A. Lemburg]
I could contribute pybench to the Tools/ directory if that
makes a difference:
+1. It's frequently used and nice work. Besides, then we could
easily fiddle the tests to make Python look better ;-)
That's a good
[removing the python-checkins CC]
Phillip J. Eby wrote:
At 09:02 PM 4/18/2006 +0200, M.-A. Lemburg wrote:
Phillip J. Eby wrote:
At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote:
Why should a 3rd party extension be hot-fixing the standard
Python distribution ?
Because setuptools installs
Phillip.eby wrote:
Author: phillip.eby
Date: Tue Apr 18 02:59:55 2006
New Revision: 45510
Modified:
python/trunk/Lib/pkgutil.py
python/trunk/Lib/pydoc.py
Log:
Second phase of refactoring for runpy, pkgutil, pydoc, and setuptools
to share common PEP 302 support code, as described