Re: [Python-Dev] PyPy 1.7 - widening the sweet spot

2011-11-24 Thread Maciej Fijalkowski
On Wed, Nov 23, 2011 at 11:13 PM, Philip Jenvey  wrote:
>
> On Nov 22, 2011, at 12:43 PM, Amaury Forgeot d'Arc wrote:
>
>> 2011/11/22 Philip Jenvey 
>>> One reason to target 3.2 for now is that it's not a moving target. There's
>>> overhead involved in managing the modifications to the pure-Python standard
>>> lib that PyPy needs; tracking 3.3 changes as they happen only exacerbates
>>> this.
>>>
>>> The plans to split the standard lib into its own repo, separate from core
>>> CPython, will of course help alternative implementations here.
>>
>> I don't see how it would help here.
>> Copying the CPython Lib/ directory is not difficult, even though PyPy made 
>> slight modifications to the files, and even without any merge tool.
>
> Pulling in a separate stdlib as a subrepo under the PyPy repo would certainly 
> make this whole process easier.
>
> But you're right: if we track CPython's default branch (3.3), we can make many
> if not all of the PyPy modifications upstream (until the 3.3rc1 code
> freeze) instead of in PyPy's modified-3.x directory. Maintaining that
> modified-3.x dir after every resync can be tedious.
>
> --
> Philip Jenvey

The problem is not with maintaining the modified directory. The
problem was always things like changes to the interface between the C
version and the Python version, or the introduction of new stuff that
does not run on PyPy because it relies on refcounting. I don't see how
having a subrepo helps here.


Re: [Python-Dev] PyPy 1.7 - widening the sweet spot

2011-11-24 Thread Nick Coghlan
On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski  wrote:
> The problem is not with maintaining the modified directory. The
> problem was always things like changes to the interface between the C
> version and the Python version, or the introduction of new stuff that
> does not run on PyPy because it relies on refcounting. I don't see how
> having a subrepo helps here.

Indeed, the main thing that can help on this front is to get more
modules to the same state as heapq, io, datetime (and perhaps a few
others that have slipped my mind) where the CPython repo actually
contains both C and Python implementations and the test suite
exercises both to make sure their interfaces remain suitably
consistent (even though, during normal operation, CPython users will
only ever hit the C accelerated version).

This not only helps other implementations (by keeping a Python version
of the module continuously up to date with any semantic changes), but
can also help people who are porting CPython to new platforms: the C
extension modules are far more likely to break in that situation than
the pure Python equivalents, and a relatively slow fallback is often
going to be better than no fallback at all. (Note that ctypes-based
pure Python modules *aren't* particularly useful for this purpose,
though: due to the libffi dependency, ctypes is one of the extension
modules most likely to break when porting.)
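
For illustration, a minimal sketch of this dual-implementation pattern, using
a hypothetical mymodule/_mymodule pair (the real heapq, io and datetime
modules differ in detail):

# mymodule.py - pure Python implementation, optionally overridden by a
# C accelerator. "_mymodule" is a hypothetical extension module here.

def process(data):
    """Pure Python fallback implementation."""
    return sorted(data)

try:
    # If the C accelerator is importable, its names replace the Python ones.
    from _mymodule import process
except ImportError:
    pass

The test suite can then exercise both implementations, e.g. by importing the
module a second time with the accelerator blocked (test.support's
import_fresh_module helper exists for exactly this purpose).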

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


[Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Jesus Cea

I have a question and I would rather have an answer instead of
actually trying and getting myself in a messy situation.

Let's say we have the following scenario:

1. A programmer clones hg.python.org.
2. The programmer creates a named branch and starts to develop a new feature.
3. She adds her repository & named branch to the bug tracker.
4. From time to time, she posts updates in the tracker using the
"Create Patch" button.

So far so good. Now, the question:

5. Development of the new feature is taking a long time, and the
canonical Python version keeps moving forward. The clone+branch and the
original Python version are diverging. Eventually there are changes in
Python that the programmer would like in her version, so she does a
"pull" and then a merge of the original Python branch into her named
branch.

6. What would be posted in the bug tracker when she does a new "Create
Patch"? Only her changes, her changes SINCE the merge, her changes
plus merged changes, or something else? What if the programmer
cherry-picks changesets from the original Python branch?

Thanks! :-).

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Éric Araujo
Hi,

> I have a question and I would rather have an answer instead of
> actually trying and getting myself in a messy situation.
Clones are cheap, trying is cheap! 

> Let's say we have the following scenario:
> 
> 1. A programmer clones hg.python.org.
> 2. The programmer creates a named branch and starts to develop a new feature.
> 3. She adds her repository & named branch to the bug tracker.
> 4. From time to time, she posts updates in the tracker using the
> "Create Patch" button.
> 
> So far so good. Now, the question:
> 
> 5. Development of the new feature is taking a long time, and the
> canonical Python version keeps moving forward. The clone+branch and the
> original Python version are diverging. Eventually there are changes in
> Python that the programmer would like in her version, so she does a
> "pull" and then a merge of the original Python branch into her named
> branch.
I do this all the time.  I work on a fix- branch, and once a week
for example I pull and merge the base branch.  Sometimes there are no
conflicts except Misc/NEWS, sometimes I have to adapt my code because of
other people’s changes before I can commit the merge.

> 6. What would be posted in the bug tracker when she does a new "Create
> Patch"? Only her changes, her changes SINCE the merge, her changes
> plus merged changes, or something else?
The diff would be equivalent to “hg diff -r base” and would contain all
the changes she did to add the bug fix or feature.  Merging only makes
sure that the computed diff does not appear to touch unrelated files,
IOW that it applies cleanly.  (Barring bugs in Mercurial-Roundup
integration, we have a few of these in the metatracker.)

> What if the programmer cherry-picks changesets from the original Python
> branch?
Then her branch will revert some changes done in the original branch.
Therefore, cherry-picking is not a good idea.

Regards


[Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Eli Bendersky
Hi there,

I was doing some experiments with the buffer interface of bytearray today,
for the purpose of quickly reading a file's contents into a bytearray which
I can then modify. I decided to do some benchmarking and ran into
surprising results. Here are the functions I was timing:

import os

# This module was imported as "fileread_bytearray" in the timeit runs below.
# FILENAME is a placeholder path here; any ~3.6MB binary file will do.
FILENAME = 'testdata.bin'

def justread():
    # Just read a file's contents into a string/bytes object
    f = open(FILENAME, 'rb')
    s = f.read()

def readandcopy():
    # Read a file's contents and copy them into a bytearray.
    # An extra copy is done here.
    f = open(FILENAME, 'rb')
    b = bytearray(f.read())

def readinto():
    # Read a file's contents directly into a bytearray,
    # hopefully employing its buffer interface
    f = open(FILENAME, 'rb')
    b = bytearray(os.path.getsize(FILENAME))
    f.readinto(b)

FILENAME is the name of a 3.6MB text file. It is read in binary mode,
however, for fullest compatibility between 2.x and 3.x.

Now, running this under Python 2.7.2 I got these results ($1 just reflects
the executable name passed to a bash script I wrote to automate these runs):

$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 461 usec per loop
$1 -m timeit -s'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.81 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
1000 loops, best of 3: 697 usec per loop

Which makes sense: the readinto() approach is much faster than copying the
read buffer into the bytearray.

But with Python 3.2.2 (built from the 3.2 branch today):

$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 336 usec per loop
$1 -m timeit -s'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.62 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
100 loops, best of 3: 2.69 msec per loop

Oops, readinto takes the same time as copying. This is a real shame,
because readinto in conjunction with the buffer interface was supposed to
avoid the redundant copy.

Is there a real performance regression here, is this a well-known issue, or
am I just missing something obvious?

Eli

P.S. The machine is quad-core i7-2820QM, running 64-bit Ubuntu 10.04


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Antoine Pitrou
On Thu, 24 Nov 2011 20:15:25 +0200
Eli Bendersky  wrote:
> 
> Oops, readinto takes the same time as copying. This is a real shame,
> because readinto in conjunction with the buffer interface was supposed to
> avoid the redundant copy.
> 
> Is there a real performance regression here, is this a well-known issue, or
> am I just missing something obvious?

Can you try with latest 3.3 (from the default branch)?

Thanks

Antoine.




Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Eli Bendersky
On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou  wrote:

> On Thu, 24 Nov 2011 20:15:25 +0200
> Eli Bendersky  wrote:
> >
> > Oops, readinto takes the same time as copying. This is a real shame,
> > because readinto in conjunction with the buffer interface was supposed to
> > avoid the redundant copy.
> >
> > Is there a real performance regression here, is this a well-known issue,
> or
> > am I just missing something obvious?
>
> Can you try with latest 3.3 (from the default branch)?
>

Sure. Updated the default branch just now and built:

$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 1.14 msec per loop
$1 -m timeit -s'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.78 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
1000 loops, best of 3: 1.6 msec per loop

Strange. Here, like in Python 2, the performance of readinto is
close to justread and much faster than readandcopy, but justread itself is
much slower than in 2.7 and 3.2!

Eli


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Nick Coghlan
I've never been able to get the Create Patch button to work reliably with
my BitBucket repo, so I still just run "hg diff -r default" locally and
upload the patch directly.

It would be nice if I could just specify both the feature branch *and* the
branch to diff against rather than having to work out why Roundup is
guessing wrong...

--
Nick Coghlan (via Gmail on Android, so likely to be more terse than usual)


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Martin v. Löwis
On 24.11.2011 21:55, Nick Coghlan wrote:
> I've never been able to get the Create Patch button to work reliably
> with my BitBucket repo, so I still just run "hg diff -r default" locally
> and upload the patch directly.

Please submit a bug report to the meta tracker.

> It would be nice if I could just specify both the feature branch *and*
> the branch to diff against rather than having to work out why Roundup is
> guessing wrong...

Why would you not diff against the default branch?

Regards,
Martin


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Xavier Morel
On 2011-11-24, at 21:55 , Nick Coghlan wrote:
> I've never been able to get the Create Patch button to work reliably with
> my BitBucket repo, so I still just run "hg diff -r default" locally and
> upload the patch directly.
Wouldn't it be simpler to just use MQ and upload the patch(es) from the series? 
Would be easier to keep in sync with the development tip too.



Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
What if you broke up the read and built the final string object up. I
always assumed this is where the real gain was with read_into.
On Nov 25, 2011 5:55 AM, "Eli Bendersky"  wrote:

> On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou  wrote:
>
>> On Thu, 24 Nov 2011 20:15:25 +0200
>> Eli Bendersky  wrote:
>> >
>> > Oops, readinto takes the same time as copying. This is a real shame,
>> > because readinto in conjunction with the buffer interface was supposed
>> to
>> > avoid the redundant copy.
>> >
>> > Is there a real performance regression here, is this a well-known
>> issue, or
>> > am I just missing something obvious?
>>
>> Can you try with latest 3.3 (from the default branch)?
>>
>
> Sure. Updated the default branch just now and built:
>
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
> 1000 loops, best of 3: 1.14 msec per loop
> $1 -m timeit -s'import fileread_bytearray'
> 'fileread_bytearray.readandcopy()'
> 100 loops, best of 3: 2.78 msec per loop
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
> 1000 loops, best of 3: 1.6 msec per loop
>
> Strange. Although here, like in python 2, the performance of readinto is
> close to justread and much faster than readandcopy, but justread itself is
> much slower than in 2.7 and 3.2!
>
> Eli
>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Eli Bendersky
On Fri, Nov 25, 2011 at 00:02, Matt Joiner  wrote:

> What if you broke up the read and built the final string object up. I
> always assumed this is where the real gain was with read_into.
>
Matt, I'm not sure what you mean by this - can you suggest the code?

Also, I'd be happy to know if anyone else reproduces this as well on other
machines/OSes.

Eli


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Nick Coghlan
On Fri, Nov 25, 2011 at 7:23 AM, "Martin v. Löwis"  wrote:
> On 24.11.2011 21:55, Nick Coghlan wrote:
>> I've never been able to get the Create Patch button to work reliably
>> with my BitBucket repo, so I still just run "hg diff -r default" locally
>> and upload the patch directly.
>
> Please submit a bug report to the meta tracker.

Done: http://psf.upfronthosting.co.za/roundup/meta/issue428

>> It would be nice if I could just specify both the feature branch *and*
>> the branch to diff against rather than having to work out why Roundup is
>> guessing wrong...
>
> Why would you not diff against the default branch?

I usually do - the only case I have at the moment where diffing
against a branch other than default sometimes makes sense is the
dependency of the PEP 380 branch on the dis.get_opinfo() feature
branch (http://bugs.python.org/issue11682).

In fact, I believe that's also the case that confuses the diff generator.

My workflow in the repo is:

- update default from hg.python.org/cpython
- merge into get_opinfo branch from default
- merge into pep380 branch from the get_opinfo branch

So, after merging into the pep380 branch, "hg diff -r default" gives a
full patch from default -> pep380 (including the dis module updates),
while "hg diff -r get_opinfo" gives a patch that assumes the dis
changes have already been applied separately.

I'm now wondering if doing an explicit "hg merge default" before I do
the merges from the get_opinfo branch in my sandbox might be enough to
get the patch generator back on track...

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Nick Coghlan
On Fri, Nov 25, 2011 at 7:46 AM, Xavier Morel  wrote:
> On 2011-11-24, at 21:55 , Nick Coghlan wrote:
>> I've never been able to get the Create Patch button to work reliably with
>> my BitBucket repo, so I still just run "hg diff -r default" locally and
>> upload the patch directly.
> Wouldn't it be simpler to just use MQ and upload the patch(es) from the 
> series? Would be easier to keep in sync with the development tip too.

From my (admittedly limited) experience, using MQ means I can only
effectively collaborate with other people also using MQ (e.g. the
Roundup integration doesn't work if the only thing that is published
on BitBucket is a patch queue). I'll stick with named branches until
MQ becomes a builtin Hg feature that better integrates with other
tools.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


[Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement

2011-11-24 Thread Jesus Cea

Trying to clear up the licensing issues surrounding my DTrace work
(http://bugs.python.org/issue13405), I am contacting the Sun/Oracle guys.

While checking the documentation about the contributor license agreement,
I encountered a broken HTML link on http://www.python.org/about/help/ :

* "Python Patch Guidelines" points to
http://www.python.org/dev/patches/, that doesn't exist.

Other links on that page seem OK.

PS: The devguide doesn't say anything (AFAIK) about the contributor
agreement.

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


[Python-Dev] webmas...@python.org address not working

2011-11-24 Thread Jesus Cea

When mailing there, I get this error. Not sure where to report.

"""
Final-Recipient: rfc822; sdr...@sdrees.de
Original-Recipient: rfc822;webmas...@python.org
Action: failed
Status: 5.1.1
Remote-MTA: dns; stefan.zinzdrees.de
Diagnostic-Code: smtp; 550 5.1.1 : Recipient address
rejected: User unknown in local recipient table
"""

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Antoine Pitrou
On Thu, 24 Nov 2011 20:53:30 +0200
Eli Bendersky  wrote:
> 
> Sure. Updated the default branch just now and built:
> 
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
> 1000 loops, best of 3: 1.14 msec per loop
> $1 -m timeit -s'import fileread_bytearray'
> 'fileread_bytearray.readandcopy()'
> 100 loops, best of 3: 2.78 msec per loop
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
> 1000 loops, best of 3: 1.6 msec per loop
> 
> Strange. Although here, like in python 2, the performance of readinto is
> close to justread and much faster than readandcopy, but justread itself is
> much slower than in 2.7 and 3.2!

This seems to be a side-effect of
http://hg.python.org/cpython/rev/f8a697bc3ca8/

Now I'm not sure if these numbers matter a lot.  1.6ms for a 3.6MB file
is still more than 2 GB/s.
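(3.6 MB / 1.6 ms works out to roughly 2.25 GB/s.)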

Regards

Antoine.




Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Terry Reedy

On 11/24/2011 5:02 PM, Matt Joiner wrote:

What if you broke up the read and built the final string object up. I
always assumed this is where the real gain was with read_into.


If a pure read takes twice as long in 3.3 as in 3.2, that is a concern 
regardless of whether there is a better way.


--
Terry Jan Reedy



Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
Eli,

Example coming shortly, the differences are quite significant.

On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky  wrote:
> On Fri, Nov 25, 2011 at 00:02, Matt Joiner  wrote:
>>
>> What if you broke up the read and built the final string object up. I
>> always assumed this is where the real gain was with read_into.
>
> Matt, I'm not sure what you mean by this - can you suggest the code?
>
> Also, I'd be happy to know if anyone else reproduces this as well on other
> machines/OSes.
>
> Eli
>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
It's my impression that the readinto method does not fully support the
buffer interface I was expecting. I've never had cause to use it until
now. I've created a question on SO that describes my confusion:

http://stackoverflow.com/q/8263899/149482

Also, I saw some comments on "top-posting" - am I guilty of this? Gmail
defaults to putting my response above the previous email.

On Fri, Nov 25, 2011 at 11:49 AM, Matt Joiner  wrote:
> Eli,
>
> Example coming shortly, the differences are quite significant.
>
> On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky  wrote:
>> On Fri, Nov 25, 2011 at 00:02, Matt Joiner  wrote:
>>>
>>> What if you broke up the read and built the final string object up. I
>>> always assumed this is where the real gain was with read_into.
>>
>> Matt, I'm not sure what you mean by this - can you suggest the code?
>>
>> Also, I'd be happy to know if anyone else reproduces this as well on other
>> machines/OSes.
>>
>> Eli
>>
>>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Antoine Pitrou
On Fri, 25 Nov 2011 12:02:17 +1100
Matt Joiner  wrote:
> It's my impression that the readinto method does not fully support the
> buffer interface I was expecting. I've never had cause to use it until
> now. I've created a question on SO that describes my confusion:
> 
> http://stackoverflow.com/q/8263899/149482

Just use a memoryview and slice it:

b = bytearray(...)
m = memoryview(b)
n = f.readinto(m[some_offset:])
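
For example, a self-contained sketch of that pattern (the file name and
offset below are placeholders, for illustration only):

import os

FILENAME = 'testdata.bin'   # hypothetical input file
OFFSET = 1024               # arbitrary destination offset, for illustration

size = os.path.getsize(FILENAME)
b = bytearray(OFFSET + size)    # room for the data starting at OFFSET
m = memoryview(b)

with open(FILENAME, 'rb') as f:
    # readinto() writes straight into b[OFFSET:] through the sliced view,
    # without allocating an intermediate bytes object.
    n = f.readinto(m[OFFSET:])
    print('read %d bytes into the bytearray at offset %d' % (n, OFFSET))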

> Also I saw some comments on "top-posting" am I guilty of this?

Kind of :)

Regards

Antoine.




Re: [Python-Dev] webmas...@python.org address not working

2011-11-24 Thread Michael Foord

On 25/11/2011 00:20, Jesus Cea wrote:


When mailing there, I get this error. Not sure where to report.


The address works fine. It would be nice if someone fixed the annoying 
bounce however. :-)


Michael


"""
Final-Recipient: rfc822; sdr...@sdrees.de
Original-Recipient: rfc822;webmas...@python.org
Action: failed
Status: 5.1.1
Remote-MTA: dns; stefan.zinzdrees.de
Diagnostic-Code: smtp; 550 5.1.1: Recipient address
 rejected: User unknown in local recipient table
"""

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/

j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz




--
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html



Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Jesus Cea

On 24/11/11 18:08, Éric Araujo wrote:
>> I have a question and I would rather have an answer instead of 
>> actually trying and getting myself in a messy situation.
> Clones are cheap, trying is cheap! 

I would need to publish another repository online, instruct the bug
tracker to use it, create a patch, and hope for the best or risk
polluting the tracker. Maybe I would hit a corner case and be lucky
this time, but not next time.

Better to ask people that know better, I guess.

>> 5. Development of the new feature is taking a long time, and
>> python canonical version keeps moving forward. The clone+branch
>> and the original python version are diverging. Eventually there
>> are changes in python that the programmer would like in her
>> version, so she does a "pull" and then a merge for the original
>> python branch to her named branch.
> I do this all the time.  I work on a fix- branch, and once a
> week for example I pull and merge the base branch.  Sometimes there
> are no conflicts except Misc/NEWS, sometimes I have to adapt my
> code because of other people’s changes before I can commit the
> merge.

That is good, because that means your patch can always be applied to
the original branch tip, and that your changes work with current work
in the mainline.

That is what I want to do, but I need to know that it is safe to do so
(from the "Create Patch" perspective).

>> 6. What would be posted in the bug tracker when she does a new
>> "Create Patch"?. Only her changes, her changes SINCE the merge,
>> her changes plus merged changes or something else?.
> The diff would be equivalent to “hg diff -r base” and would contain
> all the changes she did to add the bug fix or feature.  Merging
> only makes sure that the computed diff does not appear to touch
> unrelated files, IOW that it applies cleanly.  (Barring bugs in
> Mercurial-Roundup integration, we have a few of these in the
> metatracker.)

So you are saying that "Create Patch" will ONLY get the differences in
the development branch and not the changes brought in from the merge? A
"hg diff -r base", as you indicate, should show all changes in the
branch since its creation, including the merges, if I understand it
correctly. I don't want to include the merges, although I want their
effect on my own work (like changing patch offsets).

That is, is that merge safe for "Create Patch"? Your answer seems to
indicate "yes", but I would rather have an explicit "yes" than an
"implicit" yes :). Python Zen! :)

PS: Sorry if I am being blunt. My (lack of) social skills are legendary.

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Matt Joiner
On Fri, Nov 25, 2011 at 12:07 PM, Antoine Pitrou  wrote:
> On Fri, 25 Nov 2011 12:02:17 +1100
> Matt Joiner  wrote:
>> It's my impression that the readinto method does not fully support the
>> buffer interface I was expecting. I've never had cause to use it until
>> now. I've created a question on SO that describes my confusion:
>>
>> http://stackoverflow.com/q/8263899/149482
>
> Just use a memoryview and slice it:
>
> b = bytearray(...)
> m = memoryview(b)
> n = f.readinto(m[some_offset:])

Cheers, this seems to be what I wanted. Unfortunately it doesn't
perform noticeably better if I do this.

Eli, the use pattern I was referring to is when you read in chunks and
append to a running buffer. Presumably, if you know the size of the data
in advance, you can readinto directly into a region of a bytearray,
thereby avoiding having to allocate a temporary buffer for the read and
then create a new buffer containing the running buffer plus the new data.

Strangely, I find that your readandcopy is faster at this than readinto,
but not by much. Here's the code; it's a bit explicit, but then so was
the original:

import os

BUFSIZE = 0x1
FILENAME = 'testdata.bin'  # placeholder path; a ~10MB file was used in these runs

def justread():
    # Just read a file's contents into a string/bytes object
    f = open(FILENAME, 'rb')
    s = b''
    while True:
        b = f.read(BUFSIZE)
        if not b:
            break
        s += b

def readandcopy():
    # Read a file's contents and copy them into a bytearray.
    # An extra copy is done here.
    f = open(FILENAME, 'rb')
    s = bytearray()
    while True:
        b = f.read(BUFSIZE)
        if not b:
            break
        s += b

def readinto():
    # Read a file's contents directly into a bytearray,
    # hopefully employing its buffer interface
    f = open(FILENAME, 'rb')
    s = bytearray(os.path.getsize(FILENAME))
    o = 0
    while True:
        b = f.readinto(memoryview(s)[o:o+BUFSIZE])
        if not b:
            break
        o += b

And the timings:

$ python3 -O -m timeit 'import fileread_bytearray'
'fileread_bytearray.justread()'
10 loops, best of 3: 298 msec per loop
$ python3 -O -m timeit 'import fileread_bytearray'
'fileread_bytearray.readandcopy()'
100 loops, best of 3: 9.22 msec per loop
$ python3 -O -m timeit 'import fileread_bytearray'
'fileread_bytearray.readinto()'
100 loops, best of 3: 9.31 msec per loop

The file was 10MB. I expected readinto to perform much better than
readandcopy. I expected readandcopy to perform slightly better than
justread. This clearly isn't the case.

>
>> Also I saw some comments on "top-posting" am I guilty of this?

If there's a magical option in Gmail someone knows about, please tell.

>
> Kind of :)
>
> Regards
>
> Antoine.
>
>


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Eli Bendersky
> On Thu, 24 Nov 2011 20:53:30 +0200
> Eli Bendersky  wrote:
> >
> > Sure. Updated the default branch just now and built:
> >
> > $1 -m timeit -s'import fileread_bytearray'
> 'fileread_bytearray.justread()'
> > 1000 loops, best of 3: 1.14 msec per loop
> > $1 -m timeit -s'import fileread_bytearray'
> > 'fileread_bytearray.readandcopy()'
> > 100 loops, best of 3: 2.78 msec per loop
> > $1 -m timeit -s'import fileread_bytearray'
> 'fileread_bytearray.readinto()'
> > 1000 loops, best of 3: 1.6 msec per loop
> >
> > Strange. Although here, like in python 2, the performance of readinto is
> > close to justread and much faster than readandcopy, but justread itself
> is
> > much slower than in 2.7 and 3.2!
>
> This seems to be a side-effect of
> http://hg.python.org/cpython/rev/f8a697bc3ca8/
>
> Now I'm not sure if these numbers matter a lot.  1.6ms for a 3.6MB file
> is still more than 2 GB/s.
>

Just to be clear, there were two separate issues raised here. One is the
speed regression of readinto() from 2.7 to 3.2, and the other is the
relative slowness of justread() in 3.3.

Regarding the second, I'm not sure it's an issue, because I tried a larger
file (100MB and then also 300MB) and the speed of 3.3 is now on par with
3.2 and 2.7.

However, the original question remains - on the 100MB file as well, readinto
is 35% faster than readandcopy() in 2.7, while on 3.2 it's about the same
speed (even a few % slower). That said, I now observe with Python 3.3 the
same speed as with 2.7, including the readinto() speedup - so it appears
that the readinto() regression has been solved in 3.3? Any clue about
where that happened (i.e. which bug/changeset)?

Eli


Re: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7?

2011-11-24 Thread Eli Bendersky
>
> Eli, the use pattern I was referring to is when you read in chunks,
> and and append to a running buffer. Presumably if you know in advance
> the size of the data, you can readinto directly to a region of a
> bytearray. There by avoiding having to allocate a temporary buffer for
> the read, and creating a new buffer containing the running buffer,
> plus the new.
>
> Strangely, I find that your readandcopy is faster at this, but not by
> much, than readinto. Here's the code, it's a bit explicit, but then so
> was the original:
>
> BUFSIZE = 0x1
>
> def justread():
># Just read a file's contents into a string/bytes object
>f = open(FILENAME, 'rb')
> s = b''
>while True:
>b = f.read(BUFSIZE)
>if not b:
>break
>s += b
>
> def readandcopy():
># Read a file's contents and copy them into a bytearray.
># An extra copy is done here.
>f = open(FILENAME, 'rb')
> s = bytearray()
>while True:
>b = f.read(BUFSIZE)
>if not b:
>break
>s += b
>
> def readinto():
># Read a file's contents directly into a bytearray,
># hopefully employing its buffer interface
>f = open(FILENAME, 'rb')
> s = bytearray(os.path.getsize(FILENAME))
>o = 0
>while True:
>b = f.readinto(memoryview(s)[o:o+BUFSIZE])
>if not b:
>break
>o += b
>
> And the timings:
>
> $ python3 -O -m timeit 'import fileread_bytearray'
> 'fileread_bytearray.justread()'
> 10 loops, best of 3: 298 msec per loop
> $ python3 -O -m timeit 'import fileread_bytearray'
> 'fileread_bytearray.readandcopy()'
> 100 loops, best of 3: 9.22 msec per loop
> $ python3 -O -m timeit 'import fileread_bytearray'
> 'fileread_bytearray.readinto()'
> 100 loops, best of 3: 9.31 msec per loop
>
> The file was 10MB. I expected readinto to perform much better than
> readandcopy. I expected readandcopy to perform slightly better than
> justread. This clearly isn't the case.
>
>
What is 'python3' on your machine? If it's 3.2, then this is consistent
with my results. Try it with 3.3 and with a larger file (say ~100MB and up);
you may see the same speed as on 2.7.

Also, why do you think chunked reads are better here than slurping the
whole file into the bytearray in one go? If you need it wholly in memory
anyway, why not just issue a single read?

Eli


Re: [Python-Dev] Long term development external named branches and periodic merges from python

2011-11-24 Thread Stephen J. Turnbull
Nick Coghlan writes:

 > I'll stick with named branches until MQ becomes a builtin Hg
 > feature that better integrates with other tools.

AFAIK MQ *is* considered to be a *stable, standard* part of Hg
functionality that *happens* (for several reasons *not* including
"it's not ready for Prime Time") to be packaged as an extension.

If you want more/different functionality from it, you probably should
file a feature request with the Mercurial developers.