Re: [Python-Dev] cpython (merge 3.2 -> default): MERGE: Better test for Issue #15402: Add a __sizeof__ method to struct.Struct

2012-07-24 Thread Serhiy Storchaka

On 24.07.12 00:44, mar...@v.loewis.de wrote:

42 is most likely not the right answer, as the size should be a
multiple of four.


>>> ''.__sizeof__()
25
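For context, a hedged check of the point being made here: `__sizeof__` routinely returns values that are not multiples of four, and the exact numbers vary across Python versions and platforms (the 25 above is one particular build's answer).

```python
import struct
import sys

# __sizeof__ reports an object's raw byte count, which need not be a
# multiple of four; exact numbers vary by Python version and platform.
empty_str_size = ''.__sizeof__()             # e.g. 25 on one 2012-era build
struct_size = struct.Struct('i').__sizeof__()
bigger = struct.Struct('iiii').__sizeof__()  # grows with the format string

# sys.getsizeof() adds any garbage-collector overhead to __sizeof__()
assert sys.getsizeof('') >= empty_str_size
```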

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] 2to3 porting HOWTO: setup.py question

2012-07-24 Thread Terry Reedy

On 7/24/2012 12:44 AM, anatoly techtonik wrote:


The Python 3 check explicitly tells the reader that 2to3 should only be
used on Python 3. Otherwise everybody needs to guess when these *_2to3
tools are triggered. As for me, I see no technical limitation why
*_2to3 cannot be run by Python 2 (PyPy, RPython or whatever). Maybe I
don't have Python 3, but want to build my package for Python 3. In an
ideal world it is possible.


This is not an ideal world and 2to3 is not good enough to convert files 
without further intervention and testing.


--
Terry Jan Reedy





Re: [Python-Dev] 2to3 porting HOWTO: setup.py question

2012-07-24 Thread Michael Foord

On 24 Jul 2012, at 10:30, Terry Reedy wrote:

> On 7/24/2012 12:44 AM, anatoly techtonik wrote:
> 
>> The Python 3 check explicitly tells the reader that 2to3 should only be
>> used on Python 3. Otherwise everybody needs to guess when these *_2to3
>> tools are triggered. As for me, I see no technical limitation why
>> *_2to3 cannot be run by Python 2 (PyPy, RPython or whatever). Maybe I
>> don't have Python 3, but want to build my package for Python 3. In an
>> ideal world it is possible.
> 
> This is not an ideal world and 2to3 is not good enough to convert files 
> without further intervention and testing.


It is if you design your code *to be converted* by 2to3 and do regular testing 
of the result.

Michael

> 
> -- 
> Terry Jan Reedy
> 
> 
> 


--
http://www.voidspace.org.uk/


May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing 
http://www.sqlite.org/different.html







Re: [Python-Dev] 2to3 porting HOWTO: setup.py question

2012-07-24 Thread Oscar Benjamin
On Jul 24, 2012 10:32 AM, "Terry Reedy"  wrote:
>
> On 7/24/2012 12:44 AM, anatoly techtonik wrote:
>
>> The Python 3 check explicitly tells the reader that 2to3 should only be
>> used on Python 3. Otherwise everybody needs to guess when these *_2to3
>> tools are triggered. As for me, I see no technical limitation why
>> *_2to3 cannot be run by Python 2 (PyPy, RPython or whatever). Maybe I
>> don't have Python 3, but want to build my package for Python 3. In an
>> ideal world it is possible.
>
>
> This is not an ideal world and 2to3 is not good enough to convert files
without further intervention and testing.

Which is exactly why its use should be explicit. To go back to the
original question: is it not better to be explicit about the version check?
The try/ImportError snippet in setup.py is often accompanied by a comment
explaining that it implicitly performs a version check, whereas I find
the explicit version check self-documenting.

I know Python users often frown upon explicit if/else checks,
preferring the flexibility afforded by duck typing and the possibility of
monkey patching, but I don't see any advantage in this case. As said above,
"it's called checking the thing that matters", and what matters here is
definitely the Python version.
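As a concrete sketch of the two idioms being compared (the distutils names are the ones setup.py files of that era used; this is illustrative, not a recommendation for modern packaging):

```python
import sys

# Explicit: check the thing that actually matters, the interpreter version.
USE_2TO3 = sys.version_info[0] >= 3

# Implicit: the try/ImportError idiom doubles as a hidden version check,
# because build_py_2to3 only exists under Python 3's distutils.
def pick_build_command():
    try:
        from distutils.command.build_py import build_py_2to3 as build_py
    except ImportError:
        from distutils.command.build_py import build_py
    return build_py
```

The explicit form states its intent in one line; the implicit form needs a comment to explain that the ImportError is expected.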

Oscar

>
> --
> Terry Jan Reedy
>
>
>
>


Re: [Python-Dev] 2to3 porting HOWTO: setup.py question

2012-07-24 Thread Devin Jeanpierre
On Tue, Jul 24, 2012 at 6:07 AM, Michael Foord
 wrote:
>> This is not an ideal world and 2to3 is not good enough to convert files 
>> without further intervention and testing.
>
> It is if you design your code *to be converted* by 2to3 and do regular 
> testing of the result.

That's hardly without testing!

-- Devin


Re: [Python-Dev] 2to3 porting HOWTO: setup.py question

2012-07-24 Thread Michael Foord

On 24 Jul 2012, at 11:52, Devin Jeanpierre wrote:

> On Tue, Jul 24, 2012 at 6:07 AM, Michael Foord
>  wrote:
>>> This is not an ideal world and 2to3 is not good enough to convert files 
>>> without further intervention and testing.
>> 
>> It is if you design your code *to be converted* by 2to3 and do regular 
>> testing of the result.
> 
> That's hardly without testing!
> 

Well, *no* code can safely be created without testing. The OP did say 
intervention *and* testing...

Michael

> -- Devin
> 









[Python-Dev] Building python 2.7.3 with Visual Studio 2012 (VS11.0)

2012-07-24 Thread Wim Colgate
Please forgive me if this is not the prescribed method for asking this
question.

For various reasons, I would like to build python 2.7.3 from source
using the latest VS tools (VS11.0 is in RC -- which is close enough
for my purposes). I have seen the various sub-directories (VC6, VS7.1
and VS8.0) in the sources for specific VS tool chains. I have also
seen the patch for VS 10.0 (http://wiki.python.org/moin/VS2010).

If building with VS11.0, is there more to it than applying the
equivalent of the VS10.0 patch for VS11? Are the other VS
sub-directories unneeded?

Regards,

Wim


Re: [Python-Dev] Building python 2.7.3 with Visual Studio 2012 (VS11.0)

2012-07-24 Thread Brian Curtin
On Tue, Jul 24, 2012 at 1:02 PM, Wim Colgate  wrote:
> Please forgive me if this is not the prescribed method for asking this
> question.
>
> For various reasons, I would like to build python 2.7.3 from source
> using the latest VS tools (VS11.0 is in RC -- which is close enough
> for my purposes). I have seen the various sub-directories (VC6, VS7.1
> and VS8.0) in the sources for specific VS tool chains. I have also
> seen the patch for VS 10.0 (http://wiki.python.org/moin/VS2010).
>
> If building with VS11.0, is there more to it than applying the
> equivalent of the VS10.0 patch for VS11? Are the other VS
> sub-directories unneeded?

If you can get it working on VS2010 first, VS2012 can read that
project file; without converting it, it'll just run the 2010 compiler
while letting you use the 2012 IDE.

Completing the actual port from 2010 to 2012 did not appear to be very
hard, but I didn't look too far into it.


You don't need the old VS sub-directories unless you are compiling
with those versions.


Re: [Python-Dev] Building python 2.7.3 with Visual Studio 2012 (VS11.0)

2012-07-24 Thread Martin v. Löwis
> If building with VS11.0, is there more to it than applying the
> equivalent of the VS10.0 patch for VS11?

I think nobody *really* knows at this point. Microsoft has a tradition
of breaking Python with every VS release, by making slight incompatible
changes in the C library. With VS 2012, on the one hand, they give
explicit consideration to VS 2010 and continued use of its tools; OTOH,
they also deliberately broke XP support in the CRT.

So you have to try for yourself. If Python passes the test suite (as
well as the official release does), then the build was successful.

A different matter is dependent libraries (zlib, openssl, Tcl/Tk, ...).
You also have to build those with VS 2012 (if you want to use them),
each one likely posing its own challenges.

If you manage to succeed, don't forget to post your findings here.
Also if you fail.

Good luck,
Martin



[Python-Dev] datetime nanosecond support

2012-07-24 Thread Vincenzo Ampolo
Hi all,

This is the first time I write to this list so thank you for considering
this message (if you will) :)

I know that this has been debated many times, but until now there was no
real use case. If you search Google for "python datetime
nanosecond" you can find more than 141k answers about it. They all say
"you can't due to hardware imprecision" or "you don't need it",
even though a good number of people are looking for this feature.

But let me explain my use case:

most OSes let users capture network packets (using tools like tcpdump or
wireshark) and store them using file formats like pcap or pcap-ng. These
formats include a timestamp for each of the captured packets, and this
timestamp usually has nanosecond precision. The reason is that on
gigabit and 10 gigabit networks the frame rate is so high that
microsecond precision is not enough to tell two frames apart.
pcap (and now pcap-ng) are extremely popular file formats, with millions
of files stored around the world. Support for nanoseconds in datetime
would make it possible to properly parse these files inside python to
compute precise statistics, for example network delays or round trip times.
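A quick back-of-the-envelope illustration of this precision argument (the 67 ns figure is my estimate of the minimum frame slot at 10 Gb/s: a 64-byte frame plus preamble and inter-frame gap is 672 bit times):

```python
# Two back-to-back minimum-size frames on 10 GbE arrive ~67 ns apart,
# so their timestamps collapse to the same value once truncated to
# microseconds -- the ordering information is lost.
t1_ns = 1343158283880338907      # pcap-ng style nanosecond timestamp
t2_ns = t1_ns + 67               # the very next frame slot

t1_us, t2_us = t1_ns // 1000, t2_ns // 1000
indistinguishable = (t1_us == t2_us)   # microseconds can't tell them apart
```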

More about this issue at http://bugs.python.org/issue15443

I completely agree with the YAGNI principle that seems to have driven
decisions in this area until now, but is it time to reconsider, now
that this real use case has shown up?

Thank you for your attention

Best Regards,
-- 
Vincenzo Ampolo
http://vincenzo-ampolo.net
http://goshawknest.wordpress.com


Re: [Python-Dev] datetime nanosecond support

2012-07-24 Thread Guido van Rossum
On Tue, Jul 24, 2012 at 5:58 PM, Vincenzo Ampolo
 wrote:
> Hi all,
>
> This is the first time I write to this list so thank you for considering
> this message (if you will) :)

You're welcome.

> I know that this has been debated many times, but until now there was no
> real use case. If you search Google for "python datetime
> nanosecond" you can find more than 141k answers about it. They all say
> "you can't due to hardware imprecision" or "you don't need it",
> even though a good number of people are looking for this feature.

Have you read PEP 410 and my rejection of it
(http://mail.python.org/pipermail/python-dev/2012-February/116837.html)?
Even though that's about using Decimal for timestamps, it could still
be considered related.

> But let me explain my use case:
>
> most OSes let users capture network packets (using tools like tcpdump or
> wireshark) and store them using file formats like pcap or pcap-ng. These
> formats include a timestamp for each of the captured packets, and this
> timestamp usually has nanosecond precision. The reason is that on
> gigabit and 10 gigabit networks the frame rate is so high that
> microsecond precision is not enough to tell two frames apart.
> pcap (and now pcap-ng) are extremely popular file formats, with millions
> of files stored around the world. Support for nanoseconds in datetime
> would make it possible to properly parse these files inside python to
> compute precise statistics, for example network delays or round trip times.
>
> More about this issue at http://bugs.python.org/issue15443
>
> I completely agree with the YAGNI principle that seems to have driven
> decisions in this area until now, but is it time to reconsider, now
> that this real use case has shown up?

Not every use case deserves an API change. :-)

First you will have to show how you'd have to code this *without*
nanosecond precision in datetime and how tedious that is. (I expect
that representing the timestamp as a long integer expressing a posix
timestamp times a billion would be very reasonable.)
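A sketch of that integer-nanoseconds approach (my wording, not code from the thread): keep the exact value as an int and convert to datetime only at the edges, accepting that the last three digits are dropped.

```python
from datetime import datetime, timedelta, timezone

NS_PER_SEC = 1_000_000_000

def ns_to_datetime(ns):
    """Integer nanoseconds since the epoch -> aware UTC datetime.
    The final three digits (sub-microsecond) cannot be represented."""
    secs, rem_ns = divmod(ns, NS_PER_SEC)
    return (datetime.fromtimestamp(secs, tz=timezone.utc)
            + timedelta(microseconds=rem_ns // 1000))

ts_ns = 1343158283880338907
dt = ns_to_datetime(ts_ns)     # dt.microsecond == 880338
lost_ns = ts_ns % 1000         # 907 ns must be tracked separately
```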

I didn't read the entire bug, but it mentioned something about storing
datetimes in databases. Do databases support nanosecond precision?

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] datetime nanosecond support

2012-07-24 Thread Chris Lambacher
On Tue, Jul 24, 2012 at 9:46 PM, Guido van Rossum  wrote:

> I didn't read the entire bug, but it mentioned something about storing
> datetimes in databases. Do databases support nanosecond precision?
>

MS SQL Server 2008 R2 has the datetime2 data type which supports 100
nanosecond (.1 microsecond) precision:
http://msdn.microsoft.com/en-us/library/bb677335(v=sql.105)

PostgreSQL does 1 microsecond:
http://www.postgresql.org/docs/8.0/static/datatype-datetime.html

If I am reading this correctly the Oracle TIMESTAMP type allows up to 9
digits of fractional seconds (1 nanosecond):
http://docs.oracle.com/cd/B19306_01/server.102/b14195/sqlqr06.htm#r9c1-t3

-Chris

-- 
Christopher Lambacher
ch...@kateandchris.net


Re: [Python-Dev] datetime nanosecond support

2012-07-24 Thread Vincenzo Ampolo
On 07/24/2012 06:46 PM, Guido van Rossum wrote:
> 
> You're welcome.

Hi Guido,

I'm glad you spent your time reading my mail. I would have never
imagined that my mail could come to your attention.

> Have you read PEP 410 and my rejection of it
> (http://mail.python.org/pipermail/python-dev/2012-February/116837.html)?
> Even though that's about using Decimal for timestamps, it could still
> be considered related.

I've read it, and point 5 is very relevant to this issue. You said:

"[...]
I see only one real use case for nanosecond precision: faithful
copying of the mtime/atime recorded by filesystems, in cases where the
filesystem (like e.g. ext4) records these times with nanosecond
precision. Even if such timestamps can't be trusted to be accurate,
converting them to floats and back loses precision, and verification
using tools not written in Python will flag the difference. But for
this specific use case a much simpler set of API changes will suffice;
only os.stat() and os.utime() need to change slightly (and variants of
os.stat() like os.fstat()).
[...]"

I think that's based on a wrong hypothesis: just one use case -> let's
handle it in a different way (by modifying os.stat() and os.utime()).
I would say: it's not just one case; there are at least two other
scenarios. One is like mine, parsing network packets; the other is
parsing stock trading data.
But in these cases there is no os.stat() or os.utime() that can be
modified. I have to write my own class to deal with time and lose all the
power and flexibility that the datetime module adds to the Python language.

> Not every use case deserves an API change. :-)
> 
> First you will have to show how you'd have to code this *without*
> nanosecond precision in datetime and how tedious that is. (I expect
> that representing the timestamp as a long integer expressing a posix
> timestamp times a billion would be very reasonable.)

Yeah, that's exactly how we built our Time class to handle this, and we
also wrote a Duration class to represent timedeltas.
The code we developed is 383 lines of Python, but it is not comparable
with all the functionality the datetime module offers, and it's also
really slow compared to the native datetime module, which is written in C.

As you may imagine, using that approach in a web application is very
limiting, since there is no strftime() in this custom class.
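To illustrate the kind of boilerplate being described (a made-up minimal stand-in, not the company's actual class): even bolting a strftime() onto such a wrapper means round-tripping through datetime and silently dropping precision.

```python
from datetime import datetime, timezone

class NsTime:
    """Toy stand-in for a custom nanosecond time class."""
    def __init__(self, ns):
        self.ns = ns  # integer nanoseconds since the epoch

    def strftime(self, fmt):
        # Every formatting call must detour through datetime, and the
        # last three digits of precision vanish on the way.
        secs, rem_ns = divmod(self.ns, 1_000_000_000)
        dt = datetime.fromtimestamp(secs, tz=timezone.utc)
        return dt.replace(microsecond=rem_ns // 1000).strftime(fmt)

stamp = NsTime(1343158283880338907).strftime('%Y-%m-%d %H:%M:%S.%f')
# ends in .880338 -- the trailing 907 ns are gone
```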

I cannot share the code right now since it's copyrighted by the company
I work for, but I've asked permission to do so. I just need to wait
until tomorrow morning (PDT) for them to approve my request. Looking at the
code you can see how tedious it is to try to remake all the conversions
that are already implemented in the datetime module.
Just let me know if you actually want to have a look at the code.

> 
> I didn't read the entire bug, but it mentioned something about storing
> datetimes in databases. Do databases support nanosecond precision?
> 

Yeah. According to
http://wiki.ispirer.com/sqlways/postgresql/data-types/timestamp, at least
Oracle supports timestamps with nanosecond accuracy, and SQL Server supports
100-nanosecond accuracy.
Since I personally use PostgreSQL, the best way to accomplish it (also
suggested by #postgresql on freenode) is to store the timestamp with
nanoseconds (like 1343158283.880338907242) as a bigint and let the ORM (a
Python ORM) do all the conversion work.
And yet again, having nanosecond support in datetime would make things
much easier.

While I was writing this mail, Chris Lambacher answered with more data
about nanosecond support in databases.

Best Regards,
-- 
Vincenzo Ampolo
http://vincenzo-ampolo.net
http://goshawknest.wordpress.com


Re: [Python-Dev] datetime nanosecond support

2012-07-24 Thread Guido van Rossum
On Tue, Jul 24, 2012 at 8:25 PM, Vincenzo Ampolo
 wrote:
> On 07/24/2012 06:46 PM, Guido van Rossum wrote:
>>
>> You're welcome.
>
> Hi Guido,
>
> I'm glad you spent your time reading my mail. I would have never
> imagined that my mail could come to your attention.

Stop brownnosing already. :-) If you'd followed python-dev you'd have
known I read it.

>> Have you read PEP 410 and my rejection of it
>> (http://mail.python.org/pipermail/python-dev/2012-February/116837.html)?
>> Even though that's about using Decimal for timestamps, it could still
>> be considered related.
>
> I've read it and point 5 is very like in this issue. You said:
>
> "[...]
> I see only one real use case for nanosecond precision: faithful
> copying of the mtime/atime recorded by filesystems, in cases where the
> filesystem (like e.g. ext4) records these times with nanosecond
> precision. Even if such timestamps can't be trusted to be accurate,
> converting them to floats and back loses precision, and verification
> using tools not written in Python will flag the difference. But for
> this specific use case a much simpler set of API changes will suffice;
> only os.stat() and os.utime() need to change slightly (and variants of
> os.stat() like os.fstat()).
> [...]"
>
> I think that's based on a wrong hypothesis: just one use case -> let's
> handle it in a different way (by modifying os.stat() and os.utime()).
> I would say: it's not just one case; there are at least two other
> scenarios. One is like mine, parsing network packets; the other is
> parsing stock trading data.
> But in these cases there is no os.stat() or os.utime() that can be
> modified. I have to write my own class to deal with time and lose all the
> power and flexibility that the datetime module adds to the Python language.

Also, this use case is unlike the PEP 410 use case, because the
timestamps there use a numeric type, not datetime (and that was
separately argued).

>> Not every use case deserves an API change. :-)
>>
>> First you will have to show how you'd have to code this *without*
>> nanosecond precision in datetime and how tedious that is. (I expect
>> that representing the timestamp as a long integer expressing a posix
>> timestamp times a billion would be very reasonable.)
>
> Yeah, that's exactly how we built our Time class to handle this, and we
> also wrote a Duration class to represent timedeltas.
> The code we developed is 383 lines of Python, but it is not comparable
> with all the functionality the datetime module offers, and it's also
> really slow compared to the native datetime module, which is written in C.

So what functionality specifically do you require? You speak in
generalities but I need specifics.

> As you may imagine, using that approach in a web application is very
> limiting, since there is no strftime() in this custom class.

Apparently you didn't need it? :-) Web frameworks usually have their
own date/time formatting anyway.

> I cannot share the code right now since it's copyrighted by the company
> I work for, but I've asked permission to do so. I just need to wait
> until tomorrow morning (PDT) for them to approve my request. Looking at the
> code you can see how tedious it is to try to remake all the conversions
> that are already implemented in the datetime module.
> Just let me know if you actually want to have a look at the code.

I believe you.

>> I didn't read the entire bug, but it mentioned something about storing
>> datetimes in databases. Do databases support nanosecond precision?
>>
>
> Yeah. According to
> http://wiki.ispirer.com/sqlways/postgresql/data-types/timestamp, at least
> Oracle supports timestamps with nanosecond accuracy, and SQL Server supports
> 100-nanosecond accuracy.
> Since I personally use PostgreSQL, the best way to accomplish it (also
> suggested by #postgresql on freenode) is to store the timestamp with
> nanoseconds (like 1343158283.880338907242) as a bigint and let the ORM (a
> Python ORM) do all the conversion work.
> And yet again, having nanosecond support in datetime would make things
> much easier.

How so, given that the database you use doesn't support it?

> While writing this mail Chris Lambacher answered with more data about
> nanosecond support on databases

Thanks, Chris.

TBH, I think that adding nanosecond precision to the datetime type is
not unthinkable. You'll have to come up with some clever backward
compatibility in the API though, and that will probably be a bit ugly
(you'd have a microsecond parameter with a range of 0-999999 and a
nanosecond parameter with a range of 0-999). Also the space it takes
in memory would probably increase (there's no room for an extra 10
bits in the carefully arranged 8-byte internal representation).
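One hypothetical shape for such an API (the names and ranges here are my illustration of the backward-compatibility constraint, not an agreed design): keep microsecond exactly as it is, and add a separate nanosecond field for the sub-microsecond remainder.

```python
class NanoDatetimeSketch:
    """Toy sketch: the sub-microsecond part lives in a separate field so
    existing microsecond-based code keeps working unchanged."""
    def __init__(self, microsecond=0, nanosecond=0):
        if not 0 <= microsecond <= 999_999:
            raise ValueError("microsecond must be in 0..999999")
        if not 0 <= nanosecond <= 999:
            raise ValueError("nanosecond must be in 0..999")
        self.microsecond = microsecond
        self.nanosecond = nanosecond

    @property
    def total_ns(self):
        # full sub-second value in nanoseconds
        return self.microsecond * 1000 + self.nanosecond

t = NanoDatetimeSketch(microsecond=880338, nanosecond=907)
```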

But let me be clear -- are you willing to help implement any of this?
You can't just order a feature, you know...

-- 
--Guido van Rossum (python.org/~guido)