[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-23 Thread Neil Girdhar
On Tue, Jun 23, 2020 at 9:45 PM Neil Girdhar  wrote:

> This is awesome!
>
> What I love about this is that it strongly encourages people not to do
> EAFP with types (which I've seen many times), which causes problems when
> doing type annotations.  Instead, if they use pattern matching, they're
> essentially forced to do isinstance without even realizing it.  I love
> features that encourage good coding practices by design.
>
> My question is how does this work with polymorphic types?  Typically, you
> might not want to fix everywhere the exact order of the attributes.  It
> would be a shame for you to be dissuaded from adding an attribute to a
> dataclass because it would mess up every one of your case statements.  Do
> case statements support extracting by attribute name somehow?
>
> case Point(x, z):
>
> means extract x and z positionally.  What if I want to extract by
> keyword somehow?  Can something like this work?
>
> case Point(x=x, z=z):
>
> That way, if I add an attribute y, my case statement is just fine.
>

Ah, never mind!  You guys thought of everything.


>
>

> I like the design choices. After reading a variety of comments, I'm
> looking forward to seeing the updated PEP with discussion regarding:
> case _: vs else:
> _ vs ...
> case x | y: vs case x or y:
>
> Best,
> Neil
>
> On Tue, Jun 23, 2020 at 12:07 PM Guido van Rossum 
> wrote:
>
>> I'm happy to present a new PEP for the python-dev community to review.
>> This is joint work with Brandt Bucher, Tobias Kohn, Ivan Levkivskyi and
>> Talin.
>>
>> Many people have thought about extending Python with a form of pattern
>> matching similar to that found in Scala, Rust, F#, Haskell and other
>> languages with a functional flavor. The topic has come up regularly on
>> python-ideas (most recently yesterday :-).
>>
>> I'll mostly let the PEP speak for itself:
>> - Published: https://www.python.org/dev/peps/pep-0622/ (*)
>> - Source: https://github.com/python/peps/blob/master/pep-0622.rst
>>
>> (*) The published version will hopefully be available soon.
>>
>> I want to clarify that the design space for such a match statement is
>> enormous. For many key decisions the authors have clashed, in some cases we
>> have gone back and forth several times, and a few uncomfortable compromises
>> were struck. It is quite possible that some major design decisions will
>> have to be revisited before this PEP can be accepted. Nevertheless, we're
>> happy with the current proposal, and we have provided ample discussion in
>> the PEP under the headings of Rejected Ideas and Deferred Ideas. Please
>> read those before proposing changes!
>>
>> I'd like to end with the contents of the README of the repo where we've
>> worked on the draft, which is shorter and gives a gentler introduction than
>> the PEP itself:
>>
>>
>> # Pattern Matching
>>
>> This repo contains a draft PEP proposing a `match` statement.
>>
>> Origins
>> ---
>>
>> The work has several origins:
>>
>> - Many statically compiled languages (especially functional ones) have
>>   a `match` expression, for example
>>   [Scala](
>> http://www.scala-lang.org/files/archive/spec/2.11/08-pattern-matching.html
>> ),
>>   [Rust](https://doc.rust-lang.org/reference/expressions/match-expr.html
>> ),
>>   [F#](
>> https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/pattern-matching
>> );
>> - Several extensive discussions on python-ideas, culminating in a
>>   summarizing
>>   [blog post](
>> https://tobiaskohn.ch/index.php/2018/09/18/pattern-matching-syntax-in-python/
>> )
>>   by Tobias Kohn;
>> - An independently developed [draft
>>   PEP](
>> https://github.com/ilevkivskyi/peps/blob/pattern-matching/pep-.rst)
>>   by Ivan Levkivskyi.
>>
>> Implementation
>> --
>>
>> A full reference implementation written by Brandt Bucher is available
> >> as a [fork](https://github.com/brandtbucher/cpython/tree/patma) of
> >> the CPython repo.  This is readily converted to a [pull
> >> request](https://github.com/brandtbucher/cpython/pull/2).
>>
>> Examples
>> --------
>>
>> Some [example code](
>> https://github.com/gvanrossum/patma/tree/master/examples/) is available
>> from this repo.
>>
>> Tutorial
>> --------
>>
>> A `match` statement takes an expression and compares it to successive
>> patterns given as one or more `case` blocks.  This is superficially
>> similar to a `switch` statement in C, Java or JavaScript (and many
>> other languages), but much more powerful.

[Python-Dev] Re: PEP 622: Structural Pattern Matching

2020-06-23 Thread Neil Girdhar
This is awesome!

What I love about this is that it strongly encourages people not to do EAFP
with types (which I've seen many times), which causes problems when doing
type annotations.  Instead, if they use pattern matching, they're
essentially forced to do isinstance without even realizing it.  I love
features that encourage good coding practices by design.

My question is how does this work with polymorphic types?  Typically, you
might not want to fix everywhere the exact order of the attributes.  It
would be a shame for you to be dissuaded from adding an attribute to a
dataclass because it would mess up every one of your case statements.  Do
case statements support extracting by attribute name somehow?

case Point(x, z):

means extract x and z positionally.  What if I want to extract by
keyword somehow?  Can something like this work?

case Point(x=x, z=z):

That way, if I add an attribute y, my case statement is just fine.
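To make that concrete, here is a minimal sketch of the keyword-pattern form
I have in mind (a hypothetical Point dataclass; syntax as proposed by the
PEP, so it needs an interpreter that implements the match statement):

from dataclasses import dataclass

@dataclass
class Point:
    x: int
    z: int

def describe(p):
    match p:
        case Point(x=0, z=0):
            return "origin"
        case Point(x=x, z=z):   # extract by attribute name, not by position
            return f"x={x}, z={z}"
        case _:
            return "not a point"

Adding a y attribute to Point later would leave these case blocks untouched.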

I like the design choices. After reading a variety of comments, I'm looking
forward to seeing the updated PEP with discussion regarding:
case _: vs else:
_ vs ...
case x | y: vs case x or y:

Best,
Neil

On Tue, Jun 23, 2020 at 12:07 PM Guido van Rossum  wrote:

> I'm happy to present a new PEP for the python-dev community to review.
> This is joint work with Brandt Bucher, Tobias Kohn, Ivan Levkivskyi and
> Talin.
>
> Many people have thought about extending Python with a form of pattern
> matching similar to that found in Scala, Rust, F#, Haskell and other
> languages with a functional flavor. The topic has come up regularly on
> python-ideas (most recently yesterday :-).
>
> I'll mostly let the PEP speak for itself:
> - Published: https://www.python.org/dev/peps/pep-0622/ (*)
> - Source: https://github.com/python/peps/blob/master/pep-0622.rst
>
> (*) The published version will hopefully be available soon.
>
> I want to clarify that the design space for such a match statement is
> enormous. For many key decisions the authors have clashed, in some cases we
> have gone back and forth several times, and a few uncomfortable compromises
> were struck. It is quite possible that some major design decisions will
> have to be revisited before this PEP can be accepted. Nevertheless, we're
> happy with the current proposal, and we have provided ample discussion in
> the PEP under the headings of Rejected Ideas and Deferred Ideas. Please
> read those before proposing changes!
>
> I'd like to end with the contents of the README of the repo where we've
> worked on the draft, which is shorter and gives a gentler introduction than
> the PEP itself:
>
>
> # Pattern Matching
>
> This repo contains a draft PEP proposing a `match` statement.
>
> Origins
> ---
>
> The work has several origins:
>
> - Many statically compiled languages (especially functional ones) have
>   a `match` expression, for example
>   [Scala](
> http://www.scala-lang.org/files/archive/spec/2.11/08-pattern-matching.html
> ),
>   [Rust](https://doc.rust-lang.org/reference/expressions/match-expr.html),
>   [F#](
> https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/pattern-matching
> );
> - Several extensive discussions on python-ideas, culminating in a
>   summarizing
>   [blog post](
> https://tobiaskohn.ch/index.php/2018/09/18/pattern-matching-syntax-in-python/
> )
>   by Tobias Kohn;
> - An independently developed [draft
>   PEP](
> https://github.com/ilevkivskyi/peps/blob/pattern-matching/pep-.rst)
>   by Ivan Levkivskyi.
>
> Implementation
> --
>
> A full reference implementation written by Brandt Bucher is available
> as a [fork](https://github.com/brandtbucher/cpython/tree/patma) of
> the CPython repo.  This is readily converted to a [pull
> request](https://github.com/brandtbucher/cpython/pull/2).
>
> Examples
> --------
>
> Some [example code](
> https://github.com/gvanrossum/patma/tree/master/examples/) is available
> from this repo.
>
> Tutorial
> --------
>
> A `match` statement takes an expression and compares it to successive
> patterns given as one or more `case` blocks.  This is superficially
> similar to a `switch` statement in C, Java or JavaScript (and many
> other languages), but much more powerful.
>
> The simplest form compares a target value against one or more literals:
>
> ```py
> def http_error(status):
>     match status:
>         case 400:
>             return "Bad request"
>         case 401:
>             return "Unauthorized"
>         case 403:
>             return "Forbidden"
>         case 404:
>             return "Not found"
>         case 418:
>             return "I'm a teapot"
>         case _:
>             return "Something else"
> ```
>
> Note the last block: the "variable name" `_` acts as a *wildcard* and
> never fails to match.
>
> You can combine several literals in a single pattern using `|` ("or"):
>
> ```py
> case 401|403|404:
>     return "Not allowed"
> ```
>
> Patterns can look like unpacking assignments, and can be used 

Re: [Python-Dev] PEP487: Simpler customization of class creation

2016-07-19 Thread Neil Girdhar
Thanks for clarifying.

On Tue, Jul 19, 2016 at 10:34 AM Nick Coghlan  wrote:

> On 19 July 2016 at 16:41, Neil Girdhar  wrote:
> > Yes, I see what you're saying.   However, I don't understand why
> > __init_subclass__ (defined on some class C) cannot be used to implement
> the
> > checks required by @abstractmethod instead of doing it in ABCMeta.  This
> > would prevent metaclass conflicts since you could use @abstractmethod
> with
> > any metaclass or no metaclass at all provided you inherit from C.
>
> ABCMeta also changes how isinstance() and issubclass() work and
> adds additional methods (like register()), so enabling the use of
> @abstractmethod without otherwise making the type an ABC would be very
> confusing behaviour that we wouldn't enable by default.
>
> But yes, this change does make it possible to write a mixin class that
> implements the "@abstractmethod instances must all be overridden to
> allow instances to be created" logic from ABCMeta without otherwise
> turning the class into an ABC instance.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP487: Simpler customization of class creation

2016-07-18 Thread Neil Girdhar
Yes, I see what you're saying.   However, I don't understand why
__init_subclass__ (defined on some class C) cannot be used to implement the
checks required by @abstractmethod instead of doing it in ABCMeta.  This
would prevent metaclass conflicts since you could use @abstractmethod with
any metaclass or no metaclass at all provided you inherit from C.
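To be concrete about what I mean, here is a rough sketch (hypothetical class
names, and definitely not how ABCMeta actually implements it) of doing the
@abstractmethod check from a plain base class once __init_subclass__ from
PEP 487 is available:

from abc import abstractmethod

class AbstractEnforcer:
    # Record which attributes are still abstract whenever a subclass is
    # defined, and refuse to instantiate a class that still has some left.
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.__abstract_names__ = frozenset(
            name for name in dir(cls)
            if getattr(getattr(cls, name, None), '__isabstractmethod__', False))

    def __new__(cls, *args, **kwargs):
        if cls.__abstract_names__:
            raise TypeError("Can't instantiate {} with abstract methods: {}".format(
                cls.__name__, ', '.join(sorted(cls.__abstract_names__))))
        return super().__new__(cls)

class Base(AbstractEnforcer):
    @abstractmethod
    def run(self): ...

class Concrete(Base):
    def run(self):
        return "ok"

# Base() raises TypeError; Concrete() works, and no custom metaclass is needed.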

On Tue, Jul 19, 2016 at 12:21 AM Nick Coghlan  wrote:

> On 19 July 2016 at 09:26, Neil Girdhar  wrote:
> > Yes, I'm very excited about this!
> >
> > Will this mean no more metaclass conflicts if using @abstractmethod?
>
> ABCMeta and EnumMeta both create persistent behavioural differences
> rather than only influencing subtype definition, so they'll need to
> remain as custom metaclasses.
>
> What this PEP (especially in combination with PEP 520) is aimed at
> enabling is subclassing APIs designed more around the notion of
> "implicit class decoration" where a common base class or mixin can be
> adjusted to perform certain actions whenever a new subclass is
> defined, without changing the runtime behaviour of those subclasses.
> (For example: a mixin or base class may require that certain
> parameters be set as class attributes - this PEP will allow the base
> class to check for those and throw an error at definition time, rather
> than getting a potentially cryptic error when it attempts to use the
> missing attribute)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP487: Simpler customization of class creation

2016-07-18 Thread Neil Girdhar
Yes, I'm very excited about this!

Will this mean no more metaclass conflicts if using @abstractmethod?

On Sun, Jul 17, 2016 at 12:59 PM Guido van Rossum  wrote:

> This PEP is now accepted for inclusion in Python 3.6. Martin,
> congratulations! Someone (not me) needs to review and commit your
> changes, before September 12, when the 3.6 feature freeze goes into
> effect (see https://www.python.org/dev/peps/pep-0494/#schedule).
>
> On Sun, Jul 17, 2016 at 4:32 AM, Martin Teichmann
>  wrote:
> > Hi Guido, Hi Nick, Hi list,
> >
> > so I just updated PEP 487, you can find it here:
> > https://github.com/python/peps/pull/57 if it hasn't already been
> > merged. There are no substantial updates there, I only updated the
> > wording as suggested, and added some words about backwards
> > compatibility as hinted by Nick.
> >
> > Greetings
> >
> > Martin
> >
> > 2016-07-14 17:47 GMT+02:00 Guido van Rossum :
> >> I just reviewed the changes you made, I like __set_name__(). I'll just
> >> wait for your next update, incorporating Nick's suggestions. Regarding
> >> who merges PRs to the PEPs repo, since you are the author the people
> >> who merge don't pass any judgment on the changes (unless it doesn't
> >> build cleanly or maybe if they see a typo). If you intend a PR as a
> >> base for discussion you can add a comment saying e.g. "Don't merge
> >> yet". If you call out @gvanrossum, GitHub will make sure I get a
> >> message about it.
> >>
> >> I think the substantial discussion about the PEP should remain here in
> >> python-dev; comments about typos, grammar and other minor editorial
> >> issues can go on GitHub. Hope this part of the process makes sense!
> >>
> >> On Thu, Jul 14, 2016 at 6:50 AM, Martin Teichmann
> >>  wrote:
> >>> Hi Guido, Hi list,
> >>>
> >>> Thanks for the nice review! I followed up on your ideas and put
> >>> them into a GitHub pull request: https://github.com/python/peps/pull/53
> >>>
> >>> Soon we'll be working there, until then, some responses to your
> comments:
> >>>
>  I wonder if this should be renamed to __set_name__ or something else
>  that clarifies we're passing it the name of the attribute? The method
>  name __set_owner__ made me assume this is about the owning object
>  (which is often a useful term in other discussions about objects),
>  whereas it is really about telling the descriptor the name of the
>  attribute for which it applies.
> >>>
> >>> The name for this has been discussed a bit already, __set_owner__ was
> >>> Nick's idea, and indeed, the owner is also set. Technically,
> >>> __set_owner_and_name__ would be correct, but actually I like your idea
> >>> of __set_name__.
> >>>
>  That (inheriting type from type, and object from object) is very
>  confusing. Why not just define new classes e.g. NewType and NewObject
>  here, since it's just pseudo code anyway?
> >>>
> >>> Actually, it's real code. If you drop those lines at the beginning of
> >>> the tests for the implementation (as I have done here:
> >>>
> https://github.com/tecki/cpython/blob/pep487b/Lib/test/test_subclassinit.py
> ),
> >>> the test runs on older Pythons.
> >>>
> >>> But I see that my idea to formulate things here in Python was a bad
> >>> idea, I will put the explanation first and turn the code into
> >>> pseudo-code.
> >>>
> > def __init__(self, name, bases, ns, **kwargs):
> >     super().__init__(name, bases, ns)
> 
>  What does this definition of __init__ add?
> >>>
> >>> It removes the keyword arguments. I describe that in prose a bit down.
> >>>
> > class object:
> >     @classmethod
> >     def __init_subclass__(cls):
> >         pass
> >
> > class object(object, metaclass=type):
> >     pass
> 
>  Eek! Too many things named object.
> >>>
> >>> Well, I had to do that to make the tests run... I'll take that out.
> >>>
> > In the new code, it is not ``__init__`` that complains about keyword
> arguments,
> > but ``__init_subclass__``, whose default implementation takes no
> arguments. In
> > a classical inheritance scheme using the method resolution order,
> each
> > ``__init_subclass__`` may take out its keyword arguments until none
> are left,
> > which is checked by the default implementation of
> ``__init_subclass__``.
> 
>  I called this out previously, and I am still a bit uncomfortable with
>  the backwards incompatibility here. But I believe what you describe
>  here is the compromise proposed by Nick, and if that's the case I have
>  peace with it.
> >>>
> >>> No, this is not Nick's compromise, this is my original. Nick just sent
> >>> another mail to this list where he goes a bit more into the details,
> >>> I'll respond to that about this topic.
> >>>
> >>> Greetings
> >>>
> >>> Martin
> >>>
> >>> P.S.: I just realized that my changes to the PEP were accepted by
> >>> someone else than Guido. I am a bit surprised

Re: [Python-Dev] PEP 448 review

2015-04-07 Thread Neil Girdhar
Hello,

Following up on PEP 448, I've gone over the entire code review except for a
few points as mentioned at the issue: http://bugs.python.org/review/2292/.
I'm hoping that this will get done at the PyCon sprints.  Is there any way
I can help?

I couldn't make it to PyCon, but I do live in Montreal.  I would be more
than happy to make time to meet up with anyone who wants to help with the
review, e.g. Laika's usually a good place to work and have a coffee (
http://www.yelp.ca/biz/la%C3%AFka-montr%C3%A9al-2).  Code reviews tend to go
much faster face-to-face.

Also, I'm definitely interested in meeting any Python developers for coffee
or drinks.  I know the city pretty well.  :)

Best,

Neil

On Tue, Mar 17, 2015 at 9:49 AM, Brett Cannon  wrote:

>
>
> On Mon, Mar 16, 2015 at 7:11 PM Neil Girdhar 
> wrote:
>
>> Hi everyone,
>>
>> I was wondering what is left with the PEP 448 (
>> http://bugs.python.org/issue2292) code review?  Big thanks to Benjamin,
>> Ethan, and Serhiy for reviewing some (all?) of the code.  What is the next
>> step of this process?
>>
>
> My suspicion is that if no one steps up between now and PyCon to do a
> complete code review of the final patch, we as a group will try to get it
> done at the PyCon sprints. I have made the issue a release blocker to help
> make sure it gets reviewed and doesn't accidentally get left behind.
>


[Python-Dev] Build is broken for me after updating

2015-04-07 Thread Neil Girdhar
Ever since I updated, I am getting:

In file included from Objects/dictobject.c:236:0:
Objects/clinic/dictobject.c.h:70:26: fatal error: stringlib/eq.h: No such
file or directory
 #include "stringlib/eq.h"

But, Objects/stringlib/eq.h exists.  Replacing the include with
"Objects/stringlib/eq.h" seems to make this error go away, but others
follow.

Would anyone happen to know why this is happening?

Neil


Re: [Python-Dev] pep 7

2015-03-20 Thread Neil Girdhar
The code reviews I received asked me to revert the PEP 7 changes.  I can understand
that, but then logically someone should go ahead and clean up the code.
It's not "high risk" if you just check for whitespace equivalence of the
source code and binary equivalence of the compiled code.  The value is for
people who are new to the codebase.
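To make the first check concrete, here is a naive sketch of what I mean by
whitespace equivalence (hypothetical helper; it ignores the fact that
whitespace inside string literals is significant, so a real check would need
to be smarter or be paired with the binary comparison):

import re

def normalized(text):
    # Collapse runs of whitespace so purely cosmetic edits compare equal.
    return re.sub(r'\s+', ' ', text).strip()

def whitespace_equivalent(path_a, path_b):
    with open(path_a) as a, open(path_b) as b:
        return normalized(a.read()) == normalized(b.read())

# whitespace_equivalent('dictobject.c.orig', 'dictobject.c') should be True
# for a pure reformatting, and the compiled object files should also match.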

Best,

Neil

On Fri, Mar 20, 2015 at 10:35 PM, Brian Curtin  wrote:

> On Fri, Mar 20, 2015 at 7:54 PM, Neil Girdhar 
> wrote:
> > If ever someone wants to clean up the repository to conform to PEP 7, I
> > wrote a program that catches a couple hundred PEP 7 violations in
> ./Python
> > alone (1400 in the whole codebase):
> >
> > import os
> > import re
> >
> > def grep(path, regex):
> >     reg_obj = re.compile(regex, re.M)
> >     res = []
> >     for root, dirs, fnames in os.walk(path):
> >         for fname in fnames:
> >             if fname.endswith('.c'):
> >                 path = os.path.join(root, fname)
> >                 with open(path) as f:
> >                     data = f.read()
> >                 for m in reg_obj.finditer(data):
> >                     line_number = sum(c == '\n'
> >                                       for c in data[:m.start()]) + 1
> >                     res.append("{}: {}".format(path, line_number))
> >     return res
> >
> > for pattern in [
> >         r'^\s*\|\|',
> >         r'^\s*\&\&',
> >         r'} else {',
> >         r'\ > ]:
> >     print("Searching for", pattern)
> >     print("\n".join(grep('.', pattern)))
> >
> > In my experience, it was hard to write PEP 7 conforming code when the
> > surrounding code is inconsistent.
>
> You can usually change surrounding code within reason if you want to
> add conforming code of your own, but there's little value and high
> risk in any mass change just to apply the style guidelines.
>


[Python-Dev] pep 7

2015-03-20 Thread Neil Girdhar
If ever someone wants to clean up the repository to conform to PEP 7, I
wrote a program that catches a couple hundred PEP 7 violations in ./Python
alone (1400 in the whole codebase):

import os
import re

def grep(path, regex):
    reg_obj = re.compile(regex, re.M)
    res = []
    for root, dirs, fnames in os.walk(path):
        for fname in fnames:
            if fname.endswith('.c'):
                path = os.path.join(root, fname)
                with open(path) as f:
                    data = f.read()
                for m in reg_obj.finditer(data):
                    line_number = sum(c == '\n'
                                      for c in data[:m.start()]) + 1
                    res.append("{}: {}".format(path, line_number))
    return res

for pattern in [
        r'^\s*\|\|',
        r'^\s*\&\&',
        r'} else {',
        r'\ > ]:
    print("Searching for", pattern)
    print("\n".join(grep('.', pattern)))


Re: [Python-Dev] Missing *-unpacking generalizations (issue 2292)

2015-03-20 Thread Neil Girdhar
Wow, this is an excellent review.  Thank you.

My only question is with respect to this:

I think there ought to be two opcodes; one for unpacking maps in
function calls and another for literals. The whole function location
thing is rather hideous.

What are the two opcodes?  BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL?

The first takes (n), the number of maps that it will merge, and the second
does the same but also accepts (function_call_location) for the purpose of
error reporting?
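For context, here is a tiny illustration (plain PEP 448 syntax, so it needs
an interpreter with the patch applied or any later Python) of the semantic
difference the call-site variant has to handle: duplicate keys are resolved
silently in a dict display, but must raise a TypeError that names the called
function in a call:

def f(**kwargs):
    return kwargs

d1 = {'a': 1}
d2 = {'a': 2}

print({**d1, **d2})        # {'a': 2} -- the later value silently wins

try:
    f(**d1, **d2)          # duplicate keyword argument 'a'
except TypeError as exc:
    print(exc)             # message names f(), hence the call-site opcode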

Thanks,

Neil



On Thu, Mar 19, 2015 at 1:53 PM,  wrote:

> It's a start.
>
> There need to be documentation updates.
>
> There are still unrelated style changes in compile.c that should be
> reverted.
>
>
> http://bugs.python.org/review/2292/diff/14152/Include/opcode.h
> File Include/opcode.h (right):
>
> http://bugs.python.org/review/2292/diff/14152/Include/opcode.h#newcode71
> Include/opcode.h:71: #define DELETE_GLOBAL  98
> This file should not be manually changed. Rather,
> Tools/scripts/generate_opcode_h.py should be run.
>
> http://bugs.python.org/review/2292/diff/14152/Lib/importlib/_bootstrap.py
> File Lib/importlib/_bootstrap.py (right):
>
>
> http://bugs.python.org/review/2292/diff/14152/Lib/importlib/_bootstrap.py#newcode428
> Lib/importlib/_bootstrap.py:428: MAGIC_NUMBER = (3321).to_bytes(2,
> 'little') + b'\r\n'
> As the comment above indicates, the magic value should be incremented by
> 10 not 1. Also, the comment needs to be updated.
>
> http://bugs.python.org/review/2292/diff/14152/Lib/test/test_ast.py
> File Lib/test/test_ast.py (right):
>
>
> http://bugs.python.org/review/2292/diff/14152/Lib/test/test_ast.py#newcode937
> Lib/test/test_ast.py:937: main()
> Why is this here?
>
> http://bugs.python.org/review/2292/diff/14152/Lib/test/test_parser.py
> File Lib/test/test_parser.py (right):
>
>
> http://bugs.python.org/review/2292/diff/14152/Lib/test/test_parser.py#newcode334
> Lib/test/test_parser.py:334: self.check_expr('{**{}, 3:4, **{5:6,
> 7:8}}')
> Should there be tests for the new function call syntax, too?
>
> http://bugs.python.org/review/2292/diff/14152/Python/ast.c
> File Python/ast.c (right):
>
> http://bugs.python.org/review/2292/diff/14152/Python/ast.c#newcode1746
> Python/ast.c:1746: ast_error(c, n, "iterable unpacking cannot be used in
> comprehension");
> |n| should probably be |ch|
>
> http://bugs.python.org/review/2292/diff/14152/Python/ast.c#newcode2022
> Python/ast.c:2022: if (TYPE(CHILD(ch, 0)) == DOUBLESTAR)
> int is_dict = TYPE(CHILD(ch, 0)) == DOUBLESTAR;
>
> http://bugs.python.org/review/2292/diff/14152/Python/ast.c#newcode2026
> Python/ast.c:2026: && TYPE(CHILD(ch, 1)) == COMMA)) {
> boolean operators should be on the previous line
>
> http://bugs.python.org/review/2292/diff/14152/Python/ast.c#newcode2031
> Python/ast.c:2031: && TYPE(CHILD(ch, 1)) == comp_for) {
> ditto
>
> http://bugs.python.org/review/2292/diff/14152/Python/ast.c#newcode2036
> Python/ast.c:2036: && TYPE(CHILD(ch, 3 - is_dict)) == comp_for) {
> ditto
>
> http://bugs.python.org/review/2292/diff/14152/Python/ceval.c
> File Python/ceval.c (right):
>
> http://bugs.python.org/review/2292/diff/14152/Python/ceval.c#newcode2403
> Python/ceval.c:2403: as_tuple = PyObject_CallFunctionObjArgs(
> Use PyList_AsTuple.
>
> http://bugs.python.org/review/2292/diff/14152/Python/ceval.c#newcode2498
> Python/ceval.c:2498: TARGET(BUILD_MAP_UNPACK) {
> I think there ought to be two opcodes; one for unpacking maps in
> function calls and another for literals. The whole function location
> thing is rather hideous.
>
> http://bugs.python.org/review/2292/diff/14152/Python/ceval.c#newcode2526
> Python/ceval.c:2526: if (PySet_Size(intersection)) {
> Probably this would be faster with PySet_GET_SIZE(so).
>
> http://bugs.python.org/review/2292/diff/14152/Python/compile.c
> File Python/compile.c (right):
>
> http://bugs.python.org/review/2292/diff/14152/Python/compile.c#newcode2931
> Python/compile.c:2931: asdl_seq_SET(elts, i, elt->v.Starred.value);
> The compiler should not be mutating the AST.
>
> http://bugs.python.org/review/2292/diff/14152/Python/compile.c#newcode3088
> Python/compile.c:3088: int i, nseen, nkw = 0;
> Many of these should probably be Py_ssize_t.
>
> http://bugs.python.org/review/2292/diff/14152/Python/symtable.c
> File Python/symtable.c (right):
>
> http://bugs.python.org/review/2292/diff/14152/Python/symtable.c#newcode1372
> Python/symtable.c:1372: if (e != NULL) {
> Please fix the callers, so they don't pass NULL here.
>
> http://bugs.python.org/review/2292/
>


Re: [Python-Dev] PEP 448 review

2015-03-16 Thread Neil Girdhar
Hi everyone,

I was wondering what is left with the PEP 448 (
http://bugs.python.org/issue2292) code review?  Big thanks to Benjamin,
Ethan, and Serhiy for reviewing some (all?) of the code.  What is the next
step of this process?

Thanks,

Neil


On Sun, Mar 8, 2015 at 4:38 PM, Neil Girdhar  wrote:

> Anyone have time to do a code review?
>
> http://bugs.python.org/issue2292
>
>
> On Mon, Mar 2, 2015 at 4:54 PM, Neil Girdhar 
> wrote:
>
>> It's from five days ago.  I asked Joshua to take a look at something, but
>> I guess he is busy.
>>
>> Best,
>>
>> Neil
>>
>> —
>>
>> The latest file there is from Feb 26, while your message that the patch
>> was ready for review is from today -- so is the
>> patch from five days ago the most recent?
>>
>> --
>> ~Ethan~
>>
>> On Mon, Mar 2, 2015 at 3:18 PM, Neil Girdhar 
>> wrote:
>>
>>> http://bugs.python.org/issue2292
>>>
>>> On Mon, Mar 2, 2015 at 3:17 PM, Victor Stinner >> > wrote:
>>>
>>>> Where is the patch?
>>>>
>>>> Victor
>>>>
>>>> Le lundi 2 mars 2015, Neil Girdhar  a écrit :
>>>>
>>>> Hi everyone,
>>>>>
>>>>> The patch is ready for review now, and I should have time this week to
>>>>> make changes and respond to comments.
>>>>>
>>>>> Best,
>>>>>
>>>>> Neil
>>>>>
>>>>> On Wed, Feb 25, 2015 at 2:42 PM, Guido van Rossum 
>>>>> wrote:
>>>>>
>>>>>> I'm back, I've re-read the PEP, and I've re-read the long thread with
>>>>>> "(no subject)".
>>>>>>
>>>>>> I think Georg Brandl nailed it:
>>>>>>
>>>>>> """
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *I like the "sequence and dict flattening" part of the PEP, mostly
>>>>>> because itis consistent and should be easy to understand, but the
>>>>>> comprehension syntaxenhancements seem to be bad for readability and
>>>>>> "comprehending" what the codedoes.The call syntax part is a mixed bag on
>>>>>> the one hand it is nice to be consistent with the extended possibilities 
>>>>>> in
>>>>>> literals (flattening), but on the other hand there would be small but
>>>>>> annoying inconsistencies anyways (e.g. the duplicate kwarg case above).*
>>>>>> """
>>>>>>
>>>>>> Greg Ewing followed up explaining that the inconsistency between dict
>>>>>> flattening and call syntax is inherent in the pre-existing different 
>>>>>> rules
>>>>>> for dicts vs. keyword args: {'a':1, 'a':2} results in {'a':2}, while 
>>>>>> f(a=1,
>>>>>> a=2) is an error. (This form is a SyntaxError; the dynamic case f(a=1,
>>>>>> **{'a': 1}) is a TypeError.)
>>>>>>
>>>>>> For me, allowing f(*a, *b) and f(**d, **e) and all the other
>>>>>> combinations for function calls proposed by the PEP is an easy +1 -- 
>>>>>> it's a
>>>>>> straightforward extension of the existing pattern, and anybody who knows
>>>>>> what f(x, *a) does will understand f(x, *a, y, *b). Guessing what f(**d,
>>>>>> **e) means shouldn't be hard either. Understanding the edge case for
>>>>>> duplicate keys with f(**d, **e) is a little harder, but the error 
>>>>>> messages
>>>>>> are pretty clear, and it is not a new edge case.
>>>>>>
>>>>>> The sequence and dict flattening syntax proposals are also clean and
>>>>>> logical -- we already have *-unpacking on the receiving side, so allowing
>>>>>> *x in tuple expressions reads pretty naturally (and the similarity with 
>>>>>> *a
>>>>>> in argument lists certainly helps). From here, having [a, *x, b, *y] is
>>>>>> also natural, and then the extension to other displays is natural: {a, 
>>>>>> *x,
>>>>>> b, *y} and {a:1, **d, b:2, **e}. This, too, gets a +1 from me.
>>>>>

Re: [Python-Dev] boxing and unboxing data types

2015-03-09 Thread Neil Girdhar
Totally agree
On 9 Mar 2015 19:22, "Nick Coghlan"  wrote:

>
> On 10 Mar 2015 06:51, "Neil Girdhar"  wrote:
> >
> >
> >
> > On Mon, Mar 9, 2015 at 12:54 PM, Serhiy Storchaka 
> wrote:
> >>
> >> On 09.03.15 17:48, Neil Girdhar wrote:
> >>>
> >>> So you agree that the ideal solution is composition, but you prefer
> >>> inheritance in order to not break code?
> >>
> >>
> >> Yes, I agree. There is two advantages in the inheritance: larger
> backward compatibility and simpler implementation.
> >>
> >
> > Inheritance might be more backwards compatible, but I believe that you
> should check how much code is genuinely not restricted to the idealized flags
> interface.   It's not worth talking about "simpler implementation" since
> the two solutions differ by only a couple dozen lines.
>
> We literally can't do this, as the vast majority of Python code in the
> world is locked up behind institutional firewalls or has otherwise never
> been published. The open source stuff is merely the tip of a truly enormous
> iceberg.
>
> If we want to *use* IntFlags in the standard library (and that's the only
> pay-off significant enough to justify having it in the standard library),
> then it needs to inherit from int.
>
> However, cloning the full enum module architecture to create
> flags.FlagsMeta, flags.Flags and flags.IntFlags would make sense to me.
>
> It would also make sense to try that idea out on PyPI for a while before
> incorporating it into the stdlib.
>
> Regards,
> Nick.
>
> >
> > On the other hand, composition is better design.  It prevents you from
> making mistakes like adding to flags and having carries, or using flags in
> an unintended way.
> >
> >>>
> >>> Then,I think the big question
> >>> is how much code would actually break if you presented the ideal
> >>> interface.  I imagine that 99% of the code using flags only uses __or__
> >>> to compose and __and__, __invert__ to erase flags.
> >>
> >>
> >> I don't know and don't want to guess. Let just follow the way of bool
> and IntEnum. When users will be encouraged to use IntEnum and IntFlags
> instead of plain ints we could consider the idea of dropping inheritance of
> bool, IntEnum and IntFlags from int. This is not near future.
> >
> >
> > I think it's the other way around.  You should typically start with the
> modest interface and add methods as you need.  If you start with full blown
> inheritance, you will find it only increasingly more difficult to remove
> methods in changing your solution.  Using inheritance instead of
> composition is one of the most common errors in object-oriented
> programming, and I get the impression from your other paragraph that you're
> seduced by the slightly shorter code.  I don't think it's worth giving in
> to that without proof that composition will actually break a significant
> amount of code.
> >
> > Regarding IntEnum — that should inherit from int since they are truly
> just integer constants.  It's too late for bool; that ship has sailed
> unfortunately.
> >
> >>
> >>
> >>
> >>> > Here's another reason.  What if someone wants to use an IntFlags
> object,
> >>> > but wants to use a fixed width type for storage, say
> numpy.int32?   Why
> >>> > shouldn't they be able to do that?  By using composition, you
> can easily
> >>> > provide such an option.
> >>> You can design abstract interface Flags that can be combined with
> >>> int or other type. But why you want to use numpy.int32 as storage?
> >>> This doesn't save much memory, because with composition the
> IntFlags
> >>> class weighs more than int subclass.
> >>> Maybe you're storing a bunch of flags in a numpy array having dtype
> >>> np.int32?  It's contrived, I agree.
> >>
> >>
> >> I afraid that composition will not help you with this. Can numpy array
> pack int-like objects into fixed-width integer array and then restore
> original type on unboxing?
> >
> >
> > You're right.
> >>
> >>
> >>
> >>
> >> ___
> >> Python-Dev mailing list
> >> Python-Dev@python.org
> >> https://mail.python.org/mailman/listinfo/python-dev
> >> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com
> >
> >
> >
> > ___
> > Python-Dev mailing list
> > Python-Dev@python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com
> >
>


Re: [Python-Dev] boxing and unboxing data types

2015-03-09 Thread Neil Girdhar
On Mon, Mar 9, 2015 at 12:54 PM, Serhiy Storchaka 
wrote:

> On 09.03.15 17:48, Neil Girdhar wrote:
>
>> So you agree that the ideal solution is composition, but you prefer
>> inheritance in order to not break code?
>>
>
> Yes, I agree. There are two advantages to inheritance: broader backward
> compatibility and a simpler implementation.
>
>
Inheritance might be more backwards compatible, but I believe that you
should check how much code is genuinely not restricted to the idealized flags
interface.  It's not worth talking about "simpler implementation" since
the two solutions differ by only a couple dozen lines.

On the other hand, composition is better design.  It prevents you from
making mistakes like adding to flags and having carries, or using flags in
an unintended way.
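To make that concrete, here is a rough sketch of the composition approach (a
hypothetical class, not a proposed stdlib API): only the flag-like operations
are exposed, so arithmetic like flags + flag simply isn't there to go wrong:

class Flags:
    __slots__ = ('_value',)

    def __init__(self, value=0):
        self._value = int(value)

    def __or__(self, other):           # compose flags
        return type(self)(self._value | int(other))

    def __and__(self, other):          # mask/test flags
        return type(self)(self._value & int(other))

    def __invert__(self):              # erase flags (a real class would mask
        return type(self)(~self._value)    # the result to the defined bits)

    def __int__(self):                 # explicit conversion when an int is needed
        return self._value

    def __repr__(self):
        return 'Flags({:#x})'.format(self._value)

READ, WRITE = Flags(1), Flags(2)
mode = READ | WRITE                    # Flags(0x3)
# mode + READ raises TypeError: there is no __add__, by design.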


>  Then,I think the big question
>> is how much code would actually break if you presented the ideal
>> interface.  I imagine that 99% of the code using flags only uses __or__
>> to compose and __and__, __invert__ to erase flags.
>>
>
> I don't know and don't want to guess. Let's just follow the way of bool and
> IntEnum. When users will be encouraged to use IntEnum and IntFlags instead
> of plain ints we could consider the idea of dropping inheritance of bool,
> IntEnum and IntFlags from int. This is not near future.


I think it's the other way around.  You should typically start with the
modest interface and add methods as you need.  If you start with full-blown
inheritance, you will find it increasingly difficult to remove
methods as your solution changes.  Using inheritance instead of
composition is one of the most common errors in object-oriented
programming, and I get the impression from your other paragraph that you're
seduced by the slightly shorter code.  I don't think it's worth giving in
to that without proof that composition would actually break a significant
amount of code.

Regarding IntEnum — that should inherit from int since they are truly just
integer constants.  It's too late for bool; that ship has sailed
unfortunately.


>
>
>  > Here's another reason.  What if someone wants to use an IntFlags
>> object,
>> > but wants to use a fixed width type for storage, say numpy.int32?
>>  Why
>> > shouldn't they be able to do that?  By using composition, you can
>> easily
>> > provide such an option.
>> You can design abstract interface Flags that can be combined with
>> int or other type. But why you want to use numpy.int32 as storage?
>> This doesn't save much memory, because with composition the IntFlags
>> class weighs more than int subclass.
>> Maybe you're storing a bunch of flags in a numpy array having dtype
>> np.int32?  It's contrived, I agree.
>>
>
> I'm afraid that composition will not help you with this. Can a numpy array
> pack int-like objects into a fixed-width integer array and then restore the
> original type on unboxing?


You're right.

>
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mistersheik%40gmail.com
>


Re: [Python-Dev] Tunning binary insertion sort algorithm in Timsort.

2015-03-09 Thread Neil Girdhar
It may be that the comparison that you do is between two elements that are
almost always in the same cache line whereas the binary search might often
incur a cache miss.
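For reference, here is one possible reading of the proposed tweak as plain
Python (the actual patch is against Java's ComparableTimSort, and the helper
name here is made up): compare the new element with the previously inserted
one first, so the binary search only has to look at one side of its known
position.

import bisect

def tweaked_binary_insertion_sort(items):
    result = []
    prev_pos = 0                      # position of the last inserted element
    for x in items:
        if not result:
            result.append(x)
            continue
        if x >= result[prev_pos]:     # one extra comparison up front...
            lo, hi = prev_pos + 1, len(result)
        else:                         # ...restricts the binary search range
            lo, hi = 0, prev_pos
        prev_pos = bisect.bisect_right(result, x, lo, hi)
        result.insert(prev_pos, x)
    return result

assert tweaked_binary_insertion_sort([3, 1, 4, 1, 5]) == [1, 1, 3, 4, 5]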

On Mon, Mar 9, 2015 at 2:49 PM, nha pham  wrote:

> I do not know exactly, one thing I can imagine is: it turns the worst case
> of binary insertion sort to best case.
> With sorted array in range of 32 or 64 items, built from zero element. The
> new element you put into the sorted list has a high chance of being the
> smallest or the the highest of the sorted list (or nearly highest or nearly
> smallest)
>
> If that case happen, the old binary insertion sort will have the
> investigate all the list, while with my idea, it just have to compare more
> 1-2 times.
> I will try to run more test an more thinking to make sure though.
>
> On Mon, Mar 9, 2015 at 11:48 AM, nha pham  wrote:
>
>> I do not know exactly, one thing I can imagine is: it turns the worst
>> case of binary insertion sort to best case.
>> With sorted array in range of 32 or 64 items, built from zero element.
>> The new element you put into the sorted list has a high chance of being the
>> smallest or the the highest of the sorted list (or nearly highest or nearly
>> smallest)
>>
>> If that case happen, the old binary insertion sort will have the
>> investigate all the list, while with my idea, it just have to compare more
>> 1-2 times.
>> I will try to run more test an more thinking to make sure though.
>>
>>
>>
>> On Mon, Mar 9, 2015 at 10:39 AM, Isaac Schwabacher > > wrote:
>>
>>> On 15-03-08, nha pham
>>>  wrote:
>>> >
>>> > We can optimize the TimSort algorithm by optimizing its binary
>>> insertion sort.
>>> >
>>> > The current version of binary insertion sort use this idea:
>>> >
>>> > Use binary search to find a final position in sorted list for a new
>>> element X. Then insert X to that location.
>>> >
>>> > I suggest another idea:
>>> >
>>> > Use binary search to find a final postion in sorted list for a new
>>> element X. Before insert X to that location, compare X with its next
>>> element.
>>> >
>>> > For the next element, we already know if it is lower or bigger than X,
>>> so we can reduce the search area to the left side or on the right side of X
>>> in the sorted list.
>>>
>>> I don't understand how this is an improvement, since with binary search
>>> the idea is that each comparison cuts the remaining list to search in half;
>>> i.e., each comparison yields one bit of information. Here, you're spending
>>> a comparison to cut the list to search at the element you just inserted,
>>> which is probably not right in the middle. If you miss the middle, you're
>>> getting on average less than a full bit of information from your
>>> comparison, so you're not reducing the remaining search space by as much as
>>> you would be if you just compared to the element in the middle of the list.
>>>
>>> > I have applied my idea on java.util. ComparableTimSort.sort() and
>>> testing. The execute time is reduced by 2%-6% with array of random integer.
>>>
>>> For all that, though, experiment trumps theory...
>>>
>>> > Here is detail about algorithm and testing:
>>> https://github.com/nhapq/Optimize_binary_insertion_sort
>>> >
>>> > Sincerely.
>>> >
>>> > phqnha
>>>
>>
>>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] boxing and unboxing data types

2015-03-09 Thread Neil Girdhar
On Mon, Mar 9, 2015 at 11:11 AM, Serhiy Storchaka 
wrote:

> On Monday, 09-Mar-2015 at 10:18:50, you wrote:
> > On Mon, Mar 9, 2015 at 10:10 AM, Serhiy Storchaka 
> wrote:
> > > On Monday, 09-Mar-2015 at 09:52:01, you wrote:
> > > > On Mon, Mar 9, 2015 at 2:07 AM, Serhiy Storchaka <
> storch...@gmail.com>
> > > > > And to be ideal drop-in replacement IntEnum should override such
> methods
> > > > > as __eq__ and __hash__ (so it could be used as mapping key). If
> all methods
> > > > > should be overridden to quack as int, why not take an int?
> > > >
> > > > You're absolutely right that if *all* the methods should be
> > > > overridden to
> > > > quack as int, then you should subclass int (the Liskov substitution
> > > > principle).  But all methods should not be overridden — mainly the
> methods
> > > > you overrode in your patch should be exposed.  Here is a list of
> methods on
> > > > int that should not be on IntFlags in my opinion (give or take a
> couple):
> > > >
> > > > __abs__, __add__, __delattr__, __divmod__, __float__, __floor__,
> > > > __floordiv__, __index__, __lshift__, __mod__, __mul__, __pos__,
> __pow__,
> > > > __radd__, __rdivmod__, __rfloordiv__, __rlshift__, __rmod__,
> __rmul__,
> > > > __round__, __rpow__, __rrshift__, __rshift__, __rsub__, __rtruediv__,
> > > > __sub__, __truediv__, __trunc__, conjugate, denominator, imag,
> numerator,
> > > > real.
> > > >
> > > > I don't think __index__ should be exposed either since are you
> really going
> > > > to slice a list using IntFlags?  Really?
> > >
> > > Definitely __index__ should be exposed. __int__ is for lossy
> conversion to int
> > > (as in float). __index__ is for lossless conversion.
> >
> > Is it?  __index__ promises lossless conversion, but __index__ is *for*
> > indexing.
>
> In spite of its name, it is for any lossless conversion.
>

You're right.

>
> > > __add__ should be exposed because some code can use + instead of | for
> > > combining flags. But it shouldn't preserve the type, because this is not
> > > the recommended way.
> >
> > I think it should be blocked because it can lead to all kinds of weird
> > bugs.  If the flag is already set and you add it again, it silently spills
> > over into other flags.  This is a mistake that a good interface prevents.
>
> I think this is a case when backward compatibility has larger weight.
>
>
So you agree that the ideal solution is composition, but you prefer
inheritance in order to not break code?  Then, I think the big question is
how much code would actually break if you presented the ideal interface.  I
imagine that 99% of the code using flags only uses __or__ to compose, and
__and__ with __invert__ to erase flags.


> > > For the same reason I think __lshift__, __rshift__, __sub__,
> > > __mul__, __divmod__, __floordiv__, __mod__, etc should be exposed too.
> So the
> > > majority of the methods should be exposed, and there is a risk that we
> loss
> > > something.
> >
> > I totally disagree with all of those.
> >
> > > For good compatibility with Python code IntFlags should expose also
> > > __subclasscheck__ or __subclasshook__. And when we are at this point,
> why not
> > > use int subclass?
> >
> > Here's another reason.  What if someone wants to use an IntFlags object,
> > but wants to use a fixed width type for storage, say numpy.int32?   Why
> > shouldn't they be able to do that?  By using composition, you can easily
> > provide such an option.
>
> You can design abstract interface Flags that can be combined with int or
> other type. But why you want to use numpy.int32 as storage? This doesn't
> save much memory, because with composition the IntFlags class weighs more
> than int subclass.
>
>
Maybe you're storing a bunch of flags in a numpy array having dtype
np.int32?  It's contrived, I agree.


Re: [Python-Dev] boxing and unboxing data types

2015-03-09 Thread Neil Girdhar
On Mon, Mar 9, 2015 at 11:46 AM, Steven D'Aprano 
wrote:

> On Mon, Mar 09, 2015 at 09:52:01AM -0400, Neil Girdhar wrote:
>
> > Here is a list of methods on
> > int that should not be on IntFlags in my opinion (give or take a couple):
> >
> > __abs__, __add__, __delattr__, __divmod__, __float__, __floor__,
> > __floordiv__, __index__, __lshift__, __mod__, __mul__, __pos__, __pow__,
> > __radd__, __rdivmod__, __rfloordiv__, __rlshift__, __rmod__, __rmul__,
> > __round__, __rpow__, __rrshift__, __rshift__, __rsub__, __rtruediv__,
> > __sub__, __truediv__, __trunc__, conjugate, denominator, imag, numerator,
> > real.
> >
> > I don't think __index__ should be exposed either since are you really
> going
> > to slice a list using IntFlags?  Really?
>
> In what way is this an *Int*Flags object if it is nothing like an int?
> It sounds like what you want is a bunch of Enum inside a set with a custom
> __str__, not IntFlags.
>
>
It doesn't matter what you call it.  I believe the goal of this is to have
a flags object with flags operations and pretty-printing.  It makes more
sense to me to decide the interface and then the implementation.

>
> --
> Steve
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com
>


Re: [Python-Dev] boxing and unboxing data types

2015-03-09 Thread Neil Girdhar
On Mon, Mar 9, 2015 at 2:07 AM, Serhiy Storchaka 
wrote:

> On 09.03.15 06:33, Ethan Furman wrote:
>
>> I guess it could boil down to:  if IntEnum was not based on 'int', but
>> instead had the __int__ and __index__ methods
>> (plus all the other __xxx__ methods that int has), would it still be a
>> drop-in replacement for actual ints?  Even when
>> being used to talk to non-Python libs?
>>
>
> If you don't call isinstance(x, int) (PyLong_Check* in C).
>
> Most conversions from Python to C implicitly call __index__ or __int__,
> but unfortunately not all.
>
> >>> float(Thin(42))
> 42.0
> >>> float(Wrap(42))
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: float() argument must be a string or a number, not 'Wrap'
>
> >>> '%*s' % (Thin(5), 'x')
> 'x'
> >>> '%*s' % (Wrap(5), 'x')
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: * wants int
>
> >>> OSError(Thin(2), 'No such file or directory')
> FileNotFoundError(2, 'No such file or directory')
> >>> OSError(Wrap(2), 'No such file or directory')
> OSError(<__main__.Wrap object at 0xb6fe81ac>, 'No such file or directory')
>
> >>> re.match('(x)', 'x').group(Thin(1))
> 'x'
> >>> re.match('(x)', 'x').group(Wrap(1))
> Traceback (most recent call last):
>   File "", line 1, in 
> IndexError: no such group
>
> And to be ideal drop-in replacement IntEnum should override such methods
> as __eq__ and __hash__ (so it could be used as mapping key). If all methods
> should be overridden to quack as int, why not take an int?
>
>
You're absolutely right that if *all* the methods should be overridden to
quack as int, then you should subclass int (the Liskov substitution
principle).  But all methods should not be overridden — mainly the methods
you overrode in your patch should be exposed.  Here is a list of methods on
int that should not be on IntFlags in my opinion (give or take a couple):

__abs__, __add__, __delattr__, __divmod__, __float__, __floor__,
__floordiv__, __index__, __lshift__, __mod__, __mul__, __pos__, __pow__,
__radd__, __rdivmod__, __rfloordiv__, __rlshift__, __rmod__, __rmul__,
__round__, __rpow__, __rrshift__, __rshift__, __rsub__, __rtruediv__,
__sub__, __truediv__, __trunc__, conjugate, denominator, imag, numerator,
real.

I don't think __index__ should be exposed either; are you really going
to slice a list using IntFlags?  Really?
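For what it's worth, here is a small illustration of what exposing __index__
would actually buy (a Wrap-style class along the lines of Serhiy's examples;
the class itself is made up):

import operator

class Wrap:
    # A non-int wrapper that advertises a lossless integer conversion.
    def __init__(self, value):
        self._value = value

    def __index__(self):
        return self._value

w = Wrap(2)
print(operator.index(w))     # 2     -- the generic lossless-conversion hook
print(hex(w))                # '0x2' -- hex()/oct()/bin() accept __index__
print(['a', 'b', 'c'][w])    # 'c'   -- and so does sequence indexing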


>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> mistersheik%40gmail.com
>


Re: [Python-Dev] PEP 448 review

2015-03-08 Thread Neil Girdhar
Anyone have time to do a code review?

http://bugs.python.org/issue2292

On Mon, Mar 2, 2015 at 4:54 PM, Neil Girdhar  wrote:

> It's from five days ago.  I asked Joshua to take a look at something, but
> I guess he is busy.
>
> Best,
>
> Neil
>
> —
>
> The latest file there is from Feb 26, while your message that the patch
> was ready for review is from today -- so is the
> patch from five days ago the most recent?
>
> --
> ~Ethan~
>
> On Mon, Mar 2, 2015 at 3:18 PM, Neil Girdhar 
> wrote:
>
>> http://bugs.python.org/issue2292
>>
>> On Mon, Mar 2, 2015 at 3:17 PM, Victor Stinner 
>> wrote:
>>
>>> Where is the patch?
>>>
>>> Victor
>>>
>>> Le lundi 2 mars 2015, Neil Girdhar  a écrit :
>>>
>>> Hi everyone,
>>>>
>>>> The patch is ready for review now, and I should have time this week to
>>>> make changes and respond to comments.
>>>>
>>>> Best,
>>>>
>>>> Neil
>>>>
>>>> On Wed, Feb 25, 2015 at 2:42 PM, Guido van Rossum 
>>>> wrote:
>>>>
>>>>> I'm back, I've re-read the PEP, and I've re-read the long thread with
>>>>> "(no subject)".
>>>>>
>>>>> I think Georg Brandl nailed it:
>>>>>
>>>>> """
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> *I like the "sequence and dict flattening" part of the PEP, mostly
>>>>> because itis consistent and should be easy to understand, but the
>>>>> comprehension syntaxenhancements seem to be bad for readability and
>>>>> "comprehending" what the codedoes.The call syntax part is a mixed bag on
>>>>> the one hand it is nice to be consistent with the extended possibilities 
>>>>> in
>>>>> literals (flattening), but on the other hand there would be small but
>>>>> annoying inconsistencies anyways (e.g. the duplicate kwarg case above).*
>>>>> """
>>>>>
>>>>> Greg Ewing followed up explaining that the inconsistency between dict
>>>>> flattening and call syntax is inherent in the pre-existing different rules
>>>>> for dicts vs. keyword args: {'a':1, 'a':2} results in {'a':2}, while 
>>>>> f(a=1,
>>>>> a=2) is an error. (This form is a SyntaxError; the dynamic case f(a=1,
>>>>> **{'a': 1}) is a TypeError.)
>>>>>
>>>>> For me, allowing f(*a, *b) and f(**d, **e) and all the other
>>>>> combinations for function calls proposed by the PEP is an easy +1 -- it's 
>>>>> a
>>>>> straightforward extension of the existing pattern, and anybody who knows
>>>>> what f(x, *a) does will understand f(x, *a, y, *b). Guessing what f(**d,
>>>>> **e) means shouldn't be hard either. Understanding the edge case for
>>>>> duplicate keys with f(**d, **e) is a little harder, but the error messages
>>>>> are pretty clear, and it is not a new edge case.
>>>>>
>>>>> The sequence and dict flattening syntax proposals are also clean and
>>>>> logical -- we already have *-unpacking on the receiving side, so allowing
>>>>> *x in tuple expressions reads pretty naturally (and the similarity with *a
>>>>> in argument lists certainly helps). From here, having [a, *x, b, *y] is
>>>>> also natural, and then the extension to other displays is natural: {a, *x,
>>>>> b, *y} and {a:1, **d, b:2, **e}. This, too, gets a +1 from me.
>>>>>
>>>>> So that leaves comprehensions. IIRC, during the development of the
>>>>> patch we realized that f(*x for x in xs) is sufficiently ambiguous that we
>>>>> decided to disallow it -- note that f(x for x in xs) is already somewhat 
>>>>> of
>>>>> a special case because an argument can only be a "bare" generator
>>>>> expression if it is the only argument. The same reasoning doesn't apply 
>>>>> (in
>>>>> that form) to list, set and dict comprehensions -- while f(x for x in xs)
>>>>> is identical in meaning to f((x for x in xs)), [x for x in xs] is NOT the
>>>>> same as [(x for x in xs)] (that's a list of one element, an

Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Neil Girdhar
On Fri, Mar 6, 2015 at 1:11 PM, Brett Cannon  wrote:

>
>
> On Fri, Mar 6, 2015 at 1:03 PM Mark Shannon  wrote:
>
>>
>> On 06/03/15 16:34, Brett Cannon wrote:
>> > Over on the import-sig I proposed eliminating the concept of .pyo files
>> > since they only signify that /some/ optimization took place, not
>> > /what/ optimizations took place. Everyone on the SIG was positive with
>> > the idea so I wrote a PEP, got positive feedback from the SIG again, and
>> > so now I present to you PEP 488 for discussion.
>> >
>> [snip]
>>
>> Historically -O and -OO have been the antithesis of optimisation, they
>> change the behaviour of the program with no noticeable effect on
>> performance.
>> If a change is to be made, why not just drop .pyo files and be done with
>> it?
>>
>
> I disagree with your premise that .pyo files don't have a noticeable
> effect on performance. If you don't use asserts a lot then there is no
> effect, but if you use them heavily or have them perform expensive
> calculations then there is an impact. And the dropping of docstrings does
> have an impact on memory usage when you use Python at scale.
>
> You're also assuming that we will never develop an AST optimizer that will
> go beyond what the peepholer can do based on raw bytecode, or something
> that involves a bit of calculation and thus something you wouldn't want to
> do at startup.
>

I don't want to speak for him, but you're going to get the best results
optimizing ASTs at runtime, which is what I thought he was suggesting.
Trying to optimize Python at compile time is setting your sights really
low.  You have so little information at that point.
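As a side note on how small the existing optimizations are, here is a quick
sketch using compile()'s optimize argument (0 is the default, 1 corresponds
to -O, 2 to -OO):

source = '''
def f(x):
    "docstring"
    assert x > 0, "x must be positive"
    return x * 2
'''

for level in (0, 1, 2):
    ns = {}
    exec(compile(source, '<demo>', 'exec', optimize=level), ns)
    f = ns['f']
    has_doc = f.__doc__ is not None
    try:
        f(-1)
        asserts = 'asserts stripped'
    except AssertionError:
        asserts = 'asserts active'
    print(level, has_doc, asserts)

# 0 True asserts active
# 1 True asserts stripped
# 2 False asserts stripped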


>
>
>>
>> Any worthwhile optimisation needs to be done at runtime or involve much
>> more than tweaking bytecode.
>>
>
> I disagree again. If you do something like whole program analysis and want
> to use that to optimize something, you will surface that through bytecode
> and not editing the source. So while you are doing "much more than tweaking
> bytecode" externally to Python, you still have to surface to the
> interpreter through bytecode.
>


Re: [Python-Dev] PEP 448 review

2015-03-02 Thread Neil Girdhar
It's from five days ago.  I asked Joshua to take a look at something, but I
guess he is busy.

Best,

Neil

—

The latest file there is from Feb 26, while your message that the patch was
ready for review is from today -- so is the
patch from five days ago the most recent?

-- 
~Ethan~

On Mon, Mar 2, 2015 at 3:18 PM, Neil Girdhar  wrote:

> http://bugs.python.org/issue2292
>
> On Mon, Mar 2, 2015 at 3:17 PM, Victor Stinner 
> wrote:
>
>> Where is the patch?
>>
>> Victor
>>
>> On Monday, March 2, 2015, Neil Girdhar wrote:
>>
>> Hi everyone,
>>>
>>> The patch is ready for review now, and I should have time this week to
>>> make changes and respond to comments.
>>>
>>> Best,
>>>
>>> Neil
>>>
>>> On Wed, Feb 25, 2015 at 2:42 PM, Guido van Rossum 
>>> wrote:
>>>
>>>> I'm back, I've re-read the PEP, and I've re-read the long thread with
>>>> "(no subject)".
>>>>
>>>> I think Georg Brandl nailed it:
>>>>
>>>> """
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *I like the "sequence and dict flattening" part of the PEP, mostly
>>>> because itis consistent and should be easy to understand, but the
>>>> comprehension syntaxenhancements seem to be bad for readability and
>>>> "comprehending" what the codedoes.The call syntax part is a mixed bag on
>>>> the one hand it is nice to be consistent with the extended possibilities in
>>>> literals (flattening), but on the other hand there would be small but
>>>> annoying inconsistencies anyways (e.g. the duplicate kwarg case above).*
>>>> """
>>>>
>>>> Greg Ewing followed up explaining that the inconsistency between dict
>>>> flattening and call syntax is inherent in the pre-existing different rules
>>>> for dicts vs. keyword args: {'a':1, 'a':2} results in {'a':2}, while f(a=1,
>>>> a=2) is an error. (This form is a SyntaxError; the dynamic case f(a=1,
>>>> **{'a': 1}) is a TypeError.)
>>>>
>>>> For me, allowing f(*a, *b) and f(**d, **e) and all the other
>>>> combinations for function calls proposed by the PEP is an easy +1 -- it's a
>>>> straightforward extension of the existing pattern, and anybody who knows
>>>> what f(x, *a) does will understand f(x, *a, y, *b). Guessing what f(**d,
>>>> **e) means shouldn't be hard either. Understanding the edge case for
>>>> duplicate keys with f(**d, **e) is a little harder, but the error messages
>>>> are pretty clear, and it is not a new edge case.
>>>>
>>>> The sequence and dict flattening syntax proposals are also clean and
>>>> logical -- we already have *-unpacking on the receiving side, so allowing
>>>> *x in tuple expressions reads pretty naturally (and the similarity with *a
>>>> in argument lists certainly helps). From here, having [a, *x, b, *y] is
>>>> also natural, and then the extension to other displays is natural: {a, *x,
>>>> b, *y} and {a:1, **d, b:2, **e}. This, too, gets a +1 from me.
>>>>
>>>> So that leaves comprehensions. IIRC, during the development of the
>>>> patch we realized that f(*x for x in xs) is sufficiently ambiguous that we
>>>> decided to disallow it -- note that f(x for x in xs) is already somewhat of
>>>> a special case because an argument can only be a "bare" generator
>>>> expression if it is the only argument. The same reasoning doesn't apply (in
>>>> that form) to list, set and dict comprehensions -- while f(x for x in xs)
>>>> is identical in meaning to f((x for x in xs)), [x for x in xs] is NOT the
>>>> same as [(x for x in xs)] (that's a list of one element, and the element is
>>>> a generator expression).
>>>>
>>>> The basic premise of this part of the proposal is that if you have a
>>>> few iterables, the new proposal (without comprehensions) lets you create a
>>>> list or generator expression that iterates over all of them, essentially
>>>> flattening them:
>>>>
>>>> >>> xs = [1, 2, 3]
>>>> >>> ys = ['abc', 'def']
>>>> >>> zs = [99]
>>>> >>> [*xs, *ys, *zs]
>>>> [1, 2, 3, 'abc', 'def', 99]

Re: [Python-Dev] PEP 448 review

2015-03-02 Thread Neil Girdhar
http://bugs.python.org/issue2292

On Mon, Mar 2, 2015 at 3:17 PM, Victor Stinner 
wrote:

> Where is the patch?
>
> Victor
>
> On Monday, March 2, 2015, Neil Girdhar wrote:
>
> Hi everyone,
>>
>> The patch is ready for review now, and I should have time this week to
>> make changes and respond to comments.
>>
>> Best,
>>
>> Neil
>>
>> On Wed, Feb 25, 2015 at 2:42 PM, Guido van Rossum 
>> wrote:
>>
>>> I'm back, I've re-read the PEP, and I've re-read the long thread with
>>> "(no subject)".
>>>
>>> I think Georg Brandl nailed it:
>>>
>>> """
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *I like the "sequence and dict flattening" part of the PEP, mostly
>>> because itis consistent and should be easy to understand, but the
>>> comprehension syntaxenhancements seem to be bad for readability and
>>> "comprehending" what the codedoes.The call syntax part is a mixed bag on
>>> the one hand it is nice to be consistent with the extended possibilities in
>>> literals (flattening), but on the other hand there would be small but
>>> annoying inconsistencies anyways (e.g. the duplicate kwarg case above).*
>>> """
>>>
>>> Greg Ewing followed up explaining that the inconsistency between dict
>>> flattening and call syntax is inherent in the pre-existing different rules
>>> for dicts vs. keyword args: {'a':1, 'a':2} results in {'a':2}, while f(a=1,
>>> a=2) is an error. (This form is a SyntaxError; the dynamic case f(a=1,
>>> **{'a': 1}) is a TypeError.)
>>>
>>> For me, allowing f(*a, *b) and f(**d, **e) and all the other
>>> combinations for function calls proposed by the PEP is an easy +1 -- it's a
>>> straightforward extension of the existing pattern, and anybody who knows
>>> what f(x, *a) does will understand f(x, *a, y, *b). Guessing what f(**d,
>>> **e) means shouldn't be hard either. Understanding the edge case for
>>> duplicate keys with f(**d, **e) is a little harder, but the error messages
>>> are pretty clear, and it is not a new edge case.
>>>
>>> The sequence and dict flattening syntax proposals are also clean and
>>> logical -- we already have *-unpacking on the receiving side, so allowing
>>> *x in tuple expressions reads pretty naturally (and the similarity with *a
>>> in argument lists certainly helps). From here, having [a, *x, b, *y] is
>>> also natural, and then the extension to other displays is natural: {a, *x,
>>> b, *y} and {a:1, **d, b:2, **e}. This, too, gets a +1 from me.
>>>
>>> So that leaves comprehensions. IIRC, during the development of the patch
>>> we realized that f(*x for x in xs) is sufficiently ambiguous that we
>>> decided to disallow it -- note that f(x for x in xs) is already somewhat of
>>> a special case because an argument can only be a "bare" generator
>>> expression if it is the only argument. The same reasoning doesn't apply (in
>>> that form) to list, set and dict comprehensions -- while f(x for x in xs)
>>> is identical in meaning to f((x for x in xs)), [x for x in xs] is NOT the
>>> same as [(x for x in xs)] (that's a list of one element, and the element is
>>> a generator expression).
>>>
>>> The basic premise of this part of the proposal is that if you have a few
>>> iterables, the new proposal (without comprehensions) lets you create a list
>>> or generator expression that iterates over all of them, essentially
>>> flattening them:
>>>
>>> >>> xs = [1, 2, 3]
>>> >>> ys = ['abc', 'def']
>>> >>> zs = [99]
>>> >>> [*xs, *ys, *zs]
>>> [1, 2, 3, 'abc', 'def', 99]
>>> >>>
>>>
>>> But now suppose you have a list of iterables:
>>>
>>> >>> xss = [[1, 2, 3], ['abc', 'def'], [99]]
>>> >>> [*xss[0], *xss[1], *xss[2]]
>>> [1, 2, 3, 'abc', 'def', 99]
>>> >>>
>>>
>>> Wouldn't it be nice if you could write the latter using a comprehension?
>>>
>>> >>> xss = [[1, 2, 3], ['abc', 'def'], [99]]
>>> >>> [*xs for xs in xss]
>>> [1, 2, 3, 'abc', 'def', 99]

Re: [Python-Dev] PEP 448 review

2015-03-02 Thread Neil Girdhar
Hi everyone,

The patch is ready for review now, and I should have time this week to make
changes and respond to comments.

Best,

Neil

On Wed, Feb 25, 2015 at 2:42 PM, Guido van Rossum  wrote:

> I'm back, I've re-read the PEP, and I've re-read the long thread with "(no
> subject)".
>
> I think Georg Brandl nailed it:
>
> """
>
>
>
>
>
>
>
>
> *I like the "sequence and dict flattening" part of the PEP, mostly because
> itis consistent and should be easy to understand, but the comprehension
> syntaxenhancements seem to be bad for readability and "comprehending" what
> the codedoes.The call syntax part is a mixed bag on the one hand it is nice
> to be consistent with the extended possibilities in literals (flattening),
> but on the other hand there would be small but annoying inconsistencies
> anyways (e.g. the duplicate kwarg case above).*
> """
>
> Greg Ewing followed up explaining that the inconsistency between dict
> flattening and call syntax is inherent in the pre-existing different rules
> for dicts vs. keyword args: {'a':1, 'a':2} results in {'a':2}, while f(a=1,
> a=2) is an error. (This form is a SyntaxError; the dynamic case f(a=1,
> **{'a': 1}) is a TypeError.)
>
> For me, allowing f(*a, *b) and f(**d, **e) and all the other combinations
> for function calls proposed by the PEP is an easy +1 -- it's a
> straightforward extension of the existing pattern, and anybody who knows
> what f(x, *a) does will understand f(x, *a, y, *b). Guessing what f(**d,
> **e) means shouldn't be hard either. Understanding the edge case for
> duplicate keys with f(**d, **e) is a little harder, but the error messages
> are pretty clear, and it is not a new edge case.
>
> The sequence and dict flattening syntax proposals are also clean and
> logical -- we already have *-unpacking on the receiving side, so allowing
> *x in tuple expressions reads pretty naturally (and the similarity with *a
> in argument lists certainly helps). From here, having [a, *x, b, *y] is
> also natural, and then the extension to other displays is natural: {a, *x,
> b, *y} and {a:1, **d, b:2, **e}. This, too, gets a +1 from me.
>
> So that leaves comprehensions. IIRC, during the development of the patch
> we realized that f(*x for x in xs) is sufficiently ambiguous that we
> decided to disallow it -- note that f(x for x in xs) is already somewhat of
> a special case because an argument can only be a "bare" generator
> expression if it is the only argument. The same reasoning doesn't apply (in
> that form) to list, set and dict comprehensions -- while f(x for x in xs)
> is identical in meaning to f((x for x in xs)), [x for x in xs] is NOT the
> same as [(x for x in xs)] (that's a list of one element, and the element is
> a generator expression).
>
> The basic premise of this part of the proposal is that if you have a few
> iterables, the new proposal (without comprehensions) lets you create a list
> or generator expression that iterates over all of them, essentially
> flattening them:
>
> >>> xs = [1, 2, 3]
> >>> ys = ['abc', 'def']
> >>> zs = [99]
> >>> [*xs, *ys, *zs]
> [1, 2, 3, 'abc', 'def', 99]
> >>>
>
> But now suppose you have a list of iterables:
>
> >>> xss = [[1, 2, 3], ['abc', 'def'], [99]]
> >>> [*xss[0], *xss[1], *xss[2]]
> [1, 2, 3, 'abc', 'def', 99]
> >>>
>
> Wouldn't it be nice if you could write the latter using a comprehension?
>
> >>> xss = [[1, 2, 3], ['abc', 'def'], [99]]
> >>> [*xs for xs in xss]
> [1, 2, 3, 'abc', 'def', 99]
> >>>
>
> This is somewhat seductive, and the following is even nicer: the *xs
> position may be an expression, e.g.:
>
> >>> xss = [[1, 2, 3], ['abc', 'def'], [99]]
> >>> [*xs[:2] for xs in xss]
> [1, 2, 'abc', 'def', 99]
> >>>
>
> On the other hand, I had to explore the possibilities here by
> experimenting in the interpreter, and I discovered some odd edge cases
> (e.g. you can parenthesize the starred expression, but that seems a
> syntactic accident).
>
> All in all I am personally +0 on the comprehension part of the PEP, and I
> like that it provides a way to "flatten" a sequence of sequences, but I
> think very few people in the thread have supported this part. Therefore I
> would like to ask Neil to update the PEP and the patch to take out the
> comprehension part, so that the two "easy wins" can make it into Python 3.5
> (basically, I am accepting two-thirds of the PEP :-). There is some time
> yet until alpha 2.
>
> I would also like code reviewers (Benjamin?) to start reviewing the patch
> , taking into account that the
> comprehension part needs to be removed.
>
> --
> --Guido van Rossum (python.org/~guido)
>
>


Re: [Python-Dev] subclassing builtin data structures

2015-02-14 Thread Neil Girdhar
Oops, I meant to call super if necessary:

@classmethod
def __make_me_cls__(cls, arg_cls, *args, **kwargs):
    if arg_cls is C:
        pass
    elif arg_cls is D:
        args, kwargs = modified_args_for_D(args, kwargs)
    elif arg_cls is E:
        args, kwargs = modified_args_for_E(args, kwargs)
    else:
        return super().__make_me_cls__(arg_cls, *args, **kwargs)

    if cls is C:
        return C(*args, **kwargs)
    return cls.__make_me_cls__(C, *args, **kwargs)


On Sat, Feb 14, 2015 at 3:15 PM, Neil Girdhar  wrote:

> I think the __make_me__ pattern discussed earlier is still the most
> generic cooperative solution.  Here it is with a classmethod version too:
>
> class C(D, E):
> def some_method(self):
> return __make_me__(self, C)
>
> def __make_me__(self, arg_cls, *args, **kwargs):
> if arg_cls is C:
> pass
> elif issubclass(D, arg_cls):
> args, kwargs = modified_args_for_D(args, kwargs)
> elif issubclass(E, arg_cls):
> args, kwargs = modified_args_for_D(args, kwargs)
> else:
> raise ValueError
>
> if self.__class__ == C:
> return C(*args, **kwargs)
> return self.__make_me__(C, *args, **kwargs)
>
> @classmethod
> def __make_me_cls__(cls, arg_cls, *args, **kwargs):
> if arg_cls is C:
> pass
> elif issubclass(D, arg_cls):
> args, kwargs = modified_args_for_D(args, kwargs)
> elif issubclass(E, arg_cls):
> args, kwargs = modified_args_for_D(args, kwargs)
> else:
> raise ValueError
>
> if cls == C:
> return C(*args, **kwargs)
> return cls.__make_me_cls__(C, *args, **kwargs)
>
>
> On Sat, Feb 14, 2015 at 7:23 AM, Steven D'Aprano 
> wrote:
>
>> On Fri, Feb 13, 2015 at 06:03:35PM -0500, Neil Girdhar wrote:
>> > I personally don't think this is a big enough issue to warrant any
>> changes,
>> > but I think Serhiy's solution would be the ideal best with one
>> additional
>> > parameter: the caller's type.  Something like
>> >
>> > def __make_me__(self, cls, *args, **kwargs)
>> >
>> > and the idea is that any time you want to construct a type, instead of
>> >
>> > self.__class__(assumed arguments…)
>> >
>> > where you are not sure that the derived class' constructor knows the
>> right
>> > argument types, you do
>> >
>> > def SomeCls:
>> >  def some_method(self, ...):
>> >return self.__make_me__(SomeCls, assumed arguments…)
>> >
>> > Now the derived class knows who is asking for a copy.
>>
>> What if you wish to return an instance from a classmethod? You don't
>> have a `self` available.
>>
>> class SomeCls:
>>     def __init__(self, x, y, z):
>>         ...
>>     @classmethod
>>     def from_spam(cls, spam):
>>         x, y, z = process(spam)
>>         return cls.__make_me__(self, cls, x, y, z)  # oops, no self
>>
>>
>> Even if you are calling from an instance method, and self is available,
>> you cannot assume that the information needed for the subclass
>> constructor is still available. Perhaps that information is used in the
>> constructor and then discarded.
>>
>> The problem we wish to solve is that when subclassing, methods of some
>> base class blindly return instances of itself, instead of self's type:
>>
>>
>> py> class MyInt(int):
>> ...     pass
>> ...
>> py> n = MyInt(23)
>> py> assert isinstance(n, MyInt)
>> py> assert isinstance(n+1, MyInt)
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in ?
>> AssertionError
>>
>>
>> The means that subclasses often have to override all the parent's
>> methods, just to ensure the type is correct:
>>
>> class MyInt(int):
>>     def __add__(self, other):
>>         o = super().__add__(other)
>>         if o is not NotImplemented:
>>             o = type(self)(o)
>>         return o
>>
>>
>> Something like that, repeated for all the int methods, should work:
>>
>> py> n = MyInt(23)
>> py> type(n+1)
>> <class '__main__.MyInt'>
>>
>>
>> This is tedious and error prone, but at least once it is done,
>> subclasses of MyInt will Just Work:
>>
>>
>> py> class MyOtherInt(MyInt):
>> ...     pass
>> ...
>> py> a = MyOtherInt(42)
>> py> type(a + 1000)
>> <class '__main__.MyOtherInt'>

Re: [Python-Dev] subclassing builtin data structures

2015-02-14 Thread Neil Girdhar
I think the __make_me__ pattern discussed earlier is still the most generic
cooperative solution.  Here it is with a classmethod version too:

class C(D, E):
    def some_method(self):
        return self.__make_me__(C)

    def __make_me__(self, arg_cls, *args, **kwargs):
        if arg_cls is C:
            pass
        elif issubclass(D, arg_cls):
            args, kwargs = modified_args_for_D(args, kwargs)
        elif issubclass(E, arg_cls):
            args, kwargs = modified_args_for_E(args, kwargs)
        else:
            raise ValueError

        if self.__class__ == C:
            return C(*args, **kwargs)
        return self.__make_me__(C, *args, **kwargs)

    @classmethod
    def __make_me_cls__(cls, arg_cls, *args, **kwargs):
        if arg_cls is C:
            pass
        elif issubclass(D, arg_cls):
            args, kwargs = modified_args_for_D(args, kwargs)
        elif issubclass(E, arg_cls):
            args, kwargs = modified_args_for_E(args, kwargs)
        else:
            raise ValueError

        if cls == C:
            return C(*args, **kwargs)
        return cls.__make_me_cls__(C, *args, **kwargs)


On Sat, Feb 14, 2015 at 7:23 AM, Steven D'Aprano 
wrote:

> On Fri, Feb 13, 2015 at 06:03:35PM -0500, Neil Girdhar wrote:
> > I personally don't think this is a big enough issue to warrant any
> changes,
> > but I think Serhiy's solution would be the ideal best with one additional
> > parameter: the caller's type.  Something like
> >
> > def __make_me__(self, cls, *args, **kwargs)
> >
> > and the idea is that any time you want to construct a type, instead of
> >
> > self.__class__(assumed arguments…)
> >
> > where you are not sure that the derived class' constructor knows the
> right
> > argument types, you do
> >
> > def SomeCls:
> >  def some_method(self, ...):
> >return self.__make_me__(SomeCls, assumed arguments…)
> >
> > Now the derived class knows who is asking for a copy.
>
> What if you wish to return an instance from a classmethod? You don't
> have a `self` available.
>
> class SomeCls:
>     def __init__(self, x, y, z):
>         ...
>     @classmethod
>     def from_spam(cls, spam):
>         x, y, z = process(spam)
>         return cls.__make_me__(self, cls, x, y, z)  # oops, no self
>
>
> Even if you are calling from an instance method, and self is available,
> you cannot assume that the information needed for the subclass
> constructor is still available. Perhaps that information is used in the
> constructor and then discarded.
>
> The problem we wish to solve is that when subclassing, methods of some
> base class blindly return instances of itself, instead of self's type:
>
>
> py> class MyInt(int):
> ...     pass
> ...
> py> n = MyInt(23)
> py> assert isinstance(n, MyInt)
> py> assert isinstance(n+1, MyInt)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> AssertionError
>
>
> The means that subclasses often have to override all the parent's
> methods, just to ensure the type is correct:
>
> class MyInt(int):
>     def __add__(self, other):
>         o = super().__add__(other)
>         if o is not NotImplemented:
>             o = type(self)(o)
>         return o
>
>
> Something like that, repeated for all the int methods, should work:
>
> py> n = MyInt(23)
> py> type(n+1)
> <class '__main__.MyInt'>
>
>
> This is tedious and error prone, but at least once it is done,
> subclasses of MyInt will Just Work:
>
>
> py> class MyOtherInt(MyInt):
> ...     pass
> ...
> py> a = MyOtherInt(42)
> py> type(a + 1000)
> <class '__main__.MyOtherInt'>
>
>
> (At least, *in general* they will work. See below.)
>
> So, why not have int's methods use type(self) instead of hard coding
> int? The answer is that *some* subclasses might override the
> constructor, which would cause the __add__ method to fail:
>
> # this will fail if the constructor has a different signature
> o = type(self)(o)
>
>
> Okay, but changing the constructor signature is quite unusual. Mostly,
> people subclass to add new methods or attributes, or to override a
> specific method. The dict/defaultdict situation is relatively uncommon.
>
> Instead of requiring *every* subclass to override all the methods,
> couldn't we require the base classes (like int) to assume that the
> signature is unchanged and call type(self), and leave it up to the
> subclass to override all the methods *only* if the signature has
> changed? (Which they probably would have to do anyway.)
>
> As the MyInt example above shows, or datetime in the standard library,
this actually works fine in practice.

Re: [Python-Dev] subclassing builtin data structures

2015-02-13 Thread Neil Girdhar
I think it works as Isaac explained if __make_me__ is an instance method
that also accepts the calling class type.

On Fri, Feb 13, 2015 at 8:12 PM, Ethan Furman  wrote:

> On 02/13/2015 02:31 PM, Serhiy Storchaka wrote:
> > On 13.02.15 05:41, Ethan Furman wrote:
> >> So there are basically two choices:
> >>
> >> 1) always use the type of the most-base class when creating new
> instances
> >>
> >> pros:
> >>   - easy
> >>   - speedy code
> >>   - no possible tracebacks on new object instantiation
> >>
> >> cons:
> >>   - a subclass that needs/wants to maintain itself must override all
> >> methods that create new instances, even if the only change is to
> >> the type of object returned
> >>
> >> 2) always use the type of self when creating new instances
> >>
> >> pros:
> >>   - subclasses automatically maintain type
> >>   - much less code in the simple cases [1]
> >>
> >> cons:
> >>   - if constructor signatures change, must override all methods
> which
> >> create new objects
> >
> > And switching to (2) would break existing code which uses subclasses
> with constructors with different signature (e.g.
> > defaultdict).
>
> I don't think defaultdict is a good example -- I don't see any methods on
> it that return a new dict, default or
> otherwise. So if this change happened, defaultdict would have to have its
> own __add__ and not rely on dict's __add__.
>
>
> > The third choice is to use different specially designed constructor.
> >
> > class A(int):
> >
> > --> class A(int):
> > ... def __add__(self, other):
> > ... return self.__make_me__(int(self) + int(other))
> >
> > ... def __repr__(self):
> > ... return 'A(%d)' % self
>
> How would this help in the case of defaultdict?  __make_me__ is a class
> method, but it needs instance info to properly
> create a new dict with the same default factory.
>
> --
> ~Ethan~
>
>


Re: [Python-Dev] subclassing builtin data structures

2015-02-13 Thread Neil Girdhar
You're right.

On Fri, Feb 13, 2015 at 7:55 PM, Isaac Schwabacher 
wrote:

> On 15-02-13, Neil Girdhar  wrote:
> > Unlike a regular method, you would never need to call super since you
> should know everyone that could be calling you. Typically, when you call
> super, you have something like this:
> >
> > A < B, C
> >
> >
> > B < D
> >
> >
> > so you end up with
> >
> >
> > mro: A, B, C, D
> >
> >
> > And then when A calls super and B calls super it gets C which it
> doesn't know about.
>
> But C calls super and gets D. The scenario I'm concerned with is that A
> knows how to mimic B's constructor and B knows how to mimic D's, but A
> doesn't know about D. So D asks A if it knows how to mimic D's constructor,
> and it says no. Via super, B gets a shot, and it does know, so it
> translates the arguments to D's constructor into arguments to B's
> constructor, and again asks A if it knows how to handle them. Then A says
> yes, translates the args, and constructs an A. If C ever gets consulted, it
> responds "I don't know a thing" and calls super.
>
> > But in the case of make_me, it's someone like C who is calling
> make_me. If it gets a method in B, then that's a straight-up bug.
> make_me needs to be reimplemented in A as well, and A would never delegate
> up since other classes in the mro chain (like B) might not know about C.
>
> This scheme (as I've written it) depends strongly on all the classes in
> the MRO having __make_me__ methods with this very precisely defined
> structure: test base against yourself, then any superclasses you care to
> mimic, then call super. Any antisocial superclass ruins everyone's party.
>
> > Best,
> > Neil
> >
> >
> > On Fri, Feb 13, 2015 at 7:00 PM, Isaac Schwabacher <ischwabac...@wisc.edu>
> > wrote:
> >
> > > On 15-02-13, Neil Girdhar wrote:
> > > > I personally don't think this is a big enough issue to warrant
> any changes, but I think Serhiy's solution would be the ideal best with
> one additional parameter: the caller's type. Something like
> > > >
> > > > def __make_me__(self, cls, *args, **kwargs)
> > > >
> > > >
> > > > and the idea is that any time you want to construct a type, instead
> of
> > > >
> > > >
> > > > self.__class__(assumed arguments…)
> > > >
> > > >
> > > > where you are not sure that the derived class' constructor knows
> the right argument types, you do
> > > >
> > > >
> > > > def SomeCls:
> > > > def some_method(self, ...):
> > > > return self.__make_me__(SomeCls, assumed arguments…)
> > > >
> > > >
> > > > Now the derived class knows who is asking for a copy. In the case of
> defaultdict, for example, he can implement __make_me__ as follows:
> > > >
> > > >
> > > > def __make_me__(self, cls, *args, **kwargs):
> > > > if cls is dict: return default_dict(self.default_factory, *args,
> **kwargs)
> > > > return default_dict(*args, **kwargs)
> > > >
> > > >
> > > > essentially the caller is identifying himself so that the receiver
> knows how to interpret the arguments.
> > > >
> > > >
> > > > Best,
> > > >
> > > >
> > > > Neil
> > >
> > > Such a method necessarily involves explicit switching on classes... ew.
> > > Also, to make this work, a class needs to have a relationship with its
> superclass's superclasses. So in order for DefaultDict's subclasses
> not to need to know about dict, it would need to look like this:
> > >
> > > class DefaultDict(dict):
> > > @classmethod # instance method doesn't make sense here
> > > def __make_me__(cls, base, *args, **kwargs): # make something like
> base(*args, **kwargs)
> > > # when we get here, nothing in cls.__mro__ above DefaultDict
> knows how to construct an equivalent to base(*args, **kwargs) using its own
> constructor
> > > if base is DefaultDict:
> > > return DefaultDict(*args, **kwargs) # if DefaultDict is
> the best we can do, do it
> > > elif base is dict:
> > > return cls.__make_me__(DefaultDict, None, *args, **kwargs)
> # subclasses that know about DefaultDict but not dict will intercept this
> > > 

Re: [Python-Dev] subclassing builtin data structures

2015-02-13 Thread Neil Girdhar
Unlike a regular method, you would never need to call super since you
should know everyone that could be calling you.  Typically, when you call
super, you have something like this:

A < B, C

B < D

so you end up with

mro: A, B, C, D

And then when A calls super and B calls super it gets C which it doesn't
know about.
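
(A minimal sketch of that shape, assuming C also inherits from D so the MRO
really does come out as A, B, C, D as stated; the thread does not spell that
out:)

class D: pass
class C(D): pass
class B(D): pass
class A(B, C): pass

print([k.__name__ for k in A.__mro__])   # ['A', 'B', 'C', 'D', 'object']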

But in the case of make_me, it's someone like C who  is calling make_me.
If it gets a method in B, then that's a straight-up bug.  make_me needs to
be reimplemented in A as well, and A would never delegate up since other
classes in the mro chain (like B) might not know about C.

Best,
Neil

On Fri, Feb 13, 2015 at 7:00 PM, Isaac Schwabacher 
wrote:

> On 15-02-13, Neil Girdhar  wrote:
> > I personally don't think this is a big enough issue to warrant any
> changes, but I think Serhiy's solution would be the ideal best with one
> additional parameter: the caller's type. Something like
> >
> > def __make_me__(self, cls, *args, **kwargs)
> >
> >
> > and the idea is that any time you want to construct a type, instead of
> >
> >
> > self.__class__(assumed arguments…)
> >
> >
> > where you are not sure that the derived class' constructor knows the
> right argument types, you do
> >
> >
> > def SomeCls:
> > def some_method(self, ...):
> > return self.__make_me__(SomeCls, assumed arguments…)
> >
> >
> > Now the derived class knows who is asking for a copy. In the case of
> defaultdict, for example, he can implement __make_me__ as follows:
> >
> >
> > def __make_me__(self, cls, *args, **kwargs):
> > if cls is dict: return default_dict(self.default_factory, *args,
> **kwargs)
> > return default_dict(*args, **kwargs)
> >
> >
> > essentially the caller is identifying himself so that the receiver knows
> how to interpret the arguments.
> >
> >
> > Best,
> >
> >
> > Neil
>
> Such a method necessarily involves explicit switching on classes... ew.
> Also, to make this work, a class needs to have a relationship with its
> superclass's superclasses. So in order for DefaultDict's subclasses not to
> need to know about dict, it would need to look like this:
>
> class DefaultDict(dict):
>     @classmethod  # instance method doesn't make sense here
>     def __make_me__(cls, base, *args, **kwargs):
>         # Make something like base(*args, **kwargs).  When we get here,
>         # nothing in cls.__mro__ above DefaultDict knows how to construct
>         # an equivalent to base(*args, **kwargs) using its own constructor.
>         if base is DefaultDict:
>             # If DefaultDict is the best we can do, do it.
>             return DefaultDict(*args, **kwargs)
>         elif base is dict:
>             # Subclasses that know about DefaultDict but not dict will
>             # intercept this.
>             return cls.__make_me__(DefaultDict, None, *args, **kwargs)
>         else:
>             # We don't know how to make an equivalent to
>             # base.__new__(*args, **kwargs), so keep looking.
>             return super(DefaultDict, cls).__make_me__(base, *args, **kwargs)
>
> I don't even think this is guaranteed to construct an object of class cls
> corresponding to a base(*args, **kwargs) even if it were possible, since
> multiple inheritance can screw things up. You might need to have an
> explicit list of "these are the superclasses whose constructors I can
> imitate", and have the interpreter find an optimal path for you.
>
> > On Fri, Feb 13, 2015 at 5:55 PM, Alexander Belopolsky <
> > > alexander.belopol...@gmail.com> wrote:
> >
> > >
> > > > On Fri, Feb 13, 2015 at 4:44 PM, Neil Girdhar wrote:
> > >
> > > > Interesting:
> http://stackoverflow.com/questions/5490824/should-constructors-comply-with-the-liskov-substitution-principle
> > > >
> > >
> > >
> > > Let me humbly conjecture that the people who wrote the top answers
> have background in less capable languages than Python.
> > >
> > >
> > > Not every language allows you to call self.__class__(). In the
> languages that don't you can get away with incompatible constructor
> signatures.
> > >
> > >
> > > However, let me try to focus the discussion on a specific issue before
> we go deep into OOP theory.
> > >
> > >
> > > With python's standard datetime.date we have:
> > >
> > >
> > > >>> from datetime import *
> > > >>> class Date(date):
> > > ... pass
> > > ...
> > > >>> Date.today()
> > > Date(2015, 2, 13)

Re: [Python-Dev] subclassing builtin data structures

2015-02-13 Thread Neil Girdhar
I personally don't think this is a big enough issue to warrant any changes,
but I think Serhiy's solution would be the ideal best with one additional
parameter: the caller's type.  Something like

def __make_me__(self, cls, *args, **kwargs)

and the idea is that any time you want to construct a type, instead of

self.__class__(assumed arguments…)

where you are not sure that the derived class' constructor knows the right
argument types, you do

class SomeCls:
    def some_method(self, ...):
        return self.__make_me__(SomeCls, assumed arguments…)

Now the derived class knows who is asking for a copy.  In the case of
defaultdict, for example, he can implement __make_me__ as follows:

def __make_me__(self, cls, *args, **kwargs):
    if cls is dict:
        return default_dict(self.default_factory, *args, **kwargs)
    return default_dict(*args, **kwargs)

essentially the caller is identifying himself so that the receiver knows
how to interpret the arguments.

Best,

Neil

On Fri, Feb 13, 2015 at 5:55 PM, Alexander Belopolsky <
alexander.belopol...@gmail.com> wrote:

>
> On Fri, Feb 13, 2015 at 4:44 PM, Neil Girdhar 
> wrote:
>
>> Interesting:
>> http://stackoverflow.com/questions/5490824/should-constructors-comply-with-the-liskov-substitution-principle
>>
>
> Let me humbly conjecture that the people who wrote the top answers have
> background in less capable languages than Python.
>
> Not every language allows you to call self.__class__().  In the languages
> that don't you can get away with incompatible constructor signatures.
>
> However, let me try to focus the discussion on a specific issue before we
> go deep into OOP theory.
>
> With python's standard datetime.date we have:
>
> >>> from datetime import *
> >>> class Date(date):
> ...     pass
> ...
> >>> Date.today()
> Date(2015, 2, 13)
> >>> Date.fromordinal(1)
> Date(1, 1, 1)
>
> Both .today() and .fromordinal(1) will break in a subclass that redefines
> __new__ as follows:
>
> >>> class Date2(date):
> ...     def __new__(cls, ymd):
> ...         return date.__new__(cls, *ymd)
> ...
> >>> Date2.today()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: __new__() takes 2 positional arguments but 4 were given
> >>> Date2.fromordinal(1)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: __new__() takes 2 positional arguments but 4 were given
>
> Why is this acceptable, but we have to sacrifice the convenience of having
> Date + timedelta
> return Date to make it work  with Date2:
>
> >>> Date2((1,1,1)) + timedelta(1)
> datetime.date(1, 1, 2)
>
>
>


Re: [Python-Dev] subclassing builtin data structures

2015-02-13 Thread Neil Girdhar
Interesting:
http://stackoverflow.com/questions/5490824/should-constructors-comply-with-the-liskov-substitution-principle

On Fri, Feb 13, 2015 at 3:37 PM, Isaac Schwabacher 
wrote:

> On 15-02-13, Guido van Rossum  wrote:
> > Are you willing to wait 10 days for an answer? I'm out of round
> tuits for a while.
>
> IIUC, the argument is that the Liskov Substitution Principle is a
> statement about how objects of a subtype behave relative to objects of a
> supertype, and it doesn't apply to constructors because they aren't
> behaviors of existing objects. So other overriding methods *should* be able
> to handle the same inputs that the respective overridden methods do, but
> constructors don't need to. Even though __init__ is written as an instance
> method, it seems like it's "morally" a part of the class method __new__
> that's only split off for convenience.
>
> If this message is unclear, it's because I don't really understand this
> myself and I'm trying to articulate my best understanding of what's been
> said on this thread and those it links to.
>
> ijs
>
> > On Fri, Feb 13, 2015 at 10:22 AM, Alexander Belopolsky <
> > alexander.belopol...@gmail.com> wrote:
> >
> > >
> > > On Fri, Feb 13, 2015 at 1:19 PM, Alexander Belopolsky <
> > > alexander.belopol...@gmail.com> wrote:
> > > >>
> > > >> FWIW you're wrong when you claim that "a constructor is no
> different from any other method". Someone else should probably explain this
> (it's an old argument that's been thoroughly settled).
> > > >
> > > >
> > > > Well, the best answer I've got in the past [1] was "ask on
> python-dev since Guido called the operator overriding expectation." :-)
> > >
> > >
> > > And let me repost this bit of history [1]:
> > >
> > > Here is the annotated pre-r82065 code:
> > >
> > > 39876 gvanrossum def __add__(self, other):
> > > 39876 gvanrossum if isinstance(other, timedelta):
> > > 39928 gvanrossum return self.__class__(self.__days + other.__days,
> > > 39876 gvanrossum self.__seconds + other.__seconds,
> > > 39876 gvanrossum self.__microseconds + other.__microseconds)
> > > 40207 tim_one return NotImplemented
> > > 39876 gvanrossum
> > >
> > >
> > >
> > > [1] http://bugs.python.org/issue2267#msg125979
> > >
> > >
> >
> >
> >
> >
> > --
> > --Guido van Rossum (python.org/~guido)


Re: [Python-Dev] subclassing builtin data structures

2015-02-13 Thread Neil Girdhar
With Python's cooperative inheritance, I think you want to do everything
through one constructor sending keyword arguments up the chain.  The
keyword arguments are popped off as needed.  With this setup I don't think
you need "overloaded constructors".

Best,
Neil

On Fri, Feb 13, 2015 at 4:44 AM, Jonas Wielicki 
wrote:

> If I may humbly chime in this, with a hint...
>
> On 13.02.2015 05:01, Guido van Rossum wrote:
> > On Thu, Feb 12, 2015 at 7:41 PM, Ethan Furman 
> wrote:
> >> [snip]
> >> 2) always use the type of self when creating new instances
> >>
> >>pros:
> >>  - subclasses automatically maintain type
> >>  - much less code in the simple cases [1]
> >>
> >>cons:
> >>  - if constructor signatures change, must override all methods which
> >>create new objects
> >>
> >> Unless there are powerful reasons against number 2 (such as performance,
> >> or the effort to affect the change), it sure
> >> seems like the nicer way to go.
> >>
> >> So back to my original question: what other concerns are there, and has
> >> anybody done any benchmarks?
> >>
> >
> > Con for #2 is a showstopper. Forget about it.
>
> I would like to mention that there is another language out there which
> knows about virtual constructors (virtual like in virtual methods, with
> signature match requirements and such), which is FreePascal (and Delphi,
> and I think original Object Pascal too).
>
> It is actually a feature I liked about these languages, compared to
> C++03 and others, that constructors could be virtual and that classes
> were first-class citizens.
>
> Of course, Python cannot check the signature at compile time. But I
> think as long as it is documented, there should be no reason not to
> allow and support it. It really is analogous to other methods which need
> to have a matching signature.
>
> just my two cents,
> jwi
>
>
>
>
>


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
ah, sorry… forget that I said "just as it is now" — I am losing track of
what's allowed in Python now!

On Tue, Feb 10, 2015 at 2:29 AM, Neil Girdhar  wrote:

>
>
> On Tue, Feb 10, 2015 at 2:20 AM, Victor Stinner 
> wrote:
>
>> To be logical, I expect [(*item) for item in mylist] to simply return
>> mylist.
>>
>
> If you want simply mylist as a list, that is [*mylist]
>
>> [*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1,
>> 2, 3],
>>
> right
>
>> as just [*mylist], so "unpack" mylist.
>>
>
[*mylist] remains equivalent to list(mylist), just as it is now.  In one
> case, you're unpacking the elements of the list, in the other you're
> unpacking the list itself.
>
> Victor
>>


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Tue, Feb 10, 2015 at 2:20 AM, Victor Stinner 
wrote:

> To be logical, I expect [(*item) for item in mylist] to simply return mylist.
>

If you want simply mylist as a list, that is [*mylist]

> [*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1,
> 2, 3],
>
right

> as just [*mylist], so "unpack" mylist.
>

[*mylist] remains equivalent to list(mylist), just as it is now.  In one
case, you're unpacking the elements of the list, in the other you're
unpacking the list itself.

Victor
>


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Tue, Feb 10, 2015 at 2:08 AM, Victor Stinner 
wrote:

>
> On 10 February 2015 at 03:07, "Ethan Furman" wrote:
> > That line should read
> >
> > return func(*(args + fargs), **{**keywords, **fkeywords})
> >
> > to avoid the duplicate key error and keep the original functionality.
>
> To me, this is just ugly. I prefer the original code, which uses .update().
>
> Maybe the PEP should be changed to behave as .update()?
>
> Victor
>
>
Just for clarity, Ethan is right, but it could also be written:

return func(*args, *fargs, **{**keywords, **fkeywords})
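
(For context, a rough partial-style wrapper built on that spelling; a sketch
only, not the actual functools.partial source:)

def make_partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        # later keywords win, matching the .update() behaviour discussed above
        return func(*args, *fargs, **{**keywords, **fkeywords})
    return newfunc

greet = make_partial(print, "hello", sep=", ")
greet("world", sep=" -- ")   # prints "hello -- world"; fkeywords overrides keywords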

Best,

Neil




Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Tue, Feb 10, 2015 at 1:31 AM, Donald Stufft  wrote:

>
> > On Feb 10, 2015, at 12:55 AM, Greg Ewing 
> wrote:
> >
> > Donald Stufft wrote:
> >> However [*item for item in ranges] is mapped more to something like
> this:
> >> result = []
> >> for item in iterable:
> >>result.extend(*item)
> >
> > Actually it would be
> >
> >   result.extend(item)
> >
> > But if that bothers you, you could consider the expansion
> > to be
> >
> > result = []
> > for item in iterable:
> >     for item1 in item:
> >         result.append(item1)
> >
> > In other words, the * is shorthand for an extra level
> > of looping.
> >
> >> and it acts differently than if you just did *item outside of a list
> comprehension.
> >
> > Not sure what you mean by that. It seems closely
> > analogous to the use of * in a function call to
> > me.
> >
>
> Putting aside the proposed syntax the current two statements are currently
> true:
>
> 1. The statement *item is roughly the same thing as (item[0], item[1],
> item[n])
> 2. The statement [something for thing in iterable] is roughly the same as:
>        result = []
>        for thing in iterable:
>            result.append(something)
>    This is a single loop where an expression is run for each iteration of
>    the loop, and the return value of that expression is appended to the
>    result.
>
> If you combine these two things, the "something" in #2 becuase *item, and
> since
> *item is roughly the same thing as (item[0], item[1], item[n]) what you end
> up with is something that *should* behave like:
>
> result = []
> for thing in iterable:
>     result.append((thing[0], thing[1], thing[n]))
>

That is what [list(something) for thing in iterable] does.

The iterable unpacking rule might have been better explained as follows:


On the left of assignment * is packing, e.g.
a, b, *cs = iterable

On the right of an assignment, * is an unpacking, e.g.

xs = a, b, *cs


In either case, the  elements of "cs" are treated the same as a and b.

Do you agree that [*x for x in [as, bs, cs]] === [*as, *bs, *cs] ?

Then the elements of *as are unpacked into the list, the same way that
those elements are currently unpacked in a regular function call

f(*as) === f(as[0], as[1], ...)

Similarly,

[*as, *bs, *cs] === [as[0], as[1], …, bs[0], bs[1], …, cs[0], cs[1], …]
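
(For comparison, the same flattening written with what already works today,
without the proposed comprehension; "as" is a keyword, hence the underscore:)

from itertools import chain

as_, bs, cs = [1, 2], [3], [4, 5]
list(chain(as_, bs, cs))                   # [1, 2, 3, 4, 5]
list(chain.from_iterable([as_, bs, cs]))   # same result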

The rule for function calls is analogous:


In a function definition, * is a packing, collecting extra positional
arguments into a tuple.  E.g.,

def f(*args):

In a function call, * is an unpacking, expanding an iterable to populate
positional arguments.  E.g.,

f(*args)
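
(Putting the four cases together in one sketch; the starred tuple on the
right-hand side is the PEP 448 spelling, the rest already works today:)

a, b, *cs = [1, 2, 3, 4]      # packing on the left: cs == [3, 4]
xs = (a, b, *cs)              # unpacking on the right: xs == (1, 2, 3, 4)

def f(*args):                 # packing in a definition: args == (1, 2, 3, 4)
    return args

f(*xs)                        # unpacking in a call: same as f(1, 2, 3, 4)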

—

PEP 448 proposes having arbitrary numbers of unpackings in arbitrary
positions.

I will be updating the PEP this week if I can find the time.


>
> Or to put it another way:
>
> >>> [*item for item in [[1, 2, 3], [4, 5, 6]]]
> [(1, 2, 3), (4, 5, 6)]
>
>
> Is a lot more consistent with what *thing and list comprehensions already
> mean
> in Python than for the answer to be [1, 2, 3, 4, 5, 6].
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Mon, Feb 9, 2015 at 7:53 PM, Donald Stufft  wrote:

>
> On Feb 9, 2015, at 7:29 PM, Neil Girdhar  wrote:
>
For some reason I can't seem to reply using Google Groups, which is
telling me "this is a read-only mirror" (anyone know why?).  Anyway, I'm going
> to answer as best I can the concerns.
>
> Antoine said:
>
> To be clear, the PEP will probably be useful for one single line of
>> Python code every 1. This is a very weak case for such an intrusive
>> syntax addition. I would support the PEP if it only added the simple
>> cases of tuple unpacking, left alone function call conventions, and
>> didn't introduce **-unpacking.
>
>
> To me this is more of a syntax simplification than a syntax addition.  For
> me the **-unpacking is the most useful part. Regarding utility, it seems
> that many of the people on r/python were pretty excited about this PEP:
> http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/
>
> —
>
> Victor noticed that there's a mistake with the code:
>
> >>> ranges = [range(i) for i in range(5)]
>> >>> [*item for item in ranges]
>> [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
>
>
> It should be a range(4) in the code.  The "*" applies to only item.  It is
> the same as writing:
>
> [*range(0), *range(1), *range(2), *range(3), *range(4)]
>
> which is the same as unpacking all of those ranges into a list.
>
> > function(**kw_arguments, **more_arguments)
>> If the key "key1" is in both dictionaries, more_arguments wins, right?
>
>
> There was some debate and it was decided that duplicate keyword arguments
> would remain an error (for now at least).  If you want to merge the
> dictionaries with overriding, then you can still do:
>
> function(**{**kw_arguments, **more_arguments})
>
> because **-unpacking in dicts overrides as you guessed.
>
> —
>
>
>
> On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft  wrote:
>
>>
>> On Feb 9, 2015, at 4:06 PM, Neil Girdhar  wrote:
>>
>> Hello all,
>>
>> The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
>> implemented now based on some early work by Thomas Wouters (in 2008) and
>> Florian Hahn (2013) and recently completed by Joshua Landau and me.
>>
>> The issue tracker http://bugs.python.org/issue2292  has  a working
>> patch.  Would someone be able to review it?
>>
>>
>> I just skimmed over the PEP and it seems like it’s trying to solve a few
>> different things:
>>
>> * Making it easy to combine multiple lists and additional positional args
>> into a function call
>> * Making it easy to combine multiple dicts and additional keyword args
>> into a functional call
>> * Making it easy to do a single level of nested iterable "flatten".
>>
>
> I would say it's:
> * making it easy to unpack iterables and mappings in function calls
> * making it easy to unpack iterables  into list and set displays and
> comprehensions, and
> * making it easy to unpack mappings into dict displays and comprehensions.
>
>
>
>>
>> Looking at the syntax in the PEP I had a hard time detangling what
>> exactly it was doing even with reading the PEP itself. I wonder if there
>> isn’t a way to combine simpler more generic things to get the same outcome.
>>
>> Looking at the "Making it easy to combine multiple lists and additional
>> positional args into a  function call" aspect of this, why is:
>>
>> print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
>>
>> That's already doable in Python right now and doesn't require anything
>> new to handle it.
>>
>
> Admittedly, this wasn't a great example.  But, if [1] and [2] had been
> iterables, you would have to cast each to list, e.g.,
>
> accumulator = []
> accumulator.extend(a)
> accumulator.append(b)
> accumulator.extend(c)
> print(*accumulator)
>
> replaces
>
> print(*a, b, *c)
>
> where a and c are iterable.  The latter version is also more efficient
> because it unpacks only a onto the stack, allocating no auxiliary list.
>
>
> Honestly that doesn’t seem like the way I’d write it at all, if they might
> not be lists I’d just cast them to lists:
>
> print(*list(a) + [b] + list(c))
>

Sure, that works too as long as you put in the missing parentheses.


>
> But if casting to list really is that big a deal, then perhaps a better
> solution is to simply make it so that something like ``a_list +
> an_iterable`` is valid and the iterable would just be consumed and +’d onto
> the 

Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Just an FYI:
http://www.reddit.com/r/Python/comments/2v8g26/python_350_alpha_1_has_been_released/

448 was mentioned here (by Python lay people — not developers).

On Mon, Feb 9, 2015 at 7:56 PM, Neil Girdhar  wrote:

> The admonition is against syntax that currently exists.
>
> On Mon, Feb 9, 2015 at 7:53 PM, Barry Warsaw  wrote:
>
>> On Feb 09, 2015, at 07:46 PM, Neil Girdhar wrote:
>>
>> >Also, regarding calling argument order, not any order is allowed.
>> Regular
>> >arguments must precede other kinds of arguments.  Keyword arguments must
>> >precede **-args.  *-args must precede **-args.   However, I agree with
>> >Antoine that PEP 8 should be updated to suggest that *-args should
>> precede
>> >any keyword arguments.  It is currently allowed to write f(x=2, *args),
>> >which is equivalent to f(*args, x=2).
>>
>> But if we have to add a PEP 8 admonition against some syntax that's being
>> newly added, why is this an improvement?
>>
>> I had some more snarky/funny comments to make, but I'll just say -1.  The
>> Rationale in the PEP doesn't sell me on it being an improvement to Python.
>>
>> Cheers,
>> -Barry


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
The admonition is against syntax that currently exists.

On Mon, Feb 9, 2015 at 7:53 PM, Barry Warsaw  wrote:

> On Feb 09, 2015, at 07:46 PM, Neil Girdhar wrote:
>
> >Also, regarding calling argument order, not any order is allowed.  Regular
> >arguments must precede other kinds of arguments.  Keyword arguments must
> >precede **-args.  *-args must precede **-args.   However, I agree with
> >Antoine that PEP 8 should be updated to suggest that *-args should precede
> >any keyword arguments.  It is currently allowed to write f(x=2, *args),
> >which is equivalent to f(*args, x=2).
>
> But if we have to add a PEP 8 admonition against some syntax that's being
> newly added, why is this an improvement?
>
> I had some more snarky/funny comments to make, but I'll just say -1.  The
> Rationale in the PEP doesn't sell me on it being an improvement to Python.
>
> Cheers,
> -Barry


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Yes, that's exactly right.  It does not affect the callee.

Regarding function call performance, nothing has changed for the originally
accepted argument lists: the opcodes generated are the same and they are
processed in the same way.

Also, regarding calling argument order, not any order is allowed.  Regular
arguments must precede other kinds of arguments.  Keyword arguments must
precede **-args.  *-args must precede **-args.   However, I agree with
Antoine that PEP 8 should be updated to suggest that *-args should precede
any keyword arguments.  It is currently allowed to write f(x=2, *args),
which is equivalent to f(*args, x=2).

Best,

Neil

On Mon, Feb 9, 2015 at 7:30 PM, Larry Hastings  wrote:

>
>
> What's an example of a way inspect.signature must change?  I thought PEP
> 448 added new unpacking shortcuts which (for example) change the *caller*
> side of a function call.  I didn't realize it impacted the *callee* side
> too.
>
>
> */arry*
>
> On 02/09/2015 03:14 PM, Antoine Pitrou wrote:
>
> On Tue, 10 Feb 2015 08:43:53 +1000
> Nick Coghlan   wrote:
>
>  For example, the potential for arcane call arguments suggests the need for
> a PEP 8 addition saying "first standalone args, then iterable expansions,
> then mapping expansions", even though syntactically any order would now be
> permitted at call time.
>
>  There are other concerns:
>
> - inspect.signature() must be updated to cover the new call
>   possibilities
>
> - function call performance must not be crippled by the new
>   possibilities
>
> Regards
>
> Antoine.
>
>


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
For some reason I can't seem to reply using Google Groups, which is
telling me "this is a read-only mirror" (anyone know why?).  Anyway, I'm going
to answer as best I can the concerns.

Antoine said:

To be clear, the PEP will probably be useful for one single line of
> Python code every 1. This is a very weak case for such an intrusive
> syntax addition. I would support the PEP if it only added the simple
> cases of tuple unpacking, left alone function call conventions, and
> didn't introduce **-unpacking.


To me this is more of a syntax simplification than a syntax addition.  For
me the **-unpacking is the most useful part. Regarding utility, it seems
that many of the people on r/Python were pretty excited about this PEP:
http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/

—

Victor noticed that there's a mistake with the code:

>>> ranges = [range(i) for i in range(5)]
> >>> [*item for item in ranges]
> [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]


It should be a range(4) in the code.  The "*" applies to only item.  It is
the same as writing:

[*range(0), *range(1), *range(2), *range(3), *range(4)]

which is the same as unpacking all of those ranges into a list.
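
For comparison, a short sketch of the same flattening written with tools
that already exist (the starred-comprehension form above was ultimately
left out of the accepted PEP):

import itertools

ranges = [range(i) for i in range(5)]
flat = list(itertools.chain.from_iterable(ranges))
assert flat == [x for item in ranges for x in item]
assert flat == [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]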

> function(**kw_arguments, **more_arguments)
> If the key "key1" is in both dictionaries, more_arguments wins, right?


There was some debate and it was decided that duplicate keyword arguments
would remain an error (for now at least).  If you want to merge the
dictionaries with overriding, then you can still do:

function(**{**kw_arguments, **more_arguments})

because **-unpacking in dicts overrides as you guessed.
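
A minimal sketch of both behaviours as they landed in Python 3.5: later
unpackings win inside a dict display, while duplicate keys across **
unpackings in a call raise TypeError:

def function(**kwargs):
    return kwargs

kw_arguments = {'key1': 1}
more_arguments = {'key1': 2}

# In a dict display, later unpackings override earlier ones:
assert {**kw_arguments, **more_arguments} == {'key1': 2}

# Unpacking both directly into the call rejects the duplicate key:
try:
    function(**kw_arguments, **more_arguments)
except TypeError:
    pass  # duplicate 'key1' is an error at call time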

—



On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft  wrote:

>
> On Feb 9, 2015, at 4:06 PM, Neil Girdhar  wrote:
>
> Hello all,
>
> The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
> implemented now based on some early work by Thomas Wouters (in 2008) and
> Florian Hahn (2013) and recently completed by Joshua Landau and me.
>
> The issue tracker http://bugs.python.org/issue2292  has  a working
> patch.  Would someone be able to review it?
>
>
> I just skimmed over the PEP and it seems like it’s trying to solve a few
> different things:
>
> * Making it easy to combine multiple lists and additional positional args
> into a function call
> * Making it easy to combine multiple dicts and additional keyword args
> into a function call
> * Making it easy to do a single level of nested iterable "flatten".
>

I would say it's:
* making it easy to unpack iterables and mappings in function calls
* making it easy to unpack iterables into list and set displays and
comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.
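
For concreteness, a minimal sketch of the call and display forms (valid in
Python 3.5+, where PEP 448 eventually landed; unpacking inside
comprehensions was left out of the final PEP):

a, b = [1, 2], (3, 4)
d1, d2 = {'x': 1}, {'y': 2}

print(*a, 0, *b)                       # call: same as print(1, 2, 0, 3, 4)
assert [*a, 0, *b] == [1, 2, 0, 3, 4]  # list display
assert {*a, *b} == {1, 2, 3, 4}        # set display
assert {**d1, **d2, 'z': 3} == {'x': 1, 'y': 2, 'z': 3}  # dict display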



>
> Looking at the syntax in the PEP I had a hard time detangling what exactly
> it was doing even with reading the PEP itself. I wonder if there isn’t a
> way to combine simpler more generic things to get the same outcome.
>
> Looking at the "Making it easy to combine multiple lists and additional
> positional args into a  function call" aspect of this, why is:
>
> print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
>
> That's already doable in Python right now and doesn't require anything new
> to handle it.
>

Admittedly, this wasn't a great example.  But if [1] and [2] had been
arbitrary iterables rather than lists, you would have had to build a list
first, e.g.,

accumulator = []
accumulator.extend(a)
accumulator.append(b)
accumulator.extend(c)
print(*accumulator)

replaces

print(*a, b, *c)

where a and c are iterables.  The latter version is also more efficient
because it unpacks a directly onto the stack, allocating no auxiliary list.


> Looking at the "making it easy to do a single level of nsted iterable
> 'flatten'"" aspect of this, the example of:
>
> >>> ranges = [range(i) for i in range(5)]
> >>> [*item for item in ranges]
> [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
>
> Conceptually a list comprehension like [thing for item in iterable] can be
> mapped to a for loop like this:
>
> result = []
> for item in iterable:
> result.append(thing)
>
> However [*item for item in ranges] is mapped more to something like this:
>
> result = []
> for item in iterable:
> result.extend(item)
>
> I feel like switching list comprehensions from append to extend just
> because of a * is really confusing and it acts differently than if you just
> did *item outside of a list comprehension. I feel like the
> itertools.chain() way of doing this is *much* clearer.
>
> Finally there's the "make it easy to combine multiple dicts into a
> function call" aspect of this. This I think is the biggest thi

Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
That wording is my fault.  I'll update the PEP to remove the word
"currently" after waiting a bit to see if there are any other problems.

Best,

Neil

On Mon, Feb 9, 2015 at 6:16 PM, Benjamin Peterson 
wrote:

>
>
> On Mon, Feb 9, 2015, at 17:12, Neil Girdhar wrote:
> > Right,
> >
> > Just to be clear though:  **-args must follow any *-args and positional
> > arguments.  So at worst, your example is:
> >
> > f(x, y, *k, *b, c, **w, **d)
> >
> > Best,
>
> Ah, I guess I was confused by this sentence in the PEP: " Function calls
> currently have the restriction that keyword arguments must follow
> positional arguments and ** unpackings must additionally follow *
> unpackings."
>
> That suggests that that rule is going to change.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Right,

Just to be clear though:  **-args must follow any *-args and positional
arguments.  So at worst, your example is:

f(x, y, *k, *b, c, **w, **d)

Best,

Neil

On Mon, Feb 9, 2015 at 5:10 PM, Benjamin Peterson 
wrote:

>
>
> On Mon, Feb 9, 2015, at 16:32, Guido van Rossum wrote:
> > FWIW, I've encouraged Neil and others to complete this code as a
> > prerequisite for a code review (but I can't review it myself). I am
> > mildly
> > in favor of the PEP -- if the code works and looks maintainable I would
> > accept it. (A few things got changed in the PEP as a result of the work.)
>
> In a way, it's a simplification, since functions are now simply called
> with a sequence of "generalized arguments"; there's no privileged kwarg
> or vararg. Of course, I wonder how much of f(**w, x, y, *k, *b, **d, c)
> we would see...
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Hello all,

The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
implemented now based on some early work by Thomas Wouters (in 2008) and
Florian Hahn (2013) and recently completed by Joshua Landau and me.

The issue tracker http://bugs.python.org/issue2292  has  a working patch.
Would someone be able to review it?

Thank you very much,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Code review for PEP 448

2015-01-30 Thread Neil Girdhar
Hello all,

PEP 448 (https://www.python.org/dev/peps/pep-0448/) is mostly implemented
now based on some early implementations by twouters (in 2008) and fhahn
(2013) and recently by Joshua and me.

The issue tracker http://bugs.python.org/issue2292  has:
* a working patch, and
* discussion and updates to the PEP (the most conservative interpretations
were taken).

I was wondering if anyone would mind reviewing the patch or even just
trying it out to let us know if there are any corner cases that we missed.
 (I need to get back to my actual work, so it would be good to have this
reviewed before I forget everything if possible.)

Thank you,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Disassembly of generated comprehensions

2015-01-25 Thread Neil Girdhar
Perfect, thanks!

On Sun, Jan 25, 2015 at 7:08 AM, Petr Viktorin  wrote:

> On Sun, Jan 25, 2015 at 12:55 PM, Neil Girdhar 
> wrote:
> > How do I disassemble a generated comprehension?
> >
> > For example, I am trying to debug the following:
> >
> >>>> dis.dis('{**{} for x in [{1:2}]}')
> >   1           0 LOAD_CONST               0 (<code object <dictcomp> at
> > 0x10160b7c0, file "<dis>", line 1>)
> >               3 LOAD_CONST               1 ('<dictcomp>')
> >               6 MAKE_FUNCTION            0
> >               9 LOAD_CONST               2 (2)
> >              12 LOAD_CONST               3 (1)
> >              15 BUILD_MAP                1
> >              18 BUILD_LIST               1
> >              21 GET_ITER
> >              22 CALL_FUNCTION            1 (1 positional, 0 keyword pair)
> >              25 RETURN_VALUE
> >
> > (This requires the new patch in issue 2292.)
> >
> > The code here looks fine to me, so I need to look into the code object
> > <dictcomp>.  How do I do that?
>
> Put it in a function, then get it from the function's code's constants.
> I don't have the patch applied but it should work like this even for
> the new syntax:
>
> >>> import dis
> >>> def f(): return {{} for x in [{1:2}]}
> ...
> >>> dis.dis(f)
>   1           0 LOAD_CONST               1 (<code object <setcomp> at
> 0x7ff2c0647420, file "<stdin>", line 1>)
>               3 LOAD_CONST               2 ('f.<locals>.<setcomp>')
>               6 MAKE_FUNCTION            0
>               9 BUILD_MAP                1
>              12 LOAD_CONST               3 (2)
>              15 LOAD_CONST               4 (1)
>              18 STORE_MAP
>              19 BUILD_LIST               1
>              22 GET_ITER
>              23 CALL_FUNCTION            1 (1 positional, 0 keyword pair)
>              26 RETURN_VALUE
> >>> f.__code__.co_consts[1]  # from "LOAD_CONST 1"
> <code object <setcomp> at 0x7ff2c0647420, file "<stdin>", line 1>
> >>> dis.dis(f.__code__.co_consts[1])
>   1           0 BUILD_SET                0
>               3 LOAD_FAST                0 (.0)
>         >>    6 FOR_ITER                12 (to 21)
>               9 STORE_FAST               1 (x)
>              12 BUILD_MAP                0
>              15 SET_ADD                  2
>              18 JUMP_ABSOLUTE            6
>         >>   21 RETURN_VALUE
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Disassembly of generated comprehensions

2015-01-25 Thread Neil Girdhar
How do I disassemble a generated comprehension?

For example, I am trying to debug the following:

>>> dis.dis('{**{} for x in [{1:2}]}')
  1           0 LOAD_CONST               0 (<code object <dictcomp> at
0x10160b7c0, file "<dis>", line 1>)
              3 LOAD_CONST               1 ('<dictcomp>')
              6 MAKE_FUNCTION            0
              9 LOAD_CONST               2 (2)
             12 LOAD_CONST               3 (1)
             15 BUILD_MAP                1
             18 BUILD_LIST               1
             21 GET_ITER
             22 CALL_FUNCTION            1 (1 positional, 0 keyword pair)
             25 RETURN_VALUE

(This requires the new patch in issue 2292.)

The code here looks fine to me, so I need to look into the code object
<dictcomp>.  How do I do that?

Thanks,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Why are generated files in the repository?

2015-01-25 Thread Neil Girdhar
That makes sense.  Thanks for explaining.

On Sun, Jan 25, 2015 at 4:55 AM, Thomas Wouters  wrote:

>
>
> On Sun, Jan 25, 2015 at 5:05 AM, Neil Girdhar 
> wrote:
>
>> But you can remove Python/graminit.c and "make clean && make" works,
>> right?
>>
>
> If you can write to the directory, yes. Except if you build in a way that
> you can't run pgen on the host system, like in a cross build (this may have
> been fixed with the last few rounds of cross build fixes) or when
> instrumenting Python. Checking these files in trades very minor "committer
> pain" (tossing merge conflicts and regenerating the files) for equally
> minor pain in the much more diverse group of people compiling CPython.
>
>
>>
>> On Sat, Jan 24, 2015 at 11:00 PM, Nick Coghlan 
>> wrote:
>>
>>>
>>> On 25 Jan 2015 01:09, "Benjamin Peterson"  wrote:
>>> >
>>> >
>>> >
>>> > On Sat, Jan 24, 2015, at 03:00, Nick Coghlan wrote:
>>> > > On 20 January 2015 at 10:53, Benjamin Peterson 
>>> > > wrote:
>>> > > >
>>> > > >
>>> > > > On Mon, Jan 19, 2015, at 19:40, Neil Girdhar wrote:
>>> > > >> I was also wondering why files like Python/graminit.c are in the
> >>> > > >> repository?  They generate spurious merge conflicts.
>>> > > >
>>> > > > Convenience mostly.
>>> > >
> >>> > > It also gets us around a couple of bootstrapping problems, where
>>> > > generating some of those files requires a working Python interpreter,
>>> > > which you may not have if you just cloned the source tree or unpacked
>>> > > the tarball.
>>> >
>>> > We could distribute the generated files in tarballs as part of the
>>> > release process.
>>>
>>> It's far more developer friendly to aim to have builds from a source
>>> check-out "just work" if we can. That's pretty much where we are today
>>> (getting external dependencies for the optional parts on *nix can still be
>>> a bit fiddly - it may be worth maintaining instructions for at least apt
>>> and yum in the developer guide that cover that)
>>>
>>> Cheers,
>>> Nick.
>>>
>>
>>
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/thomas%40python.org
>>
>>
>
>
> --
> Thomas Wouters 
>
> Hi! I'm an email virus! Think twice before sending your email to help me
> spread!
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Why are generated files in the repository?

2015-01-24 Thread Neil Girdhar
But you can remove Python/graminit.c and "make clean && make" works, right?

On Sat, Jan 24, 2015 at 11:00 PM, Nick Coghlan  wrote:

>
> On 25 Jan 2015 01:09, "Benjamin Peterson"  wrote:
> >
> >
> >
> > On Sat, Jan 24, 2015, at 03:00, Nick Coghlan wrote:
> > > On 20 January 2015 at 10:53, Benjamin Peterson 
> > > wrote:
> > > >
> > > >
> > > > On Mon, Jan 19, 2015, at 19:40, Neil Girdhar wrote:
> > > >> I was also wondering why files like Python/graminit.c are in the
> > > >> repository?  They generate spurious merge conflicts.
> > > >
> > > > Convenience mostly.
> > >
> > > It also gets us around a couple of bootstrapping problems, where
> > > generating some of those files requires a working Python interpreter,
> > > which you may not have if you just cloned the source tree or unpacked
> > > the tarball.
> >
> > We could distribute the generated files in tarballs as part of the
> > release process.
>
> It's far more developer friendly to aim to have builds from a source
> check-out "just work" if we can. That's pretty much where we are today
> (getting external dependencies for the optional parts on *nix can still be
> a bit fiddly - it may be worth maintaining instructions for at least apt
> and yum in the developer guide that cover that)
>
> Cheers,
> Nick.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Any grammar experts?

2015-01-24 Thread Neil Girdhar
Thanks, I had tried it and got the ambiguities, but I wasn't sure if those
would disappear with editing some peripheral file.

Yes, you're right about the set branch.

Thank you,

Neil

On Sat, Jan 24, 2015 at 10:29 PM, Guido van Rossum  wrote:

> Have you tried it yet?
>
> I think you have to inline dictpopulator, because dictpopulator can start
> with the same tokens as test, and the parser doesn't backtrack. So it
> wouldn't know how to decide whether to take the dictpopulator branch or the
> set branch. If you inline it, the parser will know, because it does
> something clever within the rule.
>
> As-is, I get a lot of errors from pgen about ambiguity. This one seems to
> work (but you still have to adjust the code generator of course):
>
> dictorsetmaker: ( ((test ':' test | '**' test) (comp_for | (',' (test ':'
> test | '**' test))* [','])) |
>(test (comp_for | (',' test)* [','])) )
>
> Also I presume you want a similar treatment for the set branch (replace
> both tests with (test | '*' test)).
>
> Good luck! There's plenty of code to crib from for the code generation.
>
> --Guido
>
> On Sat, Jan 24, 2015 at 6:10 PM, Neil Girdhar 
> wrote:
>
>> To finish PEP 448, I need to update the grammar for syntax such as
>>
>> {**x for x in it}
>>
>> and
>>
>> {1:2, 3:4, **a}
>>
>> It's been a long time since I've looked at grammars and I could really
>> use the advice of an expert.  I'm considering replacing:
>>
>> dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [',']))
>> |
>>   (test (comp_for | (',' test)* [','])) )
>>
>> with:
>>
>> dictpopulator: test ':' test | '**' test
>> dictorsetmaker: ( (dictpopulator (comp_for | (',' dictpopulator)* [',']))
>> |
>>(test (comp_for | (',' test)* [','])) )
>>
>> Am I headed in the right direction?  Of course I will need to edit
>> parsermodule.c and ast.c.
>>
>> Best,
>>
>> Neil
>>
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
>> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Any grammar experts?

2015-01-24 Thread Neil Girdhar
To finish PEP 448, I need to update the grammar for syntax such as

{**x for x in it}

and

{1:2, 3:4, **a}

It's been a long time since I've looked at grammars and I could really use
the advice of an expert.  I'm considering replacing:

dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
  (test (comp_for | (',' test)* [','])) )

with:

dictpopulator: test ':' test | '**' test
dictorsetmaker: ( (dictpopulator (comp_for | (',' dictpopulator)* [','])) |
   (test (comp_for | (',' test)* [','])) )

Am I headed in the right direction?  Of course I will need to edit
parsermodule.c and ast.c.

Best,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 448 (almost finished!) — Question regarding test_ast

2015-01-22 Thread Neil Girdhar
Thanks for taking a look.  I looked at inspect and I can't see anything
that needs to change since it's the caller rather than the receiver who has
more options after this PEP.  Did you see anything in particular?
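
As an illustrative sketch of that point (the callee g here is hypothetical):
nothing about the receiving signature changes, only what callers may write.

import inspect

def g(a, b=1, *args, **kwargs):
    pass

# The callee's signature is unaffected by how callers unpack arguments:
assert str(inspect.signature(g)) == '(a, b=1, *args, **kwargs)'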

Best,

Neil

On Thu, Jan 22, 2015 at 12:23 PM, Walter Dörwald 
wrote:

> On 20 Jan 2015, at 17:34, Neil Girdhar wrote:
>
> > My question first:
> > test_ast is mostly generated code, but I can't find where it is being
> > generated.  I am pretty sure I know how to fix most of the introduced
> > problems.  Who is generating test_ast??
> >
> > Update:
> >
> > So far, I've done the following:
> >
> > Updated the patch to 3.5
> > Fixed the grammar to accept final commas in argument lists always, and to
> > work with the already implemented code.
> > Fixed the ast to accept what it needs to accept and reject according to
> the
> > limitation laid down by Guido.
> > Fixed the parsing library.
> >
> > Fixed these tests:
> > test_ast.py
> > test_extcall.py
> > test_grammar.py
> > test_syntax.py
> > test_unpack_ex.py
> >
> > As far as I can tell, all I have left is to fix test_ast and possibly
> write
> > some more tests (there are already some new tests and some of the old
> > negative tests expecting SyntaxError are now positive tests).
>
> inspect.signature might need an update.
>
> Servus,
>Walter
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Why does STORE_MAP not take a parameter?

2015-01-21 Thread Neil Girdhar
Building argument lists and dicts in python entails the following opcode
pattern:

  1   0 BUILD_MAP    3
  3 LOAD_CONST   0 (2)
  6 LOAD_CONST   1 (1)
  9 STORE_MAP
 10 LOAD_CONST   2 (4)
 13 LOAD_CONST   3 (3)
 16 STORE_MAP
 17 LOAD_CONST   4 (6)
 20 LOAD_CONST   5 (5)
 23 STORE_MAP

Building lists on the other hand works like this:

  1   0 LOAD_CONST   0 (1)
  3 LOAD_CONST   1 (2)
  6 LOAD_CONST   2 (3)
  9 BUILD_LIST   3


Why not have BUILD_MAP work like BUILD_LIST?  I.e., STORE_MAP would take a
parameter n and add the last n pairs of stack elements into the stack
element just below them (the dictionary).

This means that the data would live on the stack until it is all suddenly
transferred to the dict, but I'm not sure about the effect on performance
(whether positive or negative).

It does save on instruction cache misses.

It should be an easy change to implement since all calls to BUILD_MAP could
be replaced with "BUILD_MAP 1" and then optimized later.

Best,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 448 (almost finished!) — Question regarding test_ast

2015-01-20 Thread Neil Girdhar
Okay, I think it's ready for a code review.  Would anyone be kind enough to
offer comments?

On Tue, Jan 20, 2015 at 12:10 PM, Neil Girdhar 
wrote:

> Thanks!
>
> On Tue, Jan 20, 2015 at 12:09 PM, Benjamin Peterson 
> wrote:
>
>> $ ./python Lib/test/test_ast.py -g
>> exec_results = [
>> ('Module', [('Expr', (1, 0), ('NameConstant', (1, 0), None))]),
>> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, [], [],
>> None, []), [('Pass', (1, 9))], [], None)]),
>> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('arg', (1, 6),
>> 'a', None)], None, [], [], None, []), [('Pass', (1, 10))], [], None)]),
>> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('arg', (1, 6),
>> 'a', None)], None, [], [], None, [('Num', (1, 8), 0)]), [('Pass', (1,
>> 12))], [], None)]),
>> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], ('arg', (1,
>> 7), 'args', None), [], [], None, []), [('Pass', (1, 14))], [], None)]),
>> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, [], [],
>> ('arg', (1, 8), 'kwargs', None), []), [('Pass', (1, 17))], [], None)]),
>> 
>>
>> On Tue, Jan 20, 2015, at 12:06, Neil Girdhar wrote:
>> > Hi Benjamin,
>> >
>> > I'm having trouble finding where it is generating the lines below
>> >
>> >  EVERYTHING BELOW IS GENERATED #
>> >
>> > Neither a call to test_ast nor a make (in case it's generated somewhere
>> > else) regenerate those lines if they have been removed.
>> >
>> > How were those lines generated?
>> >
>> > Best,
>> > Neil
>> >
>> >
>> > On Tue, Jan 20, 2015 at 11:36 AM, Benjamin Peterson <
>> benja...@python.org>
>> > wrote:
>> >
>> > >
>> > >
>> > > On Tue, Jan 20, 2015, at 11:34, Neil Girdhar wrote:
>> > > > My question first:
>> > > > test_ast is mostly generated code, but I can't find where it is
>> being
>> > > > generated.  I am pretty sure I know how to fix most of the
>> introduced
>> > > > problems.  Who is generating test_ast??
>> > >
>> > > It generates itself.
>> > >
>>
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 448 (almost finished!) — Question regarding test_ast

2015-01-20 Thread Neil Girdhar
Thanks!

On Tue, Jan 20, 2015 at 12:09 PM, Benjamin Peterson 
wrote:

> $ ./python Lib/test/test_ast.py -g
> exec_results = [
> ('Module', [('Expr', (1, 0), ('NameConstant', (1, 0), None))]),
> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, [], [],
> None, []), [('Pass', (1, 9))], [], None)]),
> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('arg', (1, 6),
> 'a', None)], None, [], [], None, []), [('Pass', (1, 10))], [], None)]),
> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('arg', (1, 6),
> 'a', None)], None, [], [], None, [('Num', (1, 8), 0)]), [('Pass', (1,
> 12))], [], None)]),
> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], ('arg', (1,
> 7), 'args', None), [], [], None, []), [('Pass', (1, 14))], [], None)]),
> ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, [], [],
> ('arg', (1, 8), 'kwargs', None), []), [('Pass', (1, 17))], [], None)]),
> 
>
> On Tue, Jan 20, 2015, at 12:06, Neil Girdhar wrote:
> > Hi Benjamin,
> >
> > I'm having trouble finding where it is generating the lines below
> >
> >  EVERYTHING BELOW IS GENERATED #
> >
> > Neither a call to test_ast nor a make (in case it's generated somewhere
> > else) regenerate those lines if they have been removed.
> >
> > How were those lines generated?
> >
> > Best,
> > Neil
> >
> >
> > On Tue, Jan 20, 2015 at 11:36 AM, Benjamin Peterson  >
> > wrote:
> >
> > >
> > >
> > > On Tue, Jan 20, 2015, at 11:34, Neil Girdhar wrote:
> > > > My question first:
> > > > test_ast is mostly generated code, but I can't find where it is being
> > > > generated.  I am pretty sure I know how to fix most of the introduced
> > > > problems.  Who is generating test_ast??
> > >
> > > It generates itself.
> > >
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 448 (almost finished!) — Question regarding test_ast

2015-01-20 Thread Neil Girdhar
Hi Benjamin,

I'm having trouble finding where it is generating the lines below

 EVERYTHING BELOW IS GENERATED #

Neither a call to test_ast nor a make (in case it's generated somewhere
else) regenerate those lines if they have been removed.

How were those lines generated?

Best,
Neil


On Tue, Jan 20, 2015 at 11:36 AM, Benjamin Peterson 
wrote:

>
>
> On Tue, Jan 20, 2015, at 11:34, Neil Girdhar wrote:
> > My question first:
> > test_ast is mostly generated code, but I can't find where it is being
> > generated.  I am pretty sure I know how to fix most of the introduced
> > problems.  Who is generating test_ast??
>
> It generates itself.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP 448 (almost finished!) — Question regarding test_ast

2015-01-20 Thread Neil Girdhar
My question first:
test_ast is mostly generated code, but I can't find where it is being
generated.  I am pretty sure I know how to fix most of the introduced
problems.  Who is generating test_ast??

Update:

So far, I've done the following:

Updated the patch to 3.5
Fixed the grammar to accept final commas in argument lists always, and to
work with the already implemented code.
Fixed the ast to accept what it needs to accept and reject according to the
limitation laid down by Guido.
Fixed the parsing library.

Fixed these tests:
test_ast.py
test_extcall.py
test_grammar.py
test_syntax.py
test_unpack_ex.py

As far as I can tell, all I have left is to fix test_ast and possibly write
some more tests (there are already some new tests and some of the old
negative tests expecting SyntaxError are now positive tests).

Best,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Neil Girdhar
(in fact, it was Python/getargs.c)

On Tue, Jan 20, 2015 at 10:01 AM, Neil Girdhar 
wrote:

> Okay, found it thanks.
>
> On Tue, Jan 20, 2015 at 9:59 AM, Neil Girdhar 
> wrote:
>
>> Good eye!  I did the following grep:
>>
>> ~/cpython: grep -R takes.exac *
>> Doc/c-api/bytes.rst:   Identical to :c:func:`PyBytes_FromFormat` except
>> that it takes exactly two
>> Doc/c-api/unicode.rst:   Identical to :c:func:`PyUnicode_FromFormat`
>> except that it takes exactly two
>> Doc/library/unittest.mock.rst:   TypeError: <lambda>() takes exactly 3
>> arguments (1 given)
>> Doc/whatsnew/2.0.rst:The ``\x`` escape in string literals now takes
>> exactly 2 hex digits.  Previously
>> Lib/test/test_compileall.py:def test_d_takes_exactly_one_dir(self):
>> Lib/test/test_inspect.py:# f1 takes exactly 2 arguments
>> Lib/test/test_inspect.py:# f1/f2 takes exactly/at most 2
>> arguments
>> Lib/tkinter/__init__.py:# TypeError: setvar() takes exactly 3
>> arguments (2 given)
>> Modules/_ctypes/_ctypes.c: "call takes exactly %d
>> arguments xxx (%zd given)",
>> Objects/methodobject.c:"%.200s() takes exactly one
>> argument (%zd given)",
>> Binary file Objects/methodobject.o matches
>> Binary file Programs/_freeze_importlib matches
>> Binary file Programs/_testembed matches
>> Python/ceval.c: "%.200s() takes exactly one argument
>> (%d given)",
>> Python/ceval.c.orig: "%.200s() takes exactly one
>> argument (%d given)",
>> Binary file Python/ceval.o matches
>> Binary file libpython3.5dm.a matches
>> Binary file python.exe matches
>>
>> I'll keep searching…
>>
>> On Tue, Jan 20, 2015 at 9:52 AM, Stefan Ring  wrote:
>>
>>> On Tue, Jan 20, 2015 at 3:35 PM, Neil Girdhar 
>>> wrote:
>>> > I get error:
>>> >
>>> > TypeError: init_builtin() takes exactly 1 argument (0 given)
>>> >
>>> > The only source file that can generate that error is
>>> > Modules/_ctypes/_ctypes.c, but when I make changes to that file such
>>> as:
>>> >
>>> > PyErr_Format(PyExc_TypeError,
>>> >  "call takes exactly %d arguments XYZABC (%zd
>>> given)",
>>> >  inargs_index, actual_args);
>>> >
>>> > I do not see any difference after make clean and a full rebuild.  How
>>> is
>>> > this possible?  I need to debug the arguments passed.
>>>
>>> The message says "argument", the source code says "arguments" (I
>>> suppose that you only added the XYZABC), so this cannot be the source of
>>> this exception.
>>>
>>> grep for "given" in ceval.c
>>>
>>
>>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Neil Girdhar
Okay, found it thanks.

On Tue, Jan 20, 2015 at 9:59 AM, Neil Girdhar  wrote:

> Good eye!  I did the following grep:
>
> ~/cpython: grep -R takes.exac *
> Doc/c-api/bytes.rst:   Identical to :c:func:`PyBytes_FromFormat` except
> that it takes exactly two
> Doc/c-api/unicode.rst:   Identical to :c:func:`PyUnicode_FromFormat`
> except that it takes exactly two
> Doc/library/unittest.mock.rst:   TypeError: <lambda>() takes exactly 3
> arguments (1 given)
> Doc/whatsnew/2.0.rst:The ``\x`` escape in string literals now takes
> exactly 2 hex digits.  Previously
> Lib/test/test_compileall.py:def test_d_takes_exactly_one_dir(self):
> Lib/test/test_inspect.py:# f1 takes exactly 2 arguments
> Lib/test/test_inspect.py:# f1/f2 takes exactly/at most 2
> arguments
> Lib/tkinter/__init__.py:# TypeError: setvar() takes exactly 3
> arguments (2 given)
> Modules/_ctypes/_ctypes.c: "call takes exactly %d
> arguments xxx (%zd given)",
> Objects/methodobject.c:"%.200s() takes exactly one
> argument (%zd given)",
> Binary file Objects/methodobject.o matches
> Binary file Programs/_freeze_importlib matches
> Binary file Programs/_testembed matches
> Python/ceval.c: "%.200s() takes exactly one argument
> (%d given)",
> Python/ceval.c.orig: "%.200s() takes exactly one
> argument (%d given)",
> Binary file Python/ceval.o matches
> Binary file libpython3.5dm.a matches
> Binary file python.exe matches
>
> I'll keep searching…
>
> On Tue, Jan 20, 2015 at 9:52 AM, Stefan Ring  wrote:
>
>> On Tue, Jan 20, 2015 at 3:35 PM, Neil Girdhar 
>> wrote:
>> > I get error:
>> >
>> > TypeError: init_builtin() takes exactly 1 argument (0 given)
>> >
>> > The only source file that can generate that error is
>> > Modules/_ctypes/_ctypes.c, but when I make changes to that file such as:
>> >
>> > PyErr_Format(PyExc_TypeError,
>> >  "call takes exactly %d arguments XYZABC (%zd
>> given)",
>> >  inargs_index, actual_args);
>> >
>> > I do not see any difference after make clean and a full rebuild.  How is
>> > this possible?  I need to debug the arguments passed.
>>
>> The message says "argument", the source code says "arguments" (I
>> suppose that you only added the XYZABC), so this cannot be the source of
>> this exception.
>>
>> grep for "given" in ceval.c
>>
>
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Neil Girdhar
Good eye!  I did the following grep:

~/cpython: grep -R takes.exac *
Doc/c-api/bytes.rst:   Identical to :c:func:`PyBytes_FromFormat` except
that it takes exactly two
Doc/c-api/unicode.rst:   Identical to :c:func:`PyUnicode_FromFormat` except
that it takes exactly two
Doc/library/unittest.mock.rst:   TypeError: <lambda>() takes exactly 3
arguments (1 given)
Doc/whatsnew/2.0.rst:The ``\x`` escape in string literals now takes exactly
2 hex digits.  Previously
Lib/test/test_compileall.py:def test_d_takes_exactly_one_dir(self):
Lib/test/test_inspect.py:# f1 takes exactly 2 arguments
Lib/test/test_inspect.py:# f1/f2 takes exactly/at most 2
arguments
Lib/tkinter/__init__.py:# TypeError: setvar() takes exactly 3
arguments (2 given)
Modules/_ctypes/_ctypes.c: "call takes exactly %d
arguments xxx (%zd given)",
Objects/methodobject.c:"%.200s() takes exactly one argument
(%zd given)",
Binary file Objects/methodobject.o matches
Binary file Programs/_freeze_importlib matches
Binary file Programs/_testembed matches
Python/ceval.c: "%.200s() takes exactly one argument
(%d given)",
Python/ceval.c.orig: "%.200s() takes exactly one
argument (%d given)",
Binary file Python/ceval.o matches
Binary file libpython3.5dm.a matches
Binary file python.exe matches

I'll keep searching…

On Tue, Jan 20, 2015 at 9:52 AM, Stefan Ring  wrote:

> On Tue, Jan 20, 2015 at 3:35 PM, Neil Girdhar 
> wrote:
> > I get error:
> >
> > TypeError: init_builtin() takes exactly 1 argument (0 given)
> >
> > The only source file that can generate that error is
> > Modules/_ctypes/_ctypes.c, but when I make changes to that file such as:
> >
> > PyErr_Format(PyExc_TypeError,
> >  "call takes exactly %d arguments XYZABC (%zd
> given)",
> >  inargs_index, actual_args);
> >
> > I do not see any difference after make clean and a full rebuild.  How is
> > this possible?  I need to debug the arguments passed.
>
> The message says "argument", the source code says "arguments" (I
> suppose that you only added the XYZABC), so this cannot be the source of
> this exception.
>
> grep for "given" in ceval.c
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Neil Girdhar
Sorry, I should have provided more context.

Best,

Neil

On Tue, Jan 20, 2015 at 9:55 AM, Brett Cannon  wrote:

>
>
> On Tue Jan 20 2015 at 9:53:52 AM Benjamin Peterson 
> wrote:
>
>>
>>
>> On Tue, Jan 20, 2015, at 09:51, Brett Cannon wrote:
>> > This is a mailing to discuss the development *of* Python, not its *use*.
>> > You should be able to get help from python-list or #python on IRC.
>>
>> To be fair, he's asking to debug his patch in
>> https://bugs.python.org/issue2292
>
>
> Ah, sorry about that. The issue wasn't referenced in the email.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Neil Girdhar
Hi Skip,

I'm trying to finish the implementation of PEP 448.  I have updated the
patch to 3.5, fixed the grammar, and the ast.  There is a bug with the
argument counting or unpacking, which I can't seem to locate.

Best,

Neil

On Tue, Jan 20, 2015 at 9:53 AM, Skip Montanaro 
wrote:

> On Tue, Jan 20, 2015 at 8:35 AM, Neil Girdhar 
> wrote:
> >
> > I get error:
> >
> > TypeError: init_builtin() takes exactly 1 argument (0 given)
> >
> > The only source file that can generate that error is
> Modules/_ctypes/_ctypes.c, but when I make changes to that file such as:
> >
> > PyErr_Format(PyExc_TypeError,
> >  "call takes exactly %d arguments XYZABC (%zd
> given)",
> >  inargs_index, actual_args);
> >
> > I do not see any difference after make clean and a full rebuild.  How is
> this possible?  I need to debug the arguments passed.
>
> Neil,
>
> I'm a little bit confused. Why are you modifying the Python
> interpreter to see if your code (presumably not part of the Python
> interpreter) is being executed? I will take a stab at your question
> though, and suggest you aren't actually running the interpreter you
> just built.
>
> Can you provide some more context for your question?
>
> One last thing. Are you working on Python itself
> (python-dev@python.org is the right place to ask questions) or using
> Python to develop an application (python-dev is not the right place,
> try python-l...@python.org)?
>
> Skip
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] How do I ensure that my code is being executed?

2015-01-20 Thread Neil Girdhar
I get error:

TypeError: init_builtin() takes exactly 1 argument (0 given)

The only source file that can generate that error
is Modules/_ctypes/_ctypes.c, but when I make changes to that file such as:

PyErr_Format(PyExc_TypeError,
 "call takes exactly %d arguments XYZABC (%zd given)",
 inargs_index, actual_args);

I do not see any difference after make clean and a full rebuild.  How is
this possible?  I need to debug the arguments passed.

Best,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Why are generated files in the repository?

2015-01-19 Thread Neil Girdhar
Hi everyone,

I tried to work on PEP 448 and updated the latest patch to Python 3.5.  I
uploaded the new diff here: http://bugs.python.org/issue2292.  I don't know
how to debug further.  Is there a way to view the compiled output despite
Python not starting up?

I was also wondering why files like Python/graminit.c are in the
repository?  They generate spurious merge conflicts.

Best,

Neil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-ideas] Expose `itertools.count.start` and implement `itertools.count.__eq__` based on it, like `range`.

2014-06-07 Thread Neil Girdhar
On Sat, Jun 7, 2014 at 5:50 AM, Nick Coghlan  wrote:

> On 7 June 2014 19:36, Ram Rachum  wrote:
> > My need is to have an infinite immutable sequence. I did this for myself
> by
> > creating a simple `count`-like stateless class, but it would be nice if
> that
> > behavior was part of `range`.
>
> Handling esoteric use cases like it sounds yours was is *why* user
> defined classes exist. It does not follow that "I had to write a
> custom class to solve my problem" should lead to a standard library or
> builtin changing unless you can make a compelling case for:
>
> * the change being a solution to a common problem that a lot of other
> people also have. "I think it might be nice" and "it would have been
> useful to me to help solve this weird problem I had that one time"
> isn't enough.
> * the change fitting in *conceptually* with the existing language and
> tools. In this case, "infinite sequence" is a fundamentally incoherent
> concept in Python - len() certainly won't work, and negative indexing
> behaviour is hence not defined. By contrast, since iterables and
> iterators aren't required to support len() the way sequences are,
> infinite iterable and infinite iterator are both perfectly well
> defined.
>

With all due respect, “"infinite sequence" is a fundamentally incoherent
concept in Python” is a bit hyperbolic.  It would be perfectly reasonable
to have them, but they're not defined (yet).
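
For what it's worth, a minimal sketch of the kind of count-like stateless
class described above (the name InfiniteCount is hypothetical, not a
stdlib proposal):

class InfiniteCount:
    """An immutable, infinite arithmetic progression."""
    def __init__(self, start=0, step=1):
        self.start, self.step = start, step
    def __getitem__(self, index):
        if index < 0:
            raise IndexError("negative indices are undefined here")
        return self.start + index * self.step
    def __contains__(self, value):
        quotient, remainder = divmod(value - self.start, self.step)
        return quotient >= 0 and remainder == 0

# Iteration works through __getitem__; len() deliberately does not.
c = InfiniteCount(10, 2)
assert c[3] == 16 and 14 in c and 15 not in c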

>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com