[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-05-01 Thread Carl Meyer via Python-Dev
Hi Paul,

On Sun, May 1, 2022 at 3:47 PM Paul Bryan  wrote:
>
> Can someone state what's currently unpalatable about 649? It seemed to 
> address the forward-referencing issues, certainly all of the cases I was 
> expecting to encounter.

Broadly speaking I think there are 3-4 issues to resolve as part of
moving forward with PEP 649:

1) Class decorators (the most relevant being @dataclass) that need to
inspect something about annotations, and because they run right after
class definition, the laziness of PEP 649 is not sufficient to allow
forward references to work. Roughly in a similar boat are `if
TYPE_CHECKING` use cases where annotations reference names that aren't
ever imported.

2) "Documentation" use cases (e.g. built-in "help()") that really
prefer access to the original text of the annotation, not the repr()
of the fully-evaluated object -- this is especially relevant if the
annotation text is a nice short meaningful type alias name, and the
actual value is some massive unreadable Union type.

3) Ensuring that we don't regress import performance too much.

4) A solid migration path from the status quo (where many people have
already started adopting PEP 563) to the best future end state.
Particularly for libraries that want to support the full range of
supported Python versions.

Issues (1) and (2) can be resolved under PEP 649 by providing a way to
run the __co_annotations__ function without erroring on
not-yet-defined names; I think we have agreement on a plan there.
Performance of the latest PEP 649 reference implementation does not
look too bad relative to PEP 563 in my experiments, so I think this is
not an issue -- there are ideas for how we could reduce the overhead
even further. The migration path is maybe the most difficult issue --
specifically how to weigh "medium-term migration pain" (which under
some proposals might last for years) vs "best long-term end state."
Still working on reaching consensus there, but we have options to
choose from. Expect a more thorough proposal (probably in the form of
an update to PEP 649?) sometime after PyCon.
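For concreteness, the decorator-timing problem in issue (1) can be
seen today with an explicit string annotation (the class names here
are invented for illustration):

```python
import typing
from dataclasses import dataclass

@dataclass
class Node:
    parent: "Tree"  # forward reference; Tree is defined further down

# Anything that eagerly resolves annotations right after class creation
# (as a class decorator would) fails, because Tree doesn't exist yet:
try:
    typing.get_type_hints(Node, globalns=globals())
    raise AssertionError("expected NameError")
except NameError:
    pass

class Tree:
    pass

# Resolving lazily, after the whole module has executed, succeeds:
assert typing.get_type_hints(Node, globalns=globals())["parent"] is Tree
```

The lax-resolution plan would let a decorator do the first call without
blowing up on the not-yet-defined name.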

Carl
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/M3FB6QHB2IOMEXDGHFRHYQEDR3KGZPHG/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-26 Thread Carl Meyer via Python-Dev
On Tue, Apr 26, 2022 at 7:24 PM Greg Ewing 
wrote:

> On 27/04/22 2:01 am, Chris Angelico wrote:
> > That would be the case if monkeypatching were illegal. Since it's not,
> > wherein lies the difference?
>
> The proposed feature is analogous to forward declaring a
> struct in C. Would you call what C does monkeypatching?
>

It is not analogous; it is a false analogy that obscures the issues with
this proposal in Python.

A C forward declaration (not to mention the full struct declaration also!)
is purely for the compiler; at runtime one can have a pointer to some
memory that the compiler expects to be shaped like that struct, but one can
never get hold of any runtime value that is “the struct definition itself,”
let alone a runtime value that is the nascent forward-declared
yet-to-be-completed struct. So clearly there can be no patching of
something that never exists at runtime at all.

Python is quite different from C in this respect.  Classes are runtime
objects, and so is the “forward declared class” object. The proposal is for
a class object to initially at runtime be the latter, and then later (at
some point that is not well defined if the implementation is in a separate
module, because global module import ordering is an unstable emergent
property of all the imports in the entire codebase) may suddenly,
everywhere and all at once, turn into the former. Any given module that
imports the forward declared name can have no guarantee when (if ever) that
object will magically transform into something that is safe to use.

Whether we call it monkeypatching or not is irrelevant. Having global
singleton objects change from one thing to a very different thing, at an
unpredictable point in time, as a side effect of someone somewhere
importing some other module, causes very specific problems in being able to
locally reason about code. I think it is more useful to discuss the
specific behavior and its consequences than what it is called.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CQ7TAV6TWGEG2HLVY7T46U6JCPESRACR/


[Python-Dev] Re: Proto-PEP part 4: The wonderful third option

2022-04-26 Thread Carl Meyer via Python-Dev
On Tue, Apr 26, 2022 at 1:25 PM Guido van Rossum  wrote:
> I also would like to hear more about the problem this is trying to solve, 
> with real-world examples. (E.g. from pydantic?)

Yes please. I think these threads have jumped far too quickly into
esoteric details of implementation and syntax, without critical
analysis of whether the semantics of the proposal are in fact a good
solution to a real-world problem that someone has.

I've already outlined in a more detailed reply on the first thread why
I don't think forward declarations provide a practically useful
solution to forward reference problems for users of static typing
(every module that imports something that might be a forward reference
would have to import its implementation also, turning every one-line
import of that class into two or three lines), and why they cause new
problems for every user of Python due to their reliance on import side
effects causing global changes at a distance. See
https://mail.python.org/archives/list/python-dev@python.org/message/NMCS77YFM2V54PUB66AXEFTE4NXFHWPI/
for details.

Under PEP 649, forward references are a small problem confined to the
edge case of early resolution of type annotations. There are simple
and practical appropriately-scoped solutions easily available for that
small problem: providing a way to resolve type annotations at runtime
without raising NameError on not-yet-defined names. Such a facility
(whether default or opt-in) is practically useful for many users of
annotations (including dataclasses and documentation tools), which
have a need to introspect some aspects of annotations without
necessarily needing every part of the annotation to resolve. The
existence of such a facility is a reasonable special case for
annotations specifically, because annotations are fundamentally
special: they provide a description of code, rather than being only a
part of the code. (This special-ness is precisely also why they cause
more forward references in the first place.)
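For illustration only, here is a rough sketch (helper and class names
invented) of what such a "lax resolution" facility could look like
today, using a namespace that resolves unknown names to Any instead of
raising NameError:

```python
import builtins
import typing

class LaxNamespace(dict):
    # Invented helper: unknown names fall back to builtins, then to Any,
    # instead of raising NameError during annotation evaluation.
    def __missing__(self, key):
        return getattr(builtins, key, typing.Any)

def lax_hints(func):
    # Evaluate the function's annotations against the lax namespace.
    return typing.get_type_hints(
        func, globalns=LaxNamespace(func.__globals__))

def greet(user: "User") -> str:  # "User" is never defined anywhere
    return "hi"

hints = lax_hints(greet)
assert hints["user"] is typing.Any  # undefined name resolved laxly
assert hints["return"] is str       # defined names resolve normally
```

This relies on eval accepting a dict subclass as its namespace; a real
facility would presumably be built into the annotation machinery
rather than bolted on like this.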

IMO, this forward declaration proposal takes a small problem in a
small corner of the language and turns it into a big problem for the
whole language, without even providing as nice and usable an option
for common use cases as "PEP 649 with option for lax resolution" does.
This seems like a case study in theoretical purity ("resolution of
names in annotations must not be special") running roughshod over
practicality.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RVQSLD435BFKEVIMY2AIA5MCJB37BPHK/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-24 Thread Carl Meyer via Python-Dev
On Sun, Apr 24, 2022 at 10:20 AM Joao S. O. Bueno  wrote:
>
> I am not worried about the bikeshed part of which syntax to use -
> and more worried with the possible breaking of a lot of stuff, unless
> we work with creation of a non-identical "forward object" that is
> rebound, as in plain name binding, when the second part
> is declared. I've stated that amidst my ramblings,
> but Nick Coghlan managed to keep it succinct at
> https://mail.python.org/archives/list/python-dev@python.org/message/DMITVTUIQKJW6RYVOPQXHD54VSYE7QHA/

I don't think name rebinding works. That means that if we have
`forward class Bar` in module `foo` and `continue class Bar: ...` in
module `foo.impl`, if module `baz` does `from foo import Bar`, it will
forever have either the forward reference object or the real class,
and which one it has is entirely unpredictable (depends on import
ordering accidents of the rest of the codebase.) If `baz` happens to
be imported before `foo.impl`, the name `Bar` in the `baz` namespace
will never be resolved to the real class, and isn't resolvable to the
real class without some outside intervention.
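This hazard can be simulated with plain name binding today (the module
and class contents here are invented stand-ins):

```python
import types

# Simulated module "foo": Bar starts as a placeholder standing in for
# the hypothetical forward-declared object.
foo = types.ModuleType("foo")
exec("class Bar: pass", foo.__dict__)

# Module "baz" does `from foo import Bar`: that binds the object
# itself, not the name "foo.Bar".
Bar_in_baz = foo.Bar

# Later, the "foo.impl" module rebinds foo.Bar to the completed class.
exec("class Bar:\n    def ping(self):\n        return 'pong'", foo.__dict__)

# baz still holds the stale placeholder; rebinding never reached it.
assert Bar_in_baz is not foo.Bar
assert not hasattr(Bar_in_baz(), "ping")
assert foo.Bar().ping() == "pong"
```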

> """
> Something worth considering: whether forward references need to be
> *transparent* at runtime. If they can just be regular ForwardRef objects
> then much of the runtime complexity will go away (at the cost of some
> potentially confusing identity check failures between forward references
> and the actual class objects).
>
> ForwardRef's constructor could also potentially be enhanced to accept a
> "resolve" callable, and a "resolve()" method added to its public API,
> although the extra complexity that would bring may not be worth it.
> """

I'm not sure how this `resolve()` method is even possible under the
proposed syntax. If `forward class Bar` and `continue class Bar` are
in different modules, then how can `forward class Bar` (which must
create the "forward reference" object) possibly know which module
`continue class Bar: ...` exists in? How can it know how to resolve
itself?

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/WLZRZIMPRST52UMINB5VB57TOIQVTYQH/


[Python-Dev] Re: Proto-PEP part 1: Forward declaration of classes

2022-04-24 Thread Carl Meyer via Python-Dev
Hi Larry,

On Sat, Apr 23, 2022 at 1:53 AM Larry Hastings  wrote:
> But rather than speculate further, perhaps someone who works on one of the 
> static type analysis checkers will join the discussion and render an informed 
> opinion about how easy or hard it would be to support "forward class" and 
> "continue class".

I work on a Python static type checker.

I think a major issue with this proposal is that (in the
separate-modules case) it requires monkey-patching as an import side
effect, which is quite hard for both humans and static analysis tools
to reason effectively about.

Imagine we have a module `foo` that contains `forward class Bar`, a
separate module `foo.impl` that contains `continue class Bar: ...`,
and then a module `baz` that contains `import foo`. What type of
object is `foo.Bar` during the import of `baz`? Will it work for the
module body of `baz` to create a singleton instance `my_bar =
foo.Bar()`?

The answer is that we have no idea. `foo.Bar` might be a
non-instantiable "forward class declaration" (or proxy object, in your
second variation), or it might be a fully-constituted class. Which one
it is depends on accidents of import order anywhere else in the
codebase. If any other module happens to have imported `foo.impl`
before `baz` is imported, then `foo.Bar` will be the full class. If
nothing else has imported `foo.impl`, then it will be a
non-instantiable declaration/proxy. This question of import order
potentially involves any other module in the codebase, and the only
way to reliably answer it is to run the entire program; neither a
static type checker nor a reader of the code can reliably answer it in
the general case. It will be very easy to write a module `baz` that
does `import foo; my_bar = foo.Bar()` and have it semi-accidentally
work initially, then later break mysteriously due to a change in
imports in a seemingly unrelated part of the codebase, which causes
`baz` to now be imported before `foo.impl` is imported, instead of
after.

There is another big problem for static type checkers with this
hypothetical module `baz` that only imports `foo`. The type checker
cannot know the shape of the full class `Bar` unless it sees the right
`continue Bar: ...` statement. When analyzing `baz`, it can't just go
wandering the filesystem aimlessly in hopes of encountering some
module with `continue Bar: ...` in it, and hope that's the right one.
(Even worse considering it might be `continue snodgrass: ...` or
anything else instead.) So this means a type checker must require that
any module that imports `Bar` MUST itself import `foo.impl` so the
type checker has a chance of understanding what `Bar` actually is.

This highlights an important difference between this proposal and
languages with real forward declarations. In, say, C++, a forward
declaration of a function or class contains the full interface of the
function or class, i.e. everything a type checker (or human reader)
would need to know in order to know how it can use the function or
class. In this proposal, that is not true; lots of critical
information about the _interface_ of the class (what methods and
attributes does it have, what are the signatures of its methods?) are
not knowable without also seeing the "implementation." This proposal
does not actually forward declare a class interface; all it declares
is the existence of the class (and its inheritance hierarchy.) That's
not sufficient information for a type checker or a human reader to
make use of the class.

Taken together, this means that every single `import foo` in the
codebase would have to be accompanied by an `import foo.impl` right
next to it. In some cases (if `foo.Bar` is not used in module-level
code and we are working around a cycle) it might be safe for the
`import foo.impl` to be within an `if TYPE_CHECKING:` block; otherwise
it would need to be a real runtime import. But it must always be
there. So every single `import foo` in the codebase must now become
two or three lines rather than one.

There are of course other well-known problems with import-time side
effects. All the imports of `foo.impl` in the codebase would exist
only for their side effect of "completing" Bar, not because anyone
actually uses a name defined in `foo.impl`. Linters would flag these
imports as unused, requiring extra cruft to silence the linter. Even
worse, these imports would tend to appear unused to human readers, who
might remove them and be confused why that breaks the program.

All of these import side-effect problems can be resolved by
dis-allowing module separation and requiring `forward class` and
`continue class` to appear in the same module. But then the proposal
no longer helps with resolving inter-module cycles, only intra-module
ones.

Because of these issues (and others that have been mentioned), I don't
think this proposal is a good solution to forward references. I think
PEP 649, with some tricks that I've mentioned elsewhere to allow
introspecting annotations u

[Python-Dev] Re: Declarative imports

2022-04-09 Thread Carl Meyer via Python-Dev
Hi Malthe,

On Fri, Apr 8, 2022 at 12:04 PM Malthe  wrote:
> Actually, to me the interesting idea is not so much lazy imports – I
> think they should not be lazy, at least that was my initial thought. I
> think they should be immediately resolved before anything else in that
> module:

I'm +0.25 on your idea as simply a streamlined syntax for inline
imports (given actually finding an appropriate syntax, which I haven't
thought much about; @ probably doesn't work due to the conflict with
decorator syntax, but there might be other options.). If it existed I
would probably use it occasionally, but I don't feel a strong need for
it.

But I think your proposal is much stronger if you eliminate the
hoisting from it; with the hoisting I'd be -1. Out-of-source-order
execution like this is just quite surprising in the context of Python.

> 1. This would settle any discussion about performance impact (there
> wouldn't be any).

If the inline import is actually a performance problem because a
certain code path is very hot, the solution is simple: don't use the
inline import there, use a top-of-module import instead.
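For reference, the existing inline-import pattern (function name
invented) costs a full import only on the first call; subsequent calls
are a cheap sys.modules cache hit:

```python
def slugify(title):
    # Pay the import cost only if/when this path actually runs;
    # later calls just look `re` up in the sys.modules cache.
    import re
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

assert slugify("Hello, World!") == "hello-world"
```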

> 2. This would enable IDEs, typers and other tooling to know the type
> using existing import logic.

I don't think it enables any such thing. Static-analysis tooling has
only the source code to work with, runtime behavior doesn't affect it.
If the runtime executes these imports out-of-order, that won't make
the slightest difference to how easily IDEs and type checkers can
analyze the source code.

> 3. Catch errors early!

The very strong precedent in Python is that errors in code are caught
when the code runs, and the code runs more or less when you'd expect
it to, in source order. If you want to catch errors earlier, use a
static analysis tool to help catch them.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ARI44O62CRMAF2IKPHJVLU5D2ADR2DP6/


[Python-Dev] Re: Declarative imports

2022-04-09 Thread Carl Meyer via Python-Dev
Hi Barry,

On Fri, Apr 8, 2022 at 12:44 PM Barry Warsaw  wrote:
>
> Start up overhead due to imports is a real problem for some class of 
> applications, e.g. CLIs, and I’ve seen a lot of hacks implemented to get 
> Python CLIs to be more responsive.  E.g. getting from invocation to —help 
> output is a major UX problem.

Definitely, we have this same problem, and also the same symptom of
people pushing hard to rewrite Python CLIs in Go for this reason.

> It’s often more complicated than just imports alone though.  Expensive module 
> scope initializations and decorators contribute to this problem.

One of our projects, Strict Modules[1], can prevent much of this
expensive work from happening at import time. Currently it's only
available as part of Cinder, though we're hoping to make it
pip-installable as part of our project to make Cinder's features more
easily accessible.

Our experience in practice, though, has been that universal lazy
imports are somewhat easier to adopt than Strict Modules, and have had
a much bigger overall impact on reducing startup time for big CLIs (and
a big web server too; as you note it's not as serious an issue for a
web server in production, but restart time still does make a
difference to dev speed / experience.) Removing slow stuff happening
at import time helps, but it'll never match the speed of not doing the
import at all! We've seen startup time improvements up to 70% in
real-world CLIs just by making imports lazy. We've also opened an
issue to discuss the possibility of upstreaming this. [2]

[1] https://github.com/facebookincubator/cinder/#strict-modules
[2] https://bugs.python.org/issue46963
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/62OTFJMAMQ2WHZ4H3TUEJTECMPJDQ557/


[Python-Dev] Re: Declarative imports

2022-04-08 Thread Carl Meyer via Python-Dev
An interesting point in the lazy imports design space that I hadn't
previously considered could be:

- lazy imports are explicitly marked and usage of the imported name
within the module is transparent, but
- lazily imported names are _not_ visible in the module namespace;
they can't be accessed by other modules or re-exported; they are
internal-use-only within the module

This compromise would, I think, make it possible to implement lazy
imports entirely in the compiler (effectively as syntax sugar for an
inline import at every usage site), which is definitely an
implementation improvement.
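A rough sketch of that desugaring (the `lazy import` surface syntax is
hypothetical, and the function name is invented):

```python
# Hypothetical surface syntax (not real Python):
#
#     lazy import json
#
# Under this compromise, the compiler could treat every use of the
# name as if it were written with an inline import at the usage site:

def pretty(data):
    import json  # what the compiler would inject at each use of `json`
    return json.dumps(data, indent=2)

# Because `json` is never bound in the module namespace, no half-
# imported placeholder object can leak to other modules or be
# re-exported.
```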

I think in practice explicitly marking lazy imports would make it
somewhat harder to gain the benefits of lazy imports for e.g. speeding
up startup time in a large CLI, compared to an implicit/automatic
approach. But still could be usable to get significant benefits.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZT6CXQPFCWZD2M65YXCSAPPGNDGA6WNE/


[Python-Dev] Re: Declarative imports

2022-04-08 Thread Carl Meyer via Python-Dev
You only get the ease-of-implementation benefit if you are willing to
explicitly mark every _use_ of a lazy-imported name as special (and
give the fully qualified name at every usage site). This is rather
more cumbersome (assuming multiple uses in a module) than just
explicitly marking an import as lazy in one location and then using
the imported name in multiple places normally.

Other "lazy import" solutions are trying to solve a problem where you
want the name to be usable (without special syntax or marking) in many
different places in a module, and visible in the module namespace
always -- but not actually imported until someone accesses/uses it.
The difficulty arises because in this case you need some kind of
placeholder for the "deferred import", but you need to avoid this
"deferred object" escaping and becoming visible to Python code without
being resolved first. Explicitly marking which imports are lazy is
fine if you want it (it's just a matter of syntax), but it doesn't do
anything to solve the problem of allowing usage of the lazy-imported
name to be transparent.

I agree that the idea that top-of-module imports help readability is
overstated; it sounds slightly Stockholm-syndrome-ish to me :)
Top-of-module imports are frankly a pain to maintain and a pain to
read (because they are often distant from the uses of the names). But
they are a necessary evil if you want a) namespaces and b) not
constantly retyping fully-qualified names at every usage site. Python
is pretty committed to namespaces at this point (and I wouldn't want
to change that), so that leaves the choice between top-of-module
imports vs fully qualifying every use of every name; pick your poison.
(Inline imports in a scope with multiple uses are a middle ground.)

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/N4T2YMPHBLJXKCFA5CIPBFIZJJKO7SHR/


[Python-Dev] Re: PEP 684: A Per-Interpreter GIL

2022-03-09 Thread Carl Meyer
Hi Eric, just one note:

On Wed, Mar 9, 2022 at 7:13 PM Eric Snow  wrote:
> > Maybe say “e.g. with Instagram's Cinder” – both the household name and
> > the project you can link to?
>
> +1
>
> Note that Instagram isn't exactly using Cinder.

This sounds like a misunderstanding somewhere. Instagram server is
"exactly using Cinder" :)

>  I'll have to check if  Cinder uses the pre-fork model.

It doesn't really make sense to ask whether "Cinder uses the pre-fork
model" -- Cinder is just a CPython variant, it can work with all the
same execution models CPython can. Instagram server uses Cinder with a
pre-fork execution model. Some other workloads use Cinder without
pre-forking.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5A3E6VCEY5XZXEFPGHNGKPM3HXQEJRTX/


[Python-Dev] Re: Function Prototypes

2021-12-23 Thread Carl Meyer
On Thu, Dec 23, 2021 at 7:38 PM Barry Warsaw  wrote:
>
> On Dec 23, 2021, at 17:09, Guido van Rossum  wrote:
> >
> > Mark's proposal was
> > ```
> > @Callable
> > def func(params): pass
> > ```
> > My question is, why does it need `@Callable`? Lukasz proposed just using 
> > any (undecorated) function, with the convention being that the body is 
> > `...` (to which I would add the convention that the function *name* be 
> > capitalized, since it is a type). My question (for Mark, or for anyone who 
> > supports `@Callable`) is why bother with the decorator. It should be easy 
> > to teach a type checker about this:
> >
> > ```
> > def SomeFn(x: float) -> int:
> >     ...
> >
> > def twice(f: SomeFn) -> SomeFn:
> >     return lambda x: f(f(x))
> > ```
>
> That seems pretty intuitive to me.  The conventions you mention would be just 
> that though, right?  I.e. `pass` could be used, but whatever the body is it 
> would be ignored for type checking `twice()` in this case, right?

I think this was briefly mentioned in another thread, but it seems to
have been lost in the discussion here, so I want to mention it again
because I think it's important: aside from the verbosity and hassle of
needing to always define callable types out-of-line and give them a
name, another significant downside to the function-as-type approach is
that generally Python signatures are too specific for intuitive use as
a callback type. Note that in the example immediately above, a
typechecker should error on this call:

```
def float_to_int(y: float) -> int:
    return int(y)

twice(float_to_int)
```

The intent of the programmer was probably that `twice` should accept
any function taking a single float argument and returning an int, but
in fact, given the possibility of keyword calls, the names of
non-positional-only parameters are part of the function signature too.
Since the body of `twice` could call `f(x=...)`, a typechecker must
error on the use of `float_to_int` as the callback, since its
parameter is not named `x`.

In order to correctly express their intent, the programmer must
instead ensure that their callable type takes a positional-only
argument:

```
def SomeFn(_x: float, /) -> int:
    ...
```

The need to almost always use the extra `/` in the callable type
signature in order to get the desired breadth of signature, and the
likelihood of forgetting it the first time around until someone tries
to pass a second callback value and gets a spurious error, is in my
mind a major negative to functions-as-callable-type.

So I agree with Steven: this issue, plus the verbosity and
out-of-line-ness, mean that Callable would continue to be preferable
for many cases, meaning we'd end up with two ways to do it, neither
clearly preferable to the other. I don't think out-of-line
function-as-type could ever be an acceptable full replacement for
Callable, whereas I think PEP 677 immediately would be.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4QPK23NPIZ6GNV47J27S3ZPSM52LE3PD/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-21 Thread Carl Meyer
On Thu, Oct 21, 2021 at 12:06 PM Jelle Zijlstra
 wrote:

> I would want this for my type checker, pyanalyze. I'd want to get the raw 
> annotation and turn it into a type. For example, if the annotation is 
> `List[SomeType]` and `SomeType` is imported in `if TYPE_CHECKING`, I'd at 
> least still be able to extract `List[Any]` instead of bailing out completely. 
> With some extra analysis work on the module I could even get the real type 
> out of the `if TYPE_CHECKING` block.

If you're up for the extra analysis on the module, wouldn't it be just
as possible to front-load this analysis instead of doing it after the
fact, and perform the imports in the `if TYPE_CHECKING` block prior to
accessing the annotation data? Or perform a fake version of the
imports that just sets every name imported in `if TYPE_CHECKING` to
`Any`?
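The "fake imports" idea could be front-loaded with a small AST pass; a
hedged sketch (the function name and the example module are invented):

```python
import ast
import typing

def fake_type_checking_names(source: str) -> dict:
    """Collect names imported under `if TYPE_CHECKING:` blocks and
    bind each of them to Any, for use as an annotation namespace."""
    names = {}
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.If)
                and isinstance(node.test, ast.Name)
                and node.test.id == "TYPE_CHECKING"):
            for stmt in node.body:
                if isinstance(stmt, (ast.Import, ast.ImportFrom)):
                    for alias in stmt.names:
                        bound = alias.asname or alias.name.split(".")[0]
                        names[bound] = typing.Any
    return names

src = """
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from mymodule import SomeType
"""
assert fake_type_checking_names(src) == {"SomeType": typing.Any}
```

A real tool would also need to handle `typing.TYPE_CHECKING` attribute
tests and aliased flags, which this sketch ignores.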

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PEJBJBSFTTA4U3BXQW2N6ABGL5A4V2NF/


[Python-Dev] Re: Type annotations, PEP 649 and PEP 563

2021-10-21 Thread Carl Meyer
On Thu, Oct 21, 2021 at 10:44 AM Damian Shaw
 wrote:
> Sorry for the naive question but why doesn't "TYPE_CHECKING" work under PEP 
> 649?
>
> I think I've seen others mention this but as the code object isn't executed 
> until inspected then if you are just using annotations for type hints it 
> should work fine?
>
> Is the use case wanting to use annotations for type hints and real time 
> inspection but you also don't want to import the objects at run time?

Yes, you're right. And I don't think PEP 649 and PEP 563 are really
all that different in this regard: if you have an annotation using a
non-imported name, you'll be fine as long as you don't introspect it
at runtime. If you do, you'll get a NameError. And with either PEP you
can work around this if you need to by ensuring you do the imports
first if you're going to need the runtime introspection of the
annotations.

The difference is that PEP 563 makes it easy to introspect the
annotation _as a string_ without triggering NameError, and PEP 649
takes that away, but I haven't seen anyone describe a really
compelling use case for that.
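That difference is observable today with an explicit string annotation
(the undefined name here is invented):

```python
import typing

def f(x: "Undefined") -> None:  # "Undefined" never exists at runtime
    return None

# The raw annotation text is available without any name resolution:
assert f.__annotations__["x"] == "Undefined"

# But fully evaluating it raises, just as PEP 649 would on access:
try:
    typing.get_type_hints(f)
    raise AssertionError("expected NameError")
except NameError:
    pass
```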

> If that's really such a strong use case couldn't PEP 649 be modified to 
> return a repr of the code object when it gets a NameError? Either by 
> attaching it to the NameError exception or as part of a ForwardRef style 
> object if that's how PEP 649 ends up getting implemented?

It could, but this makes evaluation of annotations oddly different
from evaluation of any other code, which is something that it's
reasonable to try to avoid if possible.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7ZVH5ADSQ7LTBPMZY6FOUGIJUBF33JPL/


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations

2021-08-10 Thread Carl Meyer
On Mon, Aug 9, 2021 at 9:31 PM Inada Naoki  wrote:
> Currently, reference implementation of PEP 649 has been suspended.
> We need to revive it and measure performance/memory impact.

I volunteered to check performance impact in practice on the Instagram
codebase, which is almost fully annotated. However, when I tried, I
found that the reference implementation of PEP 649 wasn't even able to
import its own test file without a crash. I detailed the problem in
https://github.com/larryhastings/co_annotations/issues/12 a couple
months ago, but haven't gotten any response. Still willing to do this
testing on the IG codebase, but it seems like the implementation needs
some additional work before that will be possible.

Carl
___
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BOHZB2NBTSKMK5RXIJ75JLFIR6GVYY57/


[Python-Dev] Re: PEP 563 and 649: The Great Compromise

2021-04-26 Thread Carl Meyer
On Sun, Apr 25, 2021 at 10:30 AM Brett Cannon  wrote:
> I know I would be curious, especially if backwards compatibility can be 
> solved reasonably (for those that haven't lived this, deferred execution 
> historically messes up code relying on import side-effects and tracebacks are 
> weird as they occur at access time instead of at the import statement).

I had been assuming that due to backward compatibility and performance
of `LOAD_GLOBAL`, this would need to be a new form of import,
syntactically distinguished. But the performance and some of the
compatibility concerns could be alleviated by making all imports
deferred by default, and then resolving any not-yet-resolved imports
at the end of module execution. This is perhaps even better for the
non-typing case, since it would generally fix most import cycle
problems in Python. (It would be sort of equivalent to moving all
imports that aren't needed for module execution to the end of the
module, which is another ugly but effective workaround for cycles.) It
would have the downside that type-only imports which will never be
needed at runtime at all will still be imported, even if
`__annotations__` are never accessed.

I think it's still problematic for backward compatibility with import
side effects, though, so if we did this at the very least it would
have to be behind a `__future__` import.

Carl


[Python-Dev] Re: PEP 563 and 649: The Great Compromise

2021-04-24 Thread Carl Meyer
Hi Larry,

This is a creative option, but I am optimistic (now that the SC
decision has removed the 3.10 deadline urgency) that we can find a
path forward that is workable for everyone and doesn't require a
permanent compiler feature flag and a language that is permanently
split-brained about annotation semantics. Since I have access to a
real-world large codebase with almost complete adoption of type
annotations (and I care about its import performance), I'm willing to
test PEP 649 on it (can't commit to doing it right away, but within
the next month or so) and see how much import performance is impacted,
and how much of that can be gained back by interning tweaks as
discussed in the other thread. My feeling is that if the performance
turns out to be reasonably close in a real codebase, and we can find a
workable solution for `if TYPE_CHECKING`, we should go ahead with PEP
649: IMO aside from those two issues its semantics are a better fit
for the rest of the language and preferable to PEP 563.

I do think that a solution to the `if TYPE_CHECKING` problem should be
identified as part of PEP 649. My favorite option there would be a new
form of import that is lazy (the import does not actually occur until
the imported name is loaded at runtime). This has prior art in
previous discussions about "module descriptors"; IIRC Neil Schemenauer
even had a branch a while ago where all module attributes were
modified to behave this way (I might be remembering the details
wrong.) It also has overlap with use cases served by the existing
`demandimport` library used by hg, and `importlib.util.LazyLoader`,
although it is strictly more capable because it can work with `from
module import Thing` style imports as well. If there's any interest in
this as a solution to inter-module annotation forward references, I'm
also willing to work on that in the 3.11 timeframe.
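For illustration, here is a rough sketch of the existing `importlib.util.LazyLoader` machinery in use (adapted from its documented recipe); as noted above, it only covers `import module`-style imports, not `from module import Thing`:

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose actual loading is deferred to first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")   # no real import work has happened yet
print(json.dumps({"a": 1}))  # first attribute access triggers the load
```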

Carl


[Python-Dev] Re: Typing syntax and ecosystem

2021-04-16 Thread Carl Meyer
Hi Barry & Luciano,

Barry Warsaw wrote:
> Actually, I think it’s time for a comprehensive guide to type annotations.  
> Just anecdotally, I was trying to annotate a library of mine and was having 
> an impossible time of it, until a chat with Guido lead me to 
> @typing.overload.  That solved my problem intuitively and easily, but I just 
> didn’t know about it.  Right now, there’s information spread out all over the 
> place, the stdlib documentation, tool documentation, StackOverflow :D etc.  
> It’s a complicated topic that I think a comprehensive guide, a tutorial, etc. 
> could really help with.
> One of my favorite frameworks for thinking about documentation on a topic 
> such as this is:
> https://documentation.divio.com/
> I really think that would help people get into Python type annotations, both 
> casually and deeply.
> > I volunteer to help with a "Typing HOWTO". For the next few months, I
> > can offer to review if someone else writes it. In the second semester,
> > I could write it myself, if the experts on typing and the type
> > checkers would be willing to review it.
> > I don’t know whether I’ll have time to *start* something any time soon, but 
> > I would also volunteer to be a reviewer and/or provide some content.

I'm also interested in helping with this.

I think the first question to answer is, are the current mypy docs 
(https://mypy.readthedocs.io/en/stable/) insufficient for this purpose, and 
why? They do include both tutorial-style "getting started" paths as well as 
reference documentation. If they are not serving this purpose, why not? Is it 
due to their content or structure, or just because they are framed as "the mypy 
docs" and not "a typed Python HOWTO", so they don't find the right audience?

If we do need to write something new, one resource I can offer is an "intro to 
typed Python" talk I gave at PyCon 2018: 
https://www.youtube.com/watch?v=pMgmKJyWKn8

I've received feedback from many people without previous experience with typing 
that this talk was useful to them in understanding both the why and the how. If 
it seems useful I'd potentially be willing to adapt and update the content and 
code examples from this talk into written form as a starting point.

Carl


[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-14 Thread Carl Meyer via Python-Dev
Hi Larry,

On 4/14/21, 1:56 PM, "Larry Hastings"  wrote:

>My plan was to post it here and see what the response was first.  Back in 
> January, when I posted the first draft, I got some very useful feedback that 
> resulted in some dramatic changes.  This time around, so far, nobody has 
> suggested even minor changes.  Folks have just expressed their opinions about 
> it (which is fine).

This is not true. I suggested yesterday (in 
https://mail.python.org/archives/list/python-dev@python.org/message/DSZFE7XTRK2ESRJDPQPZIDP2I67E76WH/
 ) that PEP 649 could avoid making life worse for users of type annotations 
(relative to PEP 563) if it replaced runtime-undefined names with forward 
reference markers, as implemented in 
https://github.com/larryhastings/co_annotations/pull/3

Perhaps you've chosen to ignore the suggestion, but that's not the same as 
nobody suggesting any changes ;)

Carl



[Python-Dev] Re: PEP 649: Deferred Evaluation Of Annotations Using Descriptors, round 2

2021-04-13 Thread Carl Meyer via Python-Dev
Hi Larry,

On 4/12/21, 6:57 PM, "Larry Hastings"  wrote:
> Again, by "works on PEP 563 semantics", you mean "doesn't raise an error".  
> But the code has an error.  It's just that it has been hidden by PEP 563 
> semantics.
> I don't agree that changing Python to automatically hide errors is an 
> improvement.  As the Zen says: "Errors should never pass silently."
>
> This is really the heart of the debate over PEP 649 vs PEP 563.  If you 
> examine an annotation, and it references an undefined symbol, should that 
> throw an error?  There is definitely a contingent of people who say "no, 
> that's inconvenient for us".  I think it should raise an error.  Again from 
> the Zen: "Special cases aren't special enough to break the rules."  
> Annotations are expressions, and if evaluating an expression fails because 
> of an undefined name, it should raise a NameError.

Normally in Python, if you reference a symbol in a function definition line, 
the symbol must be defined at that point in module execution. Forward 
references are not permitted, and will raise `NameError`.

And yet you have implemented PEP 649, whose entire raison d'être is to 
implement a "special case" to "break the rules" by delaying evaluation of 
annotations such that a type annotation, unlike any other expression in the 
function definition line, may include forward reference names which will not be 
defined until later in the module.

The use case for `if TYPE_CHECKING` imports is effectively the same. They are 
just forward references to names in other modules which can't be imported 
eagerly, because e.g. it would cause a cycle. Those who have used type 
annotations in earnest are likely to confirm that such inter-module forward 
references are just as necessary as intra-module forward references for the 
usability of type annotations.

So what we have here doesn't seem to be a firm stand on a principle of the Zen; 
it appears rather to be a disagreement about exactly where to draw the line on 
the "special case" that we all already seem to agree is needed.

The Zen argument seems to be a bit of a circular one: I have defined PEP 649 
semantics in precisely this way, therefore code that works with PEP 649 does 
not have an error, and code that does not work with PEP 649 "has an error" 
which must be surfaced!

With PEP 563, although `get_type_hints()` cannot natively resolve inter-module 
forward references and raises `NameError`, it is possible to work around this 
by supplying a globals dict to `get_type_hints()` that has been augmented with 
those forward-referenced names. Under the current version of PEP 649, it 
becomes impossible to get access to such type annotations at runtime at all, 
without reverting to manually stringifying the annotation and then using 
something like `get_type_hints()`. So for users of type annotations who need 
`if TYPE_CHECKING` (which I think is most users of type annotations), the 
best-case overall effect of PEP 649 will be that a) some type annotations have 
to go back to being ugly strings in the source, and b) if type annotation 
values are needed at runtime, `get_type_hints()` will still be as necessary as 
it ever was.

It is possible for PEP 649 to draw the line differently and support both 
intra-module and inter-module forward references in annotations, by doing 
something like https://github.com/larryhastings/co_annotations/pull/3 and 
replacing unknown names with forward-reference markers, so the annotation 
values are still accessible at runtime. This meets the real needs of users of 
type annotations better, and gives up none of the benefits of PEP 649.

Carl





[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-30 Thread Carl Meyer
On Thu, Apr 30, 2020 at 3:12 PM Raymond Hettinger wrote:
> Thanks for the concrete example.  AFAICT, it doesn't require (and probably 
> shouldn't have) a lock to be held for the duration of the call.  Would it be 
> fair to say the 100% of your needs would be met if we just added this to the 
> functools module?
>
>   call_once = lru_cache(maxsize=None)
>
> That's discoverable, already works, has no risk of deadlock, would work with 
> multiple argument functions, has instrumentation, and has the ability to 
> clear or reset.

Yep, I think that's fair. We've never AFAIK had a problem with
`lru_cache` races, and if we did, in most cases we'd be fine with
having it called twice.
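For illustration, the suggested spelling in use (the client factory is a hypothetical stand-in):

```python
from functools import lru_cache

call_once = lru_cache(maxsize=None)  # the spelling suggested above

@call_once
def get_expensive_client():
    # stand-in for expensive client construction (network handshake, etc.)
    return object()

a = get_expensive_client()
b = get_expensive_client()
assert a is b  # the body ran once; later calls return the cached instance
```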

I can _imagine_ a case where the call loads some massive dataset
directly into memory and we really couldn't afford it being loaded
twice under any circumstance, but even if we have a case like that, we
don't do enough threading for it ever to have been an actual problem
that I'm aware of.

> I'm still looking for an example that actually requires a lock to be held for 
> a long duration.

Don't think I can provide a real-world one from my own experience! Thanks,

Carl


[Python-Dev] Re: Adding a "call_once" decorator to functools

2020-04-30 Thread Carl Meyer
On Wed, Apr 29, 2020 at 9:36 PM Raymond Hettinger wrote:
> Do you have some concrete examples we could look at?   I'm having trouble 
> visualizing any real use cases and none have been presented so far.

This pattern occurs not infrequently in our Django server codebase at
Instagram. A typical case would be that we need a client object to
make queries to some external service, queries using the client can be
made from various locations in the codebase (and new ones could be
added any time), but there is noticeable overhead to the creation of
the client (e.g. perhaps it does network work at creation to figure
out which remote host can service the needed functionality) and so
having multiple client objects for the same remote service existing in
the same process is waste.

Or another similar case might be creation of a "client" object for
querying a large on-disk data set.

> Presumably, the initialization function would have to take zero arguments,

Right, typically for a globally useful client object there are no
arguments needed, any required configuration is also already available
globally.

> have a useful return value,

Yup, the object which will be used by other code to make network
requests or query the on-disk data set.

> must be called only once,

In our use cases it's more a SHOULD than a MUST. Typically if it were
called two or three times in the process due to some race condition
that would hardly matter. However if it were called anew for every
usage that would be catastrophically inefficient.

> not be idempotent,

Any function like the ones I'm describing can be trivially made
idempotent by initializing a global variable and short-circuit
returning that global if already set. But that's precisely the
boilerplate this utility seeks to replace.

> wouldn't fail if called in two different processes,

Separate processes would each need their own and that's fine.

> can be called from multiple places,

Yes, that's typical for the uses I'm describing.

> and can guarantee that a decref, gc, __del__, or weakref callback would never 
> trigger a reentrant call.

"Guarantee" is too strong, but at least in our codebase use of Python
finalizers is considered poor practice and they are rarely used, and
in any case it would be extraordinarily strange for a finalizer to
make use of an object like this that queries an external resource. So
this is not a practical concern. Similarly it would be very strange
for creation of an instance of a class to call a free function whose
entire purpose is to create and return an instance of that very class,
so reentrancy is also not a practical concern.

> Also, if you know of a real world use case, what solution is currently being 
> used.  I'm not sure what alternative call_once() is competing against.

Currently we typically would use either `lru_cache` or the manual
"cache" using a global variable. I don't think that practically
`call_once` would be a massive improvement over either of those, but
it would be slightly clearer and more discoverable for the use case.

> Do you have any thoughts on what the semantics should be if the inner 
> function raises an exception?  Would a retry be allowed?  Or does call_once() 
> literally mean "can never be called again"?

For the use cases I'm describing, if the method raises an exception
the cache should be left unpopulated and a future call should try
again.

Arguably a better solution for these cases is to push the laziness
internal to the class in question, so it doesn't do expensive or
dangerous work on instantiation but delays it until first use. If that
is done, then a simple module-level instantiation suffices to replace
the `call_once` pattern. Unfortunately in practice we are often
dealing with existing widely-used APIs that weren't designed that way
and would be expensive to refactor, so the pattern continues to be
necessary. (Doing expensive or dangerous work at import time is a
major problem that we must avoid, since it causes every user of the
system to pay that startup cost in time and risk of failure, even if
for their use the object would never be used.)
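A sketch of that refactoring (`DataSetClient` is a hypothetical example, with a dict standing in for the costly load):

```python
class DataSetClient:
    """Hypothetical client whose expensive setup is deferred to first use."""

    def __init__(self, path):
        # cheap: just record configuration, do no real work
        self._path = path
        self._index = None

    def _ensure_loaded(self):
        # expensive work happens here, at most once, on first query
        if self._index is None:
            self._index = {"rows": 1_000_000}  # stand-in for a costly load
        return self._index

    def row_count(self):
        return self._ensure_loaded()["rows"]

# Safe to instantiate at import time; the cost is paid only if it's ever queried.
client = DataSetClient("/data/big.bin")
print(client.row_count())
```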

Carl


[Python-Dev] Re: PEP 617: New PEG parser for CPython

2020-04-21 Thread Carl Meyer
On Sat, Apr 18, 2020 at 10:38 PM Guido van Rossum  wrote:
>
> Note that, while there is indeed a docs page about 2to3, the only docs for 
> lib2to3 in the standard library reference are a link to the source code and a 
> single "Note: The lib2to3 API should be considered unstable and may change 
> drastically in the future."
>
> Fortunately,  in order to support the 2to3 application, lib2to3 doesn't need 
> to change, because the syntax of Python 2 is no longer changing. :-) Choosing 
> to remove 2to3 is an independent decision. And lib2to3 does not depend in any 
> way on the old parser module. (It doesn't even use the standard tokenize 
> module, but incorporates its own version that is slightly tweaked to support 
> Python 2.)

Indeed! Thanks for clarifying, I now recall that I already knew it
doesn't, but forgot.

The docs page for 2to3 does currently say "lib2to3 could also be
adapted to custom applications in which Python code needs to be edited
automatically." Perhaps at least this sentence should be removed, and
maybe also replaced with a clearer note that lib2to3 not only has an
unstable API, but also should not necessarily be expected to continue
to parse future Python versions, and thus building tools on top of it
should be discouraged rather than recommended. (Maybe even use the
word "deprecated.") Happy to submit a PR for this if you agree it's
warranted.

It still seems to me that it wouldn't hurt for PEP 617 itself to also
mention this shift in lib2to3's effective status (from "available but
no API stability guarantee" to "probably will not parse future Python
versions") as one of its indirect effects.

> You've mentioned a few different tools that already use different 
> technologies: LibCST depends on parso which has a fork of pgen2, lib2to3 
> which has the original pgen2. I wonder if this would be an opportunity to 
> move such parsing support out of the standard library completely. There are 
> already two versions of pegen, but neither is in the standard library: there 
> is the original pegen repo which is where things started, and there is a fork 
> of that code in the CPython Tools directory (not yet in the upstream repo, 
> but see PR 19503).
>
> The pegen tool has two generators, one generating C code and one generating 
> Python code. I think that the C generator is really only relevant for CPython 
> itself: it relies on the builtin tokenizer (the one written in C, not the 
> stdlib tokenize.py) and the generated C code depends on many internal APIs. 
> In fact the C generator in the original pegen repo doesn't work with Python 
> 3.9 because those internal APIs are no longer exported. (It also doesn't work 
> with Python 3.7 or older because it makes critical use of the walrus 
> operator. :-) Also, once we started getting serious about replacing the old 
> parser, we worked exclusively on the C generator in the CPython Tools 
> directory, so the version in the original pegen repo is lagging quite a bit 
> behind (as is the Python grammar in that repo). But as I said you're not 
> gonna need it.
>
> On the other hand, the Python generator is designed to be flexible, and while 
> it defaults to using the stdlib tokenize.py tokenizer, you can easily hook up 
> your own. Putting this version in the stdlib would be a mistake, because the 
> code is pretty immature; it is really waiting for a good home, and if parso 
> or LibCST were to decide to incorporate a fork of it and develop it into a 
> high quality parser generator for Python-like languages that would be great. 
> I wouldn't worry much about the duplication of code -- the Python generator 
> in the CPython Tools directory is only used for one purpose, and that is to 
> produce the meta-parser (the parser for grammars) from the meta-grammar. And 
> I would happily stop developing the original pegen once a fork is being 
> developed.

Thanks, this is all very clarifying! I hadn't even found the original
gvanrossum/pegen repo, and was just looking at the CPython PR for PEP
617. Clearly I haven't been following this work closely.

> Another option would be to just improve the python generator in the original 
> pegen repo to satisfy the needs of parso and LibCST. Reading the blurb for 
> parso it looks like it really just parses *Python*, which is less ambitious 
> than pegen. But it also seems to support error recovery, which currently 
> isn't part of pegen. (However, we've thought about it.) Anyway, regardless of 
> how exactly this is structured someone will probably have to take over 
> development and support. Pegen started out as a hobby project to educate 
> myself about PEG parsers. Then I wrote a bunch of blog posts about my 
> approach, and finally I started working on using it to generate a replacement 
> for the old pgen-based parser. But I never found the time to make it an 
> appealing parser generator tool for other languages, even though that was on 
> my mind as a future possibility. It will take some time t

[Python-Dev] Re: PEP 617: New PEG parser for CPython

2020-04-18 Thread Carl Meyer
The PEP is exciting and is very clearly presented, thank you all for
the hard work!

Considering the comments in the PEP about the new parser not
preserving a parse tree or CST, I have some questions about the future
options for Python language-services tooling which requires a CST in
order to round-trip and modify Python code. Examples in this space
include auto-formatters, refactoring tools, linters with autofix, etc.
Today many such tools (e.g. Black, 2to3) are based on lib2to3. Other
tools already have their own parser (e.g. LibCST -- which I help
maintain -- and Jedi both use parso, a fork of pgen2).

1) 2to3 and lib2to3 are not mentioned in the PEP, but are a documented
part of the standard library used by some very popular tools, and
currently depend on pgen2. A quick search of the PEP 617 pull request
does not suggest that it modifies lib2to3. Will lib2to3 also be
removed in Python 3.10 along with the old parser? It might be good for
the PEP to address the future of 2to3 and lib2to3 explicitly.

2) As these tools make the necessary adaptations to support Python
3.10, which may no longer be parsable with an LL(1) parser, will we be
able to leverage any part of pegen to construct a lossless Python CST,
or will we likely need to fork pegen outside of CPython or build a
wholly new parser? It would be neat if an alternate grammar could be
written in pegen that has access to all tokens (including NL and
COMMENT) for this purpose; that would save a lot of code duplication
and potential for inconsistency. I haven't had a chance to fully read
through the PEP 617 pull request, but it looks like its tokenizer
wrapper currently discards NL and COMMENT. I understand this is a
distinct use case with distinct needs and I'm not suggesting that we
should make significant sacrifices in the performance or
maintainability of pegen to serve it, but if it's possible to enable
some sharing by making API choices now before it's merged, that seems
worth considering.

Carl


Re: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes

2017-11-30 Thread Carl Meyer
On 11/29/2017 05:02 PM, Guido van Rossum wrote:
> I tried to look up the discussion but didn't find much except that you
> flagged this as an issue. To repeat, your concern is that isdataclass()
> applies to *instances*, not classes, which is how Eric has designed it,
> but you worry that either through the name or just because people don't
> read the docs it will be confusing. What do you suppose we do? I think
> making it work for classes as well as for instances would cause another
> category of bugs (confusion between cases where a class is needed vs. an
> instance abound in other situations -- we don't want to add to that).
> Maybe it should raise TypeError when passed a class (unless its
> metaclass is a dataclass)? Maybe it should be renamed to
> isdataclassinstance()? That's a mouthful, but I don't know how common
> the need to call this is, and people who call it a lot can define their
> own shorter alias.

Yeah, I didn't propose a specific fix because I think there are several
options (all mentioned in this thread already), and I don't really have
strong feelings about them:

1) Keep the existing function and name, let it handle either classes or
instances. I agree that this is probably not the best option available
(though IMO it's still marginally better than the status quo).

2) Punt the problem by removing the function; don't add it to the public
API at all until we have demonstrated demand.

3) Rename it to "is_dataclass_instance" (and maybe also keep a separate
"is_dataclass" for testing classes directly). (Then there's also the
choice about raising TypeError vs just returning False if a function is
given the wrong type; I think TypeError is better.)

Carl







Re: [Python-Dev] Third and hopefully final post: PEP 557, Data Classes

2017-11-29 Thread Carl Meyer
On 11/29/2017 03:26 PM, Eric V. Smith wrote:
> I've posted a new version of PEP 557, it should soon be available at
> https://www.python.org/dev/peps/pep-0557/.
> 
> The only significant changes since the last version are:
> 
> - changing the "compare" parameter to be "order", since that more
> accurately reflects what it does.
> - Having the combination of "eq=False" and "order=True" raise an
> exception instead of silently changing eq to True.
> 
> There were no other issues raised with the previous version of the PEP.

Not quite; I also raised the issue of isdataclass(ADataClass) returning
False. I still think that's likely to be a cause of bug reports if left
as-is.
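For reference, the eq/order interaction described in the quoted changelog does raise in the `dataclasses` module as it eventually shipped; a quick sketch against that final API:

```python
from dataclasses import dataclass

raised = False
try:
    # order=True requires eq=True; this combination raises rather than
    # silently changing eq, per the change described above
    @dataclass(eq=False, order=True)
    class Point:
        x: int
except ValueError:
    raised = True

print(raised)  # True
```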

Carl





Re: [Python-Dev] Second post: PEP 557, Data Classes

2017-11-25 Thread Carl Meyer
Hi Eric,

Really excited about this PEP, thanks for working on it. A couple minor
questions:

> If compare is True, then eq is ignored, and __eq__ and __ne__ will be
automatically generated.

IMO it's generally preferable to make nonsensical parameter combinations
an immediate error, rather than silently ignore one of them. Is there a
strong reason for letting nonsense pass silently here?

(I reviewed the previous thread; there was a lot of discussion about
enums/flags vs two boolean params, but I didn't see explicit discussion
of this issue; the only passing references I noticed said the invalid
combo should be "disallowed", e.g. Guido in [1], which to me implies "an
error.")

> isdataclass(instance): Returns True if instance is an instance of a
Data Class, otherwise returns False.

Something smells wrong with the naming here. If I have

@dataclass
class Person:
name: str

I think it would be considered obvious and undeniable (in English prose,
anyway) that Person is a dataclass. So it seems wrong to have
`isdataclass(Person)` return `False`. Is there a reason not to let it
handle either a class or an instance (looks like it would actually
simplify the implementation)?
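For reference, the helper as it eventually shipped in the stdlib `dataclasses` module took essentially this route, accepting either a class or an instance:

```python
from dataclasses import dataclass, is_dataclass

@dataclass
class Person:
    name: str

# The shipped helper answers True for both forms:
print(is_dataclass(Person))         # True: the class itself
print(is_dataclass(Person("Ada")))  # True: an instance
```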

Carl


 [1] https://mail.python.org/pipermail/python-dev/2017-September/149505.html

On 11/25/2017 01:06 PM, Eric V. Smith wrote:
> The updated version should show up at
> https://www.python.org/dev/peps/pep-0557/ shortly.
> 
> The major changes from the previous version are:
> 
> - Add InitVar to specify initialize-only fields.
> - Renamed __dataclass_post_init__() to __post_init__().
> - Rename cmp to compare.
> - Added eq, separate from compare, so you can test
>   unorderable items for equality.
> - Flushed out asdict() and astuple().
> - Changed replace() to just call __init__(), and dropped
>   the complex post-create logic.
> 
> The only open issues I know of are:
> - Should object comparison require an exact match on the type?
>   https://github.com/ericvsmith/dataclasses/issues/51
> - Should the replace() function be renamed to something else?
>   https://github.com/ericvsmith/dataclasses/issues/77
> 
> Most of the items that were previously discussed on python-dev were
> discussed in detail at https://github.com/ericvsmith/dataclasses. Before
> rehashing an old discussion, please check there first.
> 
> Also at https://github.com/ericvsmith/dataclasses is an implementation,
> with tests, that should work with 3.6 and 3.7. The only action item for
> the code is to clean up the implementation of InitVar, but that's
> waiting for PEP 560. Oh, and if PEP 563 is accepted I'll also need to do
> some work.
> 
> Feedback is welcomed!
> 
> Eric.
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/carl%40oddbird.net





Re: [Python-Dev] PEP 484 proposal: don't default to Optional if argument default is None

2017-05-09 Thread Carl Meyer
On 05/09/2017 10:28 AM, Guido van Rossum wrote:
> There's a proposal to change one detail of PEP 484. It currently says:
> 
> An optional type is also automatically assumed when the default value is
> |None|, for example::
> 
> |def handle_employee(e: Employee = None): ... |
> 
> This is equivalent to::
> 
> |def handle_employee(e: Optional[Employee] = None) -> None: ... |
> 
> 
> Now that we've got some experience actually using Optional with mypy
> (originally mypy ignored Optional), we're beginning to think that this
> was a bad idea. There's more discussion at
> https://github.com/python/typing/issues/275 and an implementation of the
> change (using a command-line flag) in
> https://github.com/python/mypy/pull/3248.
> 
> Thoughts? Some function declarations will become a bit more verbose, but
> we gain clarity (many users of annotations don't seem to be familiar
> with this feature) and consistency (since this rule doesn't apply to
> variable declarations and class attribute declarations).

I've been code-reviewing a lot of diffs adding type coverage over the
last few months, and implicit-Optional has been among the most common
points of confusion. So I favor this change.

It might be nice to have a less verbose syntax for Optional, but that
can be a separate discussion.
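The difference the proposal turns on is purely one of spelling in the source; a side-by-side sketch (hypothetical `Employee` class):

```python
from typing import Optional, get_type_hints

class Employee:
    pass

# Implicit (the PEP 484 rule being reconsidered): a None default
# silently makes the parameter Optional in a type checker's eyes.
def handle_implicit(e: Employee = None): ...

# Explicit (the proposed requirement): the annotation says what it means.
def handle_explicit(e: Optional[Employee] = None) -> None: ...

assert get_type_hints(handle_explicit)["e"] == Optional[Employee]
```

(The less verbose spelling eventually arrived as PEP 604's `Employee | None`, usable in annotations from Python 3.10.)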

Carl





Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-10 Thread Carl Meyer
On 08/10/2015 02:49 PM, Eric V. Smith wrote:
> On 08/10/2015 02:44 PM, Yury Selivanov wrote:
>>
>>
>> On 2015-08-10 2:37 PM, Eric V. Smith wrote:
 Besides, any expression you have to calculate can go in a local that
 will get
> interpolated.  The same goes for any !r or other formatting
 modifiers.  In an
> i18n context, you want to stick to the simplest possible substitution
> placeholders.
>>> This is why I think PEP-498 isn't the solution for i18n. I'd really like
>>> to be able to say, in a debugging context:
>>>
>>> print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}')
>>>
>>> without having to create locals to hold these 4 values.
>>
>> Why can't we restrict expressions in f-strings to
>> attribute/item getters?
>>
>> I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but
>> disallow f'{foo.bar(baz=something)}'
> 
> It's possible. But my point is that Barry doesn't even want
> attribute/item getters for an i18n solution, and I'm not willing to
> restrict it that much.

I don't think attribute access and item access are on the same level
here. In terms of readability of the resulting string literal, it would
be reasonable to allow attribute access but disallow item access. And I
think attribute access is reasonable to allow in the context of an i18n
solution as well (but item access is not). Item access is much harder to
read and easier for translators to mess up because of all the extra
punctuation (and the not-obvious-to-a-non-programmer distinction between
a literal or variable key).

There's also the solution used by the Django and Jinja templating
languages, where dot-notation can mean either attribute access
(preferentially) or item access with literal key (as fallback). That
manages to achieve both a high level of readability of the
literal/template, and a high level of flexibility for the context
provider (who may find it easier to provide a dictionary than an
object), but may fail the "too different from Python" test.

Carl





Re: [Python-Dev] Issues not responded to.

2015-07-30 Thread Carl Meyer
On 07/30/2015 09:03 PM, Nikolaus Rath wrote:
> Nick recently mentioned that the PSF might be able to help, but that the
> initiative for that needs to come from the core developers. So why don't
> you guys ask the PSF to e.g. sponsor some of the work that no one feels
> motivated to do in their spare time?
> 
> To avoid issues with some people being paid for work that others
> contribute in their free time one could introduce a new keyword in the
> tracker (say "ugly"). Whenever a core developer sees an issue that he[1]
> thinks should be worked on, but that he really does not want to do in
> his free time, he tags it with "ugly" and the issue becomes available
> for PSF-sponsored work.

I'm a Django core developer. For the last half-year or so, the Django
Software Foundation has (for the first time) paid a "Django Fellow" or
two (currently Tim Graham) to work on core Django. For me the experience
has been excellent. Having a Django Fellow significantly reduces the
guilt-burden of being part of the core team; it frees me to do the work
that I find most interesting, without needing to worry that other
necessary work won't get done. Releases are made on time, new tickets
are triaged, and security issues are attended to, whether I find the
time to do it myself or not, because someone is paid to ensure it
happens. I've never been the person on the core team who took on the
majority of that burden as a volunteer, but I _still_ (perhaps
especially?) feel the guilt-burden lifted. And having that burden lifted
hasn't decreased the overall amount of time I devote to Django; it's
increased it significantly, because spending time on Django has become
more fun.

Contributing to Django is also more fun now than it used to be (for core
developers and, I think, for everyone else) because Tim has been able to
devote significant chunks of time to infrastructure (the CI server and
the GitHub workflow, e.g. having the test suite and several other
automated code quality checks run automatically on every GitHub pull
request) that nobody ever found time to do as a volunteer.

So based on my experience with the transition to having a DSF-paid
Fellow on the Django core team, and having watched important python-dev
work (e.g. the core workflow stuff) linger due to lack of available
volunteer time, I'd recommend that python-dev run, not walk, to ask the
PSF board to fund a similar position for Python core.

Of course there may be differences between the culture of python-dev and
Django core that I'm not fully aware of that may make a difference in
how things work out. And finding the right person for the job is
critical, of course. I think the Django experience suggests that an
existing long-time contributor who is already known and trusted by the
core team is a good bet. Also that the Fellow needs to already have, or
quickly gain, commit privileges themselves.

For whatever it's worth,

Carl





Re: [Python-Dev] Single-file Python executables

2015-05-28 Thread Carl Meyer
On 05/28/2015 11:52 AM, Paul Moore wrote:
[snip]
> Nevertheless, I would like to understand how Unix can manage to have a
> Python 3.4.3 binary at 4kb. Does that *really* have no external
> dependencies (other than the C library)? Are we really comparing like
> with like here?

I don't know what Donald was looking at, but I'm not seeing anything
close to that 4k figure here. (Maybe he's on OS X, where framework
builds have a "stub" executable that just execs the real one?)

On my Ubuntu Trusty system, the system Python 3.4 executable is 3.9M,
and the one I compiled myself from source, without any special options,
is almost 12M. (Not really sure what accounts for that difference -
Ubuntu system Python uses shared libraries for more stuff?)

Carl





Re: [Python-Dev] Status on PEP-431 Timezones

2015-04-08 Thread Carl Meyer
Hi Lennart,

On 04/08/2015 09:18 AM, Lennart Regebro wrote:
> I wrote PEP-431 two years ago, and never got around to implement it.
> This year I got some renewed motivation after Berker Peksağ made an
> effort of implementing it.
> I'm planning to work more on this during the PyCon sprints, and also
> have a BoF session or similar during the conference.
> 
> Anyone interested in a session on this, mail me and we'll set up a
> time and place!

I'm interested in the topic, and would probably attend a BoF at PyCon.
Comments below:

> If anyone is interested in the details of the problem, this is it.
> 
> The big problem is the ambiguous times, like 02:30 a time when you
> move the clock back one hour, as there are two different 02:30's that
> day. I wrote down my experiences with looking into and trying to
> implement several different solutions. And the problem there is
> actually how to tell the datetime if it is before or after the
> changeover.
> 
> 
> == How others have solved it ==
> 
> === dateutil.tz: Ignore the problem ===
> 
> dateutil.tz simply ignores the problems with ambiguous datetimes, keeping them
> ambiguous.
> 
> 
> === pytz: One timezone instance per changeover ===
> 
> Pytz implements ambiguous datetimes by having one class per timezone. Each
> change in the UTC offset, either because of a DST changeover or because
> the timezone definition changes, is represented as one instance of the class.
> 
> All instances are held in a list which is a class attribute of the timezone
> class. You flag which DST changeover you are in by using different instances
> as the datetime's tzinfo. Since the timezone this way knows if it is DST or 
> not,
> the datetime as a whole knows if it's DST or not.
> 
> Benefits:
> - Only known possible implementation without modifying stdlib, which of course
>   was a requirement, as pytz is a third-party library.
> - DST offset can be quickly returned, as it does not need to be calculated.
> Drawbacks:
> - A complex and highly magical implementation of timezones that is hard to
>   understand.
> - Required new normalize()/localize() functions on the timezone, and hence
>   the API is not stdlib's API.
> - Hundreds of instances per timezone means slightly more memory usage.
> 
> 
> == Options for PEP 431 ==
> 
> === Stdlib option 0: Ignore it ===
> 
> I don't think this is an option, really. Listed for completeness.
> 
> 
> === Stdlib option 1: One timezone instance per changeover ===
> 
> Option 1 is to do it like pytz, have one timezone instance per changeover.
> However, this is likely not possible to do without fundamentally changing the
> datetime API, or making it very hard to use.
> 
> For example, when creating a datetime instance and passing in a tzinfo today
> this tzinfo is just attached to the datetime. But when having multiple
> instances of tzinfos this means you have to select the correct one to pass in.
> pytz solves this with the .localize() method, which lets the timezone
> class choose which instance to pass in.
> 
> We can't pass in the timezone class into datetime(), because that would
> require datetime.__new__ to create new datetimes as a part of the timezone
> arithmetic. These in turn, would create new datetimes in __new__ as a part of
> the timezone arithmetic, which in turn, yeah, you get it...
> 
> I haven't been able to solve that issue without either changing the API/usage,
> or getting infinite recursions.
> 
> Benefits:
> - Proven solution through pytz.
> - Fast dst() call.
> Drawbacks:
> - Trying to use this technique with the current API tends to create
>   infinite recursions. It seems to require big API changes.
> - Slow datetime() instance creation.

I think "proven solution" is a significant benefit.

Today, anyone who is serious about correct timezone handling in Python
is almost certainly using pytz. So is adopting pytz's expanded API into
the stdlib really a big problem? It probably presents _fewer_
back-compatibility issues with real-world code than taking a different
approach from pytz would.

> === Stdlib option 2: A datetime _is_dst flag ===
> 
> By having a flag on the datetime instance that says "this is in DST or not"
> the timezone implementation can be kept simpler.

Is this really adequate? pytz's implementation handles far more than "is
DST or not", it also correctly handles historical timezone changes. How
would those be handled under this proposal?

> You also have to calculate whether the datetime is in DST or not, either
> when creating it, which demands datetime object creations, and causes infinite
> recursions, or you have to calculate it when needed, which means you can
> get "Ambiguous date time errors" at unexpected times later.
> 
> Also, when trying to implement this, I get bogged down in the complexities
> of how tzinfo and datetime is calling each other back and forth, and when
> to pass in the current is_dst and when to pass in the desired is_dst, etc.
> The API and current implementation is 
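The flag-on-the-datetime approach is essentially what PEP 495 later standardized as the `fold` attribute, with IANA timezone data exposed through the stdlib `zoneinfo` module (Python 3.9+). A minimal illustration of disambiguating one of the two 01:30s on a US fall-back day (assumes IANA tz data is available on the system):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/New_York")

# 01:30 on 2023-11-05 happens twice: clocks fall back from 02:00 to 01:00.
first = datetime(2023, 11, 5, 1, 30, tzinfo=tz)           # fold=0: earlier, EDT
second = datetime(2023, 11, 5, 1, 30, fold=1, tzinfo=tz)  # fold=1: later, EST

assert first.tzname() == "EDT" and second.tzname() == "EST"
assert second.utcoffset() - first.utcoffset() == timedelta(hours=-1)
```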

Re: [Python-Dev] pep8 reasoning

2014-04-25 Thread Carl Meyer
On 04/25/2014 01:55 PM, Ethan Furman wrote:
> On 04/25/2014 12:45 PM, Florent wrote:
>> 2014-04-25 18:10 GMT+02:00 Nick Coghlan:
>>>
>>> And if you're going to publish a tool to enforce your *personal* style
>>> guide and include your own custom rules that the "this is OK" examples
>>> in PEP 8 fail to satisfy, don't call it "pep8".
>>
>> Two cases were signaled in issue #256 last month, where the tool
>> is arguably not compliant with some of the current conventions of the
>> PEP.
>> I've highlighted the reasons behind these checkers in the issue
>> tracker directly.
>> I disagree that they are in direct opposition with the PEP 8 coding
>> conventions, though.
> 
> The problem is that you've named it pep8.  To me, that means I can run
> it and get PEP 8 results.  If I have to add a manual configuration to
> get actual PEP 8 semantics, it's a buggy tool.
> 
> For a similar example, I am the author/maintainer of enum34, which
> purports to be the backport of Python's 3.4 enum class.  While I could
> go and make changes to it to better match my style, or even the styles
> of other users, calling it enum34 after that would be misleading as some
> one couldn't then switch from using enum34 in Python 3.2 to using the
> default enum in Python 3.4.
> 
> If you had extra switches to turn on extra behavior, that would be
> fine.  Leaving it as it is, and calling it something else would be
> fine.  But as it stands now, with the name of pep8 and the behavior of
> failing on the PEP 8 document... well, that's false advertising.

I think this fuss is unreasonable and unwarranted.

I'd like to thank Florent for taking the time to maintain an
extremely-useful tool that makes it feasible to keep to a consistent PEP
8 style throughout a large codebase with many contributors, and I think
he should have the leeway as maintainer to make the necessary judgment
calls about precisely which PEP 8 recommendations are reported precisely
how by the tool, given that:

- we aren't talking about real variance from the spirit or
recommendations of PEP 8 (though you wouldn't know it from some of the
rhetoric here about "personal preferences")

- the tool makes it very easy to turn off whichever errors you don't
prefer to enforce.

- PEP 8 changes regularly (as Florent noted, the offending code example
was only added recently), and there is value in the automated tool
maintaining some stability for its users.

Personally, I would much rather see pep8.py err on the side of being too
strict with PEP 8's recommendations than too loose. Again, it's not hard
to turn off the ones you don't want.

If python-dev wants to control the precise behavior of pep8.py, bring it
into the standard library and adopt maintenance of it. Otherwise, please
give Florent some grace.

Carl


Re: [Python-Dev] PEP 461 updates

2014-01-15 Thread Carl Meyer
Hi Ethan,

I haven't chimed into this discussion, but the direction it's headed
recently seems right to me. Thanks for putting together a PEP. Some
comments on it:

On 01/15/2014 05:13 PM, Ethan Furman wrote:
> 
> Abstract
> 
> 
> This PEP proposes adding the % and {} formatting operations from str to
> bytes [1].

I think the PEP could really use a rationale section summarizing _why_
these formatting operations are being added to bytes; namely that they
are useful when working with various ASCIIish-but-not-properly-text
network protocols and file formats, and in particular when porting code
dealing with such formats/protocols from Python 2.

Also I think it would be useful to have a section summarizing the
primary objections that have been raised, and why those objections have
been overruled (presuming the PEP is accepted). For instance: the main
objection, AIUI, has been that the bytes type is for pure bytes-handling
with no assumptions about encoding, and thus we should not add features
to it that assume ASCIIness, and that may be attractive nuisances for
people writing bytes-handling code that should not assume ASCIIness but
will once they use the feature. And the refutation: that the bytes type
already provides some operations that assume ASCIIness, and these new
formatting features are no more of an attractive nuisance than those;
since the syntax of the formatting mini-languages themselves itself
assumes ASCIIness, there is not likely to be any temptation to use it
with binary data that cannot.

Although it can be hard to arrive at accurate and agreed-on summaries of
the discussion, recording such summaries in the PEP is important; it may
help save our future selves and colleagues from having to revisit all
these same discussions and megathreads.

> Overriding Principles
> =
> 
> In order to avoid the problems of auto-conversion and value-generated
> exceptions,
> all object checking will be done via isinstance, not by values contained
> in a
> Unicode representation.  In other words::
> 
>   - duck-typing to allow/reject entry into a byte-stream
>   - no value generated errors

This seems self-contradictory; "isinstance" is type-checking, which is
the opposite of duck-typing. A duck-typing implementation would not use
isinstance, it would call / check for the existence of a certain magic
method instead.

I think it might also be good to expand (very) slightly on what "the
problems of auto-conversion and value-generated exceptions" are; that
is, that the benefit of Python 3's model is that encoding is explicit,
not implicit, making it harder to unwittingly write code that works as
long as all data is ASCII, but fails as soon as someone feeds in
non-ASCII text data.

Not everyone who reads this PEP will be steeped in years of discussion
about the relative merits of the Python 2 vs 3 models; it doesn't hurt
to spell out a few assumptions.


> Proposed semantics for bytes formatting
> ===
> 
> %-interpolation
> ---
> 
> All the numeric formatting codes (such as %x, %o, %e, %f, %g, etc.)
> will be supported, and will work as they do for str, including the
> padding, justification and other related modifiers, except locale.
> 
> Example::
> 
>>>> b'%4x' % 10
>b'   a'
> 
> %c will insert a single byte, either from an int in range(256), or from
> a bytes argument of length 1.
> 
> Example:
> 
> >>> b'%c' % 48
> b'0'
> 
> >>> b'%c' % b'a'
> b'a'
> 
> %s is restricted in what it will accept::
> 
>   - input type supports Py_buffer?
> use it to collect the necessary bytes
> 
>   - input type is something else?
> use its __bytes__ method; if there isn't one, raise an exception [2]
> 
> Examples:
> 
> >>> b'%s' % b'abc'
> b'abc'
> 
> >>> b'%s' % 3.14
> Traceback (most recent call last):
> ...
> TypeError: 3.14 has no __bytes__ method
> 
> >>> b'%s' % 'hello world!'
> Traceback (most recent call last):
> ...
> TypeError: 'hello world' has no __bytes__ method, perhaps you need
> to encode it?
> 
> .. note::
> 
>Because the str type does not have a __bytes__ method, attempts to
>directly use 'a string' as a bytes interpolation value will raise an
>exception.  To use 'string' values, they must be encoded or otherwise
>transformed into a bytes sequence::
> 
>   'a string'.encode('latin-1')
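These proposed semantics are essentially what shipped in Python 3.5; a quick check of the behaviour described above, runnable on 3.5+:

```python
# Behaviour of bytes %-interpolation as it shipped in Python 3.5 (PEP 461).
assert b'%4x' % 10 == b'   a'      # numeric codes work as for str
assert b'%c' % 48 == b'0'          # int in range(256)
assert b'%c' % b'a' == b'a'        # length-1 bytes
assert b'%s' % b'abc' == b'abc'    # bytes pass straight through

# str values are rejected -- encoding must be explicit.
try:
    b'%s' % 'hello'
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for a str value")

assert b'%s' % 'hello'.encode('latin-1') == b'hello'
```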
> 
> format
> --
> 
> The format mini language codes, where they correspond with the
> %-interpolation codes,
> will be used as-is, with three exceptions::
> 
>   - !s is not supported, as {} can mean the default for both str and
> bytes, in both
> Py2 and Py3.
>   - !b is supported, and new Py3k code can use it to be explicit.
>   - no other __format__ method will be called.
> 
> Numeric Format Codes
> 
> 
> To properly handle int and float subclasses, int(), index(), and float()
> will be called on the
> object.

Re: [Python-Dev] XML DoS vulnerabilities and exploits in Python

2013-02-20 Thread Carl Meyer
On 02/20/2013 03:35 PM, Greg Ewing wrote:
> Carl Meyer wrote:
>> An XML parser that follows the XML standard is never safe to expose to
>> untrusted input.
> 
> Does the XML standard really mandate that a conforming parser
> must blindly download any DTD URL given to it from the real
> live internet? Somehow I doubt that.

For a validating parser, the spec does mandate that. It permits
non-validating parsers (browsers are the only example given) to simply
note the existence of an external entity reference and "retrieve it for
display only on demand." [1]

But this isn't particularly relevant; the quoted statement is true even
if you ignore the external reference issues entirely and consider only
entity-expansion DoS. Some level of non-conformance to the spec is
necessary to make parsing of untrusted XML safe.
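The scale of the entity-expansion problem is easy to quantify: in the classic "billion laughs" payload, ten layers of entities, each expanding to ten copies of the previous one, blow a sub-kilobyte document up to a billion copies of the innermost string. A back-of-envelope calculation (payload shape only; no XML is parsed here):

```python
# Expanded size of a classic "billion laughs" payload:
# &lol9; -> 10 x &lol8; -> ... -> 10**9 copies of the 3-byte string "lol".
levels = 9          # lol1 .. lol9, each defined as ten copies of the previous
fanout = 10
innermost = b"lol"

expanded_bytes = len(innermost) * fanout ** levels
assert expanded_bytes == 3_000_000_000   # ~3 GB from a <1 KB document
```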

Carl

[1] http://www.w3.org/TR/xml/#include-if-valid


Re: [Python-Dev] XML DoS vulnerabilities and exploits in Python

2013-02-20 Thread Carl Meyer
On 02/20/2013 01:53 PM, Skip Montanaro wrote:
>> That's not very good. XML parsers are supposed to parse XML according
>> to standards. Is the goal to have them actually do that, or just
>> address DDOS issues?
> 
> Having read through Christian's mail and several of his references, it
> seems to me that addressing the DDoS issues is preferable to blindly
> following a standard that predates the Morris worm by a couple years.
> Everyone played nice before that watershed event.  Heck, back then you
> could telnet to g...@prep.ai.mit.edu without a password!

Also, despite the title of this thread, the vulnerabilities include
fetching of external DTDs and entities (per standard), which opens up
attacks that are worse than just denial-of-service. In our initial
Django release advisory we carelessly lumped the potential XML
vulnerabilities together under the "DoS" label, and were quickly corrected.

An XML parser that follows the XML standard is never safe to expose to
untrusted input. This means the choice is just whether the stdlib XML
parsers should be safe by default, or follow the standard by default.
(Given either choice, the other option can still be made available via
flags).

Carl


Re: [Python-Dev] venv scripts for fish and csh shells

2012-07-19 Thread Carl Meyer
On 07/19/2012 10:26 AM, Andrew Svetlov wrote:
> virtualenv has virtualenv.csh and virtualenv.fish files.
> Is there any reason for restricting venv to bash/zsh only?

No. As far as I'm concerned, a patch to port the virtualenv csh and fish
activate scripts to pyvenv would be welcome (though I can't commit said
patch, so it might be good to hear if Vinay has a different opinion).

Carl



Re: [Python-Dev] Why no venv in existing directory?

2012-07-19 Thread Carl Meyer
Hi Stefan,

On 07/19/2012 06:28 AM, Stefan H. Holek wrote:
> While trying 3.3 beta I found that I cannot use my favorite
> virtualenv pattern with pyvenv:
> 
> $ virtualenv . Installing.done.
> 
> $ pyvenv . Error: Directory exists: /Users/stefan/sandbox/foo
> 
> I appreciate that this behavior is documented and was in the PEP from
> the start: "If the target directory already exists an error will be
> raised, unless the --clear or --upgrade option was provided."
> 
> Still, I am curious what the rationale behind this restriction is.

I'd have no problem with lifting the restriction.

I don't recall any clear rationale; I think it was probably just the
simplest implementation initially, and no one ever raised it as an issue
in the PEP process.

Carl




Re: [Python-Dev] Packaging documentation and packaging.pypi API

2012-06-20 Thread Carl Meyer
Hi Alexis,

On 06/20/2012 10:57 AM, Alexis Métaireau wrote:
> On Wed, 20 Jun 2012 18:45:23 CEST, Paul Moore wrote:
>> Thanks - as you say, it's not so much the actual problem as the
>> principle of what the packaging API offers that matters here. Although
>> it does make a good point - to what extent do the packaging APIs draw
>> on existing experience like that of pip? Given that tools like pip are
>> used widely to address real requirements, it would seem foolish to
>> *not* draw on that experience in designing a stdlib API.
> 
> IIRC, pip relies only on the XML/RPC API to get information about the
> distributions from the cheeseshop. The code that's in packaging.pypi was
> built with the implementation in setuptools in mind, so we keep
> compatibility with setuptools "easy_install".

No, this is not accurate. Pip's PackageFinder uses setuptools-compatible
link-scraping, not the XMLRPC API, and it is the PackageFinder that is
used to actually find distributions to install. I think PackageFinder is
pretty much equivalent to what packaging.pypi is intended to do.

Pip does have a separate "search" command that uses the XMLRPC API -
this is the only part of pip that uses XMLRPC. I consider this a bug in
pip, because the results can be inconsistent with actual installation
using PackageFinder, and "search" can't be used with mirrors or private
indexes (unless they implement the XMLRPC API). The "search" command
should just use PackageFinder instead.
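Setuptools-compatible link-scraping of this kind boils down to collecting anchor tags from an index page (the format later standardized as the PEP 503 "simple" API). A minimal, hypothetical collector using only the stdlib parser, fed a hardcoded page for illustration:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect (href, link text) pairs from a simple-index-style page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text)))
            self._href = None

page = '<a href="pip-1.1.tar.gz#sha256=abc">pip-1.1.tar.gz</a>'
collector = LinkCollector()
collector.feed(page)
assert collector.links == [("pip-1.1.tar.gz#sha256=abc", "pip-1.1.tar.gz")]
```

A real finder would fetch pages for each configured index URL and merge the resulting candidate lists, which is exactly the multi-index combination Paul asks about.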

> That is, this leverages one question more on my side: was/is pip
> intended to be used as a library rather than as a tool / are there some
> people that are actually building tools on top of pip this way?

Pip's internal APIs are not documented, and they aren't the cleanest
APIs ever, but some of them (particularly PackageFinder and
InstallRequirement/RequirementSet) can be used without too much
difficulty, and some people are using them. Not a lot of people, I don't
think; I don't have hard numbers. I haven't seen much in the way of
public reusable tools built atop pip, but I've talked with a few people
building internal deployment tools that use pip as a library.

Carl


Re: [Python-Dev] Packaging documentation and packaging.pypi API

2012-06-20 Thread Carl Meyer
Hi Paul,

On 06/20/2012 09:29 AM, Paul Moore wrote:
> As a specific example, one thing I would like to do is to be able to
> set up a packaging.pypi client object that lets me query and download
> distributions. However, rather than just querying PyPI (the default)
> I'd like to be able to set up a list of locations (PyPI, a local
> server, and maybe some distribution files stored on my PC) and combine
> results from all of them. This differs from the mirror support in that
> I want to combine the lists, not use one as a fallback if the other
> doesn't exist. From the documentation, I can't tell if this is
> possible, or a feature request, or unsupported... (Actually, there's
> not even any documentation saying how the URL(s) in index_url should
> behave, so how exactly do I set up a local repository anyway...?)

This is perhaps a tangent, as your point here is to point out what the
API of packaging.pypi ought to allow - but pip's PackageFinder class can
easily do exactly this for you. Feel free to follow up with me for
details if this is actually still a problem you need to solve.

Carl


Re: [Python-Dev] PEP 405 (built-in virtualenv) status

2012-06-04 Thread Carl Meyer
Hello Christian,

On 06/03/2012 03:56 PM, Éric Araujo wrote:
> On 02/06/2012 12:59, Christian Tismer wrote:
>> One urgent question: will this feature be backported to Python 2.7?
> 
> Features are never backported to the stable versions.  virtualenv still
> exists as a standalone project which is compatible with 2.7 though.

To add to Éric's answer: the key difference between virtualenv and
pyvenv, allowing pyvenv environments to be much simpler, relies on a
change to the interpreter itself. This won't be backported to 2.7, and
can't be released as a standalone package.

It would be possible to backport the Python API and command-line UI of
pyvenv (which are different from virtualenv) as a PyPI package
compatible with Python 2.7. Because it wouldn't have the interpreter
change, it would have to still create environments that look like
virtualenv environments (i.e. they'd have to have chunks of the stdlib
symlinked in and a custom site.py). I suppose this could be useful if
wanting to script creation of venvs across Python 2 and Python 3, but
the utility seems limited enough that I have no plans to do this.
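The interpreter change in question is the `pyvenv.cfg` mechanism: at startup, Python 3.3+ looks for this file next to (or one level above) the executable and uses its `home` key to locate the base installation, so nothing from the stdlib needs to be copied or symlinked into the venv. A typical generated file is only a few lines (paths here are illustrative):

```
home = /usr/bin
include-system-site-packages = false
version = 3.3.0
```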

Carl


Re: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support

2012-05-28 Thread Carl Meyer
On 05/28/2012 04:24 PM, Nick Coghlan wrote:
> It would have been better if the issue of script management on Windows
> had been raised in PEP 405 itself - I likely would have declared PEP
> 397 a dependency *before* accepting it (even if that meant the feature
> missed the alpha 4 deadline and first appeared in beta 1, or
> potentially even missed 3.3 altogether).
> 
> However, I'm not going to withdraw the acceptance of the PEP over this
> - while I would have made a different decision at the time given the
> additional information (due to the general preference to treat Windows
> as a first class deployment target), I think reversing my decision now
> would make the situation worse rather than better.

I think it's unfortunate that this issue (which is
http://bugs.python.org/issue12394) has become entangled with PEP 405 at
all, since AFAICT it is entirely orthogonal. This is a
distutils2/packaging issue regarding how scripts are installed on
Windows. It happens to be relevant when trying to install things into a
PEP 405 venv on Windows, but it applies to a non-virtual Python
installation on Windows every bit as much as it applies to a PEP 405
environment. In an earlier discussion with Vinay I thought we had agreed
that it was an orthogonal issue and that this proposed patch for it
would be removed from the PEP 405 reference implementation before it was
merged to CPython trunk; I think that would have been preferable.

This is why there is no mention of the issue in PEP 405 - it doesn't
belong there, because it is not related.

> That means the important question is what needs to happen before beta
> 1 at the end of June. As I see it, we have two ways forward:
> 
> 1. My preferred option: bring PEP 397 up to scratch as a specification
> for the behaviour of the Python launcher (perhaps with Vinay stepping
> up as a co-author to help Mark if need be), find a BDFL delegate (MvL?
> Brian Curtin?) and submit that PEP for acceptance within the next few
> weeks. The updated PEP 397 should include an explanation of exactly
> how it will help with the correct implementation of PEP 405 on Windows
> (this may involve making the launcher pyvenv aware).
> 
> 2. The fallback option: remove the currently checked in build
> artifacts from source control and incorporate them into the normal
> Windows build processes (both the main VS 2010 process, and at least
> the now-legacy VS 2008 process)
> 
> For alpha 4, I suggest going with MvL's suggestion - drop the binaries
> from Mercurial and accept that this aspect of PEP 405 simply won't
> work on Windows until the first beta.

Regardless, these sound like the right options moving forward, with the
clarification that it is not any "aspect of PEP 405" that will not work
until a fix is merged, it is simply an existing limitation of
distutils2/packaging on Windows. And that if anything needs to be
reverted, temporarily or permanently, it should not be all of the PEP
405 implementation, rather just this packaging fix.

Carl





Re: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-08 Thread Carl Meyer

Hi Paul,

On 05/07/2012 04:16 PM, Paul Moore wrote:

On 7 May 2012 21:55, "Martin v. Löwis"  wrote:

This sounds to me like a level of complexity unwarranted by the severity
of the problem, especially when considering the additional burden it
imposes on alternative Python implementations.



OTOH, it *significantly* reduces the burden on Python end users, for
whom creation of a venv under a privileged account is a significant
hassle.


Personally, I would find a venv which required being run as an admin
account to be essentially unusable on Windows (particularly Windows 7,
where this means creating venvs in an "elevated" console window).

Allowing for symlinks as an option is fine, I guess, but I'd be -1 on
it being the default.


I don't think anyone has proposed making symlinks the default on 
Windows. At this point the two options on Windows would be to use the 
--symlink option explicitly, or else to need to run "pyvenv --upgrade" 
on your envs if you upgrade the underlying Python in-place (and there's 
a breaking incompatibility between the new stdlib and the old 
interpreter, which there almost never will be if the past is any 
indication).


I expect most users will opt for the latter option (equivalent to how 
current virtualenv works, except virtualenv doesn't have an --upgrade 
flag so you have to upgrade manually), but the former is also available 
if some prefer it.


In any case, the situation will be no worse than it is with virtualenv 
today.


Carl


Re: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-07 Thread Carl Meyer

On 05/07/2012 03:52 AM, "Martin v. Löwis" wrote:

3) Symlink the interpreter rather than copying. I include this here for
the sake of completeness, but it's already been rejected due to
significant problems on older versions of Windows and on OS X.


That sounds the right solution to me. PEP 405 specifies that bin/python3
exists, but not that it is the actual Python interpreter binary that is
normally used. For each target system, a solution should be defined that
allows in-place updates of Python that also update all venvs automatically.


I propose that for Windows, that solution is to have a new enough 
version of Windows and the necessary privileges, and use the --symlink 
option to the pyvenv script, or else to manually update venvs using 
pyvenv --upgrade.



For example, for Windows, it would be sufficient to just have the
executable in bin/, as the update will only affect pythonXY.dll.
That executable may be different from the regular python.exe, and
it might be necessary that it locates its Python installation first.


This sounds to me like a level of complexity unwarranted by the severity 
of the problem, especially when considering the additional burden it 
imposes on alternative Python implementations.


Carl


Re: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-07 Thread Carl Meyer

On 05/07/2012 04:26 AM, Ronald Oussoren wrote:

On 7 May, 2012, at 11:52, Martin v. Löwis wrote:

3) Symlink the interpreter rather than copying. I include this
here for the sake of completeness, but it's already been rejected
due to significant problems on older versions of Windows and on OS X.


That sounds the right solution to me. PEP 405 specifies that
bin/python3 exists, but not that it is the actual Python
interpreter binary that is normally used. For each target system, a
solution should be defined that allows in-place updates of Python
that also update all venvs automatically.

For example, for Windows, it would be sufficient to just have the
executable in bin/, as the update will only affect pythonXY.dll.
That executable may be different from the regular python.exe, and
it might be necessary that it locates its Python installation
first. For Unix, symlinks sound fine. Not sure what the issue with
OS X is.


The bin/python3 executable in a framework is a small stub that
execv's the real interpreter that is stuffed in a Python.app bundle
inside the Python framework. That's done to ensure that GUI code can
work from the command-line, Apple's GUI framework refuse to work when
the executable is not in an application bundle.

Because of this trick pyvenv won't know which executable the user
actually called and hence cannot find the pyvenv configuration file
(which is next to the stub executable).


It occurs to me, belatedly, that this also means that upgrades should be 
a non-issue with OS X framework builds (presuming the upgraded 
actual-Python-binary gets placed in the same location, and the 
previously copied stub will still exec it without trouble), in which 
case we can symlink on OS X non-framework builds and copy on OS X 
framework builds and be happy.


Carl


Re: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-06 Thread Carl Meyer

On 05/05/2012 12:41 AM, Nick Coghlan wrote:

On Sat, May 5, 2012 at 6:49 AM, Carl Meyer  wrote:

1) Document it and provide a tool for easily upgrading a venv in this
situation. This may be adequate. In practice the situation is quite rare:
2.6.8/2.7.3 is the only actual example in the history of virtualenv that I'm
aware of. The disadvantage is that if the problem does occur, the error will
probably be quite confusing and seemingly unrelated to pyvenv.


I think this is the way to go, for basically the same reasons that we
did it this way this time: there's no good reason to pay an ongoing
cost to further mitigate the risks associated with an already
incredibly rare event.


This seems to be the rough consensus. I'll update the PEP with a note 
about this, and we'll consider switching back to symlink-by-default on 
Linux.


Carl


Re: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-06 Thread Carl Meyer

On 05/05/2012 04:40 AM, Antoine Pitrou wrote:

On Fri, 04 May 2012 14:49:03 -0600
Carl Meyer  wrote:

3) Symlink the interpreter rather than copying. I include this here for
the sake of completeness, but it's already been rejected due to
significant problems on older versions of Windows and on OS X.


Perhaps symlinking could be used at least on symlinks-friendly OSes?
I expect older Windows to disappear one day :-) So the only left
outlier would be OS X.


It certainly could - at one point the reference implementation did 
exactly this. I understand though that even on newer versions of Windows there are 
administrator-privilege issues with symlinks, and I don't know that 
there's any prospect of the OS X stub executable going away, so I think 
if we did this we should assume that we're accepting a more-or-less 
permanent cross-platform difference in the default behavior of venvs. 
Maybe that's ok; it would mean that for Linux users there'd be no need 
to run any venv-upgrade script at all when Python is updated, which is 
certainly a plus.


At one point it was argued that we shouldn't symlink by default because 
users expect venvs to be isolated and not upgraded implicitly. I think 
this discussion reveals that that's a false argument, since the stdlib 
will be upgraded implicitly regardless, and that's just as likely to 
break something as an interpreter update (and more likely than upgrading 
them in sync). IOW, if you want real full isolation from a system 
Python, you build your own Python, you don't use pyvenv.


Carl



Re: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-06 Thread Carl Meyer

On 05/05/2012 02:38 AM, Vinay Sajip wrote:

Nick Coghlan writes:


Personally, I expect that "always update your virtual environment
binaries after updating the system Python to a new point release" will
itself become a recommended practice when using virtual environments.


Of course, the venv update tool will need to only update environments which were
set up with the particular version of Python which was updated. ISTM pyvenv.cfg
will need to have a version=X.Y.Z line in it, which is added during venv
creation. That information will be used by the tool to only update specific
environments.


I don't think the added "version" key in pyvenv.cfg is needed; the 
"home" key provides enough information to know whether the virtualenv 
was created by the particular Python that was upgraded.


The "version" key could in theory be useful to know whether a particular 
venv created by that Python has or has not yet been upgraded to match, 
but since the upgrade is trivial and idempotent I don't think that is 
important.
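For illustration, deciding which installation a venv belongs to from the "home" key could look roughly like this (the parsing is a simplified sketch of the pyvenv.cfg key = value format, not the actual pyvenv implementation):

```python
import os

def venv_home(pyvenv_cfg_text):
    """Return the value of the 'home' key from pyvenv.cfg text, or None."""
    for line in pyvenv_cfg_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            if key.strip().lower() == "home":
                return value.strip()
    return None

def created_from(pyvenv_cfg_text, python_home):
    """True if this venv was created by the Python installed at python_home."""
    home = venv_home(pyvenv_cfg_text)
    return home is not None and os.path.normpath(home) == os.path.normpath(python_home)
```

An upgrade tool could then skip any venv for which created_from() is false, with no separate "version" key needed.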


Carl


[Python-Dev] PEP 405 (pyvenv) and system Python upgrades

2012-05-04 Thread Carl Meyer

Hi all,

The recent virtualenv breakage in Python 2.6.8 and 2.7.3 reveals an 
issue that deserves to be explicitly addressed in PEP 405: what happens 
when the system Python underlying a venv gets an in-place bugfix 
upgrade. If the bugfix includes a simultaneous change to the interpreter 
and standard library such that the older interpreter will not work with 
the newer standard library, all venvs created from that Python 
installation will be broken until the new interpreter is copied into them.


Choices for how to address this:

1) Document it and provide a tool for easily upgrading a venv in this 
situation. This may be adequate. In practice the situation is quite 
rare: 2.6.8/2.7.3 is the only actual example in the history of 
virtualenv that I'm aware of. The disadvantage is that if the problem 
does occur, the error will probably be quite confusing and seemingly 
unrelated to pyvenv.


2) In addition to the above, introduce a versioning marker in the 
standard library (is there one already?) and have some code somewhere 
(insert hand-waving here) check sys.version_info against the stdlib 
version, and fail fast with an unambiguous error if there is a mismatch. 
This makes the failure more explicit, but at the significant cost of 
making it more common: at every mismatch, not just in the 
apparently-rare case of a breaking change.
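To make the hand-waving slightly more concrete, such a check could look like the following; the marker itself, and where it would live in the stdlib, are pure assumptions for illustration:

```python
import sys

def check_stdlib_match(stdlib_version):
    """Fail fast if the running interpreter doesn't match the version the
    standard library was shipped with. `stdlib_version` stands in for a
    hypothetical version marker the stdlib would need to grow."""
    interp = sys.version_info[:3]
    if interp != tuple(stdlib_version):
        raise RuntimeError(
            "interpreter %d.%d.%d does not match standard library %d.%d.%d; "
            "re-run pyvenv --upgrade on this environment"
            % (interp + tuple(stdlib_version)))
```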


3) Symlink the interpreter rather than copying. I include this here for 
the sake of completeness, but it's already been rejected due to 
significant problems on older versions of Windows and on OS X.


4) Adopt a policy of interpreter/stdlib cross-compatibility within a 
given X.Y version of Python. I don't expect this to be a popular choice, 
given the additional testing requirements it imposes, but it would 
certainly be the nicest option from the PEP 405 standpoint (and may also 
be complementary to proposals for splitting out the stdlib). In the 
2.6.8/2.7.3 case, this would have been technically trivial to do, but 
the choice was made not to do it in order to force virtualenv users to 
adopt the security-fixed Python interpreter.


Thoughts?

Carl


[Python-Dev] outdated info on download pages for older versions

2012-05-02 Thread Carl Meyer

Hi all,

Are the download pages for older Python versions supposed to be kept up 
to date at all? I just noticed that the 2.4.6 download page 
(http://www.python.org/download/releases/2.4.6/) says things like 
"Python 2.4 is now in security-fix-only mode" (whereas in fact it no 
longer gets even security fixes), and "Python 2.6 is the latest release 
of Python."


While checking to see if there was a SIG that would be more appropriate 
for this question, I also noticed that if one clicks on Community | 
Mailing Lists in the left sidebar of python.org, there's a "Special 
Interest Groups" link under "Mailing Lists" which is a 404 (not to 
mention redundant, as there's also one parallel to "Mailing Lists" that 
works).


(Please do let me know if there is a more appropriate forum for website 
issues/questions).


Carl


Re: [Python-Dev] Changes in html.parser may cause breakage in client code

2012-04-27 Thread Carl Meyer

On 04/27/2012 08:36 AM, Guido van Rossum wrote:

Someone should contact the Django folks. Alex Gaynor?


I committed the relevant code to Django (though I didn't write the 
patch), and I've been following this thread. I have it on my todo list 
to review this code again with Ezio's suggestions in mind. So you can 
consider "the Django folks" contacted.


Carl


Re: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing)

2012-03-29 Thread Carl Meyer
On 03/29/2012 11:39 AM, David Malcolm wrote:
>> jaraco@vdm-dev:~$ env/bin/python -c "import os; os.urandom()"
>> Traceback (most recent call last):
>>   File "", line 1, in 
>> AttributeError: 'module' object has no attribute 'urandom'
> 
> It looks like this a symptom of the move of urandom to os.py to
> posixmodule et al.
> 
> At first glance, it looks like this specific hunk should be reverted:
> http://hg.python.org/cpython/rev/a0f43f4481e0#l7.1
> so that if you're running with the new stdlib but an old python binary
> the combination can still have a usable os.urandom

Indeed, I've just tested and verified that this does fix the problem.
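The reverted hunk essentially amounts to keeping a Python-level fallback in os.py for interpreters whose binary does not provide the C-level urandom; a rough sketch of that idea (illustrative only, the real os.py logic differs in detail):

```python
try:
    from posix import urandom  # newer interpreters provide it in C
except ImportError:
    def urandom(n):
        """Python-level fallback for older interpreters (assumes a
        POSIX system with /dev/urandom available)."""
        with open("/dev/urandom", "rb") as f:
            data = f.read(n)
        if len(data) != n:
            raise OSError("failed to read %d random bytes" % n)
        return data
```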

> Should this be tracked in bugs.python.org?

I've added this option as a comment on bug 1. The title of that bug
is worded such that it could be reasonably resolved either with the
backwards-compatibility fix or the release notes addition, the release
managers can decide what seems appropriate to them.

Carl





Re: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing)

2012-03-29 Thread Carl Meyer
Thanks Jason for raising this. I just assumed that this was virtualenv's
fault (which it is) and didn't consider raising it here, but a note in
the release notes for the affected Python versions will certainly reach
many more of the likely-to-be-affected users.

FTR, I confirmed that the issue also affects the upcoming point releases
for 3.1 and 3.2, as well as 2.6 and 2.7. Jason filed issue 1 to
track the addition to the release notes for those versions.

Carl





Re: [Python-Dev] Virtualenv not portable from Python 2.7.2 to 2.7.3 (os.urandom missing)

2012-03-28 Thread Carl Meyer
Hi Jason,

On 03/28/2012 12:22 PM, Jason R. Coombs wrote:
> To reproduce, using virtualenv 1.7+ on Python 2.7.2 on Ubuntu, create a
> virtualenv. Move that virtualenv to a host with Python 2.7.3RC2 yields:
> 
> jaraco@vdm-dev:~$ /usr/bin/python2.7 -V
> 
> Python 2.7.3rc2
> 
> jaraco@vdm-dev:~$ env/bin/python -V
> 
> Python 2.7.2
> 
> jaraco@vdm-dev:~$ env/bin/python -c "import os; os.urandom()"
> 
> Traceback (most recent call last):
> 
>   File "", line 1, in 
> 
> AttributeError: 'module' object has no attribute 'urandom'
> 
> This bug causes Django to not start properly (under some circumstances).
> 
> I reviewed the changes between v2.7.2 and 2.7 (tip) and it seems there
> was substantial refactoring of the os and posix modules for urandom.
> 
> I still don’t fully understand why the urandom method is missing
> (because the env includes the python 2.7.2 executable and stdlib).

In Python 2.6.8/2.7.3, urandom is built into the executable. A
virtualenv doesn't contain the whole stdlib, only the bits necessary to
bootstrap site.py. So the problem arises from trying to use the 2.7.3
stdlib with a 2.7.2 interpreter.

> I suspect this change is going to cause some significant backward
> compatibility issues. Is there a recommended workaround? Should I file a
> bug?

The workaround is easy: just re-run virtualenv on that path with the new
interpreter.

I was made aware of this issue a few weeks ago, and added a warning to
the virtualenv "news" page:
http://www.virtualenv.org/en/latest/news.html  I'm not sure where else
to publicize it.

Carl





Re: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)

2012-03-26 Thread Carl Meyer
On 03/23/2012 09:22 PM, PJ Eby wrote:
> On Mar 23, 2012 3:53 PM, "Carl Meyer" wrote:
>> On 03/23/2012 12:35 PM, PJ Eby wrote:
>> > AFAICT, virtualenvs are overkill for most development anyway.  If you're
>> > not using distutils except to install dependencies, then configure
>> > distutils to install scripts and libraries to the same directory, and
>> > then do all your development in that directory.  Presto!  You now have a
>> > cross-platform "virtualenv".  Want the scripts on your path?  Add that
>> > directory to your path... or if on Windows, don't bother, since the
>> > current directory is usually on the path.   (In fact, if you're only
>> > using easy_install to install your dependencies, you don't even need to
>> > edit the distutils configuration, just use "-md targetdir".)
>>
>> Creating and using a virtualenv is, in practice, _easier_ than any of
>> those alternatives,
> 
> Really?  As I said, I've never seen the need to try, since just
> installing stuff to a directory on PYTHONPATH seems quite easy enough
> for me.
> 
>> that the "isolation from system site-packages" feature is quite popular
>> (the outpouring of gratitude when virtualenv went isolated-by-default a
>> few months ago was astonishing), and AFAIK none of your alternative
>> proposals support that at all.
> 
> What is this isolation for, exactly?  If you don't want site-packages on
> your path, why not use python -S?
> 
> (Sure, nobody knows about these things, but surely that's a
> documentation problem, not a tooling problem.)
> 
> Don't get me wrong, I don't have any deep objection to virtualenvs, I've
> just never seen the *point* (outside of the scenarios I mentioned),

No problem. I was just responding to the assertion that people only use
virtualenvs because they aren't aware of the alternatives, which I don't
believe is true. It's likely many people aren't aware of python -S, or
of everything that's possible via distutils.cfg. But even if they were,
for the cases where I commonly see virtualenv in use, it solves the same
problems more easily and with much less fiddling with config files and
environment variables.

Case in point: libraries that also install scripts for use in
development or build processes. If you're DIY, you have to figure out
where to put these, too, and make sure it's on your PATH. And if you
want isolation, not only do you have to remember to run python -S every
time, you also have to edit every script wrapper to put -S in the
shebang line. With virtualenv+easy_install/pip, all of these things Just
Simply Work, and (mostly) in an intuitive way. That's why people use it.

> thus don't see what great advantage will be had by rearranging layouts
> to make them shareable across platforms, when "throw stuff in a
> directory" seems perfectly serviceable for that use case already.  Tools
> that *don't* support "just throw it in a directory" as a deployment
> option are IMO unpythonic -- practicality beats purity, after all.  ;-)

No disagreement here. I think virtualenv's sweet spot is as a convenient
tool for development environments (used in virtualenvwrapper fashion,
where the file structure of the virtualenv itself is hidden away and you
never see it at all). I think it's fine to deploy _into_ a virtualenv,
if you find that convenient too (though I think there are real
advantages to deploying just a big ball of code with no need for
installers). But I see little reason to make virtualenvs relocatable or
sharable across platforms. I don't think virtualenvs as an on-disk file
structure make a good distribution/deployment mechanism at all.

IOW, I hijacked this thread (sorry) to respond to a specific denigration
of the value of virtualenv that I disagree with. I don't care about
making virtualenvs consistent across platforms.

Carl





Re: [Python-Dev] Python install layout and the PATH on win32 (Rationale part 1: Regularizing the layout)

2012-03-23 Thread Carl Meyer
Hi PJ,

On 03/23/2012 12:35 PM, PJ Eby wrote:
> AFAICT, virtualenvs are overkill for most development anyway.  If you're
> not using distutils except to install dependencies, then configure
> distutils to install scripts and libraries to the same directory, and
> then do all your development in that directory.  Presto!  You now have a
> cross-platform "virtualenv".  Want the scripts on your path?  Add that
> directory to your path... or if on Windows, don't bother, since the
> current directory is usually on the path.   (In fact, if you're only
> using easy_install to install your dependencies, you don't even need to
> edit the distutils configuration, just use "-md targetdir".)
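The distutils configuration PJ describes could be expressed roughly like this in a per-user ~/.pydistutils.cfg (the target directory name is illustrative):

```ini
[install]
install_lib = ~/devdir
install_scripts = ~/devdir
```

With that in place, `python setup.py install` would drop both modules and scripts into the one directory.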

Creating and using a virtualenv is, in practice, _easier_ than any of
those alternatives, so it's hard to see it as "overkill." Not to mention
that the "isolation from system site-packages" feature is quite popular
(the outpouring of gratitude when virtualenv went isolated-by-default a
few months ago was astonishing), and AFAIK none of your alternative
proposals support that at all.

Carl





Re: [Python-Dev] Python install layout and the PATH on win32

2012-03-20 Thread Carl Meyer
On 03/20/2012 12:19 PM, Paul Moore wrote:
> I also note that I'm assuming virtualenv will change to match whatever
> the Python version it's referencing does. I don't see how you can
> guarantee that, but if there are discrepancies between virtualenvs and
> installed Pythons, my level of objection goes up a little more.

Virtualenv will follow whatever Python does, yes.

Carl





Re: [Python-Dev] PEP 405 (built-in virtualenv) status

2012-03-19 Thread Carl Meyer
Hello Kristján,

On 03/19/2012 03:26 AM, Kristján Valur Jónsson wrote:
> Hi Carl. I'm very interested in this work. At CCP we work heavily
> with virtual environments.  Except that we don't use virtualenv
> because it is just a pain in the neck.  We like to be able to run
> virtual python environments of various types as they arrive checked
> out of source control repositories, without actually "installing"
> anything. For some background, please see:
> http://blog.ccpgames.com/kristjan/2010/10/09/using-an-isolated-python-exe/.
> It's a rather quick read, actually.
> 
> The main issue for us is: How to prevent your local python.exe from
> reading environment variables and running some global site.py?
> 
> There are a number of points raised in the above blog, please take a
> look at the "Musings" at the end.

Thanks, I read the blog post. I think there are some useful points
there; I too find the startup sys.path behavior of Python a bit more
complex and magical than I'd prefer (I'm sure it's grown organically
over the years to address a variety of different needs and concerns, not
all of which I understand or am even aware of).

I think there's one important (albeit odd and magical) bit of Python's
current behavior that you are missing in your blog post. All of the
initial sys.path directories are constructed relative to sys.prefix and
sys.exec_prefix, and those values in turn are determined (if PYTHONHOME
is not set), by walking up the filesystem tree from the location of the
Python binary, looking for the existence of a file at the relative path
"lib/pythonX.X/os.py" (or "Lib/os.py" on Windows). Python takes the
existence of this file to mean that it's found the standard library, and
sets sys.prefix accordingly. Thus, you can achieve reliable full
isolation from any installed Python, with no need for environment
variables, simply by placing a file (it can even be empty) at that
relative location from the location of your Python binary. You will
still get some default paths added on sys.path, but they will all be
relative to your Python binary and thus presumably under your control;
nothing from any other location will be on sys.path. I doubt you will
find this solution satisfyingly elegant, but you might nonetheless find
it practically useful.
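A minimal sketch of planting that landmark file (POSIX-style paths assumed; the helper name is made up for illustration):

```python
import os

def plant_landmark(bindir, version):
    """Create an empty lib/pythonX.X/os.py relative to the directory
    holding the Python binary, so that bindir's parent is treated as
    sys.prefix and nothing outside it lands on the default sys.path."""
    prefix = os.path.dirname(bindir)          # bindir is e.g. /opt/app/bin
    libdir = os.path.join(prefix, "lib", "python" + version)
    if not os.path.isdir(libdir):
        os.makedirs(libdir)
    landmark = os.path.join(libdir, "os.py")
    open(landmark, "w").close()               # an empty file is enough
    return landmark
```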

The essence of PEP 405 is simply to provide a less magical way to
achieve this same result, by locating a "pyvenv.cfg" file next to (or
one directory up from) the Python binary.

The bulk of the work in PEP 405 is aimed towards a rather different goal
from yours - to be able to share an installed Python's copy of the
standard library among a number of virtual environments. This is the
purpose of the "home" key in pyvenv.cfg and the added sys.base_prefix
(which point to the Python installation whose standard library will be
used). I think this serves a valuable and common use case, but I wonder
if your use case couldn't also be served with a minor tweak to PEP 405.
Currently it ignores a pyvenv.cfg file with no "home" key; instead, it
could set sys.prefix and sys.base_prefix both to the location of that
pyvenv.cfg. For most purposes, this would result in a broken Python (no
standard library), but it might help you?

Beyond that possible tweak, while I certainly wouldn't oppose any effort
to clean up / document / make-optional Python's startup sys.path-setting
behavior, I think it's mostly orthogonal to PEP 405, and I don't think
it would be helpful to expand the scope of PEP 405 to include that effort.

Carl





Re: [Python-Dev] Python install layout and the PATH on win32

2012-03-16 Thread Carl Meyer
Hi Mark,

On 03/16/2012 05:53 PM, Mark Hammond wrote:
> * All executables and scripts go into the root of the Python install.
> This directory is largely empty now - it is mainly a container for other
> directories.  This would solve the problem of needing 2 directories on
> the PATH and mean existing tools which locate the executable would work
> fine.

I hate to seem like I'm piling on now after panning your last brainstorm
:-), but... this would be quite problematic for virtualenv users, many
of whom do rely on the fact that the virtualenv "stuff" is confined to
within a limited set of known subdirectories, and they can overlay a
virtualenv and their own project code with just a few virtualenv
directories vcs-ignored.

I would prefer either the status quo or the proposed cross-platform
harmonization.

Carl





Re: [Python-Dev] Python install layout and the PATH on win32

2012-03-16 Thread Carl Meyer
Hi Van,

On 03/16/2012 08:08 AM, Lindberg, Van wrote:
>> Changing the directory name is in fact a new and different (and much
>> more invasive) special case, because distutils et al install scripts
>> there, and that directory name is part of the distutils install scheme.
>> Installers don't care where the Python binary is located, so moving it
>> in with the other scripts has very little impact.
> 
> So would changing the distutils install scheme in 3.3 - as defined and 
> declared by distutils - lead to a change in your code?
> 
> Alternatively stated, do you independently figure out that your 
> virtualenv is on Windows and then put things in Scripts, etc, or do you 
> use sysconfig? If sysconfig gave you different (consistent) values 
> across platforms, how would that affect your code?

Both virtualenv and PEP 405 pyvenv figure out the platform at
venv-creation time, and hard-code certain information about the correct
layout for that platform (Scripts vs bin, as well as lib/pythonx.x vs
Lib), so the internal layout of the venv matches the system layout on
that platform. The key fact is that there is then no special-casing
necessary when code runs within the virtualenv (particularly installer
code); the same install scheme that would work in the system Python will
also Just Work in the virtualenv.
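
The creation-time layout choice described above can be sketched as follows. A simplified illustration (function name and return shape are mine, not virtualenv's or pyvenv's API):

```python
import os
import sys

def venv_paths(env_dir):
    # The layout is fixed at creation time to match the host platform
    # (Scripts vs bin, Lib vs lib/pythonX.Y), so installer code running
    # inside the venv needs no venv-specific special case afterwards.
    if os.name == "nt":
        bin_dir = os.path.join(env_dir, "Scripts")
        lib_dir = os.path.join(env_dir, "Lib", "site-packages")
    else:
        pyver = "python%d.%d" % sys.version_info[:2]
        bin_dir = os.path.join(env_dir, "bin")
        lib_dir = os.path.join(env_dir, "lib", pyver, "site-packages")
    return bin_dir, lib_dir
```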

I'm not concerned about changes to distutils/sysconfig install schemes to
make them more compatible across platforms from the POV of virtualenv;
we can easily update the current platform-detection code to do the right
thing depending on both platform and Python version.

I do share Éric's concern about whether distutils' legacy install
schemes would be updated or not, and how this would affect backwards
compatibility for older installer code, but that's pretty much
orthogonal to virtualenv/pyvenv.

I don't want to make the internal layout of a virtualenv differ from the
system Python layout on the same platform, which (IIUC) was Mark's proposal.

Hope that clarifies,

Carl





Re: [Python-Dev] Python install layout and the PATH on win32

2012-03-15 Thread Carl Meyer
On 03/15/2012 05:10 PM, Mark Hammond wrote:
> On 16/03/2012 10:48 AM, Carl Meyer wrote:
>> The implementation of virtualenv (and especially PEP 405 pyvenv) is
>> largely based on making sure that the internal layout of a
>> virtualenv is identical to the layout of an installed Python on that
>> same platform, to avoid any need to special-case virtualenvs in
>> distutils. The one exception to this is the location of the python
>> binary itself in Windows virtualenvs; we do place it inside Scripts\ so
>> that the virtualenv can be "activated" by adding only a single path to
>> the shell PATH. But I would be opposed to any additional special-casing
>> of the internal layout of a virtualenv
> ...
> 
> Unless I misunderstand, that sounds like it should keep everyone happy;
> there already is a special case for the executable on Windows being in a
> different place in an installed layout vs a virtual-env layout. Changing
> this to use "bin" instead of "Scripts" makes the virtualenv more
> consistent across platforms and doesn't impose any additional
> special-casing for Windows (just slightly changes the existing
> special-case :)

Changing the directory name is in fact a new and different (and much
more invasive) special case, because distutils et al install scripts
there, and that directory name is part of the distutils install scheme.
Installers don't care where the Python binary is located, so moving it
in with the other scripts has very little impact.

Carl





Re: [Python-Dev] Python install layout and the PATH on win32

2012-03-15 Thread Carl Meyer
On 03/15/2012 04:19 PM, Mark Hammond wrote:
> On 16/03/2012 8:57 AM, VanL wrote:
>> On 3/14/2012 6:30 PM, Mark Hammond wrote:
>>>
>>> So why not just standardize on that new layout for virtualenvs?
>>
>> That sounds like the worst of all worlds - keep all the existing special
>> cases, and add one.
> 
> I'm not so sure.  My concern is that this *will* break external tools
> that attempt to locate the python executable from an installed
> directory.  However, I'm not sure this requirement exists for virtual
> environments - such tools probably will not attempt to locate the
> executable in a virtual env as there is no standard place for a virtual
> env to live.
> 
> So having a standard layout in the virtual envs still seems a win - we
> keep the inconsistency in the layout of the "installed" Python, but
> tools which work with virtualenvs still get a standardized layout.

The implementation of virtualenv (and especially PEP 405 pyvenv) is
largely based on making sure that the internal layout of a
virtualenv is identical to the layout of an installed Python on that
same platform, to avoid any need to special-case virtualenvs in
distutils. The one exception to this is the location of the python
binary itself in Windows virtualenvs; we do place it inside Scripts\ so
that the virtualenv can be "activated" by adding only a single path to
the shell PATH. But I would be opposed to any additional special-casing
of the internal layout of a virtualenv that would require tools
installing software inside virtualenv to use a different install scheme
than when installing to a system Python.

In other words, I would much rather that tools have to understand a
different layout between Windows virtualenvs and Unixy virtualenvs
(because most tools don't have to care anyway, distutils just takes care
of it, and to the extent they do have to care, they have to adjust
anyway in order to work with installed Pythons) than that they have to
understand a different layout between virtualenv and non- on the same
platform. To as great an extent as possible, tools shouldn't have to
care whether they are dealing with a virtualenv.

A consistent layout all around would certainly be nice, but I'm not
venturing any opinion on whether it's worth the backwards incompatibility.

Carl





Re: [Python-Dev] PEP 405 (built-in virtualenv) status

2012-03-15 Thread Carl Meyer
On 03/15/2012 03:02 PM, Lindberg, Van wrote:
> FYI, the location of the tcl/tk libraries does not appear to be set in 
> the virtualenv, even if tkinter is installed and working in the main 
> Python installation. As a result, tk-based apps will not run from a 
> virtualenv.

Thanks for the report! I've added this to the list of open issues in the
PEP and I'll look into it.

Carl





[Python-Dev] PEP 405 (built-in virtualenv) status

2012-03-15 Thread Carl Meyer
A brief status update on PEP 405 (built-in virtualenv) and the open issues:

1. As mentioned in the updated version of the language summit notes,
Nick Coghlan has agreed to pronounce on the PEP.

2. Ned Deily discovered at the PyCon sprints that the current reference
implementation does not work with an OS X framework build of Python.
We're still working to discover the reason for that and determine
possible fixes.

3. If anyone knows of a pair of packages in which both need to build
compiled extensions, and the compilation of the second depends on header
files from the first, that would be helpful to me in testing the other
open issue (installation of header files). (I thought numpy and scipy
might fit this bill, but I'm currently not able to install numpy at all
under Python 3 using pysetup, easy_install, or pip.)

Thanks,

Carl





Re: [Python-Dev] Counting collisions for the win

2012-01-19 Thread Carl Meyer

Hi Victor,

On 01/19/2012 05:48 PM, Victor Stinner wrote:
[snip]
> Using a randomized hash may
> also break (indirectly) real applications because the application
> output is also somehow "randomized". For example, in the Django test
> suite, the HTML output is different at each run. Web browsers may
> render the web page differently, or crash, or ... I don't think that
> Django would like to sort attributes of each HTML tag, just because we
> wanted to fix a vulnerability.

I'm a Django core developer, and if it is true that our test-suite has a
dictionary-ordering dependency that is expressed via HTML attribute
ordering, I consider that a bug and would like to fix it. I'd be
grateful for, not resentful of, a change in CPython that revealed the
bug and prompted us to fix it. (I presume that it is true, as it sounds
like you experienced it directly; I don't have time to play around at
the moment, but I'm surprised we haven't seen bug reports about it from
users of 64-bit Pythons long ago). I can't speak for the core team, but
I doubt there would be much disagreement on this point: ideally Django
would run equally well on any implementation of Python, and as far as I
know none of the alternative implementations guarantee hash or
dict-ordering compatibility with CPython.
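
The class of bug under discussion is easy to demonstrate. A toy illustration (not Django code): under hash randomization, or any Python whose dicts do not preserve insertion order, the first renderer's output varies between runs, and sorting the attributes is the kind of fix described above:

```python
def render_tag(name, attrs):
    # Output depends on dict iteration order -- with randomized hashes
    # (or pre-3.7 CPython dicts) this can differ from run to run.
    return "<%s %s>" % (name, " ".join('%s="%s"' % kv for kv in attrs.items()))

def render_tag_stable(name, attrs):
    # Sorting the attribute names makes the output deterministic on any
    # Python implementation, removing the ordering dependency.
    return "<%s %s>" % (name, " ".join('%s="%s"' % kv for kv in sorted(attrs.items())))
```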

I don't have the expertise to speak otherwise to the alternatives for
fixing the collisions vulnerability, but I don't believe it's accurate
to presume that Django would not want to fix a dict-ordering dependency,
and use that as a justification for one approach over another.

Carl


Re: [Python-Dev] readd u'' literal support in 3.3?

2011-12-09 Thread Carl Meyer

On 12/09/2011 07:45 AM, Lennart Regebro wrote:
> The slowness of running 2to3 during install time can be fixed by not
> doing so, but instead running it when the distribution is created,
> including both Python 2 and Python 3 code in the distribution.
> 
> http://python3porting.com/2to3.html#distribution-section
> 
> There are no tools that support this at the moment though. I guess it
> would be cool if Distribute supported making these kinds of
> distributions...

Doesn't this just move the problem to testing? Presumably one wants to
test that changes to the code don't break under Python 3, and ideally at
every change, not only at release time.

Carl


Re: [Python-Dev] readd u'' literal support in 3.3?

2011-12-09 Thread Carl Meyer
On 12/09/2011 08:35 AM, Michael Foord wrote:
> On 9 Dec 2011, at 15:13, Barry Warsaw wrote:
>> Oh, I remember this one, because I think I reported and fixed it.
>> But I take it as a given that Python 2.6 is the minimal (sane)
>> version to target for one-codebase cross-Python code.
>> 
> 
> In mock (at least 5000 lines of code including tests) I target 2.4 ->
> 3.2+. Admittedly mock does little I/O but does some fairly crazy
> introspection (and even found bugs in Python 3 because of it).

pip and virtualenv also both support 2.4 - 3.2+ from a single codebase
(pip is ~7300 lines of code including tests, virtualenv ~1600). I
consider them a bit of a special case; since they are both early-stage
bootstrapping tools, the inconvenience level for users of a 2to3 step or
having to keep separate versions around would be higher than for an
ordinary library.

But I will say that the workarounds necessary to support 2.4 - 3.2 have
not really been problematic enough to tempt me towards a more complex
workflow, and I would probably take the single-codebase approach with
another port, even if I needed to support pre-2.6. The sys.exc_info()
business is ugly indeed, but (IMHO) not bad enough to warrant adding
2to3 hassles into the maintenance workflow.
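
The "sys.exc_info() business" refers to the idiom needed to grab a caught exception on every version from 2.4 through 3.x: `except E, e` is gone in Python 3 and `except E as e` is a syntax error before 2.6. A small sketch (function and names are illustrative, not from pip or virtualenv):

```python
import sys

def parse_port(value, default=8000):
    try:
        return int(value)
    except ValueError:
        # Neither "except ValueError, e" (2.x only) nor
        # "except ValueError as e" (2.6+) parses on 2.4 *and* 3.x,
        # so the exception object is fetched via sys.exc_info().
        err = sys.exc_info()[1]
        sys.stderr.write("bad port %r (%s); using default\n" % (value, err))
        return default
```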

Carl


Re: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule

2011-11-10 Thread Carl Meyer



On 11/10/2011 07:17 AM, Vinay Sajip wrote:
> Barry Warsaw  python.org> writes:
>> On Nov 10, 2011, at 08:58 AM, Nick Coghlan wrote:
>>> Now you need to persuade Vinay to let you trade PEP numbers with the
>>> pyvenv PEP. Having an unrelease schedule as PEP 404 is too good an
>>> opportunity to pass up :)
>>
>> Brilliant suggestion!  Vinay? :)
>>
> 
> Actually you need Carl Meyer's agreement, not mine - he's the one writing the
> PEP. But I'm in favour :-)

No objection here.

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-11-08 Thread Carl Meyer

On 11/08/2011 05:43 PM, Nick Coghlan wrote:
> I'm actually finding I quite like the virtualenv scheme of having
> "sys.prefix" refer to the virtual environment and "sys.real_prefix"
> refer to the interpreter's default environment. If pyvenv used the same
> naming scheme, then a lot of code designed to work with virtualenv
> would probably "just work" with pyvenv as well.

Indeed. I've already been convinced (see my reply to Chris McDonough
earlier) that this is the more practical approach. I've already updated
my copy of the PEP on Bitbucket
(https://bitbucket.org/carljm/python-peps/src/0936d8e00e5b/pep-0404.txt)
to reflect this switch, working (slowly) on an update of the reference
implementation.
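
The compatibility question here boils down to how code detects a virtual environment. A small sketch of a check that covers both conventions: classic virtualenv sets `sys.real_prefix`, while the PEP 405 scheme leaves `sys.prefix` pointing at the venv and adds `sys.base_prefix` for the base installation:

```python
import sys

def in_virtual_env():
    # virtualenv marks itself with sys.real_prefix; under the PEP 405
    # scheme a venv is active exactly when the two prefixes differ.
    if hasattr(sys, "real_prefix"):
        return True
    return getattr(sys, "base_prefix", sys.prefix) != sys.prefix
```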

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-11-01 Thread Carl Meyer

On 10/31/2011 09:57 PM, Stephen J. Turnbull wrote:
> That's fine, but either make sure it works with a POSIX-conformant
> /bin/sh, or make the shebang explicitly bash (bash is notoriously
> buggy in respect of being POSIX-compatible when named "sh").

It has no shebang line; it must be sourced, not run (its only purpose is
to modify the current shell environment).

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-10-31 Thread Carl Meyer

On 10/31/2011 10:28 AM, Paul Moore wrote:
> On 31 October 2011 16:08, Tres Seaver  wrote:
>> On 10/31/2011 11:50 AM, Carl Meyer wrote:
>>
>>> I have no problem including the basic posix/nt activate scripts if
>>> no one else is concerned about the added maintenance burden there.
>>>
>>> I'm not sure that my cross-shell-scripting fu is sufficient to
>>> write posix/activate in a cross-shell-compatible way; I use bash
>>> and am not very familiar with other shells. If it runs under
>>> /bin/sh is that sufficient to make it compatible with "all Unix
>>> shells" (for some definition of "all")? If so, I can work on this.
>>
>>
>> I would say this is a perfect "opportunity to delegate," in this case
>> to the devotees of other cults^Wshells than bash.

Good call - we'll stick with what we've got until such devotees show up :-)

Hey devotees, if you're listening, this is what you want to test/port:
https://bitbucket.org/vinay.sajip/pythonv/src/6d057cfaaf53/Lib/venv/scripts/posix/activate

For reference, here's what virtualenv ships with (includes a .fish and
.csh script):
https://github.com/pypa/virtualenv/tree/develop/virtualenv_support

> For Windows, can you point me at the nt scripts? If they aren't too
> complex, I'd be willing to port to Powershell.

Thanks! They are here:
https://bitbucket.org/vinay.sajip/pythonv/src/6d057cfaaf53/Lib/venv/scripts/nt

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-10-31 Thread Carl Meyer

On 10/30/2011 06:28 AM, Antoine Pitrou wrote:
> On Sun, 30 Oct 2011 12:10:18 + (UTC)
> Vinay Sajip  wrote:
>>
>>> We already have Unix shell scripts and BAT files in the source tree. Is
>>> it really complicated to maintain these additional shell scripts? Is
>>> there a lot of code in them?
>>
>> No, they're pretty small: wc -l gives
>>
>> 76 posix/activate (Bash script, contains deactivate() function)
>> 31 nt/activate.bat
>> 17 nt/deactivate.bat
>>
>> The question is whether we should stop at that, or whether there should be
>> support for tcsh, fish etc. such as virtualenv provides.
> 
> I don't think we need additional support for more or less obscure
> shells.
> Also, if posix/activate is sufficiently well written (don't ask me
> how :-)), it should presumably be compatible with all Unix shells?

I have no problem including the basic posix/nt activate scripts if no
one else is concerned about the added maintenance burden there.

I'm not sure that my cross-shell-scripting fu is sufficient to write
posix/activate in a cross-shell-compatible way; I use bash and am not
very familiar with other shells. If it runs under /bin/sh is that
sufficient to make it compatible with "all Unix shells" (for some
definition of "all")? If so, I can work on this.

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-10-31 Thread Carl Meyer

On 10/31/2011 09:35 AM, Vinay Sajip wrote:
> That's true, I hadn't thought of that. So then it sounds like the thing to do 
> is
> make venv a package and have the code in venv/__init__.py, then have the 
> scripts
> in a 'scripts' subdirectory below that. The API would then change to take the
> absolute pathname of the scripts directory to install from, right?

That sounds right to me.

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-10-31 Thread Carl Meyer

On 10/30/2011 04:47 PM, Vinay Sajip wrote:
> Antoine Pitrou  pitrou.net> writes:
>> It would be even simpler not to process it at all, but install the
>> scripts as-is (without the execute bit) :)
> Sure, but such an approach makes it difficult to provide a mechanism which is
> easily extensible; for example, with the current approach, it is 
> straightforward
> for third party tools to either easily replace completely, update selectively 
> or
> augment simply the scripts provided by base classes.

I don't understand this point either. It seems to me too that having the
scripts installed as plain data files inside a package is just as easy
or easier for third-party tools to work with flexibly in all of the ways
you mention, compared to having them available in any kind of zipped format.

The current os.name-based directory structure can still be used, and we
can still provide the helper to take such a directory structure and
install the appropriate scripts based on os.name.

I don't see any advantage to zipping. If done at install-time (which is
necessary to make the scripts maintainable in the source tree) it also
has the downside of introducing another difficulty in supporting source
builds equivalently to installed builds.
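
The plain-data-files approach argued for here can be sketched in a few lines. This is a simplified illustration (the eventual stdlib implementation also rewrites placeholders in the scripts, which is omitted; the function name is mine):

```python
import os
import shutil

def install_scripts(source_root, bin_dir):
    # Scripts live as plain files under <source_root>/<os.name>/ and
    # are copied straight into the environment's scripts directory --
    # the os.name-based layout mentioned above, with no zipping step.
    platform_dir = os.path.join(source_root, os.name)
    os.makedirs(bin_dir, exist_ok=True)
    for name in os.listdir(platform_dir):
        shutil.copy(os.path.join(platform_dir, name),
                    os.path.join(bin_dir, name))
```

Third-party tools can then replace, update, or augment individual script files directly on disk, which is the flexibility discussed above.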

Carl


Re: [Python-Dev] draft PEP: virtual environments

2011-10-31 Thread Carl Meyer

On 10/28/2011 05:10 PM, Chris McDonough wrote:
>> Why not modify sys.prefix?
>> - --
>>
>> As discussed above under `Backwards Compatibility`_, this PEP proposes
>> to add ``sys.site_prefix`` as "the prefix relative to which
>> site-package directories are found". This maintains compatibility with
>> the documented meaning of ``sys.prefix`` (as the location relative to
>> which the standard library can be found), but means that code assuming
>> that site-packages directories are found relative to ``sys.prefix``
>> will not respect the virtual environment correctly.
>>
>> Since it is unable to modify ``distutils``/``sysconfig``,
>> `virtualenv`_ is forced to instead re-point ``sys.prefix`` at the
>> virtual environment.
>>
>> An argument could be made that this PEP should follow virtualenv's
>> lead here (and introduce something like ``sys.base_prefix`` to point
>> to the standard library and header files), since virtualenv already
>> does this and it doesn't appear to have caused major problems with
>> existing code.
>>
>> Another argument in favor of this is that it would be preferable to
>> err on the side of greater, rather than lesser, isolation. Changing
>> ``sys.prefix`` to point to the virtual environment and introducing a
>> new ``sys.base_prefix`` attribute would err on the side of greater
>> isolation in the face of existing code's use of ``sys.prefix``.
> 
> It would seem to make sense to me to err on the side of greater
> isolation, introducing sys.base_prefix to indicate the base prefix (as
> opposed to sys.site_prefix indicating the venv prefix).  Bugs introduced
> via a semi-isolated virtual environment are very difficult to
> troubleshoot.  It would also make changes to existing code unnecessary.
> I have encountered no issues with virtualenv doing this so far.

I'm convinced that this is the better tradeoff. I'll begin working on a
branch of the reference implementation that does things this way. Thanks
for the feedback.

Carl


[Python-Dev] draft PEP: virtual environments

2011-10-28 Thread Carl Meyer

Hello python-dev,

As has been discussed here previously, Vinay Sajip and I are working on
a PEP for making "virtual Python environments" a la virtualenv [1] a
built-in feature of Python 3.3.

This idea was first proposed on python-dev by Ian Bicking in February
2010 [2]. It was revived at PyCon 2011 and has seen discussion on
distutils-sig [3], more recently again on python-dev [4] [5], and most
recently on python-ideas [6].

Full text of the draft PEP is pasted below, and also available on
Bitbucket [7]. The reference implementation is also on Bitbucket [8].
For known issues in the reference implementation and cases where it does
not yet match the PEP, see the open issues list [9].

In particular, please note the "Open Questions" section of the draft
PEP. These are areas where we are still unsure of the best approach, or
where we've received conflicting feedback and haven't reached consensus.
We welcome your thoughts on anything in the PEP, but feedback on the
open questions is especially useful.

We'd also especially like to hear from Windows and OSX users, from
authors of packaging-related tools (packaging/distutils2, zc.buildout)
and from Python implementors (PyPy, IronPython, Jython).

If it is easier to review and comment on the PEP after it is published
on python.org, I can submit it to the PEP editors anytime. Otherwise
I'll wait until we've resolved a few more of the open questions, as it's
easier for me to update the PEP on Bitbucket.

Thanks!

Carl


[1] http://virtualenv.org
[2] http://mail.python.org/pipermail/python-dev/2010-February/097787.html
[3] http://mail.python.org/pipermail/distutils-sig/2011-March/017498.html
[4] http://mail.python.org/pipermail/python-dev/2011-June/111903.html
[5] http://mail.python.org/pipermail/python-dev/2011-October/113883.html
[6] http://mail.python.org/pipermail/python-ideas/2011-October/012500.html
[7] https://bitbucket.org/carljm/pythonv-pep/src/
[8] https://bitbucket.org/vinay.sajip/pythonv/
[9] https://bitbucket.org/vinay.sajip/pythonv/issues?status=new&status=open

PEP: XXX
Title: Python Virtual Environments
Version: $Revision$
Last-Modified: $Date$
Author: Carl Meyer 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 13-Jun-2011
Python-Version: 3.3
Post-History: 24-Oct-2011, 28-Oct-2011


Abstract


This PEP proposes to add to Python a mechanism for lightweight
"virtual environments" with their own site directories, optionally
isolated from system site directories.  Each virtual environment has
its own Python binary (allowing creation of environments with various
Python versions) and can have its own independent set of installed
Python packages in its site directories, but shares the standard
library with the base installed Python.
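
For historical context: the mechanism proposed here ultimately shipped as the stdlib ``venv`` module in Python 3.3. A minimal usage sketch against that later API (``with_pip`` itself only arrived in 3.4):

```python
import os
import venv  # the module this proposal eventually became (Python 3.3+)

def create_env(target):
    # with_pip=False keeps creation lightweight; the environment gets
    # its own bin/Scripts directory, site-packages, and a pyvenv.cfg
    # marker pointing back at the base installation.
    venv.create(target, with_pip=False)
    return os.path.join(target, "pyvenv.cfg")
```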


Motivation
==

The utility of Python virtual environments has already been well
established by the popularity of existing third-party
virtual-environment tools, primarily Ian Bicking's `virtualenv`_.
Virtual environments are already widely used for dependency management
and isolation, ease of installing and using Python packages without
system-administrator access, and automated testing of Python software
across multiple Python versions, among other uses.

Existing virtual environment tools suffer from a lack of support in
the behavior of Python itself.  Tools such as `rvirtualenv`_, which do
not copy the Python binary into the virtual environment, cannot
provide reliable isolation from system site directories. Virtualenv,
which does copy the Python binary, is forced to duplicate much of
Python's ``site`` module and manually symlink/copy an ever-changing
set of standard-library modules into the virtual environment in order
to perform a delicate boot-strapping dance at every
startup. (Virtualenv copies the binary because symlinking it does not
provide isolation, as Python dereferences a symlinked executable
before searching for `sys.prefix`.)
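
The prefix search alluded to here can be sketched as follows. A greatly simplified, POSIX-flavored illustration of what CPython's startup logic does (the real getpath code handles many more cases, including the symlink dereferencing just mentioned):

```python
import os

def find_prefix(executable, version):
    # Walk up from the binary's directory looking for the stdlib
    # "landmark" file lib/pythonX.Y/os.py; the directory that contains
    # it becomes sys.prefix.
    path = os.path.dirname(os.path.abspath(executable))
    while path != os.path.dirname(path):  # stop at the filesystem root
        landmark = os.path.join(path, "lib", "python" + version, "os.py")
        if os.path.isfile(landmark):
            return path
        path = os.path.dirname(path)
    return None
```

This is why copying the binary into the environment works: the search starts from wherever the (real) executable lives.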

The ``PYTHONHOME`` environment variable, Python's only existing
built-in solution for virtual environments, requires
copying/symlinking the entire standard library into every
environment. Copying the whole standard library is not a lightweight
solution, and cross-platform support for symlinks remains inconsistent
(even on Windows platforms that do support them, creating them often
requires administrator privileges).

A virtual environment mechanism integrated with Python and drawing on
years of experience with existing third-party tools can be lower
maintenance, more reliable, and more easily available to all Python
users.

.. _virtualenv: http://www.virtualenv.org

.. _rvirtualenv: https://github.com/kvbik/rvirtualenv


Specification
=

When the Python binary is executed, it attempts to determine its
prefix (which it stores in ``sys.prefix``), which is then used to find
the standard library and other key 

Re: [Python-Dev] Status of the built-in virtualenv functionality in 3.3

2011-10-06 Thread Carl Meyer

On 10/06/2011 12:12 PM, Nick Coghlan wrote:
> sandbox is a bit close to Victor's pysandbox for restricted execution
> environments.
> 
> 'nest' would probably work, although I don't recall the 'egg'
> nomenclature featuring heavily in the current zipimport or packaging
> docs, so it may be a little obscure.
> 
> 'pyenv' is another possible colour for the shed, although a quick
> Google search suggests that may have few name clash problems.
> 
> 'appenv' would be yet another colour, since that focuses on the idea
> of 'environment per application'.

I still think 'venv' is preferable to any of the other options proposed
thus far. It makes the virtualenv "ancestry" clearer, doesn't repeat
"py" (which seems entirely unnecessary in the name of a stdlib module,
though it could be prepended to a script name if we do a script), and
doesn't try to introduce new semantic baggage to the concept, which is
already familiar to most Python devs under the name "virtualenv".

Carl


Re: [Python-Dev] Status of the built-in virtualenv functionality in 3.3

2011-10-06 Thread Carl Meyer

Hi Éric,

Vinay is more up to date than I am on the current status of the
implementation. I need to update the PEP draft we worked on last spring
and get it posted (the WIP is at
https://bitbucket.org/carljm/pythonv-pep but is out of date with the
latest implementation work).

On 10/06/2011 08:12 AM, Éric Araujo wrote:
> Oh, let’s not forget naming.  We can’t reuse the module name virtualenv
> as it would shadow the third-party module name, and I’m not fond of
> “virtualize”: it brings OS-level virtualization to my mind, not isolated
> Python environments.

What about "venv"? It's short, it's already commonly used colloquially
to refer to virtualenv so it makes an accurate and unambiguous mental
association, but AFAIK it is currently unused as a script or module name.

Carl


Re: [Python-Dev] In-Python virtualisation and packaging

2011-06-13 Thread Carl Meyer
On 06/13/2011 06:46 PM, Carl Meyer wrote:
> FWIW, historically pretty much every new Python version has broken
> virtualenv

I should clarify that this is because of the delicate stdlib
bootstrapping virtualenv currently has to do; the built-in virtualenv
eliminates this entirely and will require much less maintenance for new
Python versions.

Carl


Re: [Python-Dev] In-Python virtualisation and packaging

2011-06-13 Thread Carl Meyer
On 06/13/2011 08:07 AM, Nick Coghlan wrote:
> On Mon, Jun 13, 2011 at 10:50 PM, Vinay Sajip  wrote:
>> You're right in terms of the current Python ecosystem and 3.x adoption, 
>> because
>> of course this approach requires support from Python itself in terms of its
>> site.py code. However, virtual environments have a utility beyond supporting
>> older Pythons on newer OSes, since another common use case is having 
>> different
>> library environments sandboxed from each other on different projects, even if
>> all those projects are using Python 3.3+.
> 
> Yeah, even if the innate one struggles on later OS releases that
> changed things in a backwards incompatible way, it will still be
> valuable on the OS versions that are around at the time that version
> of Python gets released.

FWIW, historically pretty much every new Python version has broken
virtualenv, and new OS versions almost never have. Virtualenv isn't
especially OS-dependent (not nearly as much as some other stdlib
modules): the most OS-dependent piece is "shell activation", and that's
a feature I would prefer to entirely leave out of the stdlib virtualenv
(it's a convenience, not a necessity for virtualenv use, and the need to
maintain it for a variety of OS shells is a maintenance burden I don't
think Python should adopt).

In fact, the only new-OS-version adjustment I can recall virtualenv
needing to make is when Debian introduced dist-packages -- but even that
doesn't really apply, as that was a distro-packager change to Python
itself. With a built-in virtualenv it would be the distro packagers'
responsibility to make sure their patch to Python doesn't break the
virtualenv module.

So I don't think a virtualenv stdlib module would be at all likely to
break on a new OS release, if Python itself is not broken by that OS
release. (It certainly wouldn't be the stdlib module most likely to be
broken by OS changes, in comparison to e.g. shutil, threading...)

Carl


Re: [Python-Dev] In-Python virtualisation and packaging

2011-06-13 Thread Carl Meyer


On 06/13/2011 06:55 AM, Michael Foord wrote:
> There are two options:
>
> Bring the full functionality into the standard library so that Python
> supports virtual environments out of the box. As is the case with adding
> anything to the standard library it will then be impossible to add
> features to the virtualization support in Python 3.3 once 3.3 is
> released - new features will go into 3.4.

I think it's not hard to provide enough hooks to allow third-party tools
to extend the virtualenv-creation process, while still having enough
code in the stdlib to allow actual creation of virtualenvs. Virtualenv
already has very few features, and doesn't get very much by way of new
feature requests -- all the UI sugar and significant shell integration
goes into Doug Hellmann's virtualenvwrapper, and I wouldn't foresee that
changing.

IOW, I don't think the maintenance concerns outweigh the benefits of
being able to create virtualenvs with an out-of-the-box Python.
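
For the record, the hooks approach sketched here is essentially what the stdlib `venv` module that grew out of this work provides: creation lives in an `EnvBuilder` class whose methods third-party tools can override. A hedged sketch (the `DevEnvBuilder` name and the print are illustrative, not part of any proposal in this thread):

```python
import venv

class DevEnvBuilder(venv.EnvBuilder):
    """Illustrative subclass: hook in after the environment exists."""

    def post_setup(self, context):
        # context.env_dir is the root of the freshly created environment;
        # a third-party tool could install extra packages or scripts here.
        print("created environment at", context.env_dir)

# Usage (creates the directory tree and a pyvenv.cfg):
# DevEnvBuilder(with_pip=False).create("/tmp/demo-env")
```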

> Add only the minimal changes required to support a third-party virtual
> environment tool.
> 
> Virtual environments are phenomenally useful, so I would support having
> the full tool in the standard library, but it does raise maintenance and
> development issues.
> 
> Don't forget windows support! ;-)
> 
> All the best,
> 
> Michael Foord


Re: [Python-Dev] [RELEASE] Python 2.7.2 release candidate 1

2011-05-29 Thread Carl Meyer


On 05/29/2011 06:11 PM, Jack Diederich wrote:
> We don't, but many projects do
> release new features with bugfix version numbers - I'm looking at you,
> Django.

Really? Do you have an example of a new Django feature that was released
in a bugfix version number? Just curious, since that's certainly not the
documented release policy. [1]

Carl


[1] https://docs.djangoproject.com/en/dev/internals/release-process/


Re: [Python-Dev] python and super

2011-04-15 Thread Carl Meyer


On 04/15/2011 08:53 AM, Michael Foord wrote:
>> If we treat django's failure to use super as a bug, you want the
>> Python language to work-around that bug so that:
> 
> What you say (that this particular circumstance could be treated as a
> bug in django) is true, 

Just as a side note: if there is a bug demonstrated here, it is in
unittest2, not Django. Django's TestCase subclasses don't even override
__init__ or setUp, so there is no opportunity for them to call or fail
to call super() in either case.

If you re-read Ricardo's original presentation of the case, he correctly
noted that it is unittest2's TestCase which does not call super() and
thus prevents cooperative multiple inheritance. I'm not sure who in this
thread first mis-read his post and called it a possible bug in Django,
but it was a mis-reading which now appears to be self-propagating ;-)
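
For context, the cooperative pattern under discussion looks like this (in modern Python 3 syntax; the class names are hypothetical): a mixin's setUp must call super().setUp() so that every class in the MRO gets a chance to run.

```python
import unittest

class ResourceMixin:
    def setUp(self):
        super().setUp()  # cooperative: keep the setUp chain going
        self.resource = "ready"

class MyTests(ResourceMixin, unittest.TestCase):
    def test_resource(self):
        self.assertEqual(self.resource, "ready")

# If TestCase.setUp itself did not call super(), a mixin placed
# *after* TestCase in the MRO would silently never be set up --
# which is the failure mode described above.
```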

Carl


Re: [Python-Dev] Python 3.3 release schedule posted

2011-03-23 Thread Carl Meyer
Hi Georg,

On 03/23/2011 03:56 PM, Georg Brandl wrote:
> For 3.3, I'd like to revive the tradition of listing planned large-scale
> changes in the PEP.  Please let me know if you plan any such changes,
> at any time.  (If they aren't codified in PEP form, we should think about
> whether they should be.)

Over in distutils-sig there's been extensive discussion at and since
PyCon of a built-in Python virtual-environment tool, similar to
virtualenv, targeted hopefully for 3.3. This is something that's seen
some discussion on python-dev previously; I now have a working prototype
and am working on a PEP. When the PEP is ready I'll bring it up for
discussion on python-ideas and then python-dev; anyone interested in
checking it out sooner can go read the discussions at distutils-sig.

Carl


Re: [Python-Dev] Submitting changes through Mercurial

2011-03-21 Thread Carl Meyer

Hi Senthil,

On 03/21/2011 09:57 PM, Senthil Kumaran wrote:
> - In the above issue, why are two of the same bitbucket urls provided? (It
> is redundant.)

I updated the patch, and the second time around the "remote hg repo" box
was empty. I wasn't sure what to do so I filled it again. Probably would
be nice if this was detected and the repo not listed a second time.

> - Also, how is it that system is creating patch from a repository
> outside of hg.python.org? What if the user had an older version in
> remote repo and tip in the hg.python.org has moved forward?

Don't know exactly how it's implemented, but I would guess it's using
"hg incoming --patch" or similar, which would handle this transparently;
the newer revisions at hg.python.org would just be ignored in generating
the diff.

Carl

