Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
On Thu, Aug 15, 2013 at 7:16 PM, Nick Coghlan wrote:
> But if we're only going to validate it via hooks, why not have the "mapping
> of names to export specifiers" just be a recommended convention for
> extensions rather than a separate exports field?

I guess I didn't explain it very well, because that's roughly what I meant: a single namespace for all "extensions", structured as a mapping from group names to submappings whose keys can be arbitrary values, but whose values must then be either a string or a JSON object; and if it's a string, then it should be an export specifier.

To put it another way, I'm saying something slightly stronger than a recommended convention: making it a requirement that strings at that level be import specifiers, and only allowing mappings as an alternative. That way, a minimum level of validation is possible for the majority of extensions *by default*, without needing an explicit validator declared. It ensures that there's a kind of lingua franca, or lowest common denominator, that lets somebody understand what's going on in most extensions without having to understand a new *structural* schema for every extension group (just a *syntactical* one).

> As an extension, pydist.extension_hooks would also be non-conventional,
> since it would define a new namespace, where extension names map to an
> export group of hooks. A separate export group per hook would be utterly
> unreadable.

If you already know what keys go in an entry point group, there's a good chance you're doing it wrong. Normally, the whole point of the group is that the keys are defined by the publisher, not the consumer. The normal pattern is that the consumer names the group (representing a hook), and the publishers name the extensions (representing implementations for the hook). I don't see how that makes it unreadable, but then I think in terms of the ini syntax or setup.py syntax for defining entry points, which is all very flat.
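The default, schema-free validation argued for above could be sketched roughly like this (hypothetical function name and a deliberately loose export-specifier pattern; nothing here is from the PEP itself):

```python
import re

# Loose approximation of an export specifier: "package.module" or
# "package.module:object.attr". The real grammar may differ.
EXPORT_RE = re.compile(
    r'^[A-Za-z_]\w*(\.[A-Za-z_]\w*)*(:[A-Za-z_]\w*(\.[A-Za-z_]\w*)*)?$'
)

def validate_extensions(extensions):
    """Apply the minimum default validation: every extension value must be
    either a valid export-specifier string or a JSON object (mapping)."""
    errors = []
    for group, entries in extensions.items():
        if not isinstance(entries, dict):
            errors.append("%s: group must be a mapping" % group)
            continue
        for name, value in entries.items():
            if isinstance(value, str):
                if not EXPORT_RE.match(value):
                    errors.append("%s/%s: not a valid export specifier"
                                  % (group, name))
            elif not isinstance(value, dict):
                errors.append("%s/%s: must be a string or JSON object"
                              % (group, name))
    return errors
```

Note that this validates every extension group the same way, without knowing anything about any particular group's semantics — which is exactly the "lingua franca" property described above.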
IMO, creating a second-level data structure for this doesn't make a whole lot of sense, because now you're nesting something. I'm not even clear why you need separate registrations for the different hooks anyway; ISTM a single hook with an event parameter is sufficient. Even if it weren't, I'd be inclined to just make the information part of the key in that case, e.g.

[pydist.extension_listeners]
preinstall:foo.bar = some.module:hook

This sort of thing is very flat and easy to express in a simple configuration syntax, which we really shouldn't lose sight of. It's just as easy to write a syntax validator as a structure validator, but if you start with structures then you have to back-figure a syntax. I'd very much like it to be easy to define a simple flat syntax that's usable for 90%+ of extension use cases... which means I'd rather not see the PEP make up its own data structures when they're not actually needed.

Don't get me wrong, I'm okay with allowing JSON structures for extensions in place of export strings, but I don't think there's been a single use case proposed as yet that actually *works better* as a data structure. If you need to do something like have a bunch of i18n/l10n resource definitions with locales and subpaths and stuff like that... awesome. That's something that might make a lot of sense for JSON.

But when the ultimate point of the data structure is to define an importable entry point, and the information needed to identify it can be put into a relatively short human-readable string, ISTM that the One Obvious Way to do it is something like a setuptools entry point -- i.e. a basic key-value pair in a consumer-defined namespace, mapping a semantically-valued name to an importable object. And *most* use cases for extensions, that I'm aware of, fit that bill.
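The flat style argued for above can be sketched in a few lines: the key carries the discriminating information ("preinstall:foo.bar"), and the value is a plain export string that is only imported when the key matches. The group and function names below are illustrative, not from any PEP:

```python
import importlib

def resolve(export_string):
    """Import the object named by a 'module:attr' export string."""
    module_name, _, attrs = export_string.partition(':')
    obj = importlib.import_module(module_name)
    for attr in filter(None, attrs.split('.')):
        obj = getattr(obj, attr)
    return obj

def hooks_for(group, event):
    """Yield (distribution, hook) pairs for one event, importing only
    the entries whose keys match -- non-matching specs stay unimported."""
    for key, spec in group.items():
        key_event, _, dist = key.partition(':')
        if key_event == event:
            yield dist, resolve(spec)
```

This illustrates the "keys, not values" point: the consumer can decide whether a hook is relevant purely from the key, before paying any import cost.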
You have to be doing something pretty complex to need anything more complicated, *and* there has to be a possibility that you're going to avoid importing the related code or putting it on sys.path, or else you don't actually *save* anything by putting it in the metadata. IOW, if you're going to have to import it anyway, there is no point to putting it in the metadata; you might as well just import it. The only data that makes sense to put in the metadata is data that tells you whether or not you need to import it -- generally, that means keys, not values. (Which is why l10n resources and scripts make sense *not* being entry points: at the time you use them, you're not importing 'em.)

> That's why I'm still inclined to make this one a separate top
> level field: *installers* have to know how to bootstrap the hook system, and
> I like the symmetry of separate, relatively flat, publication and
> subscription interfaces.

I don't really see the value of a separate top-level field, but then that's because I don't see anything at all special about these hooks that demands something more sophisticated than common entry points. AFAICT it's a YAGNI.
Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
On 15 Aug 2013 12:27, "PJ Eby" wrote:
> On Thu, Aug 15, 2013 at 12:36 PM, Vinay Sajip wrote:
> > PJ Eby telecommunity.com> writes:
> >> than nested.) So I would suggest that an export can either be an
> >> import identifier string, *or* a JSON object with arbitrary contents.
> > [snip]
> >> Given how many use cases are already met today by providing
> >> import-based exports, ISTM that they are the 20% that provides 80% of
> >> the value; arbitrary JSON is the 80% that only provides 20%, and so
> >> should not be the entry point (no pun intended) for people dealing
> >> with extensions.
> >
> > The above two statements seem to be contradictory as to the value of
> > arbitrary JSON.
>
> I don't see a contradiction. I said that the majority of use cases
> (the figurative 80% of value) can be met with just a string (20% of
> complexity), and that a minority of use cases (20% of value) would be
> met by JSON (80% of complexity).
>
> This is consistent with STASCTAP, i.e., simple things are simple,
> complex things are possible.
>
> To be clear: I am *against* arbitrary JSON as the core protocol; it
> should be only for "complex things are possible" and only used when
> absolutely required. I think we are in agreement on this.

But if we're only going to validate it via hooks, why not have the "mapping of names to export specifiers" just be a recommended convention for extensions rather than a separate exports field? pydist.install_hooks, pydist.console_scripts, pydist.gui_scripts would then all be conventional export groups. pydist.prebuilt_commands would be non-conventional, since the values would be relative file paths rather than export specifiers.

As an extension, pydist.extension_hooks would also be non-conventional, since it would define a new namespace, where extension names map to an export group of hooks. A separate export group per hook would be utterly unreadable.
That's why I'm still inclined to make this one a separate top level field: *installers* have to know how to bootstrap the hook system, and I like the symmetry of separate, relatively flat, publication and subscription interfaces.

Cheers,
Nick.

> > I think the metadata format is a communication tool between
> > developers as much as anything else (though intended to be primarily
> > consumed by software), so I think KISS and YAGNI should be our watch-words
> > (in terms of what the PEP allows), until specific uses have been identified.
>
> +100.
>
> >> That would make it easier, I think, to implement both a full-featured
> >> replacement for setuptools entry point API, and allow simple
> >
> > What do you feel is missing in terms of functionality?
>
> What I was saying is that starting from a base of arbitrary JSON (as
> Nick seemed to be proposing) would make it *harder* to provide the
> simple functionality. Not that adding JSON is needed to support
> setuptools functionality. Setuptools does just fine with plain export
> strings!
>
> I don't want to lose that simplicity; the "export string or JSON"
> suggestion was a compromise counterproposal to Nick's "let's just use
> arbitrary JSON structures".
>
> > I think the thing here is to identify what the components in the build
> > system would be (as an abstraction), how they would interact etc. If we look
> > at how the build side of distutils works, it's all pretty much hardcoded
> > once you specify the inputs, without doing a lot of work to subclass,
> > monkey-patch etc. all over the place. It's unnecessarily hard to do even
> > simple stuff like "use this set of compilation flags for only this specific
> > set of sources in my extension". In any realistic build pipeline you'd need
> > to be able to insert components into the pipeline, sometimes to augment the
> > work of other components, sometimes to replace it etc.
and ISTM we don't
> > really know how any of that would work (at a meta level, I mean).
>
> I was assuming that we leave build tools to build tool developers. If
> somebody wants to create a pipelined or meta-tool system, then projects
> that want to use that can just say, "I use the foobar metabuild system".
> For installer-tool purposes, it suffices to say what system will be
> responsible, and have a standard for how to invoke build systems and get
> wheels or the raw materials from which the wheel should be created.
>
> *How* this build system gets the raw materials and does the build is its
> own business. It might use extensions, or it might be setup.py based, or
> Makefile based, or who knows whatever else. That's none of the metadata
> PEP's business, really. Just how to invoke the builder and get the
> outputs.
>
> ___
> Distutils-SIG maillist - Distutils-SIG@python.org
> http://mail.python.org/mailman/listinfo/distutils-sig
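The "invoke the builder and get the outputs" contract described above could be as thin as a single method. This is purely illustrative -- the class and method names are invented for the sketch and come from no PEP or tool:

```python
class BuildBackend:
    """The minimal contract an installer needs from any build system,
    per the argument above: not how the build works internally, only
    how to ask it for a wheel."""

    def build_wheel(self, source_dir, output_dir):
        """Build a wheel from source_dir into output_dir; return its path."""
        raise NotImplementedError

class SetupPyBackend(BuildBackend):
    """A legacy fallback might satisfy the same contract by shelling
    out to setup.py behind the scenes."""

    def build_wheel(self, source_dir, output_dir):
        # e.g. run "python setup.py bdist_wheel" here (omitted)
        return None
```

Whether the backend is a pipelined metabuild system or a Makefile wrapper is then its own business, exactly as argued above.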
Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
On Thu, Aug 15, 2013 at 12:36 PM, Vinay Sajip wrote:
> PJ Eby telecommunity.com> writes:
>> than nested.) So I would suggest that an export can either be an
>> import identifier string, *or* a JSON object with arbitrary contents.
> [snip]
>> Given how many use cases are already met today by providing
>> import-based exports, ISTM that they are the 20% that provides 80% of
>> the value; arbitrary JSON is the 80% that only provides 20%, and so
>> should not be the entry point (no pun intended) for people dealing
>> with extensions.
>
> The above two statements seem to be contradictory as to the value of
> arbitrary JSON.

I don't see a contradiction. I said that the majority of use cases (the figurative 80% of value) can be met with just a string (20% of complexity), and that a minority of use cases (20% of value) would be met by JSON (80% of complexity).

This is consistent with STASCTAP, i.e., simple things are simple, complex things are possible.

To be clear: I am *against* arbitrary JSON as the core protocol; it should be only for "complex things are possible" and only used when absolutely required. I think we are in agreement on this.

> I think the metadata format is a communication tool between
> developers as much as anything else (though intended to be primarily
> consumed by software), so I think KISS and YAGNI should be our watch-words
> (in terms of what the PEP allows), until specific uses have been identified.

+100.

>> That would make it easier, I think, to implement both a full-featured
>> replacement for setuptools entry point API, and allow simple
>
> What do you feel is missing in terms of functionality?

What I was saying is that starting from a base of arbitrary JSON (as Nick seemed to be proposing) would make it *harder* to provide the simple functionality. Not that adding JSON is needed to support setuptools functionality. Setuptools does just fine with plain export strings!
I don't want to lose that simplicity; the "export string or JSON" suggestion was a compromise counterproposal to Nick's "let's just use arbitrary JSON structures".

> I think the thing here is to identify what the components in the build
> system would be (as an abstraction), how they would interact etc. If we look
> at how the build side of distutils works, it's all pretty much hardcoded
> once you specify the inputs, without doing a lot of work to subclass,
> monkey-patch etc. all over the place. It's unnecessarily hard to do even
> simple stuff like "use this set of compilation flags for only this specific
> set of sources in my extension". In any realistic build pipeline you'd need
> to be able to insert components into the pipeline, sometimes to augment the
> work of other components, sometimes to replace it etc. and ISTM we don't
> really know how any of that would work (at a meta level, I mean).

I was assuming that we leave build tools to build tool developers. If somebody wants to create a pipelined or meta-tool system, then projects that want to use that can just say, "I use the foobar metabuild system". For installer-tool purposes, it suffices to say what system will be responsible, and have a standard for how to invoke build systems and get wheels or the raw materials from which the wheel should be created.

*How* this build system gets the raw materials and does the build is its own business. It might use extensions, or it might be setup.py based, or Makefile based, or who knows whatever else. That's none of the metadata PEP's business, really. Just how to invoke the builder and get the outputs.
Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
PJ Eby telecommunity.com> writes:
> I think that as part of the spec, we should either reserve multiple
> prefixes for Python/stdlib use, or have a single, always-reserved
> top-level prefix like 'py.' that can be subdivided in the future.

+1

There's quite a lot of stuff in your post that I haven't digested yet, but one thing confused me early on:

> than nested.) So I would suggest that an export can either be an
> import identifier string, *or* a JSON object with arbitrary contents.
[snip]
> Given how many use cases are already met today by providing
> import-based exports, ISTM that they are the 20% that provides 80% of
> the value; arbitrary JSON is the 80% that only provides 20%, and so
> should not be the entry point (no pun intended) for people dealing
> with extensions.

The above two statements seem to be contradictory as to the value of arbitrary JSON. I think the metadata format is a communication tool between developers as much as anything else (though intended to be primarily consumed by software), so I think KISS and YAGNI should be our watch-words (in terms of what the PEP allows), until specific uses have been identified.

> That would make it easier, I think, to implement both a full-featured
> replacement for setuptools entry point API, and allow simple

What do you feel is missing in terms of functionality?

> It's just extensions, IMO. What else *is* there? You *could* define
> a core metadata field that says, "this is the distribution I depend on

I think the thing here is to identify what the components in the build system would be (as an abstraction), how they would interact etc. If we look at how the build side of distutils works, it's all pretty much hardcoded once you specify the inputs, without doing a lot of work to subclass, monkey-patch etc. all over the place. It's unnecessarily hard to do even simple stuff like "use this set of compilation flags for only this specific set of sources in my extension".
In any realistic build pipeline you'd need to be able to insert components into the pipeline, sometimes to augment the work of other components, sometimes to replace it etc., and ISTM we don't really know how any of that would work (at a meta level, I mean).

Regards,

Vinay Sajip
Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
Nick Coghlan gmail.com> writes:
> Sounds fair - let's use "pydist", since we want these definitions to be
> somewhat independent of their reference implementation in distlib :)

Seems reasonable.

> Based on PJE's feedback, I'm also starting to think that the
> exports/extensions split is artificial and we should drop it. Instead,
> there should be a "validate" export hook that build tools can call to
> check for export validity, and the contents of an export group be
> permitted to be arbitrary JSON.

I don't know that we should allow arbitrary JSON here: I would wait to see what it is we need, and keep it restricted for now until a more detailed understanding of those needs becomes apparent. Arbitrary JSON is likely to be needed for *implementations* of things, but not necessarily for *interfaces* between things. The PEP 426 scope should be mainly focused on dependency resolution, other installer requirements and interactions between installed distributions (exports).

> The installers are still going to have to be export_hooks aware, though,
> since the registered handlers are how the whole export system will be
> bootstrapped.

Distil currently supports the preuninstall/postinstall hooks, and I expect to extend this to other types of hook.

> Something else I'm wondering: should the metabuild system be separate,

I think it should be separate, though of course there will be a role for exports. The JSON metadata needed for source packaging and building can be quite large (example at [1]), and IMO doesn't really belong with the PEP 426 metadata. Currently, the extended metadata used by distil for building contains the whole PEP 426 metadata as an "index-metadata" sub-dictionary. It's already a fairly generic build system - though simple, it can build e.g. C/C++/Fortran extensions, handle Cython, SWIG and so on, without using any of distutils.
However, there's still lots of work to be done to generalise the interfaces between different parts of the system so that building can be plug and play - it's a bit opaque at the moment, but I expect that will improve.

Regards,

Vinay Sajip

[1] http://red-dove.com/pypi/projects/A/Assimulo/package-2.2.json
Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
On Thu, Aug 15, 2013 at 9:21 AM, Nick Coghlan wrote:
> On 15 Aug 2013 00:39, "Vinay Sajip" wrote:
>>
>> PJ Eby telecommunity.com> writes:
>>
>> > The build system *should* reserve at least one (subdivisible)
>> > namespace for itself, and use that mechanism for its own extension,
>>
>> +1 - dog-food :-)
>
> Sounds fair - let's use "pydist", since we want these definitions to be
> somewhat independent of their reference implementation in distlib :)

I think that as part of the spec, we should either reserve multiple prefixes for Python/stdlib use, or have a single, always-reserved top-level prefix like 'py.' that can be subdivided in the future. Extensions are a honking great idea, so the stdlib will probably do more of them in the future. Likewise, future standards and informational PEPs will likely document specific extension protocols of general and specialized interest. (Notice, for example, that extensions could be used to publicize what database drivers are installed and available on a system.)

> Based on PJE's feedback, I'm also starting to think that the
> exports/extensions split is artificial and we should drop it. Instead, there
> should be a "validate" export hook that build tools can call to check for
> export validity, and the contents of an export group be permitted to be
> arbitrary JSON.

I think there is still something to be said for STASCTAP: simple things are simple, complex things are possible. (Also, flat is better than nested.) So I would suggest that an export can either be an import identifier string, *or* a JSON object with arbitrary contents. That would make it easier, I think, to implement both a full-featured replacement for the setuptools entry point API, and allow simple extensions to be simple. It means, too, that simple exports can be defined with a flatter syntax (a la setuptools' ini format) in tools that generate the JSON.
Given how many use cases are already met today by providing import-based exports, ISTM that they are the 20% that provides 80% of the value; arbitrary JSON is the 80% that only provides 20%, and so should not be the entry point (no pun intended) for people dealing with extensions.

Removing the extension/export split also raises a somewhat different question, which is what to *call* them. I'm sort of leaning towards "extensions" as the general category, with "exports" being extensions that consist of an importable object, and "JSON extensions" for ones that are a JSON mapping object. So the terminology would be:

Extension group - package-like names, subdivisible as a namespace; should have a prefix associated with a project that defines the semantics of the extension group; analogous to Eclipse's notion of an "extension point".

Extension name - arbitrary string, unique per distribution for a given group, but not required to be globally unique even for the group. Specific names or specific syntax for names may be specified by the creators of the group, and may optionally be validated.

Extension object - either an "export string" specifying an importable object, or a JSON object. If a string, it must be syntactically valid as an export; it is not, however, required to reference a module in the distribution that exports it; it *should* be in that distribution or one of its dependencies, however.

So, an extension is machine-usable metadata published by a distribution in order to be (optionally) consumed by other distributions. It can be either static JSON metadata, or an importable object. The semantics of an extension are defined by its group, and other extensions can be used to validate those semantics. Any project that wants to be able to use plugins or extensions of some kind can define its own groups, and publish extensions for validating them.
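The terminology above, rendered as a small sketch. The metadata field name, group names, and paths are all hypothetical examples, not from PEP 426; one group's extension objects are export strings, the other's are JSON mappings:

```python
import json

# Hypothetical extensions metadata: group -> {name -> extension object}
metadata = json.loads("""
{
  "extensions": {
    "mytool.plugins": {
      "fancy": "mypkg.plugins:FancyPlugin"
    },
    "mytool.resources": {
      "messages": {"locale": "fr", "path": "locale/fr/messages.mo"}
    }
  }
}
""")

def classify(extensions):
    """Split extension objects into 'exports' (importable-object strings)
    and 'JSON extensions' (mapping objects), keyed by (group, name)."""
    exports, json_exts = {}, {}
    for group, entries in extensions.items():
        for name, obj in entries.items():
            target = exports if isinstance(obj, str) else json_exts
            target[(group, name)] = obj
    return exports, json_exts
```

A consumer that defined the "mytool.plugins" group would resolve the export strings into objects; the JSON extensions are just static data it can read without importing anything.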
Python itself will reserve and define a group namespace for extending the build and installation system, including a sub-namespace where the validators can be declared.

> So we would have "pydist.commands" and "pydist.export_hooks" as export
> groups, with "distlib" used as an example of how to define handlers for
> them.

Is 'commands' for scripts, or something else?

Following "flat is better than nested", I would suggest not using arbitrary JSON for these when it's easy to define new dotted groups. (Keeping to such a style will make it easier for humans to define this stuff in the first place, before it's turned into JSON.) (Note, btw, that having more dots in a name does not necessarily equal "nested", whereas replacing those dots with nested JSON structures most definitely *is* "nested"!)

Similarly, I'd just as soon see e.g. pydist.hooks.* subgroups, rather than a dedicated data structure. A 'pydist.validators' group would of course also be needed for syntax validation, with extension names in that group possibly allowing trailing '*' or '**' wildcards. (There will of course need to be a validation API, which is why I think that a separate PEP for the "extensions" system is probably
Re: [Distutils] How to handle launcher script importability?
PJ Eby telecommunity.com> writes:
> used for these things nowadays, rather than PATH... but I think it
> only works for .exe's, so there we go again back to the land of .exe's
> just plain Work Better On Windows.)

In that vein, I've updated distlib to install only a single .exe per script, where the script is appended to a stock launcher .exe as a zip with the script as a single __main__.py in it. I've not produced a new release, but the BitBucket repos for both distlib [1] and the launcher [2] are up to date. Note that the launcher is still not the PEP 397 launcher, but the simpler one which I developed when working on PEP 405 and whose .exes have been shipping with distlib.

To try it out, you can download the latest distil.py from [3]. My smoke tests with both generated and pre-built scripts pass, but I'd be grateful if people could try it out and provide feedback.

Wheel support should be up to date in terms of PEP 426 and the discussions about launchers: wheel builds should have no platform-specific files other than for binary extensions, and when installing from a wheel, wrapped scripts declared in the metadata should be installed.

Regards,

Vinay Sajip

[1] https://bitbucket.org/pypa/distlib
[2] https://bitbucket.org/vinay.sajip/simple_launcher
[3] https://bitbucket.org/vinay.sajip/docs-distil/downloads/distil.py
[Distutils] devpi-1.0: improved PyPI-server with upload/test/staging tool
devpi-1.0: PyPI server and packaging/testing/release tool
=========================================================

devpi-1.0 brings an improved PyPI caching and internal index server, as well as new abilities for tox-testing and staging your Python release packages. For a (long) list of changes, see the CHANGELOG below.

Documentation got revamped and extended, and now contains three quickstart scenarios. First, the quickstart tutorial for pypi-mirroring on your laptop::

    http://doc.devpi.net/1.0/quickstart-pypimirror.html

And if you want to manage your releases or implement staging as an individual or within an organisation::

    http://doc.devpi.net/1.0/quickstart-releaseprocess.html

If you want to permanently install devpi-server and potentially access it from many clients::

    http://doc.devpi.net/1.0/quickstart-server.html

More documentation and the beginning of an exhaustive user manual::

    http://doc.devpi.net/latest/

Note that devpi-1.0 is not data-compatible with the previous 0.9.4 release: you need to start with a fresh devpi-1.0 installation and upload your packages again. Future releases of devpi should support data migration more directly.

best and have fun,

holger krekel

Changelog 1.0 (vs 0.9.4)

devpi-server:

- rename "--datadir" to "--serverdir" to better match the also picked up DEVPI_SERVERDIR environment variable.
- fix a strange effect in that sometimes tools ask to receive a package url with a "#md5=..." fragment arriving at the server side. We now strip that part out before trying to serve the file.
- on startup, don't create any initial indexes other than the "root/pypi" pypi caching mirror.
- introduce ``--start``, ``--stop`` and ``--log`` commands for controlling a background devpi-server run.
  (these commands previously were implemented with the devpi-client and the "server" sub command)
- fix issue27: provide full list of pypi names in root/pypi's simple view (and simple pages from inheriting indices)
- default to "eventlet" server when creating deployment with --gendeploy
- fix issue25: return 403 Forbidden when trying to delete the root user.
- fix name mangling issue for pypi-cache: "project_name*" is now matched correctly when a lookup for "project-name" happens.
- fix issue22: don't bypass CDN by default, rather provide an "--bypass-cdn" option to do it (in case you have cache-invalidation troubles)
- fix issue20 and fix issue23: normalize index specs internally ("/root/dev" -> "root/dev") and check if base indices exist.
- add Jenkins build job triggering for running the tests for a package through tox.
- inheritance cleanup: inherited versions for a project are now shadowed and not shown anymore with getreleaselinks() or in +simple pages if the "basename" is exactly shadowed.
- fix issue16: enrich projectconfig json with a "+shadow" file which lists shadowed "versions"
- initial wheel support: accept "whl" uploads and support caching of whl files from pypi.python.org
- implemented internal push operation between devpi indexes
- show "docs" link if documentation has been uploaded
- pushing releases to pypi.python.org will now correctly report the filetype/pyversion in the metadata.
- add setting of acl_upload for indexes. Only the owning user and acl_upload users may upload releases, files or documentation to an index.
- add --passwd USER option for setting a user's password server-side
- don't require email setting for creating users

devpi-client:

- removed ``server`` subcommand and options for controlling background devpi-server processes to become options of ``devpi-server`` itself.
- fix issue14: lookup "python" from PATH for upload/packaging activities instead of using "sys.executable" which comes from the interpreter executing the "devpi" script. This allows to alias "devpi" to come from a virtualenv which is separate from the one used to perform packaging. - fix issue35: "devpi index" cleanly errors out if no index is specified or in use. - remember authentication on a per-root basis and cleanup "devpi use" interactions. This makes switching between multiple devpi instances more seemless. - fix issue17: better reporting when "devpi use" does not operate on valid URL - test result upload and access: - "devpi test" invokes "tox --result-json ..." and uploads the test result log to devpi-server. - "devpi list [-f] PKG" shows test result information. - add "uploadtrigger_jenkins" configuration option through "devpi index". - fix issue19: devpi use now memorizes --venv setting properly. Thanks Laurent. - fix issue16: show files from shadowed versions - initial wheel support: "devpi upload --format=bdist_wheel" now uploads a wheel format file to the index. (XXX "devpi install" will trigger pip commands with option "--use-wheels".) - fix issue15: docs will now be built via "setup.py build_sphinx" using a internal build di
[Distutils] tox-1.6: install_command, develop, py25 support, json-reporting ...
tox-1.6: support for install_command, develop, json-reporting
=============================================================

Welcome to a new release of tox, the virtualenv-based test automation manager. This release brings some major new features:

- install_command: you can customize the command used for installing packages and dependencies. Thanks Carl Meyer.
- usedevelop: you can use "develop" mode ("pip install -e") either by configuring it in your tox.ini or through the new "--develop" option. Thanks Monty Taylor.
- python2.5: tox internally ships virtualenv-1.9.1 and can thus create virtualenvs and run tests against python2.5 even if you have a newer virtualenv version installed.

While tox-1.6 should otherwise be compatible with tox-1.5, the new $HOME isolation ($HOME is set to a temporary directory when installing packages) might trigger problems if your tests relied on $HOME configuration files -- which they shouldn't if you want repeatability. If that causes problems, please file an issue.

Docs and more information at: http://tox.testrun.org/tox/latest/

have fun,

holger

1.6 Changelog
-------------

- fix issue35: add new EXPERIMENTAL "install_command" testenv-option to configure the installation command with options for dep/pkg install. Thanks Carl Meyer for the PR and docs.
- fix issue91: python2.5 support by vendoring the virtualenv-1.9.1 script and forcing pip<1.4. Also the default [py25] environment modifies the default install_command (new config option) to use pip without the "--pre" option, which was introduced with pip-1.4 and is now required if you want to install non-stable releases. (tox defaults to install with "--pre" everywhere.)
- during installation of dependencies, HOME is now set to a pseudo location ({envtmpdir}/pseudo-home). If an index url was specified, a .pydistutils.cfg file will be written with an index_url setting so that packages defining ``setup_requires`` dependencies will not silently use your HOME-directory settings or https://pypi.python.org.
- fix issue1: empty setup files are properly detected, thanks Anthon van der Neut
- remove toxbootstrap.py for now because it is broken.
- fix issue109 and fix issue111: multiple "-e" options are now combined (previously the last one would win). Thanks Anthon van der Neut.
- add --result-json option to write out detailed per-venv information into a json report file to be used by upstream tools.
- add new config options ``usedevelop`` and ``skipsdist`` as well as a command line option ``--develop`` to install the package-under-test in develop mode. Thanks Monty Taylor for the PR.
- always unset PYTHONDONTWRITEBYTECODE because newer setuptools doesn't like it
- if a HOMEDIR cannot be determined, use the toxinidir.
- refactor interpreter information detection to live in new tox/interpreters.py file, tests in tests/test_interpreters.py.
Re: [Distutils] Changing the "install hooks" mechanism for PEP 426
On 15 Aug 2013 00:39, "Vinay Sajip" wrote:
>
> PJ Eby telecommunity.com> writes:
>
> > The build system *should* reserve at least one (subdivisible)
> > namespace for itself, and use that mechanism for its own extension,
>
> +1 - dog-food :-)

Sounds fair - let's use "pydist", since we want these definitions to be somewhat independent of their reference implementation in distlib :)

Based on PJE's feedback, I'm also starting to think that the exports/extensions split is artificial and we should drop it. Instead, there should be a "validate" export hook that build tools can call to check for export validity, and the contents of an export group be permitted to be arbitrary JSON.

So we would have "pydist.commands" and "pydist.export_hooks" as export groups, with "distlib" used as an example of how to define handlers for them. The installers are still going to have to be export_hooks aware, though, since the registered handlers are how the whole export system will be bootstrapped.

Something else I'm wondering: should the metabuild system be separate, or is it just some more export hooks, and you define the appropriate export group to say which build system to invoke? And rather than each installer having to define their own fallback, we'd just implement the appropriate hooks in setuptools to call setup.py. (Installers would still need an explicit fallback for legacy metadata.)

Cheers,
Nick.

> Regards,
>
> Vinay Sajip