Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
On 12.11.2015 at 13:32, Leonardo Rochael Almeida wrote:
> Hi Thomas,
>
> I think your idea could be very useful as an accelerator for installation in closed environments, as you suggested in your last e-mail, but which wasn't clear in your first.
>
> After all, in closed environments you have control of the machine architecture of all clients, and can be reasonably sure that the wheels you build server-side are installable client-side.
>
> By default, when proposing ideas on this list, people tend to assume they're ideas being proposed to PyPI itself, unless there is a very clear mention that this is not the case, hence Donald's answer.
>
> My only comment about your idea would be that since packages get upgraded all the time, the "fuzzy set of requirements" can't be treated as the cache key, otherwise your pre-built virtualenvs will get stale all the time... :-)

Thank you very much. This makes me happy, since you looked at it in detail.

> Rather, the cache key of the pre-built virtual environments should be the "fixed set of packages with exactly pinned versions" that was resolved from the fuzzy set.

Yes.

--
http://www.thomas-guettler.de/
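As an illustration of Leo's point, a cache key for the pre-built environments could be derived from the resolved, pinned set plus the platform, roughly like this (a minimal Python sketch; the function name and key layout are invented for this example, not an existing API):

    import hashlib
    import json

    def virtualenv_cache_key(pinned_requirements, platform_info):
        # Key the cache on the resolved set (exactly pinned versions) plus
        # the target platform, not on the fuzzy input requirements.
        payload = json.dumps(
            {"requirements": sorted(pinned_requirements), "platform": platform_info},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    key = virtualenv_cache_key(
        ["django==1.8.2", "requests==2.8.1"],
        {"python": "2.7.6", "sys_platform": "linux2"},
    )

With this scheme, two requests that resolve to the same pinned set on the same platform hit the same cached environment, even if their fuzzy inputs differ.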
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
On Nov 12, 2015 6:32 AM, "Leonardo Rochael Almeida" wrote:
>
> Hi Thomas,
>
> I think your idea could be very useful as an accelerator for installation in closed environments, as you suggested in your last e-mail, but which wasn't clear in your first.
>
> After all, in closed environments you have control of the machine architecture of all clients, and can be reasonably sure that the wheels you build server-side are installable client-side.
>
> By default, when proposing ideas on this list, people tend to assume they're ideas being proposed to PyPI itself, unless there is a very clear mention that this is not the case, hence Donald's answer.
>
> My only comment about your idea would be that since packages get upgraded all the time, the "fuzzy set of requirements" can't be treated as the cache key, otherwise your pre-built virtualenvs will get stale all the time...
>
> Rather, the cache key of the pre-built virtual environments should be the "fixed set of packages with exactly pinned versions" that was resolved from the fuzzy set.

* [(PKG, VERSTR)]
* {sys.platform: platform strings}
* [or] the revision of a meta-(package/module) and build options
  * e.g. --make-relocatable, prefix

... like a PPA build farm with a parameterized test 'grid'?

> Regards,
>
> Leo
>
> On 12 November 2015 at 06:55, Thomas Güttler wrote:
>>
>> On 11.11.2015 at 13:59, Donald Stufft wrote:
>>>
>>> On November 11, 2015 at 1:30:57 AM, Thomas Güttler (guettl...@thomas-guettler.de) wrote:
>>>> Maybe I am missing something, but I still think server-side dependency resolution is possible.
>>>
>>> I don’t believe it’s possible nor desirable to have the server handle dependency resolution, at least not without removing some currently supported features and locking out some future features from ever happening.
>>
>> I can understand you, if you say it is not desirable.
>>
>> I like the general concept of simple clients and solving complicated stuff at the server.
>>
>> Now to "possible":
>>
>> - What features are not supported if you resolve dependencies on the server?
>> - What features are not possible in the future?
>>
>>> Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By default this only includes PyPI, but people can either remove that or add additional repository locations. In order to support this we need a resolver that can union multiple repositories together before doing the resolving. If the repository itself was the one handling the resolution, then we are locked into a single repository per invocation of pip.
>>
>> I am aware of that. In our company the CI system has no access to pypi.org. All packages come from our package server, which contains a mirror of some PyPI packages.
>>
>> If this can be done on the client side today, I see no problem doing this on the server side tomorrow.
>>
>>> Additionally, pip can also be configured to use a simple directory full of files as a repository. Since this is just a simple directory, there *is* no server process running that would allow for a server-side resolver to happen, and pip either *must* handle the resolution itself in this case or it must disallow these features altogether.
>>
>> Same as above: can be done on a server, too.
>>> Additionally, the fact that we currently treat the server as a “dumb” server means that someone can implement a PEP 503 compatible repository very trivially with pretty much any web server that supports static files and automatically generating an index for static files. Switching to server-side resolution would require removing this capability and force everyone to run dedicated repository software that can handle that resolution.
>>
>> You currently treat the server as a "dumb" server. That's ok.
>>
>> Did I give the impression that I want to replace your server with my idea? I am very sorry if you thought this way.
>>
>> My solution is optional and just an idea. I never meant that pypi.org or the new wheel server should use my idea.
>>
>> You use the word "force". Nobody gets forced just because there is an alternative.
>>
>>> Additionally, we want there to be as little variance in the requests that people make to the repository as possible. We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI, which is the primary reason we’ve been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to server-side dependency resolution then we reduce our ability to ensure that as many requests as possible are served directly out of the cache rather than having to go back to our backend servers.
>>
>> Your thoughts were too fast. There are a lot of private package hosting servers in intranets of companies.
>>
>> In this context the load can be handled very well. And if you have CI systems asking for the same stuff over and over again, caching could improve the speed very much.
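To make the `{sys.platform: platform strings}` part of Wes's cache-key list concrete, a client could gather its fingerprint with the standard library alone (an illustrative sketch; the field names are not from any existing API):

    import platform
    import sys

    def client_platform_info():
        # The kind of "platform strings" a client could send alongside the
        # pinned (PKG, VERSTR) list when asking for a pre-built environment.
        return {
            "python_version": platform.python_version(),         # e.g. "2.7.6"
            "implementation": platform.python_implementation(),  # e.g. "CPython"
            "sys_platform": sys.platform,                         # e.g. "linux2"
            "machine": platform.machine(),                        # e.g. "x86_64"
        }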
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
Hi Thomas,

I think your idea could be very useful as an accelerator for installation in closed environments, as you suggested in your last e-mail, but which wasn't clear in your first.

After all, in closed environments you have control of the machine architecture of all clients, and can be reasonably sure that the wheels you build server-side are installable client-side.

By default, when proposing ideas on this list, people tend to assume they're ideas being proposed to PyPI itself, unless there is a very clear mention that this is not the case, hence Donald's answer.

My only comment about your idea would be that since packages get upgraded all the time, the "fuzzy set of requirements" can't be treated as the cache key, otherwise your pre-built virtualenvs will get stale all the time...

Rather, the cache key of the pre-built virtual environments should be the "fixed set of packages with exactly pinned versions" that was resolved from the fuzzy set.

Regards,

Leo

On 12 November 2015 at 06:55, Thomas Güttler wrote:
> On 11.11.2015 at 13:59, Donald Stufft wrote:
>> On November 11, 2015 at 1:30:57 AM, Thomas Güttler (guettl...@thomas-guettler.de) wrote:
>>> Maybe I am missing something, but I still think server-side dependency resolution is possible.
>>
>> I don’t believe it’s possible nor desirable to have the server handle dependency resolution, at least not without removing some currently supported features and locking out some future features from ever happening.
>
> I can understand you, if you say it is not desirable.
>
> I like the general concept of simple clients and solving complicated stuff at the server.
>
> Now to "possible":
>
> - What features are not supported if you resolve dependencies on the server?
> - What features are not possible in the future?
>
>> Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By default this only includes PyPI, but people can either remove that or add additional repository locations. In order to support this we need a resolver that can union multiple repositories together before doing the resolving. If the repository itself was the one handling the resolution, then we are locked into a single repository per invocation of pip.
>
> I am aware of that. In our company the CI system has no access to pypi.org. All packages come from our package server, which contains a mirror of some PyPI packages.
>
> If this can be done on the client side today, I see no problem doing this on the server side tomorrow.
>
>> Additionally, pip can also be configured to use a simple directory full of files as a repository. Since this is just a simple directory, there *is* no server process running that would allow for a server-side resolver to happen, and pip either *must* handle the resolution itself in this case or it must disallow these features altogether.
>
> Same as above: can be done on a server, too.
>
>> Additionally, the fact that we currently treat the server as a “dumb” server means that someone can implement a PEP 503 compatible repository very trivially with pretty much any web server that supports static files and automatically generating an index for static files. Switching to server-side resolution would require removing this capability and force everyone to run dedicated repository software that can handle that resolution.
> You currently treat the server as a "dumb" server. That's ok.
>
> Did I give the impression that I want to replace your server with my idea? I am very sorry if you thought this way.
>
> My solution is optional and just an idea. I never meant that pypi.org or the new wheel server should use my idea.
>
> You use the word "force". Nobody gets forced just because there is an alternative.
>
>> Additionally, we want there to be as little variance in the requests that people make to the repository as possible. We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI, which is the primary reason we’ve been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to server-side dependency resolution then we reduce our ability to ensure that as many requests as possible are served directly out of the cache rather than having to go back to our backend servers.
>
> Your thoughts were too fast. There are a lot of private package hosting servers in intranets of companies.
>
> In this context the load can be handled very well. And if you have CI systems asking for the same stuff over and over again, caching could improve the speed very much. You can do caching at a high level: all projects going through CI in one company benefit.
>
>> Finally, we want to move further away from trusting the actual repository where we can. In the future we’ll be allowing package signing that will make it possible to survive a compromise of the repository.
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
On 11.11.2015 at 13:59, Donald Stufft wrote:
> On November 11, 2015 at 1:30:57 AM, Thomas Güttler (guettl...@thomas-guettler.de) wrote:
>> Maybe I am missing something, but I still think server-side dependency resolution is possible.
>
> I don’t believe it’s possible nor desirable to have the server handle dependency resolution, at least not without removing some currently supported features and locking out some future features from ever happening.

I can understand you, if you say it is not desirable.

I like the general concept of simple clients and solving complicated stuff at the server.

Now to "possible":

- What features are not supported if you resolve dependencies on the server?
- What features are not possible in the future?

> Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By default this only includes PyPI, but people can either remove that or add additional repository locations. In order to support this we need a resolver that can union multiple repositories together before doing the resolving. If the repository itself was the one handling the resolution, then we are locked into a single repository per invocation of pip.

I am aware of that. In our company the CI system has no access to pypi.org. All packages come from our package server, which contains a mirror of some PyPI packages.

If this can be done on the client side today, I see no problem doing this on the server side tomorrow.

> Additionally, pip can also be configured to use a simple directory full of files as a repository. Since this is just a simple directory, there *is* no server process running that would allow for a server-side resolver to happen, and pip either *must* handle the resolution itself in this case or it must disallow these features altogether.

Same as above: can be done on a server, too.

> Additionally, the fact that we currently treat the server as a “dumb” server means that someone can implement a PEP 503 compatible repository very trivially with pretty much any web server that supports static files and automatically generating an index for static files. Switching to server-side resolution would require removing this capability and force everyone to run dedicated repository software that can handle that resolution.

You currently treat the server as a "dumb" server. That's ok.

Did I give the impression that I want to replace your server with my idea? I am very sorry if you thought this way.

My solution is optional and just an idea. I never meant that pypi.org or the new wheel server should use my idea.

You use the word "force". Nobody gets forced just because there is an alternative.

> Additionally, we want there to be as little variance in the requests that people make to the repository as possible. We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI, which is the primary reason we’ve been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to server-side dependency resolution then we reduce our ability to ensure that as many requests as possible are served directly out of the cache rather than having to go back to our backend servers.

Your thoughts were too fast. There are a lot of private package hosting servers in intranets of companies.

In this context the load can be handled very well. And if you have CI systems asking for the same stuff over and over again, caching could improve the speed very much.
You can do caching at a high level: all projects going through CI in one company benefit.

> Finally, we want to move further away from trusting the actual repository where we can. In the future we’ll be allowing package signing that will make it possible to survive a compromise of the repository. However, there is no way to do that if the repository needs to be able to dynamically generate a list of packages that need to be installed as part of a resolution process, because by definition that needs to be done on the fly and thus must be signed by a key that the repository has access to, if it’s signed at all. However, since the metadata for a package can be signed once and then it never changes, that can be signed by a human when they are uploading to PyPI, and then pip can verify the signature on that metadata before feeding it into the resolver. This would allow us to treat PyPI as just an untrusted middleman instead of something that is essentially going to be allowed to force us to execute arbitrary code whenever someone does a pip install (because it’ll be able to instruct us to install any package, and packages can contain arbitrary code).

My idea is made of two parts, which don't depend on each other.

The main (first) part is dependency resolution on the server:

Input: install_requires list with fuzzy version requirements
Output: version-pinned package list

If the server were hacked, what could a black-hat hacker have done? He could ...
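To make that first part concrete, a client exchange with such a resolution server could look roughly like this (a sketch only; the endpoint URL and JSON layout are made up to illustrate the idea, not an existing service):

    import requests

    # Hypothetical endpoint of the proposed resolution server.
    RESOLVE_URL = "https://pkgserver.example.internal/api/resolve"

    fuzzy = ["django>=1.8", "requests>=2.7"]
    response = requests.post(RESOLVE_URL, json={"requirements": fuzzy})
    response.raise_for_status()

    # Expected output of the first API: the same list with exactly pinned
    # versions, e.g. ["django==1.8.2", "requests==2.8.1"].
    pinned = response.json()["requirements"]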
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
On November 11, 2015 at 1:30:57 AM, Thomas Güttler (guettl...@thomas-guettler.de) wrote:
> Maybe I am missing something, but I still think server-side dependency resolution is possible.

I don’t believe it’s possible nor desirable to have the server handle dependency resolution, at least not without removing some currently supported features and locking out some future features from ever happening.

Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By default this only includes PyPI, but people can either remove that or add additional repository locations. In order to support this we need a resolver that can union multiple repositories together before doing the resolving. If the repository itself was the one handling the resolution, then we are locked into a single repository per invocation of pip.

Additionally, pip can also be configured to use a simple directory full of files as a repository. Since this is just a simple directory, there *is* no server process running that would allow for a server-side resolver to happen, and pip either *must* handle the resolution itself in this case or it must disallow these features altogether.

Additionally, the fact that we currently treat the server as a “dumb” server means that someone can implement a PEP 503 compatible repository very trivially with pretty much any web server that supports static files and automatically generating an index for static files. Switching to server-side resolution would require removing this capability and force everyone to run dedicated repository software that can handle that resolution.

Additionally, we want there to be as little variance in the requests that people make to the repository as possible. We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI, which is the primary reason we’ve been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to server-side dependency resolution then we reduce our ability to ensure that as many requests as possible are served directly out of the cache rather than having to go back to our backend servers.

Finally, we want to move further away from trusting the actual repository where we can. In the future we’ll be allowing package signing that will make it possible to survive a compromise of the repository. However, there is no way to do that if the repository needs to be able to dynamically generate a list of packages that need to be installed as part of a resolution process, because by definition that needs to be done on the fly and thus must be signed by a key that the repository has access to, if it’s signed at all. However, since the metadata for a package can be signed once and then it never changes, that can be signed by a human when they are uploading to PyPI, and then pip can verify the signature on that metadata before feeding it into the resolver. This would allow us to treat PyPI as just an untrusted middleman instead of something that is essentially going to be allowed to force us to execute arbitrary code whenever someone does a pip install (because it’ll be able to instruct us to install any package, and packages can contain arbitrary code).

Hopefully that answers your question about why it’s unlikely that we’ll ever move to a server-side dependency resolver: even though it is possible to do so, doing it would severely regress a number of very important features.
-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
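The "sign the metadata once at upload time, verify on the client before resolving" idea described above can be pictured with a small sketch using the cryptography library's Ed25519 primitives. This is only the general shape of the check and an assumption for illustration, not pip's or PyPI's actual signing design (the real proposals for PyPI are TUF-based and considerably more involved):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verified_metadata(metadata_bytes, signature, author_key_bytes):
        # Check the uploader's signature over the immutable metadata blob
        # *before* feeding it to the client-side resolver, so the repository
        # only acts as an untrusted middleman.
        author_key = Ed25519PublicKey.from_public_bytes(author_key_bytes)
        try:
            author_key.verify(signature, metadata_bytes)
        except InvalidSignature:
            raise ValueError("metadata signature does not verify; refusing to resolve")
        return metadata_bytes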
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
#PythonPackageBuildEnvironment ...

* pip-tools: https://github.com/nvie/pip-tools
  * "are there updates?"
  * "which updates would there be?"
* devpi: https://bitbucket.org/hpk42/devpi/
  * "where do we push these?"
  * "does it build (do the included package tests pass)?"
* [Makefile], setup.py, tox.ini, travis.yml, dox.yml
* https://github.com/pypa/warehouse
  * Src: https://bitbucket.org/pypa/pypi
* https://westurner.org/tools/#pypi
  * Docs: https://westurner.org/tools/#python-packages

...

"Re: [Distutils] Where should I put tests when packaging python modules?" https://code.activestate.com/lists/python-distutils-sig/26482/

> [...]

* https://tox.readthedocs.org/en/latest/config.html
  * https://github.com/docker/docker-registry/blob/master/tox.ini #flake8
* dox = docker + tox | PyPI: https://pypi.python.org/pypi/dox | Src: https://git.openstack.org/cgit/stackforge/dox/tree/dox.yml
* docker-compose.yml | Docs: https://docs.docker.com/compose/ | Docs: https://github.com/docker/compose/blob/master/docs/yml.md
  * https://github.com/kelseyhightower/kubernetes-docker-files/blob/master/docker-compose.yml
  * https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pods.md#alternatives-considered
  * https://github.com/docker/docker/issues/8781 ( pods ( containers ) )
* http://docs.buildbot.net/latest/tutorial/docker.html
  * http://docs.buildbot.net/current/tutorial/docker.html#building-and-running-buildbot

tox.ini often is not sufficient:

* [Makefile: make test/tox]
* setup.py
* tox.ini
* docker/platform-ver/Dockerfile
* [dox.yml]
* [docker-compose.yml]
* [CI config]
  * http://docs.buildbot.net/current/manual/configuration.html
  * jenkins-kubernetes, jenkins-mesos

> [...]

On Wed, Nov 11, 2015 at 12:30 AM, Thomas Güttler <guettl...@thomas-guettler.de> wrote:
> On 10.11.2015 at 21:54, Wes Turner wrote:
>> * It is [currently [#PEP426JSONLD]] necessary to run setup.py with each given destination platform, because parameters are expanded within the scope of setup.py.
>
> OK
>
>> * Because of this, client-side dependency resolution (with a given platform) is currently the only viable option for something like this
>
> Are you sure that this conclusion is the only solution?
>
> A server could create a new container/VM to run setup.py.
>
> Then the install_requires can be cached (for this platform).
>
> Maybe I am missing something, but I still think server-side dependency resolution is possible.
>
> Please tell me what's wrong with my conclusion.
>
>> ...
>>
>> * Build: Docker, Tox (Dox) to build package(s)
>>   * Each assembly of packages is / could be a package with a setup.py (and/or a requirements.txt)
>>   * And tests:
>>     * http://conda.pydata.org/docs/building/meta-yaml.html#test-section
>> * Release: DevPi
>>   * http://doc.devpi.net/latest/
>> * conda env environment.yml YAML: http://conda.pydata.org/docs/using/envs.html
>>   * [x] conda packages
>>   * [x] pip packages
>>   * [ ] system packages (configuration management)
>>
>> And then, really, is there a stored version of this instance of a named Docker image?
>> #reproducibility #linkedreproducibility
>
> I don't fully understand the above.
>
> I guess you had the container/VM solution in mind, too.
>
> There is a new topic in your mail which I will reply to in a new thread.
> Regards,
>   Thomas Güttler
>
> --
> http://www.thomas-guettler.de/
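For reference, the "fuzzy requirements in, exactly pinned requirements out" step of the proposal is what the pip-tools project mentioned above already performs on the client or CI side. A minimal sketch of driving it from Python, assuming pip-tools is installed (the filenames are just examples):

    import subprocess

    # Write the fuzzy input requirements.
    with open("requirements.in", "w") as req_in:
        req_in.write("django>=1.8\nrequests>=2.7\n")

    # pip-compile resolves them and writes a fully pinned requirements.txt.
    subprocess.check_call(["pip-compile", "requirements.in"])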
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
On 10.11.2015 at 21:54, Wes Turner wrote:
> * It is [currently [#PEP426JSONLD]] necessary to run setup.py with each given destination platform, because parameters are expanded within the scope of setup.py.

OK

> * Because of this, client-side dependency resolution (with a given platform) is currently the only viable option for something like this

Are you sure that this conclusion is the only solution?

A server could create a new container/VM to run setup.py.

Then the install_requires can be cached (for this platform).

Maybe I am missing something, but I still think server-side dependency resolution is possible.

Please tell me what's wrong with my conclusion.

> ...
>
> * Build: Docker, Tox (Dox) to build package(s)
>   * Each assembly of packages is / could be a package with a setup.py (and/or a requirements.txt)
>   * And tests:
>     * http://conda.pydata.org/docs/building/meta-yaml.html#test-section
> * Release: DevPi
>   * http://doc.devpi.net/latest/
> * conda env environment.yml YAML: http://conda.pydata.org/docs/using/envs.html
>   * [x] conda packages
>   * [x] pip packages
>   * [ ] system packages (configuration management)
>
> And then, really, is there a stored version of this instance of a named Docker image?
> #reproducibility #linkedreproducibility

I don't fully understand the above.

I guess you had the container/VM solution in mind, too.

There is a new topic in your mail which I will reply to in a new thread.

Regards,
  Thomas Güttler

--
http://www.thomas-guettler.de/
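One concrete way the container/VM suggested above could extract install_requires for caching is to run setuptools' egg_info command and read the requires.txt it produces. This is a sketch under the assumption that the package uses setuptools; the helper name is invented:

    import glob
    import os
    import subprocess

    def install_requires_from_sdist(src_dir):
        # Run setup.py egg_info inside the unpacked sdist (ideally inside a
        # throwaway container/VM matching the target platform) and read the
        # dependency list that setuptools writes out.
        subprocess.check_call(["python", "setup.py", "egg_info"], cwd=src_dir)
        paths = glob.glob(os.path.join(src_dir, "*.egg-info", "requires.txt"))
        if not paths:
            return []
        requires = []
        with open(paths[0]) as req_file:
            for line in req_file:
                line = line.strip()
                if line.startswith("["):  # extras sections follow; stop here
                    break
                if line:
                    requires.append(line)
        return requires

The result could then be cached per (package, version, platform), since the expanded install_requires can differ between platforms.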
Re: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server
* It is [currently [#PEP426JSONLD]] necessary to run setup.py with each given destination platform, because parameters are expanded within the scope of setup.py.
  * Because of this, client-side dependency resolution (with a given platform) is currently the only viable option for something like this

...

* Build: Docker, Tox (Dox) to build package(s)
  * Each assembly of packages is / could be a package with a setup.py (and/or a requirements.txt)
  * And tests:
    * http://conda.pydata.org/docs/building/meta-yaml.html#test-section
* Release: DevPi
  * http://doc.devpi.net/latest/
* conda env environment.yml YAML: http://conda.pydata.org/docs/using/envs.html
  * [x] conda packages
  * [x] pip packages
  * [ ] system packages (configuration management)

And then, really, is there a stored version of this instance of a named Docker image?
#reproducibility #linkedreproducibility

On Sat, Nov 7, 2015 at 8:37 AM, Thomas Güttler wrote:
> I wrote down a thought about server-side dependency resolution and a virtualenv build server.
>
> What do you think?
>
> Latest version: https://github.com/guettli/virtualenv-build-server
>
> virtualenv-build-server
> #######################
>
> A rough roadmap for how a server to build virtualenvs for the Python programming language could be implemented.
>
> High-level goal
> ---------------
>
> Make creating new virtual environments for the Python programming language easy and fast.
>
> Input: fuzzy requirements like this: django>=1.8, requests>=2.7
>
> Output: virtualenv with packages installed.
>
> Two APIs
> --------
>
> #. Resolve fuzzy requirements to a fixed set of packages with exactly pinned versions.
> #. Read the fixed set of packages. Build a virtualenv according to the given platform.
>
> Steps
> -----
>
> #. Client sends a list of fuzzy requirements to the server:
>
>    * I need: django>=1.8, requests>=2.7, ...
>
> #. Server resolves the fuzzy requirements to a fixed set of requirements: django==1.8.2, requests==2.8.1, ...
>
> #. Client reads the fixed set of requirements.
>
> #. Optional: Client sends the fixed set of requirements to the server, telling it the platform:
>
>    * My platform: sys.version==2.7.6 and sys.platform==linux2
>
> #. Server builds a virtualenv according to the fixed set of requirements.
>
> #. Server sends the environment to the client.
>
> #. Client unpacks the data and has a usable virtualenv.
>
> Benefits
> --------
>
> Speed:
>
> * There is only one round-trip from client to server. If the dependencies get resolved on the client, the client would need to download the available version information.
> * Caching: If the server gets input parameters (fuzzy requirements and platform information) which it has seen before, it can return the cached result from the previous request.
>
> Possible Implementations
> ------------------------
>
> APIs
> ====
>
> Both APIs could be implemented by a web service/REST interface passing JSON or YAML.
>
> Serverside
> ==========
>
> Implementation Strategy "PostgreSQL"
> ....................................
>
> Since the API is decoupled from the internals, the implementation could be exchanged without the need for changes on the client side.
>
> I suggest using PostgreSQL and resolving the dependency graph using SQL (WITH RECURSIVE).
>
> The package and version data gets stored in PostgreSQL via an ORM (Django or SQLAlchemy).
>
> The version numbers need to be normalized to ASCII to allow fast comparison.
>
> Related: https://www.python.org/dev/peps/pep-0440/
>
> Implementation Strategy "Node.js"
> .................................
>
> I like Python, but I am not married to it. Why not use a different tool that is already working?
> Maybe the node package manager: https://www.npmjs.com/
>
> Questions
> ---------
>
> Are virtualenvs relocatable? AFAIK they are not.
>
> General Thoughts
> ----------------
>
> * Ignore updates. Focus on creating new virtualenvs. The server can do caching, and that's why I prefer creating virtualenvs which never get updated. They get created and removed (immutable).
>
> I won't implement it
> --------------------
>
> This idea is in the public domain. If you are young and brave or old and wise: go ahead, try to implement it. Please communicate early and often. Ask on mailing lists or ask me for feedback. Good luck :-)
>
> I love feedback
> ---------------
>
> Please tell me what you like or dislike:
>
> * typos and spelling stuff (I am not a native speaker)
> * alternative implementation strategies
> * existing software which does this (even if implemented in a different programming language)
> * ...
>
> --
> http://www.thomas-guettler.de/
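The PostgreSQL strategy quoted above could walk the dependency graph with a recursive CTE roughly like this (an illustrative sketch; the table layout and database name are invented, and version-constraint solving, the genuinely hard part of resolution, is deliberately left out):

    import psycopg2

    # Hypothetical schema: depends(package_name, requires_name), one row per
    # direct dependency edge.
    TRANSITIVE_DEPS_SQL = """
        WITH RECURSIVE dep_tree(name) AS (
            SELECT requires_name FROM depends WHERE package_name = %(root)s
            UNION
            SELECT d.requires_name
            FROM depends AS d
            JOIN dep_tree AS t ON d.package_name = t.name
        )
        SELECT name FROM dep_tree;
    """

    conn = psycopg2.connect("dbname=pkgindex")  # hypothetical database
    with conn, conn.cursor() as cur:
        cur.execute(TRANSITIVE_DEPS_SQL, {"root": "django"})
        transitive_deps = [row[0] for row in cur.fetchall()]

Using UNION (rather than UNION ALL) deduplicates rows, so the recursion also terminates on cyclic dependency graphs.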
[Distutils] Serverside Dependency Resolution and Virtualenv Build Server
I wrote down a thought about server-side dependency resolution and a virtualenv build server.

What do you think?

Latest version: https://github.com/guettli/virtualenv-build-server

virtualenv-build-server
#######################

A rough roadmap for how a server to build virtualenvs for the Python programming language could be implemented.

High-level goal
---------------

Make creating new virtual environments for the Python programming language easy and fast.

Input: fuzzy requirements like this: django>=1.8, requests>=2.7

Output: virtualenv with packages installed.

Two APIs
--------

#. Resolve fuzzy requirements to a fixed set of packages with exactly pinned versions.
#. Read the fixed set of packages. Build a virtualenv according to the given platform.

Steps
-----

#. Client sends a list of fuzzy requirements to the server:

   * I need: django>=1.8, requests>=2.7, ...

#. Server resolves the fuzzy requirements to a fixed set of requirements: django==1.8.2, requests==2.8.1, ...

#. Client reads the fixed set of requirements.

#. Optional: Client sends the fixed set of requirements to the server, telling it the platform:

   * My platform: sys.version==2.7.6 and sys.platform==linux2

#. Server builds a virtualenv according to the fixed set of requirements.

#. Server sends the environment to the client.

#. Client unpacks the data and has a usable virtualenv.

Benefits
--------

Speed:

* There is only one round-trip from client to server. If the dependencies get resolved on the client, the client would need to download the available version information.
* Caching: If the server gets input parameters (fuzzy requirements and platform information) which it has seen before, it can return the cached result from the previous request.

Possible Implementations
------------------------

APIs
====

Both APIs could be implemented by a web service/REST interface passing JSON or YAML.

Serverside
==========

Implementation Strategy "PostgreSQL"
....................................

Since the API is decoupled from the internals, the implementation could be exchanged without the need for changes on the client side.

I suggest using PostgreSQL and resolving the dependency graph using SQL (WITH RECURSIVE).

The package and version data gets stored in PostgreSQL via an ORM (Django or SQLAlchemy).

The version numbers need to be normalized to ASCII to allow fast comparison.

Related: https://www.python.org/dev/peps/pep-0440/

Implementation Strategy "Node.js"
.................................

I like Python, but I am not married to it. Why not use a different tool that is already working? Maybe the node package manager: https://www.npmjs.com/

Questions
---------

Are virtualenvs relocatable? AFAIK they are not.

General Thoughts
----------------

* Ignore updates. Focus on creating new virtualenvs. The server can do caching, and that's why I prefer creating virtualenvs which never get updated. They get created and removed (immutable).

I won't implement it
--------------------

This idea is in the public domain. If you are young and brave or old and wise: go ahead, try to implement it. Please communicate early and often. Ask on mailing lists or ask me for feedback. Good luck :-)

I love feedback
---------------

Please tell me what you like or dislike:

* typos and spelling stuff (I am not a native speaker)
* alternative implementation strategies
* existing software which does this (even if implemented in a different programming language)
* ...

--
http://www.thomas-guettler.de/
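To make the second API (steps 4 to 7 above) concrete, a client round-trip could look roughly like this. The endpoint, the JSON layout, and the choice of a tar archive are assumptions for illustration only, not part of the proposal:

    import io
    import tarfile

    import requests

    # Hypothetical endpoint of the proposed virtualenv build server.
    BUILD_URL = "https://venv-builder.example.internal/api/build"

    pinned = ["django==1.8.2", "requests==2.8.1"]
    platform_info = {"python": "2.7.6", "sys_platform": "linux2"}

    response = requests.post(BUILD_URL, json={"requirements": pinned, "platform": platform_info})
    response.raise_for_status()

    # Steps 6 and 7: the server returns the pre-built environment as an
    # archive; the client unpacks it and has a usable virtualenv.
    with tarfile.open(fileobj=io.BytesIO(response.content), mode="r:*") as archive:
        archive.extractall("venv")

Whether the unpacked environment actually runs on the client depends on the relocatability question raised above, which is why this approach fits closed environments with known, uniform platforms best.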