Tarek Ziadé wrote:
> On Thu, Jan 21, 2010 at 12:51 PM, M.-A. Lemburg <m...@egenix.com> wrote:
> [..]
>> The problem gets real when putting the data up on the web for
>> users to download via a browser. If they then install directly from
>> the file without checking signatures, they can easily be tricked
>> into executing malware - and that would put the original author
>> of such a package in a pretty bad light.
>>
>> In any case, that was just a list of examples.
>
> What about restricting the mirrors to the non-web part in that case?
>
> Because the mirroring infrastructure is really intended for what I
> would call "professional" usage of PyPI, where it matters if it's
> down for some time. And that usage is always done through automated
> tools.
>
> If the PyPI *website* part is down for a while, it's a minor annoyance
> for people who install by clicking.
>
> Then, in a second phase, we could have a second mirroring level with a
> web part, and ask the maintainer to sign a "mirror agreement" making
> him responsible in case he's a bad guy, and perhaps have him/her
> acknowledged by some PSF members? Because the people who are willing
> to maintain mirrors are respected/known developers.
>
> But the latter is not really what we need for our everyday work.
Sure, we could do all those things, but such a process would cause a lot
of admin overhead on the part of the PSF. By using a content delivery
network we'd avoid that administration work: the PSF wouldn't have to
sign agreements with 10-20 mirror providers, wouldn't have to set up a
monitoring system, keep checking the mirror web content, etc.

Moreover, there would also be mirrors in parts of the world that are
currently not well covered by Pythonistas and thus less likely to get a
local mirror server set up.

How to arrange all this is really a PSF question more than anything else.

Also note that using a static file layout would make the whole
synchronization mechanism a lot easier - not only for content delivery
networks, but also for dedicated volunteer-run mirrors. There are lots
of mirror scripts out there that work with rsync or FTP, so there's no
need to reinvent the wheel.

AFAICT, all the data on PyPI is static and can be rendered as files in
a directory dump. A simple cron job could take care of this every few
minutes or so, extracting the data to a local directory which is then
made accessible to mirrors (a rough sketch of such a job follows below).

-- 
Marc-Andre Lemburg
eGenix.com
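
To make that last point concrete, here is a rough, untested sketch in
Python of what such an extraction job could look like. It assumes
PyPI's XML-RPC interface (list_packages(), package_releases(),
release_data(), release_urls()), which has since been largely
deprecated; the endpoint URL and the output directory /var/mirror/pypi
are placeholders to adjust as needed.

#!/usr/bin/env python
"""Sketch of a cron-driven job that renders PyPI metadata into a static
directory tree which mirrors could then pull via rsync or FTP.

NOTE: this relies on PyPI's XML-RPC interface; several of these calls
have since been deprecated, so treat it as an illustration only.
"""
import json
import xmlrpc.client
from pathlib import Path

PYPI_XMLRPC = "https://pypi.org/pypi"   # assumed XML-RPC endpoint
OUTPUT_ROOT = Path("/var/mirror/pypi")  # directory exposed to mirrors

def dump_package(client, name):
    """Write one JSON file per release: metadata plus download URLs."""
    pkg_dir = OUTPUT_ROOT / name
    pkg_dir.mkdir(parents=True, exist_ok=True)
    for version in client.package_releases(name):
        release = {
            "info": client.release_data(name, version),
            "urls": client.release_urls(name, version),
        }
        # default=str turns XML-RPC DateTime values into plain strings
        out_file = pkg_dir / ("%s.json" % version)
        out_file.write_text(json.dumps(release, indent=2, default=str))

def main():
    client = xmlrpc.client.ServerProxy(PYPI_XMLRPC)
    for name in client.list_packages():
        dump_package(client, name)

if __name__ == "__main__":
    main()

The resulting tree under OUTPUT_ROOT could then be exported read-only
via rsync or plain HTTP for mirrors to pull, with the script scheduled
from cron every few minutes; a real implementation would of course do
incremental updates (e.g. based on the changelog() call) rather than a
full dump on every run.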