On Tue, Dec 8, 2015 at 3:23 PM, Ionel Cristian Mărieș <cont...@ionelmc.ro>
wrote:

>
>
> On Wed, Dec 9, 2015 at 12:51 AM, Greg Ewing <greg.ew...@canterbury.ac.nz>
> wrote:
>
>> Are you sure you were actually unlinking the old file
>> and creating a new one, rather than overwriting the
>> existing file? The latter would certainly cause trouble
>> if you were able to do it.
>>
>
> ​I had two instances of this ​problem:
>
>    - pip upgrading a package with C extensions (pip removes the old
>    version first) while processes using it were still running
>    - removing (yes, rm -rf, not in place) and recreating a virtualenv
>    while processes using it were still running
>
> It's wrong to think "should be safe on linux". Linux lets you do very
> stupid things, but that doesn't make them right or feasible to do in the
> general case.
>
> You can do it, sure, but the utility and safety are limited and very
> specific in scope. You gotta applaud Windows for getting this right.
>
It's true that this feature of Unix filesystems doesn't automatically make
all forms of upgrade safe; in particular, it breaks in cases where an
already-running process needs to open some sort of resource/plugin file,
and an upgrade process has removed the file or replaced it with an
incompatible one in between when the program was started and when it tried
to access the resource.

But, seriously, I've been swapping out libraries like libc on running
systems on a weekly basis for years (this is pretty standard for debian
users), and it basically just works. It's definitely better to reboot after
such upgrades to make sure that the new version is in use (e.g. a new
version of openssl with security fixes), and to avoid issues like the ones
described in the previous paragraph, but generally speaking it's easily
possible to have a program that runs fine despite its virtualenv having
been deleted out from under it -- the rule is simply that any open/mmap'ed
file will continue, perfectly reliably, to refer to the original file until
the program exits, even if that file no longer has a name in the filesystem
(which is what rm does).
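
The rule above is easy to demonstrate. Here's a minimal sketch (assuming a Unix filesystem; on Windows the unlink would fail because the file is open): `rm` only removes the name, and the open file handle keeps the underlying data alive.

```python
import os
import tempfile

# Create a file and open it, then unlink it. The open file object keeps
# the original data accessible until the last reference is closed, even
# though the name is gone from the filesystem.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write("original contents\n")

f = open(path)                   # hold the file open
os.unlink(path)                  # what "rm" does: remove the *name*
assert not os.path.exists(path)  # the name is gone...

data = f.read()                  # ...but the data is still readable
print(data)
f.close()                        # only now can the kernel free the inode
```

The same mechanism is why a process whose virtualenv was `rm -rf`'d keeps running: every file it already had open or mmap'ed still refers to the original data.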

A common example of where you can get weirdness in Python is that Python
waits until it has to actually print a traceback before loading the
original source code (.py file -- most of the time it just uses the .pyc
file), so if you upgrade a python library in-place then existing processes
will continue to execute the original code and show correct file names and
line numbers in tracebacks, but the actual source lines printed in
tracebacks will be incorrect.
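
A small sketch of that mismatch (the module name and temp-dir setup here are just for illustration): the running process keeps executing the bytecode it compiled at import time, but the traceback machinery re-reads the .py file from disk when it formats the traceback.

```python
import linecache
import os
import sys
import tempfile
import traceback

# Write a throwaway module, import it, then "upgrade" it in place.
moddir = tempfile.mkdtemp()
sys.path.insert(0, moddir)
modpath = os.path.join(moddir, "demo_mod.py")

with open(modpath, "w") as f:
    f.write("def boom():\n    raise ValueError('old code')\n")

import demo_mod  # compiles and caches the *original* code

# Overwrite the source in place, keeping the same line numbers.
with open(modpath, "w") as f:
    f.write("def boom():\n    raise ValueError('new code')\n")
linecache.clearcache()  # force the traceback machinery to re-read the file

try:
    demo_mod.boom()  # still runs the original bytecode...
except ValueError:
    tb = traceback.format_exc()

# ...so the error message says 'old code', but the source line shown in
# the traceback is read from the new file and says 'new code'.
print(tb)
```

(If pip had instead unlinked the old .py and created a fresh file, the already-running process would have kept the original -- this weirdness is specific to in-place modification.)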

I don't really care about trying to rank Windows vs Unix as being "better",
obviously there are trade-offs here. (Though it would be nice if Windows
had SOME more reasonable solution to the upgrade problem.) Just want to
make sure that the actual semantics here are clear -- there's nothing
mysterious about the Unix semantics, and it's pretty easy to predict what
will work and what won't once you understand what's going on.

-n

-- 
Nathaniel J. Smith -- http://vorpus.org
_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig