Re: [Python-Dev] SSL certificates recommendations for downstream Python packagers
Cory Benfield writes:

 > From a security perspective I think we have to discount the
 > possibility of administrator error from our threat model.

I disagree in a certain sense, and in that sense you don't discount it
-- see below.

 > A threat model that includes "defend the system against intrusions
 > that the administrator incorrectly allows"

I agree that child-proof locks don't work.  The point of having a
category called "administrator error" in the threat model is not to
instantiate it, but merely to recognize it:

 > where we allow configuration we have a duty to ensure that it's as
 > easy as possible to configure correctly,

and in particular defaults should (1) "deny everything" (well, nearly),
and (2) be robust ("forbid what is not explicitly permitted") to
configuration changes that allow accesses, wherever Python can
reasonably achieve that.

 > but when using the system trust store most of the configuration is
 > actually provided by the OS tools, rather than by the
 > above-mentioned "you", so that's not in our control.

OK, up to the problem that OS tools may not be accessible or may be
considered unreliable.  I trust you guys to do something sane there,
and I agree it's covered by the "we can't correct admin mistakes in
complex environments" clause that you invoked above.  Python cannot
take responsibility for guessing what might happen in any given
configuration in such environments.

 > However, it's unquestionable that the *safest* route to go down in
 > terms of preserving the expectations of users is to use the
 > platform-native TLS implementation wholesale, rather than do a
 > hybrid model like Chrome does where OpenSSL does the protocol bits
 > and the system does the X509 bits. That way Python ends up behaving
 > basically like Edge or Safari on the relevant platforms, or perhaps
 > more importantly behaving like .NET on Windows and like
 > CoreFoundation on macOS, which is a much better place to be in
 > terms of user and administrator expectations.
OK, I can't help you with the details, but I can at least say I feel
safer when you say that's where you're going. :-)

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
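[A small illustration of the "deny by default" posture discussed above:
Python's own ssl.create_default_context() already ships with verification
and hostname checking enabled, so an administrator must explicitly act to
weaken it rather than to strengthen it.]

```python
import ssl

# ssl.create_default_context() is the secure-by-default entry point:
# it loads the system trust store (via load_default_certs) and turns
# on both certificate verification and hostname checking.
ctx = ssl.create_default_context()

# "Forbid what is not explicitly permitted": these are the defaults,
# not something the caller had to opt into.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```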
Re: [Python-Dev] User + sys time bigger than real time, in case of no real parallelism
On Sunday, 5 February 2017 07:59:04 Marco Buttu wrote:
 > I am really sorry for the OT :-( I asked elsewhere but without any
 > answer :-(
 > I can not figure out why in this short example the user+sys time is
 > bigger than the real time. The example executes the task() function
 > twice, with each execution in a separate thread. The task() just
 > increments a global int 10**6 times:
 >
 > $ cat foo.py
 > from threading import Thread, Lock
 >
 > result = 0
 > lock = Lock()
 >
 > def task():
 >     global result
 >     for i in range(10**6):
 >         lock.acquire()
 >         result += 1
 >         lock.release()
 >
 > if __name__ == '__main__':
 >     t1, t2 = Thread(target=task), Thread(target=task)
 >     t1.start()
 >     t2.start()
 >     t1.join()
 >     t2.join()
 >     print('result:', result)
 >
 > When I execute it (Python 3.6), I get a sys+user time bigger than the
 > real time:
 >
 > $ time python foo.py
 > result: 2000000
 >
 > real 0m7.088s
 > user 0m6.597s
 > sys  0m5.043s
 >
 > That is usually what I would expect in case of tasks executed in
 > parallel on different CPUs. But that should not be the case in my
 > example, due to the GIL. What am I missing? Thank you very much, and
 > sorry again for the OT :(

/usr/bin/time (!) gives some insight:

$ /usr/bin/time python3 foo.py
result: 2000000
1.18user 1.30system 0:01.52elapsed 163%CPU (0avgtext+0avgdata 8228maxresident)k
8inputs+0outputs (0major+1151minor)pagefaults 0swaps

Obviously, the spinning happens outside the GIL...

Cheers,
Pete
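[The same measurement can be reproduced from within Python, without
/usr/bin/time: time.process_time() reports the process's accumulated
user+sys CPU time, while time.perf_counter() reports wall-clock time.
A minimal sketch, using a smaller loop count than the original for
speed; on a multi-core machine the CPU time can exceed the wall time
even though the GIL serializes the bytecode.]

```python
import time
from threading import Thread, Lock

result = 0
lock = Lock()

def task():
    global result
    for i in range(10**5):  # smaller than the original 10**6, for speed
        with lock:
            result += 1

# process_time() = user + sys CPU time of this process;
# perf_counter() = elapsed wall-clock time.
cpu0, wall0 = time.process_time(), time.perf_counter()
threads = [Thread(target=task) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
cpu = time.process_time() - cpu0
wall = time.perf_counter() - wall0
print(f"result={result} cpu={cpu:.2f}s wall={wall:.2f}s")
```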
Re: [Python-Dev] User + sys time bigger than real time, in case of no real parallelism
 > That is usually what I would expect in case of tasks executed in
 > parallel on different CPUs. But that should not be the case in my
 > example, due to the GIL. What am I missing? Thank you very much, and
 > sorry again for the OT :(

With such finely intermingled thread activity, there might be a fair
bit of spinlock spinning involved. Additionally, I suspect that the
kernel does not track CPU time at microsecond precision and will tend
to round the used times up.

Obviously, this is not a reasonable way to use threads. The example is
only effective at producing lots of overhead.
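[One way to avoid the overhead described above is to keep each thread's
work in a local variable and touch the shared counter only once per
thread, so the lock is acquired twice in total instead of 2 * 10**6
times. A sketch of that pattern:]

```python
from threading import Thread, Lock

result = 0
lock = Lock()

def task():
    global result
    local = 0
    for i in range(10**6):
        local += 1      # purely thread-local: no shared state, no lock
    with lock:          # one acquisition per thread, not one per increment
        result += local

t1, t2 = Thread(target=task), Thread(target=task)
t1.start()
t2.start()
t1.join()
t2.join()
print('result:', result)
```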