On Tue, Nov 11, 2014 at 3:55 AM, Jed Brown <j...@jedbrown.org> wrote:
> KAUST (or KSA?) internet can be flaky at times and my "make" was
> (silently) hanging indefinitely while trying to connect to mcs.anl.gov.
> Manually touching .nagged allows my build to proceed. The hang could be
> fixed by adding a reasonable timeout, but I can't find a timeout in
> urllib. Aron suggests that I try curl because all built-in Python url
> libraries are terrible, but I don't think we can depend on curl being
> installed, so we'd have to fall back to something. We could implement a
> timeout using threads, if threads weren't broken on some architectures.
>
> Meanwhile, the professor next to me runs Little Snitch on his Mac and
> wants to know why PETSc's build is trying to connect to Argonne's
> servers. His first thought was that it was there for surveillance.

I find it hard to believe he is a science professor with that kind of
inference from the data (you can see it retrieves a webpage). Maybe
climate ;)

> PETSc has a significant number of users that work behind firewalls or
> are otherwise sensitive to outgoing connections. Although nagupgrade
> helps people stay updated and reduces some support email, I think it is
> unprofessional and a failure mode that I'd rather avoid.
>
> urllib2 has the timeout argument, so we should switch.

I am not sure I see retrieving a webpage as unprofessional. Is there a
better way to update information, or do we want a model that is
completely dead once downloaded? I think people now assume that this is
not true.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
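
A minimal sketch of the urllib2-with-timeout approach mentioned above,
assuming Python 2.6 or later (where urllib2.urlopen() gained its
timeout argument). The URL, helper name, and 5-second value are
illustrative, not the actual nagupgrade code:

    # Sketch: fetch the version page without hanging the build.
    import socket
    import urllib2

    def fetch_version_page(url, timeout_seconds=5):
        """Return the page body, or None if the network is unusable."""
        try:
            return urllib2.urlopen(url, timeout=timeout_seconds).read()
        except (urllib2.URLError, socket.timeout):
            # Flaky network, firewall, or DNS failure: skip the
            # upgrade check rather than stalling "make".
            return None

    page = fetch_version_page("http://www.mcs.anl.gov/petsc/")
    if page is None:
        print("Skipping version check (no network or timed out)")

Returning None on any failure keeps the check best-effort: a firewall
or dead link degrades to "no nag" instead of a silent indefinite hang.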