On November 11, 2015 at 1:38:38 PM, Nathaniel Smith ([email protected]) wrote:
> Guaranteeing a clean stdout/stderr is hard: it means you have
> to be careful to correctly capture and process the output of every  
> child you invoke (e.g. compilers), and deal correctly with the  
> tricky aspects of pipes (deadlocks, sigpipe, ...). And even  
> then you can get thwarted by accidentally importing the wrong  
> library into your main process, and discovering that it writes  
> directly to stdout/stderr on some error condition. And it may  
> or may not respect your resetting of sys.stdout/sys.stderr  
> at the python level. So to be really reliable the only thing to  
> do is to create some pipes and some threads to read the pipes and  
> do the dup2 dance (but not everyone will actually do this, they'll  
> just accept corrupted output on errors) and ugh, all of this is  
> a huge hassle that massively raises the bar on implementing simple  
> build systems.
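
(For reference, the "dup2 dance" described above amounts to roughly the
following sketch; the helper names are mine, only stdout is shown, and
stderr would be handled the same way.)

    import os
    import sys
    import threading

    def capture_fd(fd):
        # Keep a copy of the real fd so it can be restored later.
        saved = os.dup(fd)
        read_end, write_end = os.pipe()
        sys.stdout.flush()
        sys.stderr.flush()
        os.dup2(write_end, fd)      # anything written to `fd` now lands in the pipe
        os.close(write_end)
        chunks = []

        def drain():
            while True:
                data = os.read(read_end, 4096)
                if not data:
                    break
                chunks.append(data)
            os.close(read_end)

        reader = threading.Thread(target=drain)
        reader.start()
        return saved, reader, chunks

    def restore_fd(fd, saved, reader):
        sys.stdout.flush()
        sys.stderr.flush()
        os.dup2(saved, fd)          # closes the pipe's last write end, so the reader sees EOF
        os.close(saved)
        reader.join()

    # Usage: capture everything a noisy child writes to stdout.
    saved, reader, chunks = capture_fd(1)
    try:
        os.system("echo output from a child process")   # stand-in for running a compiler
    finally:
        restore_fd(1, saved, reader)
    print("captured:", b"".join(chunks).decode())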

How is this not true for a worker.py process as well? If the worker process 
communicates via stdout, then it has to capture stdout and redirect it before 
calling into the Python API, and then undo that afterwards. It actually makes 
incremental output harder, because a Python function can't return in the 
middle of its execution, so we'd need to turn it into some sort of awkward 
generator protocol to make that happen too.
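
(Roughly what that would look like around every call; backend.build_wheel and
the JSON reply format are just placeholders here, not a specified API.)

    import json
    import os
    import sys

    def run_job(backend, request):
        # Reserve the real stdout for protocol messages, then point fd 1 at
        # stderr so anything the backend prints can't corrupt the reply.
        sys.stdout.flush()
        real_stdout = os.dup(1)
        os.dup2(2, 1)
        try:
            result = backend.build_wheel(**request["kwargs"])   # hypothetical backend call
        finally:
            sys.stdout.flush()          # flush any backend output to the redirected fd
            os.dup2(real_stdout, 1)     # put the real stdout back
            os.close(real_stdout)
        # Only now is it safe to write on the control channel.
        json.dump({"result": result}, sys.stdout)
        sys.stdout.write("\n")
        sys.stdout.flush()

Anything the backend prints during the call lands on stderr instead of the
control channel, and the reply only goes out once the real stdout has been
restored.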

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

