[issue46490] Add "follow_symlinks=False" support for "os.utime()" on Windows

2022-01-23 Thread Delgan


New submission from Delgan:

Hi.

Currently, trying to use "os.utime(path, timestamps, follow_symlinks=False)" 
raises an exception on Windows: "NotImplementedError: utime: follow_symlinks 
unavailable on this platform".

Looking at the Win32 API, it seems possible to open the symbolic link itself 
by specifying the "FILE_FLAG_OPEN_REPARSE_POINT" flag: 
https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilew#symbolic-link-behavior

Do you think it would be possible to update the "os.utime()" implementation 
to optionally pass this flag here: 
https://github.com/python/cpython/blob/ca78130d7eb5265759697639e42487ec6d0a4caf/Modules/posixmodule.c#L5516 ?
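
For illustration, here is a rough ctypes sketch of that approach (the helper 
names and the FILETIME conversion are mine, not the proposed posixmodule.c 
change): open the link itself with the reparse-point flag, then call 
"SetFileTime" on the handle.

--

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

kernel32.CreateFileW.argtypes = (
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.SetFileTime.argtypes = (
    wintypes.HANDLE, ctypes.POINTER(wintypes.FILETIME),
    ctypes.POINTER(wintypes.FILETIME), ctypes.POINTER(wintypes.FILETIME))
kernel32.SetFileTime.restype = wintypes.BOOL
kernel32.CloseHandle.argtypes = (wintypes.HANDLE,)

FILE_WRITE_ATTRIBUTES = 0x0100
FILE_SHARE_ALL = 0x0007                    # FILE_SHARE_READ | WRITE | DELETE
OPEN_EXISTING = 3
FILE_FLAG_BACKUP_SEMANTICS = 0x02000000    # also required for directories
FILE_FLAG_OPEN_REPARSE_POINT = 0x00200000  # operate on the link itself
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value
EPOCH_AS_FILETIME = 116444736000000000     # 1970-01-01 in 100ns units since 1601

def _filetime(unix_seconds):
    ticks = int(unix_seconds * 10**7) + EPOCH_AS_FILETIME
    return wintypes.FILETIME(ticks & 0xFFFFFFFF, (ticks >> 32) & 0xFFFFFFFF)

def utime_nofollow(path, atime, mtime):
    # Open the symlink itself (not its target) thanks to the reparse flag.
    handle = kernel32.CreateFileW(
        path, FILE_WRITE_ATTRIBUTES, FILE_SHARE_ALL, None, OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT, None)
    if handle == INVALID_HANDLE_VALUE:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        at, mt = _filetime(atime), _filetime(mtime)
        # A NULL creation time means "leave it unchanged".
        if not kernel32.SetFileTime(handle, None, ctypes.byref(at), ctypes.byref(mt)):
            raise ctypes.WinError(ctypes.get_last_error())
    finally:
        kernel32.CloseHandle(handle)

--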

--
components: Library (Lib)
messages: 411399
nosy: Delgan
priority: normal
severity: normal
status: open
title: Add "follow_symlinks=False" support for "os.utime()" on Windows
versions: Python 3.11

___
Python tracker
<https://bugs.python.org/issue46490>



[issue40399] IO streams locking can be broken after fork() with threads

2020-04-29 Thread Delgan


Delgan added the comment:

Yeah, I just wanted to illustrate the issue with a more realistic example. The 
thread is often abstracted away by a class or a library. Conclusion: do not 
abstract it away. :)

I've noticed that the mere fact of using "sys.stderr.write()", without even 
involving a queue, could cause the problem.

Out of curiosity: my understanding is that "sys.stderr", being a 
"TextIOWrapper", is not thread-safe. So, do you have any idea which lock is 
involved in this issue?
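
For what it's worth, here is a minimal sketch of the general hazard (a plain 
"threading.Lock" standing in for whatever internal lock the stream actually 
uses; this is an illustration, not the exact CPython lock):

--

import os
import threading
import time

lock = threading.Lock()

def hold_lock():
    # Keep acquiring and releasing the lock, like a thread writing to a stream.
    while True:
        with lock:
            time.sleep(0.001)

threading.Thread(target=hold_lock, daemon=True).start()
time.sleep(0.1)  # let the thread start cycling

pid = os.fork()
if pid == 0:
    # The helper thread does not exist in the child, but the lock was likely
    # copied in its "acquired" state: this acquire() then blocks forever.
    lock.acquire()
    print("child acquired the lock (the fork happened while it was free)")
    os._exit(0)
else:
    os.waitpid(pid, 0)

--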

--

___
Python tracker
<https://bugs.python.org/issue40399>



[issue40399] IO streams locking can be broken after fork() with threads

2020-04-28 Thread Delgan


Delgan added the comment:

Thank you for having looked into the problem.

To be more specific, I don't generally mix threads with multiprocessing, but 
it's a situation where there is one global and hidden consumer thread listening 
to a queue for non-blocking logging.

Actually, I think the problem is reproducible using the QueueListener provided 
in "logging.handlers". The following snippet is inspired by this part of the 
documentation: 
https://docs.python.org/3/howto/logging-cookbook.html#dealing-with-handlers-that-block

--

import logging
import multiprocessing
import queue
from logging.handlers import QueueHandler, QueueListener


if __name__ == "__main__":
    que = multiprocessing.Queue()

    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    listener.start()

    for i in range(1):
        root.warning('Look out!')
        p = multiprocessing.Process(target=lambda: None)
        p.start()
        p.join()
        print("Not hanging yet", i)

    listener.stop()

--

___
Python tracker
<https://bugs.python.org/issue40399>



[issue40399] Program hangs if process created right after adding object to a Queue

2020-04-28 Thread Delgan


Delgan added the comment:

Another curiosity: if 'print("Consumed:", queue.get())' is replaced by either 
'print("Consumed")' or 'queue.get()', then the program keeps running without 
hanging. Only the combination of both makes the program hang.

Also reproducible on 3.5.2.

--

___
Python tracker
<https://bugs.python.org/issue40399>



[issue40399] Program hangs if process created right after adding object to a Queue

2020-04-28 Thread Delgan


Delgan added the comment:

I noticed the bug is reproducible even if the child process does not put any 
object in the queue:




import multiprocessing
import threading
import time

if __name__ == "__main__":
    queue = multiprocessing.SimpleQueue()

    def consume(queue):
        while True:
            print("Consumed:", queue.get())

    thread = threading.Thread(target=consume, args=(queue,))
    thread.start()

    for i in range(1):
        queue.put(i)
        p = multiprocessing.Process(target=lambda: None)
        p.start()
        p.join()

        print("Not hanging yet", i)

--

___
Python tracker
<https://bugs.python.org/issue40399>



[issue40399] Program hangs if process created right after adding object to a Queue

2020-04-26 Thread Delgan


New submission from Delgan:

Hi.

I have a very basic program:
- one multiprocessing SimpleQueue
- one consumer thread
- one loop:
  - add one item to the queue
  - create a new process which itself adds one item to the queue
  - wait for the process to end

For some unknown reason, it will hang after some time.

I know the docs say:

> This means that if you try joining that process you may get a deadlock unless 
> you are sure that all items which have been put on the queue have been 
> consumed. Similarly, if the child process is non-daemonic then the parent 
> process may hang on exit when it tries to join all its non-daemonic children.

That's why I added "time.sleep(1)" inside the process, to make sure all items 
added by the child process are consumed. You can remove it and the hang will 
happen faster.
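
Since the attachment is not inlined in the archive, here is a reconstruction 
of the program from the description above (the exact names, loop bound and 
sleep placement in "bug.py" are my guesses):

--

import multiprocessing
import threading
import time

def child(queue):
    queue.put("hello from the child process")
    time.sleep(1)  # give the consumer time to drain the queue before join()

def consume(queue):
    while True:
        print("Consumed:", queue.get())

if __name__ == "__main__":
    queue = multiprocessing.SimpleQueue()
    threading.Thread(target=consume, args=(queue,), daemon=True).start()

    i = 0
    while True:
        queue.put(i)
        p = multiprocessing.Process(target=child, args=(queue,))
        p.start()
        p.join()  # eventually hangs here
        print("Not hanging yet", i)
        i += 1

--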


I'm using Python 3.8.2 on Linux. Forcing the program to terminate with Ctrl+C 
(twice):

^CTraceback (most recent call last):
  File "bug.py", line 23, in 
p.join()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt

--
components: Library (Lib)
files: bug.py
messages: 367317
nosy: Delgan
priority: normal
severity: normal
status: open
title: Program hangs if process created right after adding object to a Queue
type: behavior
versions: Python 3.8
Added file: https://bugs.python.org/file49094/bug.py

___
Python tracker
<https://bugs.python.org/issue40399>



[issue38900] Add a glossary entry for "callable" objects

2019-11-23 Thread Delgan


Delgan added the comment:

I agree, it's straightforward. I just thought it could be useful to have a 
proper definition in the official documentation.

For example, this question on StackOverflow actually received many views: 
https://stackoverflow.com/questions/111234/what-is-a-callable

What do you think is the best documentation entry I can link to while 
mentioning a "callable" type? Just the builtin "callable()"? The 
`collections.abc.Callable` is interesting but does not look very appropriate 
for basic functions.

--

___
Python tracker
<https://bugs.python.org/issue38900>



[issue38900] Add a glossary entry for "callable" objects

2019-11-23 Thread Delgan


New submission from Delgan:

Hi.

A quick explanation of the use case: I would like to document my function, 
stating that it accepts any "callable" object, and link to a proper 
definition of such an object.

For example, I'm already doing this when my function accepts a "file object". 
There exists no such type I can link to, so I link to the glossary definition: 
https://docs.python.org/3/glossary.html#term-file-object

I could link to the "callable()" built-in, but that does not reflect well the 
expected *type* of the argument. For now, I define the argument as a "function" 
( https://docs.python.org/3/glossary.html#term-function ), but this is too 
restrictive.

The definition for "callable" would be pretty straightforward: just mention 
that it should implement "__call__", and link to "emulating callable objects" ( 
https://docs.python.org/3/reference/datamodel.html#emulating-callable-objects ) 
and / or the "callable()" builtin ( 
https://docs.python.org/3/library/functions.html#callable ).
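
For reference, the relationship between "__call__" and "callable()" fits in a 
few lines:

--

class Greeter:
    def __call__(self, name):
        return "Hello, " + name + "!"

def plain_function():
    pass

print(callable(plain_function))  # True: functions implement __call__
print(callable(Greeter))         # True: classes are callable (they build instances)
print(callable(Greeter()))       # True: instances with a __call__ method
print(callable(42))              # False: int instances define no __call__
print(Greeter()("world"))        # Hello, world!

--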

------
assignee: docs@python
components: Documentation
messages: 357366
nosy: Delgan, docs@python
priority: normal
severity: normal
status: open
title: Add a glossary entry for "callable" objects
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker
<https://bugs.python.org/issue38900>



[issue38762] Logging displays wrong "processName" if "sys.modules" is cleared in child process

2019-11-10 Thread Delgan


New submission from Delgan:

Hi.

In order to display the process name in logs, the "logging" module checks for 
the presence of "multiprocessing" in "sys.modules" before calling 
"current_process()". If "multiprocessing" is not found in "sys.modules", it 
assumes that the current process is the main one.

See: 
https://github.com/python/cpython/blob/af46450bb97ab9bd38748e75aa849c29fdd70028/Lib/logging/__init__.py#L340-L341

However, nothing prevents a child process from deleting 
"sys.modules['multiprocessing']", which causes the process name to be wrongly 
displayed as "MainProcess".

I attached a reproducible example, but this is straightforward to understand. 
Although it should not happen very often in practice, it is still theoretically 
possible. Obviously, one could say "just don't clear sys.modules", but I 
suppose there might exist tools doing such a thing for good reasons (like 
resetting the test environment).
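
Since the attachment is not inlined in the archive, here is a sketch of the 
same repro (my reconstruction, not the attached "test.py"):

--

import logging
import multiprocessing
import sys

def child():
    # Simulate a tool that resets the import state in the child process.
    del sys.modules["multiprocessing"]
    logging.basicConfig(format="%(processName)s: %(message)s")
    # Prints "MainProcess: who am I?" instead of "Child-1: who am I?".
    logging.warning("who am I?")

if __name__ == "__main__":
    p = multiprocessing.Process(target=child, name="Child-1")
    p.start()
    p.join()

--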

Issues which led to the current implementation:
- issue4301
- issue7120
- issue8200

Possible fixes: 
- Force importing "multiprocessing" in order to call "current_process()", even 
if it is not already loaded (see the sketch below)
- Add a function "os.main_pid()" and set "processName" to "MainProcess" only if 
"os.getpid() == os.main_pid()"

--
components: Library (Lib)
files: test.py
messages: 356338
nosy: Delgan
priority: normal
severity: normal
status: open
title: Logging displays wrong "processName" if "sys.modules" is cleared in 
child process
type: behavior
versions: Python 3.9
Added file: https://bugs.python.org/file48705/test.py

___
Python tracker
<https://bugs.python.org/issue38762>