On Thu, 2 Mar 2023 12:45:50 +1100, Chris Angelico
declaimed the following:
>
>As have all CPUs since; it's the only way to implement locks (push the
>locking all the way down to the CPU level).
>
Xerox Sigma (circa 1970): Modify and Test (byte/halfword/word)
Granted, that was a
On 2023-03-02, Chris Angelico wrote:
> On Thu, 2 Mar 2023 at 08:01, <2qdxy4rzwzuui...@potatochowder.com> wrote:
>> On 2023-03-01 at 14:35:35 -0500,
>> avi.e.gr...@gmail.com wrote:
>> > What would have happened if all processors had been required to have
>> > some low level instruction that effecti
On Thu, 2 Mar 2023 at 13:02, Weatherby,Gerard wrote:
>
> So I guess we know what would have happened.
>
Yep. It's not what I was talking about, but it's also a very important
concurrency management feature.
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
a more efficient threading
lock?)
On Thu, 2 Mar 2023 at 08:01, <2qdxy4rzwzuui...@potatochowder.com> wrote:
>
> On 2023-03-01 at 14:35:35 -0500,
> avi.e.gr...@gmail.
On Thu, 2 Mar 2023 at 08:01, <2qdxy4rzwzuui...@potatochowder.com> wrote:
>
> On 2023-03-01 at 14:35:35 -0500,
> avi.e.gr...@gmail.com wrote:
>
> > What would have happened if all processors had been required to have
> > some low level instruction that effectively did something in an atomic
> > way
On 2023-03-01 at 14:35:35 -0500,
avi.e.gr...@gmail.com wrote:
> What would have happened if all processors had been required to have
> some low level instruction that effectively did something in an atomic
> way that allowed a way for anyone using any language running on that
> machine a way to do
On Thu, 2 Mar 2023 at 06:37, wrote:
>
> If a workaround like itertools.count.__next__() is used because it will not
> be interrupted as it is implemented in C, then I have to ask if it would
> make sense for Python to supply something similar in the standard library
> for the sole purpose of a use
very directly, using the atomic operation.
-Original Message-
From: Python-list On
Behalf Of Dieter Maurer
Sent: Wednesday, March 1, 2023 1:43 PM
To: Chris Angelico
Cc: python-list@python.org
Subject: Lock-free ID generation (was: Is there a more efficient threading
lock?)
Chris
Chris Angelico wrote at 2023-3-1 12:58 +1100:
> ...
> The
>atomicity would be more useful in that context as it would give
>lock-free ID generation, which doesn't work in Python.
I have seen `itertools.count` for that.
This works because its `__next__` is implemented in "C" and
therefore will not
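That trick can be sketched as follows (an illustration, not code from the thread; the thread and ID counts are made up, and the uniqueness guarantee rests on the CPython implementation detail discussed above, namely that `count.__next__` runs as a single C call):

```python
import itertools
import threading

counter = itertools.count()   # shared, lock-free ID source
ids = []
ids_lock = threading.Lock()   # protects the results list, not the counter

def take_ids(n):
    # next(counter) is a single C-level call in CPython, so no two
    # threads ever receive the same value from it.
    got = [next(counter) for _ in range(n)]
    with ids_lock:
        ids.extend(got)

threads = [threading.Thread(target=take_ids, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread drew from one stream, so all 8000 IDs are distinct.
print(len(ids), len(set(ids)))
```

As the posters note, this is an implementation quirk rather than a documented guarantee, so it may not hold on other Python implementations.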
On Wed, 1 Mar 2023 at 10:04, Barry wrote:
>
> > Though it's still probably not as useful as you might hope. In C, if I
> > can do "int id = counter++;" atomically, it would guarantee me a new
> > ID that no other thread could ever have.
>
> C does not have to do that atomically. In fact it is free
> Though it's still probably not as useful as you might hope. In C, if I
> can do "int id = counter++;" atomically, it would guarantee me a new
> ID that no other thread could ever have.
C does not have to do that atomically. In fact it is free to use lots of
instructions to build the int value.
On Mon, 27 Feb 2023 at 17:28, Michael Speer wrote:
>
> https://github.com/python/cpython/commit/4958f5d69dd2bf86866c43491caf72f774ddec97
>
> it's a quirk of implementation. the scheduler currently only checks if it
> needs to release the gil after the POP_JUMP_IF_FALSE, POP_JUMP_IF_TRUE,
> JUMP_AB
release notes given a quick glance.
>
> I do agree that you shouldn't depend on this unless you find a written
> guarantee of the behavior, as it is likely an implementation quirk of some
> kind
>
> --[code]--
>
> import threading
>
> UPDATES = 1000
>
unless you find a written
guarantee of the behavior, as it is likely an implementation quirk of some
kind
--[code]--
import threading
UPDATES = 1000
THREADS = 256
vv = 0
def update_x_times( xx ):
    for _ in range( xx ):
        global vv
        vv += 1

def main():
    tts = []
On Mon, 27 Feb 2023 at 10:42, Jon Ribbens via Python-list
wrote:
>
> On 2023-02-26, Chris Angelico wrote:
> > On Sun, 26 Feb 2023 at 16:16, Jon Ribbens via Python-list
> > wrote:
> >> On 2023-02-25, Paul Rubin wrote:
> >> > The GIL is an evil thing, but it has been around for so long that most
>
> And yet, it appears that *something* changed between Python 2 and Python
3 such that it *is* atomic:
I haven't looked, but something to check in the source is opcode
prediction. It's possible that after the BINARY_OP executes, opcode
prediction jumps straight to the STORE_FAST opcode, avoiding t
> 2 2 LOAD_FAST_CHECK 0 (x)
> 4 LOAD_CONST 1 (1)
> 6 BINARY_OP 13 (+=)
> 10 STORE_FAST 0 (x)
> 12 LOAD_CONST 0 (None)
> 14 RETURN_VALUE
>>&
On 2023-02-26, Barry Scott wrote:
> On 25/02/2023 23:45, Jon Ribbens via Python-list wrote:
>> I think it is the case that x += 1 is atomic but foo.x += 1 is not.
>
> No that is not true, and has never been true.
>
>:>>> def x(a):
>:... a += 1
>:...
>:>>>
>:>>> dis.dis(x)
> 1 0 RESU
On 25/02/2023 23:45, Jon Ribbens via Python-list wrote:
I think it is the case that x += 1 is atomic but foo.x += 1 is not.
No that is not true, and has never been true.
:>>> def x(a):
:... a += 1
:...
:>>>
:>>> dis.dis(x)
1 0 RESUME 0
2 2 LOAD_FAST
the GIL during computation. Here's
a script that's quite capable of saturating my CPU entirely - in fact,
typing this email is glitchy due to lack of resources:
import threading
import bcrypt
results = [0, 0]
def thrd():
    for _ in range(10):
        ok = bcrypt.checkpw(b"pass
and foo.x+=1 would be if the
__iadd__ method both mutates and returns something other than self,
which is quite unusual. (Most incrementing is done by either
constructing a new object to return, or mutating the existing one, but
not a hybrid.)
Consider this:
import threading
d = {0:0, 1:0, 2:0, 3
y one thread will be
running at any moment regardless of CPU count.
Common wisdom is that Python threading works well for I/O bound
systems, where each thread spends most of its time waiting for some I/O
operation to complete -- thereby allowing the OS to schedule other threads.
For CP
On 2023-02-25, Paul Rubin wrote:
> Jon Ribbens writes:
>>> 1) you generally want to use RLock rather than Lock
>> Why?
>
> So that a thread that tries to acquire it twice doesn't block itself,
> etc. Look at the threading lib docs for more info.
Yes, I know wha
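The difference can be shown in a few lines (an illustrative sketch, not code from these posts): an `RLock` counts nested acquisitions by the owning thread, while a plain `Lock` would deadlock on the second acquire.

```python
import threading

rlock = threading.RLock()

def inner():
    with rlock:          # same thread acquires again; RLock just counts
        return "ok"

def outer():
    with rlock:          # first acquisition
        return inner()

print(outer())           # with a plain Lock, inner() would deadlock here

# A plain Lock shows the problem if probed non-blockingly:
plain = threading.Lock()
plain.acquire()
print(plain.acquire(blocking=False))  # False: same thread can't take it twice
plain.release()
```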
On 2023-02-25, Paul Rubin wrote:
> Skip Montanaro writes:
>> from threading import Lock
>
> 1) you generally want to use RLock rather than Lock
Why?
> 2) I have generally felt that using locks at the app level at all is an
> antipattern. The main way I've stayed san
Re sqlite and threads. The C API can be compiled to be thread safe, from my
reading of the sqlite docs. What I have not checked is how Python's bundled
sqlite is compiled. There are claims Python's sqlite is not thread safe.
Barry
On 2/25/2023 4:41 PM, Skip Montanaro wrote:
Thanks for the responses.
Peter wrote:
Which OS is this?
MacOS Ventura 13.1, M1 MacBook Pro (eight cores).
Thomas wrote:
> I'm no expert on locks, but you don't usually want to keep a lock while
> some long-running computation goes on. You wan
“I'm no expert on locks, but you don't usually want to keep a lock while
some long-running computation goes on. You want the computation to be
done by a separate thread, put its results somewhere, and then notify
the choreographing thread that the result is ready.”
Maybe. There are so many poss
Thanks for the responses.
Peter wrote:
> Which OS is this?
MacOS Ventura 13.1, M1 MacBook Pro (eight cores).
Thomas wrote:
> I'm no expert on locks, but you don't usually want to keep a lock while
> some long-running computation goes on. You want the computation to be
> done by a separate thr
On 2023-02-25 09:52:15 -0600, Skip Montanaro wrote:
> BLOB_LOCK = Lock()
>
> def get_terms(text):
>     with BLOB_LOCK:
>         phrases = TextBlob(text, np_extractor=EXTRACTOR).noun_phrases
>         for phrase in phrases:
>             yield phrase
>
> When I monitor the application using py-spy, that
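One subtlety with that shape of code (sketched here with a stand-in for the TextBlob call, which is not reproduced): because `yield` suspends the generator inside the `with` block, BLOB_LOCK stays held for as long as the caller iterates, not just for the extraction call. Materializing the phrases under the lock and yielding after it is released narrows the critical section:

```python
from threading import Lock

BLOB_LOCK = Lock()

def extract_phrases(text):
    # Stand-in for the non-thread-safe TextBlob/nltk extraction.
    return text.split()

def get_terms(text):
    # Hold the lock only for the unsafe call...
    with BLOB_LOCK:
        phrases = list(extract_phrases(text))
    # ...and yield with the lock already released.
    for phrase in phrases:
        yield phrase

print(list(get_terms("alpha beta gamma")))
```

With the original arrangement, a slow consumer of the generator keeps every other thread blocked on BLOB_LOCK in the meantime.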
On 2/25/2023 10:52 AM, Skip Montanaro wrote:
I have a multi-threaded program which calls out to a non-thread-safe
library (not mine) in a couple places. I guard against multiple
threads executing code there using threading.Lock. The code is
straightforward:
from threading import Lock
On 2023-02-25 09:52:15 -0600, Skip Montanaro wrote:
> I have a multi-threaded program which calls out to a non-thread-safe
> library (not mine) in a couple places. I guard against multiple
> threads executing code there using threading.Lock. The code is
> straightforward:
>
&
I have a multi-threaded program which calls out to a non-thread-safe
library (not mine) in a couple places. I guard against multiple
threads executing code there using threading.Lock. The code is
straightforward:
from threading import Lock
# Something in textblob and/or nltk doesn't play
probably don't need multi-processing just to utilize all your cores.
On the other hand you have some nicely separated task which can be
parallelized, so multi-threading should help (async probably would work
just as well or as badly as multi-threading but I find that harder to
understand so I would disc
batch processing large chunks of e-mail messages -
either send a message to the thread, or set a global variable to tell it
to end the run after the current process item has finished off, just in
case.
So, I think that for now, threading is probably the simplest to look into.
Later on, was also consid
On Sat, 7 Jan 2023 at 04:54, jacob kruger wrote:
>
> I am just trying to make up my mind with regards to what I should look
> into working with/making use of in terms of what have put in subject line?
>
>
> As in, if want to be able to trigger multiple/various threads/processes
> to run in the bac
On 2023-01-06 10:18:24 +0200, jacob kruger wrote:
> I am just trying to make up my mind with regards to what I should look into
> working with/making use of in terms of what have put in subject line?
>
>
> As in, if want to be able to trigger multiple/various threads/processes to
> run in the bac
interface, or via global variables, but, possibly while processing other
forms of user interaction via the normal/main process, what would be
recommended?
As in, for example, the following page mentions some possibilities, like
threading, asyncio, etc., but, without going into too much detail
Chris Angelico wrote:
> I'm still curious as to the workload (requests per second), as it might still
> be worth going for the feeder model. But if your current system works, then
> it may be simplest to debug that rather than change.
It is by all accounts a low-traffic situation, maybe one reques
On Sat, 26 Feb 2022 at 05:16, Robert Latest via Python-list
wrote:
>
> Chris Angelico wrote:
> > Depending on your database, this might be counter-productive. A
> > PostgreSQL database running on localhost, for instance, has its own
> > caching, and data transfers between two apps running on the s
Greg Ewing wrote:
> * If more than one thread calls get_data() during the initial
> cache filling, it looks like only one of them will wait for
> the thread -- the others will skip waiting altogether and
> immediately return None.
Right. But that needs to be dealt with somehow. No data is no data.
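One way to deal with it (an illustrative sketch, not Robert's actual class; the fetch callable and error handling are simplified) is to have the first caller start the refresh and every caller wait on an `Event` until the initial fill completes, so no thread ever returns None:

```python
import threading

class MyCache:
    def __init__(self, fetch):
        self._fetch = fetch                  # callable that hits the database
        self._data = None
        self._lock = threading.Lock()
        self._filled = threading.Event()     # set once the first fill is done
        self._filling = False

    def get_data(self):
        start_fill = False
        with self._lock:
            if not self._filled.is_set() and not self._filling:
                self._filling = True
                start_fill = True
        if start_fill:
            threading.Thread(target=self._fill, daemon=True).start()
        self._filled.wait()                  # every caller blocks until data exists
        return self._data

    def _fill(self):
        self._data = self._fetch()
        self._filled.set()

cache = MyCache(lambda: [1, 2, 3])
print(cache.get_data())                      # no caller ever sees None
```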
Chris Angelico wrote:
> Depending on your database, this might be counter-productive. A
> PostgreSQL database running on localhost, for instance, has its own
> caching, and data transfers between two apps running on the same
> computer can be pretty fast. The complexity you add in order to do
> you
On 25/02/22 1:08 am, Robert Latest wrote:
My question is: Is this a solid approach? Am I forgetting something?
I can see a few of problems:
* If more than one thread calls get_data() during the initial
cache filling, it looks like only one of them will wait for
the thread -- the others will sk
On Fri, 25 Feb 2022 at 06:54, Robert Latest via Python-list
wrote:
>
> I have a multi-threaded application (a web service) where several threads need
> data from an external database. That data is quite a lot, but it is almost
> always the same. Between incoming requests, timestamped records get a
that gets only
"topped up" with the most recent records on each request:
from threading import Lock, Thread

class MyCache():
    def __init__(self):
        self.cache = None
        self.cache_lock = Lock()

    def _update(self):
        n
Johannes Bauer wrote at 2021-12-6 00:50 +0100:
>I'm a bit confused. In my scenario I am mixing threading with
>multiprocessing. Threading by itself would be nice, but for GIL reasons
>I need both, unfortunately. I've encountered a weird situation in which
>multiprocessing
> On 5 Dec 2021, at 23:50, Johannes Bauer wrote:
>
> Hi there,
>
> I'm a bit confused. In my scenario I am mixing threading with
> multiprocessing. Threading by itself would be nice, but for GIL reasons
> I need both, unfortunately. I've encount
Am 06.12.21 um 13:56 schrieb Martin Di Paola:
> Hi!, in short your code should work.
>
> I think that the join-joined problem is just an interpretation problem.
>
> In pseudo code the background_thread function does:
>
> def background_thread()
> # bla
> print("join?")
> # bla
> print("j
threads that
you spawned (background_thread functions).
I hope that this can guide you to fix or at least narrow the issue.
Thanks,
Martin.
On Mon, Dec 06, 2021 at 12:50:11AM +0100, Johannes Bauer wrote:
Hi there,
I'm a bit confused. In my scenario I am mixing threading with
multiprocess
Hi there,
I'm a bit confused. In my scenario I am mixing threading with
multiprocessing. Threading by itself would be nice, but for GIL reasons
I need both, unfortunately. I've encountered a weird situation in which
multiprocessing Process()es which are started in a new thread don't
On Wed, Jun 23, 2021 at 5:34 AM Dan Stromberg wrote:
>
> I put together a little python runtime comparison, with an embarallel,
> cpu-heavy threading microbenchmark.
>
> It turns out that the performance-oriented Python implementations, Pypy3
> and Nuitka3, are both poor a
I put together a little python runtime comparison, with an embarrassingly parallel,
cpu-heavy threading microbenchmark.
It turns out that the performance-oriented Python implementations, Pypy3
and Nuitka3, are both poor at threading, as is CPython of course.
On the plus side for CPython, adding cpu-heavy
from multiprocessing import Process
> from threading import Thread
> from time import sleep
> import cv2
>
> def show(im, title, location):
>     cv2.startWindowThread()
>     cv2.namedWindow(title)
>     cv2.moveWindow(title, *location)
>     cv2.imshow(title, im)
>
On Sun, 30 Aug 2020 09:59:15 +0100
Barry Scott wrote:
>
>
> > On 29 Aug 2020, at 18:01, Dennis Lee Bieber
> > wrote:
> >
> > On Sat, 29 Aug 2020 18:24:10 +1000, John O'Hagan
> > declaimed the following:
> >
> >> There's no error without the sleep(1), nor if the Process is
> >> started befor
On 2020-08-30, Barry wrote:
>* The child process is created with a single thread—the one that
> called fork(). The entire virtual address space of the parent is
> replicated in the child, including the states of mutexes,
> condition variables, and other pthr
here is a memory sharing between those processes, what happens
> on one thread in the first process is totally independant of what
> happens in the copy of this thread in the other process.
>
> I'm not specialist on multi-threading in Python, but it should not
> change any
happens
on one thread in the first process is totally independant of what
happens in the copy of this thread in the other process.
I'm not specialist on multi-threading in Python, but it should not
change anything. Both processes (father and child) don't share the same
thread, each one has
> On 29 Aug 2020, at 18:01, Dennis Lee Bieber wrote:
>
> On Sat, 29 Aug 2020 18:24:10 +1000, John O'Hagan
> declaimed the following:
>
>> There's no error without the sleep(1), nor if the Process is started
>> before the Thread, nor if two Processes are used instead, nor if two
>> Threads ar
On Sat, 29 Aug 2020 13:01:12 -0400
Dennis Lee Bieber wrote:
> On Sat, 29 Aug 2020 18:24:10 +1000, John O'Hagan
> declaimed the following:
>
> >There's no error without the sleep(1), nor if the Process is started
> >before the Thread, nor if two Processes are used instead, nor if two
> >Threads
> On Aug 29, 2020, at 10:12 PM, Stephane Tougard via Python-list
> wrote:
>
> On 2020-08-29, Dennis Lee Bieber wrote:
>> Under Linux, multiprocessing creates processes using fork(). That means
>> that, for some fraction of time, you have TWO processes sharing the same
>> thread and all t
On Sun, Aug 30, 2020 at 4:01 PM Stephane Tougard via Python-list
wrote:
>
> On 2020-08-29, Dennis Lee Bieber wrote:
> > Under Linux, multiprocessing creates processes using fork(). That
> > means
> > that, for some fraction of time, you have TWO processes sharing the same
> > thread and al
On 2020-08-29, Dennis Lee Bieber wrote:
> Under Linux, multiprocessing creates processes using fork(). That means
> that, for some fraction of time, you have TWO processes sharing the same
> thread and all that entails (if it doesn't overlay the forked process with
> a new executable, they a
Dear list
Thanks to this list, I haven't needed to ask a question for
a very long time, but this one has me stumped.
Here's the minimal 3.8 code, on Debian testing:
-
from multiprocessing import Process
from threading import Thread
from time import sleep
import cv2
def show
use it in various
other calls
with SendMessageW and those methods works.
> Is there a SHOW arg that you need to pass?
No.
Here the minimal code, just in case one is interested.
import ctypes
from ctypes import wintypes
import platform
from threading import Thread
user32 = ctypes.WinDLL('u
> Barry
>>
>>> .
>>>
>>> Eren
>>>
>>>> Am Mi., 15. Apr. 2020 um 20:12 Uhr schrieb Barry Scott <
>>>> ba...@barrys-emacs.org>:
>>>>
>>>>
>>>>
>>>>>> On 15 Apr 2020,
everyone,
> >>>
> >>> the following happens on Windows7 x64 and Python37 x64
> >>>
> >>> I have a plugin DLL for a C++ application in which Python37 is
> embedded.
> >>> The plugin itself works, except when I want to use the threading
> modul
um 20:12 Uhr schrieb Barry Scott <
>> ba...@barrys-emacs.org>:
>>
>>
>>
>>>> On 15 Apr 2020, at 13:30, Eko palypse wrote:
>>>
>>> Hi everyone,
>>>
>>> the following happens on Windows7 x64 and Python37 x64
>>>
n DLL for a C++ application in which Python37 is embedded.
> > The plugin itself works, except when I want to use the threading module.
> >
> > If I start a Python script in my plugin which uses the threading module
> > I can verify via ProcessExplorer that the thread is
> On 15 Apr 2020, at 13:30, Eko palypse wrote:
>
> Hi everyone,
>
> the following happens on Windows7 x64 and Python37 x64
>
> I have a plugin DLL for a C++ application in which Python37 is embedded.
> The plugin itself works, except when I want to use the threading
Hi everyone,
the following happens on Windows7 x64 and Python37 x64
I have a plugin DLL for a C++ application in which Python37 is embedded.
The plugin itself works, except when I want to use the threading module.
If I start a Python script in my plugin which uses the threading module
I can
On 24Jan2020 21:08, Dennis Lee Bieber wrote:
My suggestion for your capacity thing: use a Semaphore, which is a
special thread safe counter which cannot go below zero.
from threading import Semaphore
def start_test(sem, args...):
    sem.acquire()
    ... do stuff with args
k in python2 in places for time being. Thanks.
import time
import datetime
import threading
import random
big_list = []
def date_stamp():
    return "[" + datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S:%f')[:-3] + "] "

for i in range(1, 5000):
    big_list.app
only 10 Threads.
print "Waiting on Threads..."
t.join()
This waits for only the last Thread.
My suggestion for your capacity thing: use a Semaphore, which is a
special thread safe counter which cannot go below zero.
from threading import Semaphore
def start_test(sem, args.
On Sat, Jan 25, 2020 at 9:05 AM Matt wrote:
>
> Created this example and it runs.
>
> import time
> import threading
>
> big_list = []
>
> for i in range(1, 200):
> big_list.append(i)
>
> def start_test():
>     while big_list: #is this
>
Created this example and it runs.
import time
import threading
big_list = []
for i in range(1, 200):
big_list.append(i)
def start_test():
    while big_list: #is this
        list_item = big_list.pop() #and this thread safe
        print list_item, port
        time.sleep(1)
    print
, for the other threads to see.
Would that be thread safe?
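A shape that is unambiguously thread safe (a sketch; the item and worker counts are made up) is to put the work on a `queue.Queue`, whose `get()` and `put()` are designed for multi-thread use, instead of popping a shared list:

```python
import queue
import threading

work = queue.Queue()
for item in range(50):
    work.put(item)

seen = []
seen_lock = threading.Lock()

def worker():
    while True:
        try:
            item = work.get_nowait()   # thread-safe pop; raises when drained
        except queue.Empty:
            return
        with seen_lock:
            seen.append(item)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every item is handled exactly once, with no duplicates or losses.
print(sorted(seen) == list(range(50)))
```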
On Fri, Jan 24, 2020 at 2:44 PM Chris Angelico wrote:
On Sat, Jan 25, 2020 at 7:35 AM Matt wrote:
>
> I am using this example for threading in Python:
>
> from threading import Thread
>
> def start_test( address,
thread safe?
On Fri, Jan 24, 2020 at 2:44 PM Chris Angelico wrote:
>
> On Sat, Jan 25, 2020 at 7:35 AM Matt wrote:
> >
> > I am using this example for threading in Python:
> >
> > from threading import Thread
> >
> > def start_test( address, port ):
&
On Sat, 25 Jan 2020 07:43:49 +1100
Chris Angelico wrote:
> On Sat, Jan 25, 2020 at 7:35 AM Matt wrote:
> >
> > I am using this example for threading in Python:
> >
> > from threading import Thread
> >
> > def start_test( address, port ):
> > pri
On Sat, Jan 25, 2020 at 7:35 AM Matt wrote:
>
> I am using this example for threading in Python:
>
> from threading import Thread
>
> def start_test( address, port ):
>     print address, port
>     sleep(1)
>
> for line in big_list:
>     t = Thread(
I am using this example for threading in Python:
from threading import Thread
def start_test( address, port ):
    print address, port
    sleep(1)

for line in big_list:
    t = Thread(target=start_test, args=(line, 80))
    t.start()
But say big_list has thousands of items and I only want to
I have a small script that goes down a list of domain names, does some
DNS lookups for santity checks, then if the checks look OK fetches
http://{domain}/ with requests.get() and looks at the text, if any,
returned.
When I run the checks in parallel with concurrent.futures, the script
inevitably h
thread kicked off, and the very first thread had got it up to 655,562
by the time the second thread had started and gotten to that print statement.
-Original Message-
From: Python-list On
Behalf Of ast
Sent: Wednesday, December 4, 2019 10:18 AM
To: python-list@python.org
Subject: threadin
:
import threading
x = 0
def test():
    global x
    for i in range(100):
        x += 1

threadings = []
for i in range(100):
    t = threading.Thread(target=test)
    threadings.append(t)
    t.start()
for t in threadings:
    t.join()
print(x)
10000
The result is always correct: 10000
Why
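Whatever the current interpreter happens to do with `x += 1`, the portable way to make that counter correct is an explicit lock (a sketch in the same shape as the script above, not dependent on any bytecode quirk):

```python
import threading

x = 0
x_lock = threading.Lock()

def test():
    global x
    for _ in range(100):
        with x_lock:      # makes the read-modify-write atomic by construction
            x += 1

threads = [threading.Thread(target=test) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(x)                  # 100 threads x 100 increments
```

This gives the right answer on any conforming Python implementation, GIL or not.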
On 5/29/19, Dennis Lee Bieber wrote:
>
> In the OP's example code, with just one thread started, the easiest
> solution is to use
>
> y.start()
> y.join()
>
> to block the main thread. That will, at least, let the try/except catch the
> interrupt. It does not, however, kill the s
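The advice above, keeping the main thread blocked in `join()` so it is alive to receive Ctrl-C, can be sketched as follows (names and the finite loop are illustrative; the original script looped forever):

```python
import threading
import time

done = False

def loop():
    global done
    for _ in range(3):        # stand-in for the real work
        time.sleep(0.01)
    done = True

t = threading.Thread(target=loop, daemon=True)
t.start()
try:
    # While the main thread sits in join(), a KeyboardInterrupt raised by
    # Ctrl-C lands here and falls through to the except clause.
    t.join()
    print("hi")
except KeyboardInterrupt:
    print("hi (interrupted)")
```

Making the worker a daemon thread also means the process can actually exit once the main thread is done, instead of hanging on a non-daemon loop.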
On 5/29/19, David Raymond wrote:
>
> Keyboard interrupts are only received by the main thread, which in this case
> completes real quick.
>
> So what happens for me is that the main thread runs to completion instantly
> and leaves nothing alive to receive the keyboard interrupt, which means the
>
.org] On Behalf Of
nihar Modi
Sent: Wednesday, May 29, 2019 4:39 AM
To: python-list@python.org
Subject: Threading Keyboard Interrupt issue
I have written a simple code that involves threading, but it does not go to
except clause after Keyboard interrupt. Can you suggest a way out. I have
pasted th
On Thu, May 30, 2019 at 1:45 AM nihar Modi wrote:
>
> I have written a simple code that involves threading, but it does not go to
> except clause after Keyboard interrupt. Can you suggest a way out. I have
> pasted the code below. It does not print 'hi' after keyboard inte
I have written a simple code that involves threading, but it does not go to
except clause after Keyboard interrupt. Can you suggest a way out. I have
pasted the code below. It does not print 'hi' after keyboard interrupt and
just stops.
import threading
def loop():
    while True:
        pr
On 2019-02-17 09:52, Prahallad Achar wrote:
Hello Friends,
I got an requirement as stated below,
1. main thread will be running and other event should run parallel
In more detail
One function will be keep dumping data and other event function should
trigger at some event but dumping data should
Hello Friends,
I got an requirement as stated below,
1. main thread will be running and other event should run parallel
In more detail
One function will be keep dumping data and other event function should
trigger at some event but dumping data should be keep running.
Sorry, I can not give any ex
Another way on unix that doesn't use signals:
import select, sys
print("Enter something: ", end = "")
sys.stdout.flush()
fds = select.select((0,), (), (), 5)
if fds[0] == [0]:
    data = sys.stdin.readline()
    print("You entered:", data)
else:
    print("Too late!")
--
Greg
I have some success with this. I am not sure if this would work longer term,
as in restarting it, but so far so good. Any issue with this new code?
import time
from threading import Thread
th=Thread()
class Answer(Thread):
    def run(self):
        a=input("What is your answer:")
The way I've done the "input with timeout" requirement the OP requested is
dependent on the operating system. The current implementation of the input
function doesn't offer that feature.
https://docs.python.org/3/library/functions.html#input
In another language, I used low-levelsystem calls to
David D wrote:
> Is there a SIMPLE method that I can have a TIMER count down at a user input
> prompt - if the user doesn't enter information within a 15 second period, it
> times out.
Does this do what you want?
from threading import Timer
import sys
import os
def run_later():
This works, but does not do exactly what I want. When the user enters in a
correct answer, the program and threading stops. Any ideas on what I should
change?
import time
from threading import Thread
class Answer(Thread):
    def run(self):
        a=input("What is your answer:")
This works, but does not do exactly what I want. What I want to happen is :
when the user enters in a correct answer, the program and threading stops. Any
ideas on what I should change?
import time
from threading import Thread
class Answer(Thread):
    def run(self):
        a=input
On Wed, Jul 4, 2018 at 12:05 AM, Marko Rauhamaa wrote:
> Gregory Ewing :
>
>> Robin Becker wrote:
>>> if I leave out the signal.signal(signal.SIGALRM,signal.SIG_IGN) then
>>> the timeout function gets called anyway.
>>
>> Yes, it needs some more stuff around it to make it useful. Probably
>> you a
Won't this code send a signal *regardless* of the user input to the process
within 15 seconds. I don't see how it's tied to terminal input.
From what I can tell, you need to create your own version of input with a
timeout option. This doesn't do that.
--
Michael Vilain
650-322-6755
> On 02-Jul
Gregory Ewing :
> Robin Becker wrote:
>> if I leave out the signal.signal(signal.SIGALRM,signal.SIG_IGN) then
>> the timeout function gets called anyway.
>
> Yes, it needs some more stuff around it to make it useful. Probably
> you also want the signal handler to raise an exception and catch it
>
Robin Becker wrote:
if I leave out the signal.signal(signal.SIGALRM,signal.SIG_IGN) then the
timeout function gets called anyway.
Yes, it needs some more stuff around it to make it useful.
Probably you also want the signal handler to raise an
exception and catch it somewhere rather than exiting
On 03/07/2018 07:12, Gregory Ewing wrote:
import signal, sys
def timeout(*args):
    print("Too late!")
    sys.exit(0)
signal.signal(signal.SIGALRM, timeout)
signal.setitimer(signal.ITIMER_REAL, 15)
data = input("Enter something: ")
print("You entered: ", data)
This doesn't work on Windows.