Then, you must put the initialization (dynamically loading the modules)
into the function executed in the foreign process.
You could wrap the payload function in a class instance to achieve this.
In the foreign process, you call the instance, which first performs
the initialization and then executes the payload.
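Something along these lines should work (a rough sketch; the module and function names are placeholders):

import importlib

class DeferredCall:
    # Callable wrapper: stores only plain strings/data so it pickles fine,
    # and does the dynamic import inside the child process.
    def __init__(self, module_name, func_name, *args, **kwargs):
        self.module_name = module_name
        self.func_name = func_name
        self.args = args
        self.kwargs = kwargs

    def __call__(self):
        # Runs in the foreign process: initialization (dynamic load) first,
        # then the payload itself.
        mod = importlib.import_module(self.module_name)
        func = getattr(mod, self.func_name)
        return func(*self.args, **self.kwargs)

# e.g. multiprocessing.Process(target=DeferredCall("myplugin", "sayhi")).start()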
Martin Di Paola wrote at 2022-3-6 20:42 +:
>>Try to use `fork` as "start method" (instead of "spawn").
>
>Yes but no. Indeed with `fork` there is no need to pickle anything. In
>particular the child process will be a copy of the parent so it will
>have all the modules loaded, including the dynamic ones.
In one Python console run the following:
import multiprocessing
import multiprocessing.reduction
import pickle
pickle.dumps(multiprocessing.reduction.ForkingPickler)
In a separate Python console run the following:
import pickle
import sys
'multiprocessing' in sys.modules
False
pickle.loads(data)   # 'data' stands for the bytes produced by pickle.dumps above
'multiprocessing' in sys.modules
True
> On 7 Mar 2022, at 02:33, Martin Di Paola wrote:
>
> Yes but I think that unpickle (pickle.loads()) does that plus
> importing any module needed
Are you sure that unpickle will import code? I thought it did not do that.
Barry
Yeup, that would be my first choice but the catch is that "sayhi" may
not be a function of the given module. It could be a static method of
some class or any other callable.
Ah, fair. Are you able to define it by a "path", where each step in
the path is a getattr() call?
Yes but I think th
that into account.
I agree that if the developer uses multiprocessing he/she needs to know
its implications. But if I can "smooth" any rough corner, I will try to
do it.
For example, the main project (developed by me) uses threads for
concurrency. It would be simpler to load the pl
On 7/03/22 9:36 am, Martin Di Paola wrote:
It *would* be my fault if multiprocessing.Process fails only because I'm
loading the code dynamically.
I'm not so sure about that. The author of the plugin knows they're
writing code that will be dynamically loaded, and can therefore
expect the kind of
On Mon, 7 Mar 2022 at 07:37, Martin Di Paola wrote:
>
>
>
> >
> >The way you've described it, it's a hack. Allow me to slightly redescribe it.
> >
> >modules = loader()
> >objs = init(modules)
> >
> >def invoke(mod, func):
> ># I'm assuming that the loader is smart enough to not load
> >#
Try to use `fork` as "start method" (instead of "spawn").
Yes but no. Indeed with `fork` there is no need to pickle anything. In
particular the child process will be a copy of the parent so it will
have all the modules loaded, including the dynamic ones. Perfect.
The problem is that `fork` is t
may
not be a function of the given module. It could be a static method of
some class or any other callable.
And doing the lookup by hand sounds complex.
The thing is that the use of multiprocessing is not something required by me
(by my plugin-engine), it was a decision of the developer of
Martin Di Paola wrote at 2022-3-6 12:42 +:
>Hi everyone. Some time ago I implemented a small plugin engine to load code
>dynamically.
>
>So far it worked well, but a few days ago a user told me that he wasn't
>able to run a piece of code in parallel on MacOS.
>
>He was using multiprocessing.Process
de, we can assume that a module is itself, no matter what; it
won't be a perfect clone of itself, it will actually be the same
module.
If you want to support multiprocessing, I would recommend
disconnecting yourself from the concept of loaded modules, and instead
identify the target by its mod
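Concretely, ship the "address" of the callable (module name plus an attribute path) to the child and re-import it there. A sketch, assuming the child process can import the plugin the same way the parent did (names are placeholders):

import importlib
from functools import reduce

def invoke(module_name, qualname, *args, **kwargs):
    # Runs in the child: import the plugin by name, then walk the dotted
    # attribute path (module attribute, class, static method, ...) with getattr.
    mod = importlib.import_module(module_name)
    target = reduce(getattr, qualname.split("."), mod)
    return target(*args, **kwargs)

# parent side, start-method agnostic:
# Process(target=invoke, args=("myplugin", "Greeter.sayhi")).start()

This also answers the getattr-path question above: each step in the dotted path is one getattr() call.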
Hi everyone. Some time ago I implemented a small plugin engine to load code
dynamically.
So far it worked well, but a few days ago a user told me that he wasn't
able to run a piece of code in parallel on MacOS.
He was using multiprocessing.Process to run the code, and on MacOS the
default start method is "spawn".
On Thu, 3 Feb 2022 at 13:32, Avi Gross via Python-list
wrote:
>
> Jen,
>
> I would not be shocked at incompatibilities in the system described making it
> hard to exchange anything, including text, but am not clear if there is a
> limitation of four bytes in what can be shared. For me, a charact
@python.org
Sent: Wed, Feb 2, 2022 1:27 pm
Subject: Re: Data unchanged when passing data to Python in multiprocessing
shared memory
An ASCII string will not work. If you convert 32894 to an ASCII string you
will have five bytes, but you need four. In my original post I showed the C
program I
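For the record, packing the value as a raw 4-byte integer is a one-liner with struct (the byte order chosen here is an assumption; use whatever the C side expects):

import struct

value = 32894

ascii_bytes = str(value).encode("ascii")   # b'32894' -> five bytes, wrong shape
packed = struct.pack("<I", value)          # b'\x7e\x80\x00\x00' -> four bytes, little-endian

print(len(ascii_bytes), len(packed))       # 5 4
print(int.from_bytes(packed, "little"))    # 32894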
> On 2 Feb 2022, at 18:19, Jen Kris via Python-list
> wrote:
>
> It's not clear to me from the struct module whether it can actually
> auto-detect endianness.
It is impossible to auto detect endian in the general case.
> I think it must be specified, just as I had to do with int.from_byte
On Wed, 2 Feb 2022 19:16:19 +0100 (CET), Jen Kris
declaimed the following:
>It's not clear to me from the struct module whether it can actually
>auto-detect endianness. I think it must be specified, just as I had to do
>with int.from_bytes(). In my case endianness was dictated by how the four
y is also not.
>
> -Original Message-
> From: Dennis Lee Bieber
> To: python-list@python.org
> Sent: Wed, Feb 2, 2022 12:30 am
> Subject: Re: Data unchanged when passing data to Python in multiprocessing
> shared memory
>
>
> On Wed, 2 Feb 2022 00:40:22 +0100
It's not clear to me from the struct module whether it can actually auto-detect
endianness. I think it must be specified, just as I had to do with
int.from_bytes(). In my case endianness was dictated by how the four bytes
were populated, starting with the zero bytes on the left.
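A quick illustration of why the byte order has to be stated explicitly (the four bytes here are hypothetical, zero bytes on the left as described):

import struct

raw = bytes([0x00, 0x00, 0x80, 0x7e])

print(int.from_bytes(raw, "big"))      # 32894
print(int.from_bytes(raw, "little"))   # 2122317824

# struct spells it with format prefixes: '>' big-endian, '<' little-endian
print(struct.unpack(">I", raw)[0])     # 32894
print(struct.unpack("<I", raw)[0])     # 2122317824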
Feb 1, 202
From: Dennis Lee Bieber
To: python-list@python.org
Sent: Wed, Feb 2, 2022 12:30 am
Subject: Re: Data unchanged when passing data to Python in multiprocessing
shared memory
On Wed, 2 Feb 2022 00:40:22 +0100 (CET), Jen Kris
declaimed the following:
>
> breakup = int.from_bytes(byte_va
On Wed, 2 Feb 2022 00:40:22 +0100 (CET), Jen Kris
declaimed the following:
>
> breakup = int.from_bytes(byte_val, "big")
>print("this is breakup " + str(breakup))
>
>Python prints: this is breakup 32894
>
>Note that I had to switch from little endian to big endian. Python is little
>endian by
> On 1 Feb 2022, at 23:40, Jen Kris wrote:
>
> Barry, thanks for your reply.
>
> On the theory that it is not yet possible to pass data from a non-Python
> language to Python with multiprocessing.shared_memory, I bypassed the problem
> by attaching 4 bytes to my FIFO pipe message from NASM
Barry, thanks for your reply.
On the theory that it is not yet possible to pass data from a non-Python
language to Python with multiprocessing.shared_memory, I bypassed the problem
by attaching 4 bytes to my FIFO pipe message from NASM to Python:
byte_val = v[10:14]
where v is the message re
> On 1 Feb 2022, at 20:26, Jen Kris via Python-list
> wrote:
>
> I am using multiprocessing.shared_memory to pass data between NASM and
> Python. The shared memory is created in NASM before Python is called.
> Python connects to the shm: shm_00 =
> shared_memory.SharedMemory(name='shm_
I am using multiprocessing.shared_memory to pass data between NASM and Python.
The shared memory is created in NASM before Python is called. Python connects
to the shm: shm_00 =
shared_memory.SharedMemory(name='shm_object_00',create=False).
I have used shared memory at other points in thi
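For anyone finding this thread later, the Python side of that attach-and-read step looks roughly like this (segment name and offsets are assumptions; the segment must already exist, created here by the NASM program):

from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(name='shm_object_00', create=False)

# Read four bytes from the buffer and interpret them as an unsigned int;
# the byte order must match whatever the writer used.
raw = bytes(shm.buf[0:4])
value = int.from_bytes(raw, "little")
print(value)

shm.close()   # detach without destroying the segment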
Johannes Bauer wrote at 2021-12-6 00:50 +0100:
>I'm a bit confused. In my scenario I am mixing threading with
>multiprocessing. Threading by itself would be nice, but for GIL reasons
>I need both, unfortunately. I've encountered a weird situation in which
>multiprocessing
> On 5 Dec 2021, at 23:50, Johannes Bauer wrote:
>
> Hi there,
>
> I'm a bit confused. In my scenario I am mixing threading with
> multiprocessing. Threading by itself would be nice, but for GIL reasons
> I need both, unfortunately. I've encount
Am 06.12.21 um 13:56 schrieb Martin Di Paola:
> Hi!, in short your code should work.
>
> I think that the join-joined problem is just an interpretation problem.
>
> In pseudo code the background_thread function does:
>
> def background_thread():
>     # bla
>     print("join?")
>     # bla
>     print("joined!")
threads that
you spawned (background_thread functions).
I hope that this can guide you to fix or at least narrow the issue.
Thanks,
Martin.
On Mon, Dec 06, 2021 at 12:50:11AM +0100, Johannes Bauer wrote:
Hi there,
I'm a bit confused. In my scenario I am mixing threading with
multiprocess
Hi there,
I'm a bit confused. In my scenario I am mixing threading with
multiprocessing. Threading by itself would be nice, but for GIL reasons
I need both, unfortunately. I've encountered a weird situation in which
multiprocessing Process()es which are started in a new thread don't
On 6/17/2021 5:02 PM, Michael Boom wrote:
The below issue is pretty serious and it is preventing me from using a system I
wrote on a larger scale. How do I get this bug fixed? Thanks.
https://bugs.python.org/issue43329
Reduce your code to the minimum needed to exhibit the problem. Then run
y
quickly and I couldn't immediately tell if the problem was a bug in
multiprocessing or a mistake in the code shown. Just figuring that out
would take more than the very little time I was prepared to spend looking
at it so I moved on. If the OP hopes that someone else will use their
limited ti
> Got exception , ConnectionResetError(10054,
> 'An existing connection was forcibly closed by the remote host', None,
> 10054, None)
> Reconnecting
> Got exception , ConnectionResetError(10054,
> 'An existing connection was forcibly closed by the remote host
No connection could be made because the
target machine actively refused it', None, 10061, None)
Reconnecting
Traceback (most recent call last):
File
"C:\Users\Alexander\AppData\Local\Programs\Python\Python37\lib\multiprocessing\connection.py",
line 619, in SocketClient
s.connect(address
The below issue is pretty serious and it is preventing me from using a system I
wrote on a larger scale. How do I get this bug fixed? Thanks.
https://bugs.python.org/issue43329
On Sat, Aug 29, 2020 at 06:24:10PM +1000, John O'Hagan wrote:
> Dear list
>
> Thanks to this list, I haven't needed to ask a question for
> a very long time, but this one has me stumped.
>
> Here's the minimal 3.8 code, on Debian testing:
>
> -
>
rror without the sleep(1), nor if the Process is
> >> started before the Thread, nor if two Processes are used instead,
> >> nor if two Threads are used instead. IOW the error only occurs if
> >> a Thread is started first, and a Process is started a little later.
> >>
On 2020-08-30, Barry wrote:
>* The child process is created with a single thread—the one that
> called fork(). The entire virtual address space of the parent is
> replicated in the child, including the states of mutexes,
> condition variables, and other pthr
> On 30 Aug 2020, at 11:03, Stephane Tougard via Python-list
> wrote:
>
> On 2020-08-30, Chris Angelico wrote:
>>> I'm not even sure that makes sense; how can 2 processes share a thread?
>>>
>> They can't. However, they can share a Thread object, which is the
>> Python representation of a threa
On 2020-08-30, Chris Angelico wrote:
>> I'm not even sure that makes sense; how can 2 processes share a thread?
>>
> They can't. However, they can share a Thread object, which is the
> Python representation of a thread. That can lead to confusion, and
> possibly the OP's error (I don't know for sure,
e used instead, nor if two
>> Threads are used instead. IOW the error only occurs if a Thread is
>> started first, and a Process is started a little later.
>>
>> Any ideas what might be causing the error?
>>
>
> Under Linux, multiprocessing creates proce
instead, nor if two
> >Threads are used instead. IOW the error only occurs if a Thread is
> >started first, and a Process is started a little later.
> >
> >Any ideas what might be causing the error?
> >
>
> Under Linux, multiprocessing creates proces
> On Aug 29, 2020, at 10:12 PM, Stephane Tougard via Python-list
> wrote:
>
> On 2020-08-29, Dennis Lee Bieber wrote:
>> Under Linux, multiprocessing creates processes using fork(). That means
>> that, for some fraction of time, you have TWO processes sharing th
On Sun, Aug 30, 2020 at 4:01 PM Stephane Tougard via Python-list
wrote:
>
> On 2020-08-29, Dennis Lee Bieber wrote:
> > Under Linux, multiprocessing creates processes using fork(). That
> > means
> > that, for some fraction of time, you have TWO processes sharing
On 2020-08-29, Dennis Lee Bieber wrote:
> Under Linux, multiprocessing creates processes using fork(). That means
> that, for some fraction of time, you have TWO processes sharing the same
> thread and all that entails (if it doesn't overlay the forked process with
> a new
Dear list
Thanks to this list, I haven't needed to ask a question for
a very long time, but this one has me stumped.
Here's the minimal 3.8 code, on Debian testing:
-
from multiprocessing import Process
from threading import Thread
from time import sleep
import cv2
def show
use in
the subprocess through args. That would include the Pipe connection.
Using multiprocessing in Linux requires the reference names to be global,
however the use of args is not required. Finally, Linux does not appear to
cause any problems if args are specified.
2. Even if you fix problem
On Monday, May 4, 2020 at 4:09:53 PM UTC-7, Terry Reedy wrote:
> On 5/4/2020 3:26 PM, John Ladasky wrote:
> > Several years ago I built an application using multiprocessing. It only
> > needed to work in Linux. I got it working fine. At the time,
> > concurrent.
On 5/4/2020 3:26 PM, John Ladasky wrote:
Several years ago I built an application using multiprocessing. It only needed
to work in Linux. I got it working fine. At the time, concurrent.futures did
not exist.
My current project is an application which includes a PyQt5 GUI, and a live
video
Several years ago I built an application using multiprocessing. It only needed
to work in Linux. I got it working fine. At the time, concurrent.futures did
not exist.
My current project is an application which includes a PyQt5 GUI, and a live
video feed with some real-time image processing
When using sympy's rubi_integrate on my quad-core
computer, I find that the first CPU is being used at 100%,
whilst the other three sit at around 1% to 2%.
I'm wondering if you have some code to overcome this limitation.
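If the work splits into several independent integrals, a multiprocessing.Pool can spread them over the cores; a single monolithic call will still sit on one CPU. A rough sketch using sympy.integrate as a stand-in for rubi_integrate (swap in whichever integrator you need):

import multiprocessing
import sympy

x = sympy.symbols('x')

def integrate_one(expr):
    # one independent integration per worker process
    return sympy.integrate(expr, x)

if __name__ == '__main__':
    exprs = [sympy.sin(x) * x, sympy.exp(x) * x**2, sympy.cos(x)**3, 1 / (1 + x**2)]
    with multiprocessing.Pool() as pool:    # one worker per CPU by default
        for e, r in zip(exprs, pool.map(integrate_one, exprs)):
            print(e, '->', r)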
forcing it back to using fork does
indeed “fix” the issue. Of course, as is noted there, the fork start method
should be considered unsafe, so I guess I get to re-architect everything I do
using multiprocessing that relies on data-sharing between processes. The Queue
example was just a mini
alize the Queue
>mp_comm_queue = mp.Queue()
>
>#Set up a pool to process a bunch of stuff in parallel
>pool = mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
>...
>
>
Gotcha, thanks. I’ll look more into that initializer argument and see how I can
lev
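For reference, the initializer pattern being discussed looks roughly like this (a sketch; pool_init and mp_comm_queue are the names used in the thread):

import multiprocessing as mp

mp_comm_queue = None   # filled in inside each worker by pool_init

def pool_init(q):
    # Runs once in every worker right after it starts, so the queue is
    # available there even under the "spawn" start method.
    global mp_comm_queue
    mp_comm_queue = q

def work(item):
    mp_comm_queue.put(item * 2)
    return item

if __name__ == '__main__':
    mp_comm_queue = mp.Queue()
    with mp.Pool(initializer=pool_init, initargs=(mp_comm_queue,)) as pool:
        pool.map(work, range(5))
    for _ in range(5):
        print(mp_comm_queue.get())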
= mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
...
-Original Message-
From: David Raymond
Sent: Monday, April 6, 2020 4:19 PM
To: python-list@python.org
Subject: RE: Multiprocessing queue sharing and python3.8
Attempting reply as much for my own understanding.
Are you on
se the function that the Pool is executing finishes so
quickly.
Add a little extra info to the print calls (and/or set up logging to stdout
with the process name/id included) and you can see some of this. Here's the
hacked together changes I did for that.
import multiprocessing as mp
import
Under python 3.7 (and all previous versions I have used), the following code
works properly, and produces the expected output:
import multiprocessing as mp
mp_comm_queue = None #Will be initialized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initialized as well
def
On 05Feb2020 15:48, Israel Brewster wrote:
In a number of places I have constructs where I launch several
processes using the multiprocessing library, then loop through said
processes calling join() on each one to wait until they are all
complete. In general, this works well, with the
In a number of places I have constructs where I launch several processes using
the multiprocessing library, then loop through said processes calling join() on
each one to wait until they are all complete. In general, this works well, with
the *apparent* exception of if something causes one of
answered here https://www.reddit.com/r/Python/comments/dxhgec/how_does_multiprocessing_convert_a_methodrun_in/
basically starts two PVMs - the whole fork, check 'pid' trick.. one
process continues as the main thread and the other calls 'run'
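On POSIX the core of that trick is just fork() plus a pid check; a minimal sketch (multiprocessing wraps this in a lot more machinery):

import os

def run():
    print("child running in pid", os.getpid())

pid = os.fork()            # one process becomes two
if pid == 0:
    run()                  # pid == 0 only in the child
    os._exit(0)
else:
    os.waitpid(pid, 0)     # the parent keeps going and waits for the child
    print("parent", os.getpid(), "saw child", pid, "finish")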
https://pymotw.com/2/multiprocessing/basics.html
https://pymotw.com/2/threading/
I didn't follow this
1.
>The logger can also be configured through the logging configuration file
>API, using the name multiprocessing.
and
2.
>it is also possible to use a custom subc
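Point 1 just means the library logs through an ordinary logger named "multiprocessing", so the usual logging machinery applies; the module also ships two helpers. A minimal sketch:

import logging
import multiprocessing

def work(n):
    return n * n

if __name__ == '__main__':
    # log_to_stderr() attaches a stderr handler to the "multiprocessing" logger
    # (the same one multiprocessing.get_logger() returns).
    logger = multiprocessing.log_to_stderr()
    logger.setLevel(logging.DEBUG)

    with multiprocessing.Pool(2) as pool:
        print(pool.map(work, range(4)))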
On Sat, Jul 06, 2019 at 04:54:42PM +1000, Chris Angelico wrote:
> But if I comment out the signal.signal line, there seem to be no ill
> effects. I suspect that what you're seeing here is the multiprocessing
> module managing its own subprocesses, telling some of them to shut
>
On Sat, Jul 6, 2019 at 12:13 AM José María Mateos wrote:
>
> Hi,
>
> This is a minimal proof of concept for something that has been bugging me for
> a few days:
>
> ```
> $ cat signal_multiprocessing_poc.py
>
> import random
> import multiprocessing
>
xpected signal 15!
> ...
"multiprocessing.util" may register an exit function which calls
"terminate" which signals SIGTERM. There is also an "os.kill(..., SIGTERM)"
in "multiprocessing.popen_fork".
I would put some print at those places to determine if the SIGTERM
comes from "multiprocessing".
Hi,
This is a minimal proof of concept for something that has been bugging me for a
few days:
```
$ cat signal_multiprocessing_poc.py
import random
import multiprocessing
import signal
import time
def signal_handler(signum, frame):
    raise Exception(f"Unexpected signal {signum}!")
On 03/07/2019 18.37, Israel Brewster wrote:
> I have a script that benefits greatly from multiprocessing (it’s generating a
> bunch of images from data). Of course, as expected each process uses a chunk
> of memory, and the more processes there are, the more memory used. The amount
&
On 2019-07-03 08:37:50 -0800, Israel Brewster wrote:
> 1) Determine the total amount of RAM in the machine (how?), assume an
> average of 10GB per process, and only launch as many processes as
> calculated to fit. Easy, but would run the risk of under-utilizing the
> processing capabilities and tak
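For the "how?" in option 1, total physical RAM can be read via os.sysconf on Linux and macOS and turned into a worker count; a sketch, with the 10 GB-per-process figure from the post taken as the assumption:

import os
import multiprocessing

total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")  # bytes

per_process = 10 * 1024**3                     # assumed ~10 GB per worker
by_memory = max(1, total_ram // per_process)
by_cpu = multiprocessing.cpu_count()

n_workers = min(by_memory, by_cpu)
print(f"{total_ram / 1024**3:.1f} GB RAM -> at most {by_memory} workers; "
      f"{by_cpu} CPUs -> using {n_workers}")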
On 7/3/19 9:37 AM, ijbrews...@alaska.edu wrote:
I have a script that benefits greatly from multiprocessing (it’s generating a
bunch of images from data). Of course, as expected each process uses a chunk of
memory, and the more processes there are, the more memory used. The amount used
per
I have a script that benefits greatly from multiprocessing (it’s generating a
bunch of images from data). Of course, as expected each process uses a chunk of
memory, and the more processes there are, the more memory used. The amount used
per process can vary from around 3 GB (yes, gigabytes) to
With multiprocessing you can take advantage of multi-core processing as it
launches a separate python interpreter process and communicates with it via
shared memory (at least on windows). The big advantage of multiprocessing
module is that the interaction between processes is much richer than
Re: " My understanding (so far) is that the tradeoff of using multiprocessing
is that my manager script can not exit until all the work processes it starts
finish. If one of the worker scripts locks up, this could be problematic. Is
there a way to use multiprocessing where processe
ipts may end
> up running in parallel. There are no dependencies between individual worker
> scripts. I'm looking for the pros and cons of using multiprocessing or
> subprocess to launch these worker scripts. Looking for a solution that works
> across Windows and Linux. Open to usin
individual worker
scripts. I'm looking for the pros and cons of using multiprocessing or
subprocess to launch these worker scripts. Looking for a solution that works
across Windows and Linux. Open to using a 3rd party library. Hoping to avoid
the use of yet another system component like Cele
y split the list up into chunks
and process each chunk in parallel on a separate core. To that end, I
created a multiprocessing pool:
I recall a similar discussion when folk were being encouraged to move
away from monolithic and straight-line processing to modular functions
- it is more (CPU-
imply split the list up into chunks
>> and process each chunk in parallel on a separate core. To that end, I
>> created a multiprocessing pool:
>
>
> I recall a similar discussion when folk were being encouraged to move away
> from monolithic and straight-line processing t
x(). This
> > takes about 10 seconds to run.
> >
> > Looking at this, I am thinking it would lend itself well to
> > parallelization. Since the box at each “coordinate" is independent of all
> > others, it seems I should be able to simply split the list up in
f all
others, it seems I should be able to simply split the list up into chunks
and process each chunk in parallel on a separate core. To that end, I
created a multiprocessing pool:
I recall a similar discussion when folk were being encouraged to move
away from monolithic and straight-line proce
split the list up into chunks
and process each chunk in parallel on a separate core. To that end, I
created a multiprocessing pool:
pool = multiprocessing.Pool()
And then called pool.map() rather than just “map”. Somewhat to my surprise,
the execution time was virtually identical. Given the simplici
> On Feb 18, 2019, at 6:37 PM, Ben Finney wrote:
>
> I don't have anything to add regarding your experiments with
> multiprocessing, but:
>
> Israel Brewster writes:
>
>> Which creates and populates an 800x1000 “grid” (represented as a flat
>> list at
I don't have anything to add regarding your experiments with
multiprocessing, but:
Israel Brewster writes:
> Which creates and populates an 800x1000 “grid” (represented as a flat
> list at this point) of “boxes”, where a box is a
> shapely.geometry.box(). This takes about 10
f all others, it seems I
should be able to simply split the list up into chunks and process each chunk
in parallel on a separate core. To that end, I created a multiprocessing pool:
pool = multiprocessing.Pool()
And then called pool.map() rather than just “map”. Somewhat to my surprise
On Friday, 10 August 2018 at 15:35:46 UTC+2, Niels Kristian Jensen wrote:
> Please refer to:
>
(cut)
It appears that Python is simply not supported on Cygwin (!):
https://bugs.python.org/issue30563
Best regards,
Niels Kristian
On Friday, August 10, 2018 at 2:28:45 AM UTC-4, Léo El Amri wrote:
> That may be something simple: Did you actually protected the entry-point
> of your Python script with if __name__ == '__main__': ?
That was my first thought too; the script technically doesn't have top-level
code, so I figured I
or not:
adminnkj@DTDKCPHAS1060 ~
$ python3 -V
Python 3.6.4
adminnkj@DTDKCPHAS1060 ~
$ cat test.py
from multiprocessing import Pool
def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
adminnkj@DTDKCPHAS1060 ~
$ python3 test.py
--->
On 09/08/2018 19:33, Apple wrote:> So my program runs one script file,
and multiprocessing commands from that script file seem to fail to spawn
new processes.
>
> However, if that script file calls a function in a separate script file that
> it has imported, and that fu
On 2018-03-21 09:27:37 -0400, Larry Martell wrote:
> Yeah, I saw that and I wasn't trying to reinvent the wheel. On this
> page https://docs.python.org/2/library/multiprocessing.html it says
> this:
>
> The multiprocessing package offers both local and remote concurrency
Multiprocessing, not multithreading. Different processes. This is pretty
easy to do.
I have done this from a Python script to run an analysis program on many sets
of data, at once. To do it: 1) if there is going to be an output file, each
output file must have a distinct name. 2) To use
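A sketch of that pattern (the external tool, file names and worker count here are made up for illustration):

import multiprocessing
import subprocess

def run_analysis(dataset):
    out = dataset + ".result"                 # point 1: a distinct output file per run
    subprocess.run(["./analysis_tool", dataset, "-o", out], check=True)
    return out

if __name__ == '__main__':
    datasets = ["set_a.dat", "set_b.dat", "set_c.dat"]
    with multiprocessing.Pool(processes=3) as pool:
        print(pool.map(run_analysis, datasets))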
Hello,
I am new to the logging module. I want to use the logging module with multiprocessing.
Can anyone help me understand how I can do it? Any help would be appreciated.
Thank you.
1st sequence (script_1 to 3), master will execute the
> 2nd sequence (script_4 to 6).
>
> Each child script will be calling a multiprocessing function to process a
> task.
I could ask what motivates this convoluted-sounding structure...
>
> [snip]
>
>
> I like to know how
), master will execute the
2nd sequence (script_4 to 6).
Each child script will be calling a multiprocessing function to process a
task.
Master script is like,
for seq in seqs_to_launch:
    for script in seq:
        script().execute(data)
Each child script is like,
import multi_process_update
On Tue, Mar 20, 2018 at 11:15 PM, Steven D'Aprano
wrote:
> On Wed, 21 Mar 2018 02:20:16 +, Larry Martell wrote:
>
>> Is there a way to use the multiprocessing lib to run a job on a remote
>> host?
>
> Don't try to re-invent the wheel. This is a solved prob
On Wed, 21 Mar 2018 02:20:16 +, Larry Martell wrote:
> Is there a way to use the multiprocessing lib to run a job on a remote
> host?
Don't try to re-invent the wheel. This is a solved problem.
https://stackoverflow.com/questions/1879971/what-is-the-current-choice-for-doing-rp
Is there a way to use the multiprocessing lib to run a job on a remote
host?
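Within the stdlib itself, multiprocessing.managers can at least coordinate work over the network, though the code to run must already exist on the remote host. A rough sketch (address, port and authkey are made up):

# server.py -- run on the remote host; a worker loop there would consume get_tasks()
from multiprocessing.managers import BaseManager
import queue

task_queue = queue.Queue()
result_queue = queue.Queue()

class JobManager(BaseManager):
    pass

JobManager.register("get_tasks", callable=lambda: task_queue)
JobManager.register("get_results", callable=lambda: result_queue)

if __name__ == '__main__':
    mgr = JobManager(address=("", 50000), authkey=b"secret")
    mgr.get_server().serve_forever()

# client.py -- run anywhere that can reach the host
from multiprocessing.managers import BaseManager

class JobManager(BaseManager):
    pass

JobManager.register("get_tasks")
JobManager.register("get_results")

if __name__ == '__main__':
    mgr = JobManager(address=("remote.host.example", 50000), authkey=b"secret")
    mgr.connect()
    mgr.get_tasks().put({"job": "demo"})   # enqueue work for the remote side
    print(mgr.get_results().get())         # blocks until the remote worker answers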
I am executing custom shell-like commands on multiple Linux hosts, and if one of the
commands fails on a host I want that process not to proceed. If the remote command
throws an error I am logging it, but the process goes on to the next command; but if I
terminate the command, the process will t
On Tue, Jan 30, 2018 at 7:26 PM, Terry Reedy wrote:
> On 1/30/2018 10:54 AM, Nicholas Cole wrote:
>
>> I have a strange problem on python 3.6.1
>
> [involving multiprocessing]
Interestingly it seems to have been a very subtle circular import
problem that was showing up only
On 1/30/2018 10:54 AM, Nicholas Cole wrote:
I have a strange problem on python 3.6.1
[involving multiprocessing]
I think the first thing you should do is upgrade to 3.6.4 to get all the
bugfixes since 3.6.1. I am pretty sure there have been some for
multiprocessing itself. *Then* see if you
not importing a single file but a subpackage __init__ file.
That __init__ file does not have its own __all__ statement, but seems
to just be relying on importing from different files in the
subpackage.
Could that be the problem? Even so, I'm unsure why it is showing up
only when used in multipro
*simplified* demonstration? A minimal sample program
> which we can run that demonstrates the issue?
[snip]
I find it extremely odd.
File A:
the multiprocessing code and the map function.
file B: a set of library functions called by the function called in file A.
file C: included in file B by us
On Tue, 30 Jan 2018 15:54:30 +, Nicholas Cole wrote:
[...]
> The function I am passing to map calls a function in another file within
> the same model. And that file has a
>
> from .some_file_in_the_package import *
>
> line at the top.
>
> However, in each function called in that file, I
Dear List,
I have a strange problem on python 3.6.1
I am using the multiprocessing function to parallelize an expensive
operation, using the multiprocessing.Pool() and Pool.map() functions.
The function I am passing to map calls a function in another file
within the same model. And that file
multiprocessing will never help, since the
operation is too fast with respect to the overhead involved in multiprocessing.
In that case just give up and think about ways of changing the original problem.