I still don't get it. shm_unlink() works the same way unlink() does.
The resource itself doesn't cease to exist until all open file handles
are closed. From the shm_unlink() man page on Linux:
The operation of shm_unlink() is analogous to unlink(2): it
removes a shared memory
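To make the analogy concrete, here is a minimal sketch using a regular file and os.unlink(), whose semantics shm_unlink() mirrors for shared memory objects: the name disappears immediately, but the data stays readable through any handle that is still open. (The file and its contents here are made up for illustration.)

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w+") as f:
    f.write("still here")
    f.flush()
    os.unlink(path)                    # the name is gone at once...
    assert not os.path.exists(path)
    f.seek(0)
    print(f.read())                    # ...but the open handle still works
```

This prints "still here": the data is only deallocated when the last open descriptor is closed, which is exactly the point being made about shm_unlink().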
As I wrote, I found many nice things (Pipe, Manager and so on), but
actually even
this seems to work: yes, I did read the documentation.
Sorry, I did not want to be offensive.
I was just surprised that it worked better than I expected even
without Pipes and Queues, but now I understand why..
Anyway now I would like to be able to detach subprocesses to avoid the
nasty code reloading that I was talking about in another
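For the record, the classic way to detach on Unix is the double fork. A sketch, assuming a POSIX system; the detach() name and return convention are mine, not from the thread, and error handling is omitted:

```python
import os

def detach():
    """Detach via the classic Unix double fork (sketch only).

    Returns True in the fully detached grandchild, False in the
    original parent.
    """
    if os.fork() > 0:
        return False          # original parent keeps running
    os.setsid()               # new session: no controlling terminal
    if os.fork() > 0:
        os._exit(0)           # intermediate child exits immediately
    return True               # grandchild, now re-parented to init
```

The grandchild survives the original parent, which is what lets it outlive a code reload of the main process.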
Thanks. In theory there is another thing able to interact with running
processes:
https://github.com/lmacken/pyrasite
I don't know whether it's a good idea to use a similar approach in
production code, though; as far as I understood, it uses gdb. In
theory, though, I could set
2012/8/1 Laszlo Nagy gand...@shopzeus.com:
One thing is sure: os.fork() doesn't work under Microsoft Windows. Under
Unix, I'm not sure if os.fork() can be mixed with
multiprocessing.Process.start(). I could not find official documentation on
that. This must be tested on your actual platform.
Yes, I know; we don't care about Windows for this particular project.
I think mixing multiprocessing and fork should do no harm, but it's
probably unnecessary: after the fork I'm already in another process,
so I can just make it run what I want.
Otherwise, is there a way to do the same thing only
In article mailman.2809.1343809166.4697.python-l...@python.org,
Laszlo Nagy gand...@shopzeus.com wrote:
Yes, I think that is correct. Instead of detaching a child process, you
can create independent processes and use other frameworks for IPC. For
example, Pyro. It is not as effective as
The most effective IPC is usually through shared memory. But there is no
OS-independent standard Python module that can communicate over shared
memory.
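To be fair, the standard multiprocessing module does expose shared-memory-backed objects between *related* processes (shared ctypes via Value and Array); what is missing is a portable way for unrelated processes to attach to the same segment. A minimal sketch, with illustrative names:

```python
from multiprocessing import Process, Value, Array

def worker(total, squares):
    # Runs in the child; both objects live in shared memory, so the
    # parent sees the mutations without any serialization.
    with total.get_lock():
        total.value += 10
    for i in range(len(squares)):
        squares[i] = i * i

if __name__ == "__main__":
    total = Value("i", 0)        # shared int
    squares = Array("i", 5)      # shared int array, zero-initialized
    p = Process(target=worker, args=(total, squares))
    p.start()
    p.join()
    print(total.value, list(squares))   # 10 [0, 1, 4, 9, 16]
```

Because the child must inherit (or be handed) the objects at creation time, this does not help with a process that is not your child, which is the case discussed here.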
It's true that shared memory is faster than serializing objects over a
TCP connection. On the other hand, it's hard to imagine anything
things get more tricky, because I can't use queues and pipes to
communicate with a running process that is not my child, correct?
Yes, I think that is correct.
I don't understand why detaching a child process on Linux/Unix would
make IPC stop working. Can somebody explain?
It is implemented with shared memory. I think (although I'm not 100%
sure) that shared memory is created *and freed up* (shm_unlink()
system call)
2012/8/1 Laszlo Nagy gand...@shopzeus.com:
So detaching the child process will not make IPC stop working. But exiting
from the original parent process will. (And why else would you detach the
child?)
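One way around the "not my child" limitation, still within the standard library, is multiprocessing.connection: a Listener/Client pair speaks over a socket, so the two ends need not be related at all. A sketch; the authkey is made up, and for brevity the "other end" here is just a thread rather than a separate program:

```python
import threading
from multiprocessing.connection import Client, Listener

AUTHKEY = b"not-a-real-secret"   # illustrative only

# Port 0 lets the OS pick any free port; listener.address reports it.
listener = Listener(("localhost", 0), authkey=AUTHKEY)

def serve():
    with listener.accept() as conn:   # blocks until a client connects
        conn.send(conn.recv() * 2)    # echo back, doubled

t = threading.Thread(target=serve)
t.start()

# The client could just as well be a completely unrelated process that
# merely knows the address and the authkey.
with Client(listener.address, authkey=AUTHKEY) as conn:
    conn.send(21)
    print(conn.recv())                # 42

t.join()
listener.close()
```

Objects are pickled over the connection, so it is slower than shared memory, but it keeps the familiar send()/recv() style without requiring a parent/child relationship.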
Well it makes perfect sense if it
On Aug 1, 2012, at 9:25 AM, andrea crotti wrote:
[beanstalk] does look nice, and I would like to have something like that.
But since I have to convince my boss of another external dependency, I
think it might be worth trying out zeromq instead, which can also do
similar things and looks more
def procs():
    mp = MyProcess()
    mp.add([1, 2, 3])
    mp.start()
    mp.add([2, 3, 4])
    # with the join we are actually waiting for the end of the running time
    mp.join()
    print(mp)
I think I got it now: if I mix a start() in before another add(), the
code inside Process.run won't see data that was added after the start.
So this way is perfectly safe only until the process is launched; if
it's already running, I need to use some
multiprocessing-aware data
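A minimal sketch of that multiprocessing-aware route, using a Queue to hand data to a process that is already running. The worker function and the sentinel convention are illustrative stand-ins, not MyProcess from the thread:

```python
from multiprocessing import Process, Queue

def worker(q, results):
    # Runs in the child; keeps consuming until it sees the sentinel.
    while True:
        item = q.get()
        if item is None:          # sentinel: no more data
            break
        results.put(sum(item))

if __name__ == "__main__":
    q, results = Queue(), Queue()
    p = Process(target=worker, args=(q, results))
    p.start()
    q.put([1, 2, 3])              # visible to the child even after start()
    q.put([2, 3, 4])
    q.put(None)
    p.join()
    print(results.get(), results.get())   # 6 9
```

Unlike mutating an attribute on the Process object after start(), items put on the Queue really do reach the running child, because the Queue is backed by a pipe rather than by the (now separate) address space.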