Re: write to the same file from multiple processes at the same time?

2005-05-31 Thread Steve Holden
Roy Smith wrote:
> Peter Hansen <[EMAIL PROTECTED]> wrote:
> 
>>The OP was probably on the right track when he suggested that things 
>>like SQLite (conveniently wrapped with PySQLite) had already solved this 
>>problem.
> 
> 
> Perhaps, but a relational database seems like a pretty heavy-weight 
> solution for a log file.

Excel seems like a pretty heavyweight solution for most of the 
applications it's used for, too. Most people are interested in solving a 
problem and moving on, and while this may lead to bloatware it can also 
lead to the inclusion of functionality that can be hugely useful in 
other areas of the application.

regards
  Steve
-- 
Steve Holden   +1 703 861 4237  +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-31 Thread Piet van Oostrum
Isn't a write to a file that's opened as append atomic in most operating
systems? At least in modern Unix systems. man open(2) should give more
information about this.

Like:
f = file("filename", "a")
f.write(line)
f.flush()

provided the line fits into the stdio buffer. Otherwise os.write can be used.
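A sketch of the os.write variant of that approach (the helper name and file name are mine):

```python
import os

def append_line(filename, text):
    # O_APPEND tells the kernel to seek to end-of-file and write in one
    # atomic step, so concurrent writers cannot overwrite each other's
    # output within a single write() call.
    fd = os.open(filename, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, text.encode())
    finally:
        os.close(fd)
```

This relies on the OS honouring O_APPEND for local filesystems; as noted, it is not guaranteed for NFS-mounted files.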

As this depends on the OS support for append, it is not portable. But
neither is locking. And I am not sure if it works for NFS-mounted files.
-- 
Piet van Oostrum <[EMAIL PROTECTED]>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-31 Thread gabor
Mike Meyer wrote:
> gabor <[EMAIL PROTECTED]> writes:
> 
> 
>>ok, i ended up with the following code:
>>
>>def syncLog(filename,text):
>> f = os.open(filename,os.O_WRONLY | os.O_APPEND)
>> fcntl.flock(f,fcntl.LOCK_EX)
>> os.write(f,text)
>> #FIXME: what about releasing the lock?
>> os.close(f)
>>
>>it seems to do what i need ( the flock() call waits until he can get
>>access).. i just don't know if i have to unlock() the file before i
>>close it..
> 
> 
> The lock should free when you close the file descriptor. Personally,
> I'm a great believer in doing things explicitly rather than
> implicitly, 


> and would add the extra fcntl.flock(f, fcntl.LOCK_UN) call
> before closing the file.

done :)

gabor
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-30 Thread Mike Meyer
gabor <[EMAIL PROTECTED]> writes:

> ok, i ended up with the following code:
>
> def syncLog(filename,text):
>  f = os.open(filename,os.O_WRONLY | os.O_APPEND)
>  fcntl.flock(f,fcntl.LOCK_EX)
>  os.write(f,text)
>  #FIXME: what about releasing the lock?
>  os.close(f)
>
> it seems to do what i need ( the flock() call waits until he can get
> access).. i just don't know if i have to unlock() the file before i
> close it..

The lock should free when you close the file descriptor. Personally,
I'm a great believer in doing things explicitly rather than
implicitly, and would add the extra fcntl.flock(f, fcntl.LOCK_UN) call
before closing the file.
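A version of gabor's helper with that explicit unlock, plus a try/finally so the lock is released even if the write raises (the try/finally and the O_CREAT flag are my additions):

```python
import os
import fcntl

def sync_log(filename, text):
    fd = os.open(filename, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)      # blocks until the lock is granted
        try:
            os.write(fd, text.encode())
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)  # explicit unlock, as suggested
    finally:
        os.close(fd)
```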

 http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-30 Thread gabor
gabor wrote:
> Jp Calderone wrote:
> 
>> To briefly re-summarize, when you want to acquire a lock, attempt to 
>> create a directory with a well-known name.  When you are done with it, 
>> delete the directory.  This works across all platforms and filesystems 
>> likely to be encountered by a Python program.
> 
> 
> thanks...
> 
> but the problem now is that the cgi will have to wait for that directory 
>  to be gone, when he is invoked.. and i do not want to code that :)
> i'm too lazy..
> 
> so basically i want the code to TRY to write to the file, and WAIT if it 
>  is opened for write right now...
> 
> something like a mutex-synchronized block of the code...
> 
ok, i ended up with the following code:

def syncLog(filename,text):
 f = os.open(filename,os.O_WRONLY | os.O_APPEND)
 fcntl.flock(f,fcntl.LOCK_EX)
 os.write(f,text)
 #FIXME: what about releasing the lock?
 os.close(f)

it seems to do what i need (the flock() call waits until it can get 
access).. i just don't know if i have to unlock() the file before i 
close it..


gabor
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-30 Thread gabor
jean-marc wrote:
> Sorry, why is the temp file solution 'stupid'?, (not
> aesthetic-pythonistic???) -  it looks OK: simple and direct, and
> certainly less 'heavy' than any db stuff (even embedded)
> 
> And  collating in a 'official log file' can be done periodically by
> another process, on a time-scale that is 'useful' if not
> instantaneous...
> 
> Just trying to understand here...
> 

actually this is what i implemented after asking the question, and works 
fine :)

i just thought that maybe there is a solution where i don't have to deal 
with 4000 files in the temp folder :)

gabor
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-30 Thread gabor
Jp Calderone wrote:
> To briefly re-summarize, when you 
> want to acquire a lock, attempt to create a directory with a well-known 
> name.  When you are done with it, delete the directory.  This works 
> across all platforms and filesystems likely to be encountered by a 
> Python program.

thanks...

but the problem now is that the cgi will have to wait for that directory 
to be gone when it is invoked.. and i do not want to code that :)
i'm too lazy..

so basically i want the code to TRY to write to the file, and WAIT if it 
  is opened for write right now...

something like a mutex-synchronized block of the code...

gabor
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-28 Thread [EMAIL PROTECTED]
Well I just tried it on Linux anyway. I opened the file in two python
processes using append mode.

I then wrote simple function to write then flush what it is passed:

def write(msg):
   foo.write("%s\n" %  msg)
   foo.flush()

I then opened another terminal and did 'tail -f myfile.txt'.

It worked just fine.

Maybe that will help. Seems simple enough to me for basic logging.
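The same effect can be reproduced deterministically with two independent file objects in one process; each has its own file pointer, just as two separate processes would (a sketch, file name illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "myfile.txt")

def write(f, msg):
    f.write("%s\n" % msg)
    f.flush()

# two independent handles standing in for two processes
a = open(path, "a")
b = open(path, "a")
for i in range(100):
    write(a, "A %d" % i)
    write(b, "B %d" % i)
a.close()
b.close()

lines = open(path).read().splitlines()
```

In append mode no line is lost or overwritten; opened without "a", the two handles' separate file pointers would clobber each other's output.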

Cheers,
Bill

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-28 Thread Mike Meyer
Paul Rubin  writes:
> Really, I think the Python library is somewhat lacking in not
> providing a simple, unified interface for doing stuff like this.

It's got one. Well, three, actually.

The syslog module solves the problem quite nicely, but only works on
Unix. If the OP is working on Unix systems, that may be a good
solution.

The logging module has a SysLogHandler that talks to syslog on
Unix. It also has an NTEventLogHandler for use on NT. I'm not familiar
with NT's event log, but I presume it has the same kind of
functionality as Unix's syslog facility.
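For the record, a minimal sketch of the logging-module route on Unix (the logger name and address are illustrative; many Linux systems would use address="/dev/log" rather than the handler's UDP default):

```python
import os
import logging
import logging.handlers

logger = logging.getLogger("mycgi")   # name is illustrative
logger.setLevel(logging.INFO)

# Hand records to the local syslog daemon, which serializes writes
# from any number of CGI processes. ("localhost", 514) is the
# handler's UDP default.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
logger.addHandler(handler)

logger.info("request handled by pid %d", os.getpid())
```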

http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-28 Thread Do Re Mi chel La Si Do
Hi !


On Windows, with PyWin32, here is a little code sample:


import time
import win32file, win32con, pywintypes

def flock(file):
    hfile = win32file._get_osfhandle(file.fileno())
    win32file.LockFileEx(hfile, win32con.LOCKFILE_EXCLUSIVE_LOCK, 0,
                         0xffff0000, pywintypes.OVERLAPPED())

def funlock(file):
    hfile = win32file._get_osfhandle(file.fileno())
    win32file.UnlockFileEx(hfile, 0, 0xffff0000, pywintypes.OVERLAPPED())


file = open("FLock.txt", "r+")
flock(file)
file.seek(123)
for i in range(500):
    file.write("AA")
    print i
    time.sleep(0.001)

#funlock(file)
file.close()




Michel Claveau



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Andy Leszczynski
gabor wrote:
> the problem is:
> what happens if 2 users invoke the cgi at the same time?

Would BerkeleyDB support that?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Paul Rubin
Peter Hansen <[EMAIL PROTECTED]> writes:
> I think the FAQ can answer that better than I can, since I'm not sure
> whether you're asking about any low-level (OS) locks it might use or
> higher-level (e.g. database-level locking) that it might use.  In
> summary, however, at the database level it provides only
> coarse-grained locking on the entire database.  It *is* supposed to be
> a relatively simple/lightweight solution compared to typical RDBMSes...

Compared to what the OP was asking for, which was a way to synchronize
appending to a serial log file, SQlite is very complex.  It's also
much more complex than (say) the dbm module, which is what Python apps
normally use as a lightweight db.

> (There's also an excruciating level of detail about this whole area in
> the page at http://www.sqlite.org/lockingv3.html ).

Oh ok, it says it uses some special locking system calls on Windows.
Since those calls aren't in the Python stdlib, it must be using C
extensions, which again means complexity.  But it looks like the
built-in msvcrt module has ways to lock parts of files in Windows.

Really, I think the Python library is somewhat lacking in not
providing a simple, unified interface for doing stuff like this.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 15:10:16 -0700, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>Peter Hansen <[EMAIL PROTECTED]> writes:
>> And PySQLite conveniently wraps the relevant calls with retries when
>> the database is "locked" by the writing process, making it roughly a
>> no-brainer to use SQLite databases as nice simple log files where
>> you're trying to write from multiple CGI processes like the OP wanted.
>
>Oh, ok.  But what kind of locks does it use?

It doesn't really matter, does it?

I'm sure the locking mechanisms it uses have changed between different 
releases, and may even be selected based on the platform being used.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Peter Hansen
Christopher Weimann wrote:
> On 05/27/2005-06:02PM, Peter Hansen wrote:
> 
>>Hmm... just tried it: you're right!  On the other hand, the results were 
>>unacceptable: each process has a separate file pointer, so it appears 
>>whichever one writes first will have its output overwritten by the 
>>second process.
> 
> Did you open the files for 'append' ? 

Nope.  I suppose that would be a rational thing to do for log files, 
wouldn't it?  I wonder what happens when one does that...

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Peter Hansen
Paul Rubin wrote:
> Peter Hansen <[EMAIL PROTECTED]> writes:
> 
>>And PySQLite conveniently wraps the relevant calls with retries when
>>the database is "locked" by the writing process, making it roughly a
>>no-brainer to use SQLite databases as nice simple log files where
>>you're trying to write from multiple CGI processes like the OP wanted.
> 
> Oh, ok.  But what kind of locks does it use?  

I think the FAQ can answer that better than I can, since I'm not sure 
whether you're asking about any low-level (OS) locks it might use or 
higher-level (e.g. database-level locking) that it might use.  In 
summary, however, at the database level it provides only coarse-grained 
locking on the entire database.  It *is* supposed to be a relatively 
simple/lightweight solution compared to typical RDBMSes...

(There's also an excruciating level of detail about this whole area in 
the page at http://www.sqlite.org/lockingv3.html ).

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Peter Hansen
Peter Hansen wrote:
> Grant Edwards wrote:
>> Not in my experience.  At least under Unix, it's perfectly OK
>> to open a file while somebody else is writing to it.  Perhaps
>> Windows can't deal with that situation?
> 
> Hmm... just tried it: you're right!  

Umm... the part you were right about was NOT the possibility that 
Windows can't deal with the situation, but the suggestion that it might 
actually be able to (since apparently it can).  Sorry to confuse.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 15:22:17 -0700, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>Jp Calderone <[EMAIL PROTECTED]> writes:
>> >Oh, ok.  But what kind of locks does it use?
>>
>> It doesn't really matter, does it?
>
>Huh?  Sure, if there's some simple way to accomplish the locking, the
>OP's app can do the same thing without SQlite's complexity.
>
>> I'm sure the locking mechanisms it uses have changed between
>> different releases, and may even be selected based on the platform
>> being used.
>
>Well, yes, but WHAT ARE THEY??

Beats me, and I'm certainly not going to dig through the code to find out :)  
For the OP's purposes, the mechanism I mentioned earlier in this thread is 
almost certainly adequate.  To briefly re-summarize, when you want to acquire a 
lock, attempt to create a directory with a well-known name.  When you are done 
with it, delete the directory.  This works across all platforms and filesystems 
likely to be encountered by a Python program.
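That directory trick might be sketched like this (the lock name and timings are mine):

```python
import os
import time

LOCKDIR = "myapp.lock"   # well-known name, illustrative

def acquire(timeout=10.0, poll=0.05):
    # os.mkdir is atomic on every common platform and filesystem:
    # exactly one process succeeds in creating the directory,
    # everyone else gets an error and retries.
    deadline = time.time() + timeout
    while True:
        try:
            os.mkdir(LOCKDIR)
            return
        except OSError:
            if time.time() >= deadline:
                raise RuntimeError("timed out waiting for %s" % LOCKDIR)
            time.sleep(poll)

def release():
    os.rmdir(LOCKDIR)
```

One weakness: a crash between acquire() and release() leaves the directory behind, so in practice some stale-lock policy is needed.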

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Christopher Weimann
On 05/27/2005-06:02PM, Peter Hansen wrote:
> 
> Hmm... just tried it: you're right!  On the other hand, the results were 
> unacceptable: each process has a separate file pointer, so it appears 
> whichever one writes first will have its output overwritten by the 
> second process.

Did you open the files for 'append' ? 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Paul Rubin
Jp Calderone <[EMAIL PROTECTED]> writes:
> >Oh, ok.  But what kind of locks does it use?
> 
> It doesn't really matter, does it?

Huh?  Sure, if there's some simple way to accomplish the locking, the
OP's app can do the same thing without SQlite's complexity.

> I'm sure the locking mechanisms it uses have changed between
> different releases, and may even be selected based on the platform
> being used.

Well, yes, but WHAT ARE THEY??
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Paul Rubin
Peter Hansen <[EMAIL PROTECTED]> writes:
> And PySQLite conveniently wraps the relevant calls with retries when
> the database is "locked" by the writing process, making it roughly a
> no-brainer to use SQLite databases as nice simple log files where
> you're trying to write from multiple CGI processes like the OP wanted.

Oh, ok.  But what kind of locks does it use?  
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Peter Hansen
Paul Rubin wrote:
> http://www.sqlite.org/faq.html#q7
[snip]
> Multiple processes can have the same database open at the same
> time. Multiple processes can be doing a SELECT at the same
> time. But only one process can be making changes to the database
> at once.
> 
> But multiple processes changing the database simultaneously is
> precisely what the OP wants to do.

What isn't described in the above quote from the FAQ is how SQLite 
*protects* your data from corruption in this case, unlike the "raw" 
approach where you just use file handles.

And PySQLite conveniently wraps the relevant calls with retries when the 
database is "locked" by the writing process, making it roughly a 
no-brainer to use SQLite databases as nice simple log files where you're 
trying to write from multiple CGI processes like the OP wanted.

Disclaimer: I haven't actually done that myself, and have only started 
playing with pysqlite2 a day ago, but I have spent a fair bit of time 
experimenting and reading the relevant docs and I believe I've got this 
all correct.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Peter Hansen
Grant Edwards wrote:
> On 2005-05-27, Peter Hansen <[EMAIL PROTECTED]> wrote:
>>Unfortunately this assumes that the open() call will always succeed, 
>>when in fact it is likely to fail sometimes when another file has 
>>already opened the file but not yet completed writing to it, AFAIK.
> 
> Not in my experience.  At least under Unix, it's perfectly OK
> to open a file while somebody else is writing to it.  Perhaps
> Windows can't deal with that situation?

Hmm... just tried it: you're right!  On the other hand, the results were 
unacceptable: each process has a separate file pointer, so it appears 
whichever one writes first will have its output overwritten by the 
second process.

Change the details, but the heart of my objection is the same.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Grant Edwards
On 2005-05-27, Peter Hansen <[EMAIL PROTECTED]> wrote:
> Roy Smith wrote:
>> gabor <[EMAIL PROTECTED]> wrote:
>> On the other hand, you said that each process will be writing a single line 
>> of output at a time.  If you call flush() after each message is written, 
>> that should be enough to ensure that the each line gets written in a single 
>> write system call, which in turn should be good enough to ensure that 
>> individual lines of output are not scrambled in the log file.
>
> Unfortunately this assumes that the open() call will always succeed, 
> when in fact it is likely to fail sometimes when another file has 
> already opened the file but not yet completed writing to it, AFAIK.

Not in my experience.  At least under Unix, it's perfectly OK
to open a file while somebody else is writing to it.  Perhaps
Windows can't deal with that situation?

-- 
Grant Edwards   grante at visi.com
Yow!  FOOLED you! Absorb EGO SHATTERING impulse rays, polyester poltroon!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 06:43:04 -0700, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>Jp Calderone <[EMAIL PROTECTED]> writes:
>> >But they haven't.  They depend on messy things like server processes
>> >constantly running, which goes against the idea of a cgi that only
>> >runs when someone calls it.
>>
>> SQLite is an in-process dbm.
>
>http://www.sqlite.org/faq.html#q7
>
>(7) Can multiple applications or multiple instances of the same
>application access a single database file at the same time?
>
>Multiple processes can have the same database open at the same
>time. Multiple processes can be doing a SELECT at the same
>time. But only one process can be making changes to the database
>at once.
>
>But multiple processes changing the database simultaneously is
>precisely what the OP wants to do.

Er, no.  The OP precisely wants exactly one process to be able to write at a 
time.  If he was happy with multiple processes writing simultaneously, he 
wouldn't need any locking mechanism at all >:)

If you keep reading that FAQ entry, you discover that SQLite implements its own 
locking mechanism internally, allowing different processes to *interleave* 
writes to the database, and preventing any data corruption which might arise 
from simultaneous writes.

That said, I think an RDBM is a ridiculously complex solution to this simple 
problem.  A filesystem lock, preferably using the directory or symlink trick 
(but flock() is fun too, if you're into that sort of thing), is clearly the 
solution to go with here.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread jean-marc
Sorry, why is the temp file solution 'stupid'?, (not
aesthetic-pythonistic???) -  it looks OK: simple and direct, and
certainly less 'heavy' than any db stuff (even embedded)

And  collating in a 'official log file' can be done periodically by
another process, on a time-scale that is 'useful' if not
instantaneous...

Just trying to understand here...

JMD

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Gerhard Haering
On Fri, May 27, 2005 at 09:27:38AM -0400, Roy Smith wrote:
> Peter Hansen <[EMAIL PROTECTED]> wrote:
> > The OP was probably on the right track when he suggested that things 
> > like SQLite (conveniently wrapped with PySQLite) had already solved this 
> > problem.
> 
> Perhaps, but a relational database seems like a pretty heavy-weight 
> solution for a log file.

On the other hand, it works ;-)

-- Gerhard
-- 
Gerhard Häring - [EMAIL PROTECTED] - Python, web & database development


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Paul Rubin
Jp Calderone <[EMAIL PROTECTED]> writes:
> >But they haven't.  They depend on messy things like server processes
> >constantly running, which goes against the idea of a cgi that only
> >runs when someone calls it.
> 
> SQLite is an in-process dbm.

http://www.sqlite.org/faq.html#q7

(7) Can multiple applications or multiple instances of the same
application access a single database file at the same time?

Multiple processes can have the same database open at the same
time. Multiple processes can be doing a SELECT at the same
time. But only one process can be making changes to the database
at once.

But multiple processes changing the database simultaneously is
precisely what the OP wants to do.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread fraca7
gabor a écrit :

> [snip]

Try this:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/65203
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 06:21:21 -0700, Paul Rubin <"http://phr.cx"@nospam.invalid> 
wrote:
>Peter Hansen <[EMAIL PROTECTED]> writes:
>> The OP was probably on the right track when he suggested that things
>> like SQLite (conveniently wrapped with PySQLite) had already solved
>> this problem.
>
>But they haven't.  They depend on messy things like server processes
>constantly running, which goes against the idea of a cgi that only
>runs when someone calls it.

SQLite is an in-process dbm.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Roy Smith
Peter Hansen <[EMAIL PROTECTED]> wrote:
> The OP was probably on the right track when he suggested that things 
> like SQLite (conveniently wrapped with PySQLite) had already solved this 
> problem.

Perhaps, but a relational database seems like a pretty heavy-weight 
solution for a log file.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Paul Rubin
Peter Hansen <[EMAIL PROTECTED]> writes:
> The OP was probably on the right track when he suggested that things
> like SQLite (conveniently wrapped with PySQLite) had already solved
> this problem.

But they haven't.  They depend on messy things like server processes
constantly running, which goes against the idea of a cgi that only
runs when someone calls it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Peter Hansen
Roy Smith wrote:
> gabor <[EMAIL PROTECTED]> wrote:
> On the other hand, you said that each process will be writing a single line 
> of output at a time.  If you call flush() after each message is written, 
> that should be enough to ensure that the each line gets written in a single 
> write system call, which in turn should be good enough to ensure that 
> individual lines of output are not scrambled in the log file.

Unfortunately this assumes that the open() call will always succeed, 
when in fact it is likely to fail sometimes when another file has 
already opened the file but not yet completed writing to it, AFAIK.

> If you want to do better than that, you need to delve into OS-specific 
> things like the flock function in the fcntl module on unix.

The OP was probably on the right track when he suggested that things 
like SQLite (conveniently wrapped with PySQLite) had already solved this 
problem.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Roy Smith
gabor <[EMAIL PROTECTED]> wrote:
> so, how does one synchronizes several processes in python?

This is a very hard problem to solve in the general case, and the answer 
depends more on the operating system you're running on than on the 
programming language you're using.

On the other hand, you said that each process will be writing a single line 
of output at a time.  If you call flush() after each message is written, 
that should be enough to ensure that the each line gets written in a single 
write system call, which in turn should be good enough to ensure that 
individual lines of output are not scrambled in the log file.

If you want to do better than that, you need to delve into OS-specific 
things like the flock function in the fcntl module on unix.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Paul Rubin
gabor <[EMAIL PROTECTED]> writes:
> so, how does one synchronizes several processes in python?
> 
> first idea was that the cgi will create a new temp file every time,
> and at the end of the stress-test, i'll collect the content of all
> those files. but that seems as a stupid way to do it :(

There was a thread about this recently ("low-end persistence
strategies") and for Unix the simplest answer seems to be the
fcntl.flock function.  For Windows I don't know the answer.
Maybe os.open with O_EXCL works.
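A sketch of the O_EXCL idea, which should work on Windows as well as Unix (the lock-file name is illustrative):

```python
import os
import time

def lock(path="log.lock", timeout=5.0):
    # O_CREAT | O_EXCL makes the create atomic: os.open fails if the
    # file already exists, so the file itself serves as the lock token.
    deadline = time.time() + timeout
    while True:
        try:
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
            os.close(fd)
            return path
        except OSError:
            if time.time() >= deadline:
                raise RuntimeError("timed out waiting for %s" % path)
            time.sleep(0.05)

def unlock(path="log.lock"):
    os.remove(path)
```

Like the mkdir trick, this needs a stale-lock policy if a process dies while holding the lock.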

-- 
http://mail.python.org/mailman/listinfo/python-list