On 23.10.2013 03:41, William A. Rowe Jr. wrote:
On Tue, 22 Oct 2013 20:34:02 -0500
"William A. Rowe Jr." <[email protected]> wrote:
On Tue, 22 Oct 2013 08:13:08 -0400
Jeff Trawick <[email protected]> wrote:
On Tue, Oct 22, 2013 at 6:04 AM, Stefan Ruppert <[email protected]>
wrote:
On 21.10.2013 20:39, Jeff Trawick wrote:
On Mon, Oct 21, 2013 at 12:41 PM, Stefan Ruppert <[email protected]> wrote:
On 21.10.2013 16:22, Jeff Trawick wrote:
On Mon, Oct 21, 2013 at 8:57 AM, <[email protected]> wrote:
Author: trawick
Date: Mon Oct 21 12:57:05 2013
New Revision: 1534139
URL: http://svn.apache.org/r1534139
Log:
Merge r960671 from trunk:
Only deal with the mutex when XTHREAD is enabled. This increases the
performance of buffered reads/writes tremendously.
* file_io/win32/readwrite.c:
  (apr_file_read, apr_file_write): only manipulate mutex when XTHREAD
Submitted by: Ivan Zhakov <ivan visualsvn.com>
Trunk continues to allocate a mutex if buffered, even if the XTHREAD
flag is on (a minor detail I suppose). That presumably is a simple fix
after double checking all the references to mutex or buffered in the
code used on Windows. ISTR other concerns about the mutex or XTHREAD,
but I think this is an orthogonal issue.
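For readers following along, the guard being discussed is roughly of
this shape (a sketch only, not the actual file_io/win32/readwrite.c
code; "buffered", "flags" and "mutex" are members of the private win32
apr_file_t and are treated as assumptions here):

/* Sketch of the r960671 idea: the per-file mutex protecting the
 * read/write buffer is only needed when the file was opened with
 * APR_FOPEN_XTHREAD, so skip the lock/unlock entirely otherwise. */
if (thefile->buffered) {
    int xthread = (thefile->flags & APR_FOPEN_XTHREAD) != 0;

    if (xthread)
        apr_thread_mutex_lock(thefile->mutex);   /* guard the shared buffer */

    /* ... buffered read or write against thefile's buffer ... */

    if (xthread)
        apr_thread_mutex_unlock(thefile->mutex);
}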
Regarding exclusive access to a file under Windows I filed a bug in 2010:
https://issues.apache.org/bugzilla/show_bug.cgi?id=50058
Using apr_file_lock()/apr_file_unlock() under Windows in append mode
will deadlock the current thread! Back in 2010 I simply removed the
apr_file_lock()/apr_file_unlock() code from the readwrite.c module. But
a better solution would be to support nesting in the
apr_file_lock()/apr_file_unlock() API calls!
Any comments?
Stefan
I thought it was this simple for append:
On Unix a lock isn't needed because the APR implementation there uses
O_APPEND, which is atomic (subject to the size of the write I suppose)*;
on Windows there's no such feature and APR has to use a lock to make it
equivalent. So the app shouldn't be getting a lock.
Is that consistent with what you see?
The problem arises when you want to use the
apr_file_lock()/apr_file_unlock() calls to protect multiple calls to
apr_file_write():
1) apr_file_open(FOPEN_APPEND);
2) apr_file_lock();
3) apr_file_write();
4) apr_file_write();
5) apr_file_write();
6) apr_file_unlock();
7) apr_file_close();
Under Unix everything works perfectly. But under Windows the call to
apr_file_write() in step 3) will deadlock, because LockFileEx() should
not be called recursively...
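As a concrete illustration, a minimal program along these lines
reproduces the hang on Windows (file name and the lack of error
handling are placeholders; only public APR calls are used):

#include <string.h>
#include "apr_general.h"
#include "apr_pools.h"
#include "apr_file_io.h"

int main(void)
{
    apr_pool_t *pool;
    apr_file_t *file;
    const char *line = "one record\n";
    apr_size_t len;

    apr_initialize();
    apr_pool_create(&pool, NULL);

    /* 1) open in append mode */
    apr_file_open(&file, "append.log",
                  APR_FOPEN_WRITE | APR_FOPEN_CREATE | APR_FOPEN_APPEND,
                  APR_FPROT_OS_DEFAULT, pool);

    /* 2) application takes the lock to group several writes */
    apr_file_lock(file, APR_FLOCK_EXCLUSIVE);

    /* 3) on win32 this write takes the file lock again internally for
     *    the append and never returns; on Unix it relies on O_APPEND
     *    and simply succeeds */
    len = strlen(line);
    apr_file_write(file, line, &len);

    /* 4)-7) never reached on Windows */
    apr_file_unlock(file);
    apr_file_close(file);

    apr_pool_destroy(pool);
    apr_terminate();
    return 0;
}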
However, the APR API docs say nothing about apr_file_write() being
atomic, so from my point of view it's up to the application to make it
atomic with the apr_file_lock()/apr_file_unlock() calls. On Unix it's a
nice side effect that each call to apr_file_write() is atomic....
An alternate interpretation ;)  Access to the O_APPEND semantics is a
critical feature, and the lock on Windows was the best known way to map
that feature.
The easiest way to make it consistent is to support a nesting counter
within apr_file_lock()/apr_file_unlock(), which would also match the
APR API docs for those calls, which are documented as usable
recursively!
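A rough sketch of what that proposal could look like on win32 (this is
not a patch: "lock_nesting" is a hypothetical new field on the private
win32 apr_file_t, the existing LockFileEx()/UnlockFileEx() code is
elided, and the interaction with the XTHREAD mutex is ignored):

APR_DECLARE(apr_status_t) apr_file_lock(apr_file_t *thefile, int type)
{
    if (thefile->lock_nesting > 0) {
        /* Lock already held through this apr_file_t: just count the
         * re-entry instead of calling LockFileEx() again, which would
         * block on the exclusive byte range we already hold. */
        thefile->lock_nesting++;
        return APR_SUCCESS;
    }

    /* ... existing code: build the LOCKFILE_* flags and call
     *     LockFileEx() on the file handle ... */

    thefile->lock_nesting = 1;
    return APR_SUCCESS;
}

APR_DECLARE(apr_status_t) apr_file_unlock(apr_file_t *thefile)
{
    if (thefile->lock_nesting > 1) {
        thefile->lock_nesting--;   /* inner unlock, nothing to release yet */
        return APR_SUCCESS;
    }

    /* ... existing code: call UnlockFileEx() on the file handle ... */

    thefile->lock_nesting = 0;
    return APR_SUCCESS;
}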
I generally agree, though I think the behavior of apr_file_lock() on
Unix needs examination too so we understand more widely what is broken
w.r.t. the documentation. I guess testflock.c would be modified to
verify that part of the documentation and then tested on a couple of
Unix variations using the alternate low-level implementations.
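Such a test could look roughly like this (an abts-style sketch assuming
testflock.c's existing includes and the shared test pool "p"; the test
name, data file and assertions are made up for illustration):

static void test_nested_lock(abts_case *tc, void *data)
{
    apr_file_t *file;
    apr_status_t rv;
    const char *text = "nested\n";
    apr_size_t len = strlen(text);

    rv = apr_file_open(&file, "data/nested.lock",
                       APR_FOPEN_WRITE | APR_FOPEN_CREATE | APR_FOPEN_APPEND,
                       APR_FPROT_OS_DEFAULT, p);
    ABTS_INT_EQUAL(tc, APR_SUCCESS, rv);

    /* Outer lock taken by the application... */
    rv = apr_file_lock(file, APR_FLOCK_EXCLUSIVE);
    ABTS_INT_EQUAL(tc, APR_SUCCESS, rv);

    /* ...must not deadlock the append-mode write, which takes the lock
     * again internally on win32. */
    rv = apr_file_write(file, text, &len);
    ABTS_INT_EQUAL(tc, APR_SUCCESS, rv);

    rv = apr_file_unlock(file);
    ABTS_INT_EQUAL(tc, APR_SUCCESS, rv);

    apr_file_close(file);
}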
I believe the multiple-choice answer (particularly, but perhaps not
exclusively on win32) is...
* support nested apr_file_lock(s)
* reorder win32 apr_file_writev to lock around multiple-segment writes
There is an implicit contract in apr_file_write and writev that the
operation is atomic, inasmuch as POSIX write[v] is supposedly atomic, at
least under most Unixes.
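The second bullet might come out roughly as follows (a sketch of the
idea only, not the actual win32 apr_file_writev(); the "flags" member of
the private apr_file_t is an assumption, and it relies on the nested
locking sketched above so the inner apr_file_write() calls don't block):

APR_DECLARE(apr_status_t) apr_file_writev(apr_file_t *thefile,
                                          const struct iovec *vec,
                                          apr_size_t nvec,
                                          apr_size_t *nbytes)
{
    apr_status_t rv = APR_SUCCESS;
    apr_size_t i;
    int locked = 0;

    /* Take the append lock once around the whole vector so the segments
     * land contiguously, matching the atomicity expected of writev(). */
    if (thefile->flags & APR_FOPEN_APPEND) {
        rv = apr_file_lock(thefile, APR_FLOCK_EXCLUSIVE);
        if (rv != APR_SUCCESS)
            return rv;
        locked = 1;
    }

    *nbytes = 0;
    for (i = 0; i < nvec && rv == APR_SUCCESS; i++) {
        apr_size_t written = vec[i].iov_len;
        rv = apr_file_write(thefile, vec[i].iov_base, &written);
        *nbytes += written;
    }

    if (locked)
        apr_file_unlock(thefile);
    return rv;
}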
Let's also define APR_XTHREAD more precisely and narrowly... when
created, the flag indicated that a given apr_foo_t would be referenced
by more than one thread, concurrently. That says nothing about whether
multiple apr_foo_t's reference the same FS/IO object in parallel.
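To make that distinction concrete (a usage sketch only; names and flags
other than APR_FOPEN_XTHREAD are just illustrative):

static void open_both_cases(apr_pool_t *pool)
{
    apr_file_t *shared, *mine;

    /* Case 1: one apr_file_t handed to several threads -- this is what
     * APR_FOPEN_XTHREAD is about: concurrent use of this one handle. */
    apr_file_open(&shared, "app.log",
                  APR_FOPEN_WRITE | APR_FOPEN_CREATE | APR_FOPEN_APPEND |
                  APR_FOPEN_XTHREAD,
                  APR_FPROT_OS_DEFAULT, pool);

    /* Case 2: each thread or process opens its own apr_file_t on the
     * same path. APR_FOPEN_XTHREAD says nothing about this case;
     * coordinating those writers is a cross-handle problem (O_APPEND
     * semantics, file locks, ...). */
    apr_file_open(&mine, "app.log",
                  APR_FOPEN_WRITE | APR_FOPEN_CREATE | APR_FOPEN_APPEND,
                  APR_FPROT_OS_DEFAULT, pool);
}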
Sounds good to me!
Stefan