Mathias Weinert <w...@mccw.de> wrote:

Daniel Shahaf <d...@daniel.shahaf.name> wrote:

Philip Martin wrote on Wed, Jul 20, 2011 at 15:12:51 +0100:
Daniel Shahaf <d...@daniel.shahaf.name> writes:

Mathias Weinert wrote on Wed, Jul 20, 2011 at 14:59:23 +0200:

Each time I load a certain dump file on Windows, one that contains a
revision with over 100K changed paths, I get the error "Can't open
file
'c:\Repositories\test\db\transactions\5445-479.txn\next-ids': The
requested operation cannot be performed on a file with a user-mapped
section open."  After searching the mailing list archives and other
mailing lists I found that I am not the only one to encounter this
problem, and that in most cases a virus scanner was the cause.  And
indeed, adding next-ids to the scanner's exclusion list solved the
problem.

But now I wonder whether svnadmin couldn't handle this case a bit
more elegantly.  IMHO it would make sense not to abort the load
immediately when this specific error occurs, but to retry after
waiting half a second or so (some other users reported getting the
error "The process cannot access the file because it is being used
by another process.").  If we still can't access next-ids after,
say, 5 attempts with a little pause between each, we can still quit
the load process.
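
For illustration, a retry wrapper along those lines might look like
the following sketch (the helper name open_with_retries, and the
choice to retry on any error rather than only on the two Windows
sharing errors quoted above, are assumptions, not Subversion code):

/* Illustrative sketch: retry opening PATH with FLAG up to 5 times,
   sleeping 500ms between attempts, before giving up with the last
   error we saw. */
static svn_error_t *
open_with_retries(apr_file_t **file,
                  const char *path,
                  apr_int32_t flag,
                  apr_pool_t *pool)
{
  svn_error_t *err = SVN_NO_ERROR;
  int attempt;

  for (attempt = 0; attempt < 5; attempt++)
    {
      svn_error_clear(err);
      err = svn_io_file_open(file, path, flag, APR_OS_DEFAULT, pool);
      if (!err)
        return SVN_NO_ERROR;

      /* Another process (e.g. a virus scanner) may still hold the
         file; pause before the next attempt. */
      apr_sleep(500000); /* microseconds, i.e. 500ms */
    }

  return err;
}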

It would be good to solve this now, as this is one of the concerns
with the (partially implemented) design for revprop packing, due for
release in 1.8.

The current implementation writes the file in place:

/* Write NODE_ID and COPY_ID to the 'next-ids' file of transaction
   TXN_ID in FS, overwriting its existing contents. */
static svn_error_t *
write_next_ids(svn_fs_t *fs,
               const char *txn_id,
               const char *node_id,
               const char *copy_id,
               apr_pool_t *pool)
{
  apr_file_t *file;
  svn_stream_t *out_stream;

  SVN_ERR(svn_io_file_open(&file, path_txn_next_ids(fs, txn_id, pool),
                           APR_WRITE | APR_TRUNCATE,
                           APR_OS_DEFAULT, pool));

  out_stream = svn_stream_from_aprfile2(file, TRUE, pool);

  SVN_ERR(svn_stream_printf(out_stream, pool, "%s %s\n", node_id, copy_id));

  SVN_ERR(svn_stream_close(out_stream));
  return svn_io_file_close(file, pool);
}

Is there any reason we don't switch to our standard pattern: write to a
temp file and rename?  That would give us Subversion's standard retry
loop -- would that fix "requested operation cannot be performed"?

fs_fs.c:move_into_place() already does the rename loop, so no objection.
(assuming we document that the use of move_into_place() is for
performance and virus scanners rather than for concurrent reader
correctness)
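
The comment Daniel suggests might read something like this
(hypothetical wording, not an actual fs_fs.c comment):

/* Note: we move the file into place, rather than writing it in
   place, for performance and to tolerate other processes (such as
   virus scanners) briefly holding the target open on Windows -- not
   to provide atomicity for concurrent readers. */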


I replaced svn_io_file_open, svn_stream_printf, and svn_stream_close
with svn_io_write_unique and move_into_place (see attached patch).
Although this works correctly, it shows bad performance: loading a
dump from a little repo, which mainly consists of a single 2000-file
commit, now takes about 400s, whereas the original version took only
about 100s.
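
A minimal sketch of what such a patch might look like, assuming
svn_io_write_unique() and fs_fs.c's static move_into_place() helper
take the arguments shown (the attached patch itself is not
reproduced here):

/* Sketch only: write the ids to a uniquely named temporary file in
   the transaction directory, then rename it over 'next-ids'.
   move_into_place() wraps the rename in Subversion's standard retry
   loop on Windows. */
static svn_error_t *
write_next_ids(svn_fs_t *fs,
               const char *txn_id,
               const char *node_id,
               const char *copy_id,
               apr_pool_t *pool)
{
  const char *final_path = path_txn_next_ids(fs, txn_id, pool);
  const char *buf = apr_psprintf(pool, "%s %s\n", node_id, copy_id);
  const char *tmp_path;

  SVN_ERR(svn_io_write_unique(&tmp_path,
                              svn_dirent_dirname(final_path, pool),
                              buf, strlen(buf),
                              svn_io_file_del_none, pool));

  return move_into_place(tmp_path, final_path, final_path, pool);
}

Creating, flushing, and renaming a separate temporary file on every
call is a plausible explanation for the slowdown, since
write_next_ids() runs once for each node the load adds.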

It was 20,000 files, not 2000.
