Justin, Jason,

A few questions, and some things you could try:

- What RA method do you use?  svn:// or http://?

- Are the failing revisions always small (eg: just a URL-URL copy),
  or always large (eg: results of a merge)?

- Do you have any caching enabled at the OS filesystem layer or
  below it?

- Did you configure Subversion to use memcached?

- Did you configure a maximum cache size for Subversion?

- Could you try setting the maximum cache size to zero?  (svnserve:
  --memory-cache-size=0; mod_dav_svn: SVNInMemoryCacheSize 0)

  This doesn't entirely disable caching, actually -- it just disables
  the new 'membuffer' cache backend, reverting to another older
  backend called 'inprocess'.
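  For reference, a sketch of what that last step looks like.  The
  repository paths and Location name below are placeholders, not your
  actual setup:

```
# svnserve: pass the option on the command line (or in the init script)
svnserve -d -r /var/svn --memory-cache-size=0

# mod_dav_svn: in httpd.conf, inside the repository's Location block
<Location /svn>
  DAV svn
  SVNParentPath /var/svn
  SVNInMemoryCacheSize 0
</Location>
```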

@Justin -- thanks for the information.  I've asked Jason upthread some
questions (eg, whether there are patterns in the commits concurrent to
the failing ones), which likely apply to you too.

Daniel

Justin Johnson wrote on Wed, Feb 29, 2012 at 09:22:40 -0600:
> On Tue, Feb 28, 2012 at 3:07 AM, Daniel Shahaf <danie...@elego.de> wrote:
> > Jason Wong wrote on Mon, Feb 27, 2012 at 07:36:39 -0800:
> > > Replies above. Sorry about the delay in replying. I have been really
> > > busy of late. I will try and get the results this week, if not, it
> > > will most likely be next week.
> > >
> >
> > No problem.
> >
> 
> I just discovered we are having this problem as well.  Since upgrading from
> 1.6.17 to 1.7.2 on the evening of 2/21 we have seen this error on 10
> different machines.  9 of those are build machines, so they would have been
> creating tags.  I have a suspicion that all of the builds were trying to
> create tags by copying from a working copy as opposed to a URL to URL copy,
> but I need to confirm.  I know at least some of them were.  The other 1
> machine received the error when trying to commit merge results.  The
> details of the merge are not available to me since the user reverted the
> local merge results.
> 
> Let me know if I can provide any more info to help solve this problem.
> 
> Justin
