[ 
https://issues.apache.org/jira/browse/SVN-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Foad updated SVN-4667:
-----------------------------
    Description: 
A WANdisco customer found that a merge will not complete with 4 GB RAM and will 
complete with 5 GB RAM available.

The branches involved have subtree mergeinfo on over 3500 files, each referring 
to about 350 branches on average, and just over 1 revision range on average per 
mergeinfo line. Average path length is under 100 bytes. The total number of 
revisions in the repo is about 1 million.

This already seems far too much memory for the size of the data set, and the 
data set is growing.
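A rough back-of-envelope estimate of the raw mergeinfo size, using the averages 
quoted above, illustrates the disproportion. The per-range byte count is an 
assumption (roughly ":NNNNNN-NNNNNN" plus separators for revisions in a 
million-revision repo), not a measured value:

```python
# Back-of-envelope sizing of the raw mergeinfo described above.
# range_bytes is an assumed figure, not measured from the customer data.
files_with_mergeinfo = 3500   # "subtree mergeinfo on over 3500 files"
branches_per_file = 350       # "about 350 branches on average"
path_bytes = 100              # "average path length is under 100 bytes"
range_bytes = 15              # assumed: ~1 range like ":123456-123789\n"

raw_bytes = files_with_mergeinfo * branches_per_file * (path_bytes + range_bytes)
print(f"raw mergeinfo: ~{raw_bytes / 2**20:.0f} MiB")  # → raw mergeinfo: ~134 MiB
```

So the textual mergeinfo itself is on the order of 150 MiB; failing in 4 GiB 
of RAM implies an in-memory expansion factor of roughly 30x during the merge.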

This issue is about reducing the amount of RAM Subversion uses given this data 
set. Another way to approach the problem is to reduce the amount of subtree 
mergeinfo by changing workflow practices; that approach is also being 
investigated but is outside the scope of this issue, except to note that the 
tools "svn-mergeinfo-normalizer" and "svn-clean-mergeinfo.pl" also fail to 
execute in the available RAM.

(WANdisco's internal issue id: SVNB-1952.)

(For reference, past issues about merge using too much memory include SVN-3393 
"Merge consuming too much memory" and SVN-3854 "Out of memory during merge with 
many Tree Conflicts".)

  was:
A WANdisco customer found that a merge will not complete with 4 GB RAM and will 
complete with 5 GB RAM available.

The branches involved have subtree mergeinfo on over 3500 files, each referring 
to about 350 branches on average, and just over 1 revision range on average per 
mergeinfo line. Average path length is under 100 bytes.

This seems already far too much memory usage for the size of the data set, and 
the size of the data set is growing.

This issue is about reducing the amount of RAM Subversion uses given this data 
set. Another way to approach the problem is to reduce the amount of subtree 
mergeinfo by changing workflow practices; that approach is also being 
investigated but is outside the scope of this issue, except to note that the 
tools "svn-mergeinfo-normalizer" and "svn-clean-mergeinfo.pl" also fail to 
execute in the available RAM.

(WANdisco's internal issue id: SVNB-1952.)

(For reference, past issues about merge using too much memory include SVN-3393 
"Merge consuming too much memory" and SVN-3854 "Out of memory during merge with 
many Tree Conflicts".)


> Merge uses large amount of memory
> ---------------------------------
>
>                 Key: SVN-4667
>                 URL: https://issues.apache.org/jira/browse/SVN-4667
>             Project: Subversion
>          Issue Type: Bug
>    Affects Versions: 1.9.4
>            Reporter: Julian Foad
>         Attachments: repro-test-1.patch, repro-test-2.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
