Hi there,
Ben asked me on IRC what the worst case scenario for rep
deltification in FSFS was, and I gave him the obvious answer.
Thinking about it just a little more would have led me straight
to a design flaw / side effect that came with rep sharing:
When a noderev history e.g. starts with a shared rep, the
latter is not taken into account for determining the appropriate
skip delta distance. The script below reproduces the effect,
and I don't think the "add-without-history followed by a
modification" example is too contrived.
The effects of that are worse than just a performance hit:
When reconstructing the contents from the deltified reps,
we open all contributing files and run out of file handles
after e.g. 1000 revs / rev packs.
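To put a number on it, here is a minimal sketch (plain sh, not
FSFS code; COUNT and CHAIN are just illustrative names), assuming
the usual skip-delta rule where the delta base is the rep at the
predecessor count with its lowest set bit cleared:
[[[
#!/bin/sh
# Sketch only: count the reps a reconstruction has to open when
# every delta base is picked by clearing the lowest set bit of
# the predecessor count (the skip-delta rule).
COUNT=100000
CHAIN=0
while [ $COUNT -gt 0 ]; do
  COUNT=$(( COUNT & (COUNT - 1) ))  # skip-delta base
  CHAIN=$(( CHAIN + 1 ))
done
echo "reps opened with skip deltas: $CHAIN"     # 6 for COUNT=100000
echo "reps opened with a linear chain: 100000"  # one per change
]]]
With the shared rep at the start of each noderev history being
ignored, the per-file chains simply concatenate and we end up on
the linear side of that comparison.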
r1418963 fixes the problem for most cases.
-- Stefan^2.
[[[
#!/bin/sh
# globals
SVN="./subversion/svn/svn"
SVNADMIN="./subversion/svnadmin/svnadmin"
DATA="/dev/shm/data"
MAXCOUNT=100000
# prepare
rm -rf $DATA
mkdir $DATA
${SVNADMIN} create $DATA/repo
${SVN} co file://$DATA/repo $DATA/wc
# initial data
echo "1" > $DATA/wc/1
${SVN} add $DATA/wc/1
${SVN} ci $DATA/wc -m ""
# create long chain
PREV="1"
CURRENT=2
ODD=1
while [ $CURRENT -lt $MAXCOUNT ]; do
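  # Copy the file outside of svn, then svn rm the old name and svn add
  # the new one: an "add without history" with identical content, so
  # rep sharing points the new noderev at the previous file's rep.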
  cp $DATA/wc/$PREV $DATA/wc/$CURRENT
  ${SVN} rm $DATA/wc/$PREV -q
  ${SVN} add $DATA/wc/$CURRENT
  ${SVN} ci $DATA/wc -m "" -q
  echo " $CURRENT" >> $DATA/wc/$CURRENT
  ${SVN} ci $DATA/wc -m "" -q
  # maintenance every 100 iterations
  if [ $ODD -eq 0 ]; then
    # optional. reduces number of files
    ${SVNADMIN} pack $DATA/repo -q
    # limit size of pristines
    ${SVN} cleanup $DATA/wc
  fi
  PREV=$CURRENT
  CURRENT=`echo 1 + $CURRENT | bc`
  ODD=`echo $CURRENT % 100 | bc`
done
]]]