(this is mostly for Ben's benefit)

I ran a quick test to determine how expensive it would be to duplicate
the whole nodes table as a basis for creating local workspaces. I used a
wc.db from a checkout of the whole Subversion tree -- it's a couple of
months old, but quite representative, IMO, of the WC size that we're
likely to encounter in the wild. The results are not encouraging, and
it'll be fun trying to speed this up by a factor of 10. This is all on
an SSD with hot caches, by the way.

(Note that I already added the required record in the WCROOT table.)


-- Brane

$ time sqlite3 all-subversion.copy.db 'select wc_id, count(*) from nodes group by wc_id;'
1|456410

real    0m0.072s
user    0m0.060s
sys     0m0.010s

$ time sqlite3 all-subversion.copy.db 'insert into nodes select 2, local_relpath, op_depth, parent_relpath, repos_id, repos_path, revision, presence, moved_here, moved_to, kind, properties, depth, checksum, symlink_target, changed_revision, changed_date, changed_author, translated_size, last_mod_time, dav_cache, file_external, inherited_props from nodes where wc_id=1;'

real    0m16.183s
user    0m8.419s
sys     0m5.338s

$ time sqlite3 all-subversion.copy.db 'select wc_id, count(*) from nodes group by wc_id;'
1|456410
2|456410

real    0m0.126s
user    0m0.107s
sys     0m0.017s

$ time sqlite3 all-subversion.copy.db 'delete from nodes where wc_id=2;'

real    0m7.675s
user    0m4.798s
sys     0m2.489s

-- 
Branko Čibej | Director of Subversion
WANdisco | Realising the impossibilities of Big Data
e. br...@wandisco.com
