>> That was in an office environment. At home I use
>> fossil+(plan9port)venti running on linux-based NAS. This ends up
>> working very well for me since I have resources to spare on that
>> machine. This also lets me backup my arenas via CrashPlan. I use a
> 
> I am very interested to use such a setup. Could you please add more
> about the setup? What hardware do you use for the NAS? Any scripts
> etc?

(disclaimer: i am fairly new to the plan9 universe, so
my apologies for anything i missed.  most of this is 
from memory.)

my setup is slightly different, but similar in concept.  
my only machine is a mac, so i run plan9 under qemu.  i
have a p9p venti running on the mac with a few qemu vm's 
using it.

qemu also kind of solves the problem of the availability 
of compatible hardware for me.
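
for reference, a minimal qemu invocation for a plan9 vm
looks something like this (flags vary by qemu version, the
image name is made up, and the nic has to be one plan9 has
a driver for, e.g. the e1000/igbe):

qemu-system-i386 -m 512 \
    -drive file=9cpu.raw,format=raw \
    -netdev tap,id=net0 -device e1000,netdev=net0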

when i first installed plan9, i installed using the 
fossil+venti option.  when i wanted to expand beyond a
simple terminal, i decided to get venti out of the vm
and run it off the mac using p9p.  i also wanted to keep
the data from the original venti.

the old venti was ~8G, but used the default arena size of
512M.  only 4 of the previous arenas were sealed, with
the fifth active.  i also added another 10G worth of 
arenas at 1G each, so this is roughly what i did.
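
in case you're wondering about the dd counts below: they
are the raw arena space plus 772k, which is apparently what
fmtarenas eats for the partition header, i.e.

5*512*1024 + 772 = 2622212
10*1024*1024 + 772 = 10486532

so exactly five 512M arenas and ten 1G arenas fit.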

on plan9:

for (i in 0 1 2 3 4)
        venti/rdarena /dev/sdC0/arenas arenas0$i > /tmp/arenas0$i

on the mac:

dd if=/dev/zero of=arenas1.img bs=1024 count=2622212
dd if=/dev/zero of=arenas2.img bs=1024 count=10486532
dd if=/dev/zero of=isect.img bs=1024k count=1024
dd if=/dev/zero of=bloom.img bs=1024k count=128
fmtarenas arenas1 arenas1.img
fmtarenas -a 1024m arenas2 arenas2.img
fmtisect isect isect.img
fmtbloom -N 24 bloom.img
fmtindex venti.conf
buildindex -b venti.conf
$PLAN9/bin/venti/venti -c venti.conf

back to plan9:

for (i in /tmp/arenas*)
        venti/wrarena -h tcp!$ventiaddr!17034 $i
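
to sanity-check the replay, you can try reading back a
score that should now exist, e.g. the root score from
fossil's last archive (fossil/last on the fossil partition
prints it, if i remember right):

venti/read -h tcp!$ventiaddr!17034 $score > /dev/null && echo ok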

----------
i keep the files on an apple raid mirror for venti.  this 
is my venti.conf:

index main
arenas /Volumes/venti/arenas1.img
arenas /Volumes/venti/arenas2.img
isect /Volumes/venti/isect.img
bloom /Volumes/venti/bloom.img
mem 48m
bcmem 96m
icmem 128m
addr tcp!*!17034
httpaddr tcp!*!8000
webroot /Volumes/venti/webroot
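
for reference, mem, bcmem and icmem are the lump cache,
disk block cache and index cache sizes.  since httpaddr is
set, venti also serves a status interface; i think

curl http://$ventiaddr:8000/index

prints a text summary of the index and arenas.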

----------
the end result is that i had a 12.5G venti with a 1G isect
and all my previous data.  the venti memory footprint
is approximately 650M.  it works quite well for me.
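
if you want venti back automatically after a reboot, a tiny
wrapper script hooked into launchd (or whatever you prefer)
will do; the paths here are just examples:

#!/bin/sh
# start the p9p venti server with our config
PLAN9=/usr/local/plan9
export PLAN9
exec $PLAN9/bin/venti/venti -c /Volumes/venti/venti.conf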

in all instances below, i have adjusted my configuration
files to what one would use with separate machines.
we have static ip addressing here, so my config will
reflect that.  replace $ventiaddr with the ip address of
your p9p venti server, $gateway with your router, $ipmask
with your netmask, and $clientip with the address of the
machine(s) using the p9p venti.

my auth+fs vm (the main p9p venti client) has the usual
9fat, nvram, swap, and fossil partitions with plan9.ini
as follows:

bootfile=sdC0!9fat!9pccpu
console=0 b115200 l8 pn s1
*noe820scan=1
bootargs=local!#S/sdC0/fossil
bootdisk=local!#S/sdC0/fossil
nobootprompt=local!#S/sdC0/fossil
venti=tcp!$ventiaddr!17034 -g $gateway ether /net/ether0 $clientip $ipmask
sysname=p9auth
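
with made-up concrete addresses, that venti= line would
read:

venti=tcp!192.168.1.10!17034 -g 192.168.1.1 ether /net/ether0 192.168.1.20 255.255.255.0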

----------
the other cpu vm's pxe boot off the auth+fs vm using pxe
config files like this:

bootfile=ether0!/386/9pccpu
console=0 b115200 l8 pn s1
readparts=
nvram=/dev/sdC0/nvram
*noe820scan=1
nobootprompt=tcp
sysname=whatever
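
if i remember right, 9pxeload fetches these over tftp as
/cfg/pxe/<mac> on the fs server, mac address in lowercase
hex with no punctuation, so you end up with files like
/cfg/pxe/000c29a1b2c3 (made-up address).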

----------
since two of my cpu vm's have local fossils that i would like
to keep archived, i added the following to /bin/cpurc.local:

venti=tcp!$ventiaddr!17034

and then i start the fossils from /cfg/$sysname/cpurc.
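
the start line itself is nothing exotic; modulo the device
name it is essentially just:

fossil/fossil -f /dev/sdC0/fossil

fossil finds the venti server through that $venti variable,
which is why it has to be set before the fossils start.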

hope this is helpful in some way.

