I imagine that you would get problems with duplicates. You could duplicate
enough of the caches and internal structures that it would work, but then
you would be close to having two instances of venti anyway. But maybe, if
you removed the assumption that there are no duplicates, and changed venti
t
No need to be sorry. I've been looking at the code now and then, but
haven't really got the hang of the difference between the vac and venti
formats.
On Wed, Dec 13, 2017 at 1:03 AM, Steve Simon wrote:
> grief, sorry.
>
> what can i say, too old, too many kids. important stuff gets pushed out of
true
i have my old work archives and rather than merge that venti into my main one i
could just mount it via vacfs on demand (from 9fs). but can one venti cope with
two incompatible sets of arenas?
-Steve
> On 12 Dec 2017, at 21:58, Ole-Hjalmar Kristensen
> wrote:
>
> Strictly speaking, is
grief, sorry.
what can i say, too old, too many kids. important stuff gets pushed out of my
brain (against my will) to make room for the lyrics of “Let it go”.
> On 12 Dec 2017, at 21:40, Ole-Hjalmar Kristensen
> wrote:
>
> Yes, I know. I was thinking along the same lines a while ago, we ev
i think it's not being taken advantage of, rather than ability:
https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c
On Tue, Dec 12, 2017 at 11:38 AM Steven Stallion
wrote:
> I suspect the main
> culprit is the fact that 9p doesn't support multiple ou
Strictly speaking, isn't venti just content-addressable block storage, not
a file system? Anyway, I'm curious to know what you are going to use this
for.
On Tue, Dec 12, 2017 at 9:41 PM, Steve Simon wrote:
> Can a venti instance be configured to serve two separate venti
> filesystems? They
Yes, you'd better have high-endurance SSDs. I put the venti index at work on
an ordinary SSD, and it lasted six months. The log itself was fine, of
course, so I only had to rebuild the index to recover. This was plan9port
on Solaris, btw.
Now this venti runs on an ordinary disk, the speed is less, b
Yes, I know. I was thinking along the same lines a while ago, we even
discussed this here on this mailing list. I did some digging, and I found
this interesting comment in vac/file.c:
/*
 * Fossil generates slightly different vac files, due to a now
 * impossible-to-change bug, which contain
I have a similar setup. On my file server I have a mirrored pair of
high-endurance SSDs tied together via devfs with two fossil file
systems: main and other. main is a 32GB write cache which is dumped
each night at midnight (this is similar to the labs configuration for
sources). other is the remai
Can a venti instance be configured to serve two separate venti
filesystems? They would need different names and different listener
names, but can they share a process / buffer cache?
I guess the alternative is to run two separate instances.
-Steve
I can understand that it cannot fill up. What I do not understand is why
there are no safeguards in place to ensure that it doesn't. (And my inner
geek wants to know)
As you say, in reality it will not fill up unless you dump huge amounts of
data on it at once. Unfortunately, this is just what I in
The best solution (imho) for what you want to do is the feature I never added.
It would be great if you could vac up your linux fs and then just cut and
paste the vac score into fossil's console with a command like this:
main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux
the
"the fact that 9p doesn't support multiple outstanding"
that's not a sentence, but i'm not sure it's thus a joke.
Re: fossil
Fossil must not fill up; however, I would say the real failing was the lack
of clear documentation stating this.
Fossil has two modes of operation.
As a stand-alone filesystem, not really intended (I believe) as a production
system, more as a replacement for kfs - for laptops or insta
Same place as I found another useful script, dumpvacroots:
#!/bin/rc
# dumpvacroots - dumps all the vac scores ever stored to the venti server
# if nothing else, this illustrates that you have to control access
# to the physical disks storing the archive!
ventihttp=`{
echo $venti | sed 's/^[a
r
sorry I meant /sys/src/cmd/venti/words/dumpvacroots of course.
/sys/src/cmd/venti/words/printarenas
no idea why it lived there though.
-Steve
> On 12 Dec 2017, at 18:33, Ole-Hjalmar Kristensen
> wrote:
>
> Hmm. On both my plan9port and on a 9front system I find printarenas.c, but no
> script. Maybe you are thinking of the script for backup of individua
I ran back through my old notes. Turns out I inflated the numbers a
bit - it was about a week rather than a month. I suspect the main
culprit is the fact that 9p doesn't support multiple outstanding. I
wasn't in much of a hurry at the time, so I'm sure there are more
efficient ways than simply firi
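Back-of-the-envelope arithmetic makes the one-outstanding-request bottleneck concrete. The block size and round-trip time below are assumptions for illustration, not measurements from this thread:

```python
# Why one request in flight at a time hurts: transfer time is dominated
# by per-request latency, not bandwidth. All numbers are assumed.
block = 8 * 1024              # bytes fetched per read (assumed)
total = 30 * 2**30            # the 30GB import mentioned above
rtt = 0.02                    # assumed 20ms round trip per request

nblocks = total // block              # ~3.9 million requests
serial_days = nblocks * rtt / 86400   # one outstanding request at a time
pipelined_days = serial_days / 16     # with ~16 requests in flight
print(serial_days, pipelined_days)
```

Even at a modest 20ms per round trip this comes out near a full day serialized; keeping many reads in flight (as fcp does locally) divides that by the pipeline depth.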
Thanks for the tip about mounting with 9fs. I have used vacfs on Linux,
though.
But why so slow? Did you import a root with lots of backup versions? It was
partly because of that I made this client which can import venti blocks
without needing to traverse a file tree over and over again.
On Tue,
Hmm. On both my plan9port and on a 9front system I find printarenas.c, but
no script. Maybe you are thinking of the script for backup of individual
arenas to file? Yes, that could be a starting point.
Anyway, printarenas.c doesn't look too scary, basically a loop checking all
(or matching) arenas.
It depends - the 30GB I was mentioning before was from an older Ken's
fs that I imported with a modified cwfs. Rather than deal with all of
the history, I just took a snap with vac -s of the latest state of the
file system. I keep the original dump along with the cwfs binary in
case I ever need to
Interesting.
how did you do the import? did you use vac -q and vac -d previous-score for each
imported day to try and speed things up?
Previously I imported stuff into venti by copying it into fossil first
and then taking a snap. I always wanted a better solution, like being able
to use vac and t
Get ready to wait! It took almost a month for me to import about 30GB
from a decommissioned file server. It was well worth the wait though -
if you place the resulting .vac file under /lib/vac (or
$home/lib/vac) you can just use 9fs to mount with zero fuss.
On a related note, once sources star
printarenas is a script - it walks through all your arenas at each offset.
You could craft another script that remembers the last arena and offset you
successfully
transferred and only send those after that.
I think there is a pattern where you can save the last arena,offset in the local
fossil.
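The remember-where-you-stopped idea can be sketched as follows. This is a hypothetical Python stand-in (file name, function names, and checkpoint format are all made up; a real script would drive venti/rdarena and venti/wrarena), and it assumes arena names sort lexicographically in transfer order:

```python
# Resumable transfer: remember the last (arena, offset) sent successfully
# and skip everything up to that checkpoint on the next run.
import json
import os

CHECKPOINT = "lastsent.json"    # hypothetical checkpoint file

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return tuple(json.load(f))
    return ("", -1)                     # nothing sent yet

def save_checkpoint(arena, offset):
    with open(CHECKPOINT, "w") as f:
        json.dump([arena, offset], f)

def transfer(arenas, send):
    """arenas: list of (name, offset) pairs in transfer order."""
    last = load_checkpoint()
    for name, offset in arenas:
        if (name, offset) <= last:      # already sent on a previous run
            continue
        send(name, offset)
        save_checkpoint(name, offset)
```

Run it twice and the second pass sends only the arena offsets added since the first.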
Based on copy.c and readlist.c, I have cobbled together a venti client to
copy a list of venti blocks from one venti server to another. I am thinking
of using it to incrementally replicate the contents of one site to
another. It could even be used for two-way replication, since the CAS and
ded
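The shape of such a copy client, reduced to a toy: given a list of scores, read each block from a source and write it to a destination, skipping what the destination already has. This is a Python stand-in using dicts for the two servers (not the actual copy.c/readlist.c code), but it shows why content addressing makes the replication idempotent and therefore safe to run in both directions:

```python
# Copy a list of content-addressed blocks between two stores,
# skipping blocks the destination already holds. Dicts stand in
# for the two venti servers in this illustrative sketch.
import hashlib

def score(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def copy_blocks(scores, src, dst):
    """scores: scores to replicate; src, dst: score -> block bytes."""
    copied = 0
    for s in scores:
        if s in dst:        # already replicated (or written locally): skip
            continue
        dst[s] = src[s]
        copied += 1
    return copied
```

Because a score determines its block, re-sending a block both sides have changes nothing, which is what makes two-way replication workable.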
Has anyone looked into porting Plan 9 to VoCore2 (mips)?
I bought one earlier this year, but only recently started playing with it.
It is well made and performance with OpenWrt is reasonable.
It is claimed to be open source. Board schematics, OpenWrt and LEDE project
code and drivers are on githu