Re: [9fans] Re: venti/mirrorarenas usage

2024-08-01 Thread wb . kloke
On Tuesday, 30 July 2024, at 7:29 AM, Marco Feichtinger wrote:
> So I am curious: how does it work, how does one set it up so the arenas get mirrored automatically, and why do you use it instead of fs(3) mirror?
Adding to this ill-fated thread. Mirroring venti arenas was just a stillborn

Re: [9fans] mk results in rc: suicide:

2024-07-09 Thread wb . kloke
On Sunday, 7 July 2024, at 8:23 PM, Marco Feichtinger wrote:
> No swap.
Are you sure that this is not your problem?

Re: [9fans] mk results in rc: suicide:

2024-07-07 Thread wb . kloke
What about swap memory? I wonder why the process works if slowed down by manual scrolling.

[9fans] Re: duplicate scores in venti arenas

2024-07-06 Thread wb . kloke
The main reason for the remaining mventi issues was that I ignored the message about clump miscalculation in one arena. Why it is not really destructive in the original venti, I don't know. So, if this message occurs, mventi will not serve the venti protocol. Clump miscalculation indicates

Re: [9fans] mk results in rc: suicide:

2024-07-04 Thread wb . kloke
The symptoms look like a disk-full error.

[9fans] Re: duplicate scores in venti arenas

2024-07-02 Thread wb . kloke
I now have a vague idea of what may have happened. The really strange thing is that I did not write to this venti at all (at least not intentionally). I tried to repair the config by deleting the last arena partition, formatting a new one, and running buildindex from scratch. After some time, when I

Re: [9fans] Plan 9 from User Space still maintained?

2024-06-23 Thread wb . kloke
For FreeBSD/plan9port users: I cobbled together a geom module to serve an ffs filesystem backup created by p9p vbackup as a read-only disk device on FreeBSD. The source and a Read.me file are at http://pkeus.de/~wb/vgate

[9fans] duplicate scores in venti arenas

2024-06-22 Thread wb . kloke
Working on my data set, I found several duplicate entries in the arenas. I am confident that the data are consistent, though. Under which circumstances can this happen more than once? The arenas have been written by the p9p venti server on my wdmycloud NAS. Perhaps I

Re: [9fans] Plan 9 from User Space still maintained?

2024-06-21 Thread wb . kloke
On Friday, 21 June 2024, at 11:47 AM, hruodr wrote:
> Thanks! There is in FreeBSD a plan9port port and package:
The port may be OK, but the package is deficient. If you try to use the rc from the package, you will find a very inconvenient place for rcmain compiled in. So this rc is not usable

Re: [9fans] Plan 9 from User Space still maintained?

2024-06-21 Thread wb . kloke
*** 9pserve.c	Fri Mar  1 15:46:35 2024
--- /home/wb/plan9port/src/cmd/9pserve.c	Fri Jun 21 09:43:38 2024
***************
*** 86,92 ****
  Queue *inq;
  int verbose = 0;
  int logging = 0;
! int msize = 8192+24;
  u32int xafid = NOFID;
  int attached;
  int versioned;
--- 86,92 ----

Re: [9fans] yet another try to fixup venti

2024-06-20 Thread wb . kloke
After managing to avoid storing the full scores in main memory, I am confident that the files at http://pkeus.de/~wb/mventi are worth a try for the public. So far, I don't see many restrictions on use. The arena on-disk format is unchanged. I now see chances to further shorten the trienodes

[9fans] Re: Plan 9 from User Space still maintained?

2024-06-20 Thread wb . kloke
I am actively using plan9port on FreeBSD 14. So far, I have found some programs which do not work: some servers where threadmaybackground() should be included, vacfs and vnfs (as I reported under issues on the plan9port GitHub). 9pfuse is also not expected to be usable, as the fuse kernel and the

Re: [9fans] yet another try to fixup venti

2024-06-16 Thread wb . kloke
Some news on my effort. This morning I used my venti to do real work, serving the fossil filesystem to boot a 386 VM. So far, it looks good. I have not tried to write yet. I changed my trie.c for some optimisations: I ditched my union trienode for separate struct trieleaf and struct trienode
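A minimal sketch, in Plan 9 C, of what such a split might look like; the field names, the nibble fan-out, and the leafmask field are illustrative assumptions, not the actual trie.c code:

	#include <u.h>
	#include <libc.h>

	typedef struct Trieleaf Trieleaf;
	typedef struct Trienode Trienode;

	/* leaf: one stored block, keyed by its full score */
	struct Trieleaf {
		uchar	score[20];	/* venti score (SHA-1) */
		u64int	addr;		/* clump address in the arenas */
	};

	/* inner node: fan out on the next nibble of the score */
	struct Trienode {
		void	*child[16];	/* Trienode* or Trieleaf*, nil if absent */
		u16int	leafmask;	/* bit i set: child[i] is a Trieleaf */
	};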

Re: [9fans] yet another try to fixup venti

2024-06-13 Thread wb . kloke
I updated http://pkeus.de/~wb/mventi, adding a file mventi.c, which is venti.c plus parts of index.c (to avoid linking the real index.o). I have some performance data now. I had to add a fourth arena partition, which sealed the old partitions, so read-only access is sufficient for them. I served the

Re: [9fans] yet another try to fixup venti

2024-06-13 Thread wb . kloke
On Thursday, 13 June 2024, at 6:08 AM, ori wrote:
> Sounds fairly interesting, though I'm curious how it compares; my guess was that because of the lack of locality due to using hashes for the score, a trie wouldn't be that different from a hash table.
You are right. Lack of locality is a main

[9fans] Re: yet another try to fixup venti

2024-06-12 Thread wb . kloke
For now, I have failed to push my changes to GitHub. If anybody is interested in the files, they are available at http://pkeus.de/~wb/mventi. Copy the files into $PLAN9/src/cmd/venti/srv and try "mk o.buildtrie". I also inserted my code into index.c to check that index lookup gives the same

[9fans] yet another try to fixup venti

2024-06-11 Thread wb . kloke
After studying Steve Stallion's SSD venti disaster, I decided to make my own attempt to fix the issues of venti. Despite my reservations about the lasting wisdom of some of the design choices, I am trying to keep the traditional arena disk layout. Only the on-disk index is replaced with a trie-based

Re: [9fans] Throwing in the Towel

2024-05-29 Thread wb . kloke
> i'm curious what straightforward storage structure wouldn't be. trying to second-guess ssd firmware seems a tricky design criterion.
Designing for minimal disk stress: never rewrite data already written to disk. Now we have big and quite cheap main memory. I don't criticize the

Re: [9fans] Throwing in the Towel

2024-05-28 Thread wb . kloke
For the napkin calculation: on disk, an IEntry is 38 bytes. Alas, writes always occur in the SSD-internal block size. So essentially (assuming a 4096-byte block size, which is quite optimistic), we have a write efficiency of less than 1 percent. Good firmware in the SSD could avoid needing a
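The figure checks out as simple division; a small C sketch, where the 4096-byte block size is the post's stated (optimistic) assumption:

	#include <stdio.h>

	int main(void)
	{
		double ientry = 38.0;	/* on-disk IEntry size in bytes, from the post */
		double block = 4096.0;	/* assumed SSD-internal write block size */

		/* each 38-byte entry update rewrites a whole block */
		printf("write efficiency: %.2f%%\n", 100.0*ientry/block);	/* ~0.93% */
		return 0;
	}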

Re: [9fans] Throwing in the Towel

2024-05-26 Thread wb . kloke
I would like to refresh my questions from May 4th. Can it be the case that the venti index file exhibits a disastrous write pattern for SSDs? I presume that each new block written to venti causes a random block to be rewritten in the index file, until the bucket is full (after 215 writes).
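The 215 figure is consistent with 38-byte index entries packed into an 8192-byte bucket; the bucket size here is an assumption about the index block size, not stated in the post:

	#include <stdio.h>

	enum {
		IEntrySize = 38,	/* on-disk index entry size, from the thread */
		BucketSize = 8192	/* assumed venti index block (bucket) size */
	};

	int main(void)
	{
		/* entries that fit before the bucket is full */
		printf("%d\n", BucketSize/IEntrySize);	/* prints 215 */
		return 0;
	}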

Re: [9fans] fossil [was: List of companies that use Plan 9.]

2024-05-17 Thread wb . kloke
Just a simple note. When I compared the fossil version posted by Moody in the original discussion thread to the one I am using (and IIRC it is the one in the 9legacy git repository), I found that they differed in two points. One was the increase of a msg buffer, which is probably no big issue,

Re: [9fans] Interoperating between 9legacy and 9front

2024-05-09 Thread wb . kloke
Installing fossil on 9front is not really difficult. Fossil is just a userland server, which probably can even be copied as a binary, as long as the CPU is the same. Here is the hostowner factotum/ctl readout from the auth server:

key proto=p9sk1 user=bootes dom=fritz.box !password?
key

Re: [9fans] Interoperating between 9legacy and 9front

2024-05-09 Thread wb . kloke
I am using fossil on plan9port (which should be similar to 9legacy) from 9front. The only things I needed were to enable p9sk1 for the hostowner on 9front (the auth server) and a factotum entry for this on the file server, IIRC.

Re: [9fans] Throwing in the Towel

2024-05-04 Thread wb . kloke
IMHO, we would like some more info about what happened. 1) Was it the fossil write buffer? 2) Was it the venti index? We can safely exclude the venti arenas, can't we? As you mentioned a second set of SSDs: how long did the last one hold? Why is the whole set affected? RAID-x?

[9fans] drawterm factotum interference in 9front

2024-04-30 Thread wb . kloke
When I use drawterm to access 9front from FreeBSD with a running factotum, no additional user identification is needed. This is fine as long as I do not try to use another identity. Factotum seems to override any -u user option in the drawterm command. It always logs me in as myself, even if I

Re: [9fans] strange tls problem on 9front

2024-04-28 Thread wb . kloke
On Sunday, 28 April 2024, at 5:32 PM, cinap_lenrek wrote:
> because the namespace from bootrc is not inherited. init creates a completely new namespace using /lib/namespace from your root file-system:
Thank you. I copied /lib/namespace to the file server. Now it works as it should.

Re: [9fans] strange tls problem on 9front

2024-04-28 Thread wb . kloke
On Saturday, 27 April 2024, at 11:49 PM, cinap_lenrek wrote:
> bind -a #a /net
I just inserted this line into termrc after the IP init. Now I have /net/tls, but

aux/listen1 'tcp!*!rcpu' /rc/bin/service/tcp17019

still does not allow drawterm access.

Re: [9fans] strange tls problem on 9front

2024-04-28 Thread wb . kloke
On Saturday, 27 April 2024, at 11:49 PM, cinap_lenrek wrote:
> i suppose the following is missing in your /lib/namespace:
> bind -a #a /net
This binding has to be very early to be effective. It is done in /sys/src/9/boot/bootrc. Why does it disappear when the filesystem is not local?

[9fans] strange tls problem on 9front

2024-04-27 Thread wb . kloke
I am experiencing a strange problem. When I try to boot my 9front system using a file server over TCP, I get no /net/tls. The same kernel booting on a local hjfs filesystem has it. I think the files in both systems are also the same. I can drawterm only to the latter configuration.

Re: [9fans] openat()

2024-04-10 Thread wb . kloke
Some replies seem to consider openat() superfluous. In fact, IMHO the traditional syscall open() should be deprecated. In FreeBSD at least, the libc open() does in fact use the real thing, __sys_openat(), in /usr/src/lib/libc/sys/open.c via the interposing table. Some days ago, I experimented
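For readers unfamiliar with the relationship: open() is expressible as openat() with AT_FDCWD, which is why a libc can route the old call through the new syscall. A minimal standard C illustration (not the FreeBSD libc source):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* these two calls are equivalent: AT_FDCWD resolves a
		   relative path against the current working directory */
		int fd1 = open("file.txt", O_RDONLY);
		int fd2 = openat(AT_FDCWD, "file.txt", O_RDONLY);

		if(fd1 >= 0)
			close(fd1);
		if(fd2 >= 0)
			close(fd2);
		return 0;
	}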

Re: [9fans] 9front: fossil broken in Humanbiologics rel.

2023-12-15 Thread wb . kloke
Solved. Noam was right. Restarting venti made the thing work again.

Re: [9fans] 9front: fossil broken in Humanbiologics rel.

2023-12-14 Thread wb . kloke
The venti server is in active use, e.g. for providing my backups via vacfs and vnfs on FreeBSD. I even checked that a second fossil (this time on a Raspberry Pi 4) is working.

Re: [9fans] 9front: fossil broken in Humanbiologics rel.

2023-12-14 Thread wb . kloke
I have put a file containing the lstk output of the 4 threads at http://pkeus.de/~wb/threads.lstk. In the meantime, I tried an older 9front 9pc64 kernel, but this didn't help either. Unfortunately, I did not make a snapshot before the last sysupdate. The next thing I will try is setting up a fresh

Re: [9fans] 9front: fossil broken in Humanbiologics rel.

2023-12-13 Thread wb . kloke
Thank you for looking into my problem. As Ori asked, here is a stack trace from the probably hanging thread.

/proc/504/text:amd64 plan 9 executable
/sys/lib/acid/port
/sys/lib/acid/amd64
acid: lstk()
pread(a0=0x4)+0xe /sys/src/libc/9syscall/pread.s:6
read(buf=0x44f158,n=0x4)+0x27

[9fans] 9front: fossil broken in Humanbiologics rel.

2023-12-12 Thread wb . kloke
Until I upgraded my 9front, I was using a fossil server (from 9legacy) on my 9front FreeBSD bhyve system, probably 9front release Emailschaden. Now it doesn't work anymore. Remaking the programs did not help either. The fossil server now hangs immediately after start. Neither file nor console