Harald Barth wrote:
my name is Anders Magnusson and I work at Luleå University in Sweden.
Hi Ragge!
Hey Harald!
The test environment has been Heimdal 1.0 + OpenAFS 1.4.5; the servers were
Red Hat 4 (Dell 1950) and Solaris 10 (Sun V880). Clients were Red Hat and
Windows (using 1.5.28).
On Sol10, which file system do you have for /vicepX? I recommend
running the namei version of the server (which is not the default, for
historical reasons) because that makes it
underlying-file-system-agnostic.
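(Editorial aside: one quick way to tell which flavor wrote a partition is that
a namei fileserver keeps volume data as ordinary files under an AFSIDat tree
on each /vicepX, while the inode fileserver stores data directly in inodes. A
minimal sketch; the partition path is whatever you pass in:)

```shell
#!/bin/sh
# is_namei PARTITION: guess whether a /vicepX partition was written by a
# namei fileserver, which keeps volume data as plain files under AFSIDat/.
# An inode fileserver stores data directly in inodes, so no such tree exists.
is_namei() {
    if [ -d "$1/AFSIDat" ]; then
        echo "namei"
    else
        echo "inode (or empty partition)"
    fi
}

# Example: is_namei /vicepa
```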
I used plain UFS and the namei version of the server. This machine was
used only for testing purposes; we'll probably use only Red Hat machines
if we start using it in a production environment.
- What are the actual limitations on users/groups/volumes/sizes? I'm
interested in both the practical and the theoretical ones. I have tested
with 200k volumes/partition, which seems to work fine, with the exception
that zapping volumes takes 12 seconds each.
I also added 200k users and 200k groups, and tested access groups with
200k people in each, without any noticeable slowdown. Quite impressive :-)
I don't know if or how any problem here should be noted.
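(Editorial aside: a bulk test like the 200k users/groups above is easy to
script with the pts command-line tool. The names testuN/testgN and the owner
"admin" are made up for this sketch; it emits the commands as a dry run so
they can be reviewed before being piped to a shell against a real cell:)

```shell
#!/bin/sh
# gen_pts_cmds N: emit the pts commands that would create N test users and
# N test groups. The "testu"/"testg" names and "admin" owner are
# hypothetical; adjust before running for real.
gen_pts_cmds() {
    i=1
    while [ "$i" -le "$1" ]; do
        echo "pts createuser -name testu$i"
        echo "pts creategroup -name testg$i -owner admin"
        i=$((i + 1))
    done
}

# Review the output first, then run it for real:
# gen_pts_cmds 200000 | sh     # would issue 400k pts calls
```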
2TB/volume and 2TB/partition; see recent emails on this list. We have
not seen any practical limitation in the number of volumes and/or users
yet.
Ok, thanks. A related question: how does database replication work? The
databases became quite large, an 80MB prdb and a 40MB vldb. Will it copy
only changed entries, or may the whole database be copied?
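(Editorial aside: the replication in question is ubik's; each site's database
version and the current sync site can be inspected with udebug, which is often
the quickest way to see whether a replica has fallen behind. The ptserver
listens on port 7002 and the vlserver on 7003. A small wrapper, with
hypothetical host names in the example:)

```shell
#!/bin/sh
# check_ubik PORT HOST...: show each db server's ubik view for one
# database: whether it thinks it is the sync site, and its db version.
# prdb = port 7002 (ptserver), vldb = port 7003 (vlserver).
check_ubik() {
    port=$1; shift
    for host in "$@"; do
        echo "== $host =="
        udebug "$host" "$port" | grep -i -E 'sync site|db version'
    done
}

# Example (hypothetical hosts):
# check_ubik 7002 db1.example.com db2.example.com db3.example.com
```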
- What is a reasonable maximum on files/directory and mountpoints/directory?
I tested by adding 64k mountpoints (volumes) in one directory, and accessing
directories below did not become noticeably slower. The only peculiarity I
noted was that Windows Explorer only showed the first 1000 entries, but I
don't know if that is a bug in Explorer or in the AFS client.
The only place where you want many mountpoints is /home, and we have
solved that through /afs/pdc.kth.se/home/h/haba; others have used
/afs/kth.se/home/h/a/haba and so on.
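(Editorial aside: the one-level hashing scheme Harald describes is a tiny
path computation, sketched below; the follow-up comment shows how the
mountpoint itself would then be created with fs mkmount, with a hypothetical
volume name:)

```shell
#!/bin/sh
# home_mount CELL USER: compute a hashed home directory path in the style
# described above, e.g. /afs/pdc.kth.se/home/h/haba (first letter of the
# username as the bucket).
home_mount() {
    cell=$1; user=$2
    first=$(printf '%s' "$user" | cut -c1)
    echo "/afs/$cell/home/$first/$user"
}

# The mountpoint would then be created with something like
# (volume name user.haba is hypothetical):
# fs mkmount -dir "$(home_mount pdc.kth.se haba)" -vol user.haba
```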
Yes, I've seen that, and I also saw some recommendations about keeping a
small number of entries in each directory, but is this still an issue today?
- A few problems with the Windows manager tools were also noted; it may
just be me who hasn't really understood how it all works :-)
In the server manager, running the salvager failed with the error message:
The AFS server manager was unable to perform the requested salvage
operation.
Error: unable to successfully read log file (0x0000422A)
I think I can figure out what's wrong if you describe what you actually
tried to do :)
Simple: create a volume, remove it from the vldb, and then try to salvage
the volume that the server manager complains about :-)
I just noticed that I couldn't create a volume in the server manager either:
The AFS Server Manager was unable to create volume gurka on partition
/vicepa of server apa
Error: not synchronization site (should work on sync site)(0x00001501)
Also, moving volumes does not work, but I get no error message: ``nothing
happens''.
That is probably some setup error, but setup can be a bit hairy
because of the documentation that does not exist.
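(Editorial aside: since the GUI hides the underlying errors, the command-line
tools are a useful cross-check for both failures above. The server "apa",
volume "gurka", and partition /vicepa are from this thread; the move targets
"apa2"/"/vicepb" are made up. Wrapped in a function so it is only a sketch
until called:)

```shell
#!/bin/sh
# cli_volume_check: reproduce the GUI's create/move through vos, which
# reports ubik errors such as "not synchronization site" directly.
# Server apa, volume gurka, /vicepa are from the thread above;
# apa2 and /vicepb are hypothetical move targets.
cli_volume_check() {
    # Which db server is the vldb sync site? (vlserver listens on 7003)
    udebug apa 7003 | grep -i 'sync site'
    # Create the volume directly:
    vos create -server apa -partition /vicepa -name gurka
    # Move it, with -verbose so "nothing happens" at least says why:
    vos move -id gurka -fromserver apa -frompartition /vicepa \
             -toserver apa2 -topartition /vicepb -verbose
}
```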
Yeah :-/ Fortunately the old IBM documentation contains a lot of still-usable
stuff.
I must say that we are all very impressed by how well it has worked so far!
Great!
*smile*
Harald.
-- Ragge
_______________________________________________
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info