[OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz
Hi folks!

I would like to try tuning the speed of my openafs installation, but the only
information I could find via Google is this rather old thread
(http://www.openafs.org/pipermail/openafs-info/2003-June/009753.html) and the
hint to use a big cache partition.

For comparison I've created files with random data and different sizes (1MB,
2MB, 4MB, 8MB, 16MB, 32MB, 64MB and 128MB) on my local disk.
I copied them into AFS and then I copied them to the same disk on the same host
via scp (without compression). I've done that 10 times and computed the average.
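
For reference, the measurement was essentially a loop like this (a sketch;
the file name, target host and cell name are placeholders):

  # time 10 copies of a test file into AFS and via scp, then average
  for i in $(seq 1 10); do
      /usr/bin/time -f "%e s" cp testfile-128M /afs/cellname/tmp/
  done
  for i in $(seq 1 10); do
      /usr/bin/time -f "%e s" scp -o Compression=no testfile-128M localhost:/tmp/
  done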
For the 1MB file AFS is slightly faster than scp (taking about 0.89 of scp's
time). For the 2 and 4MB files AFS needs about 1.4 times as long as scp. For
the 8, 16 and 32MB files the factor is about 2.7, and for the 64 and 128MB
files it is about 3.3.
 
I've already tried bigger cache partitions, but it does not make a difference.
Are there tuning parameters that tell the system a threshold for the size of
files beyond which data won't be written to the cache?

Greetings Kai Moritz


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz
> What are your data rates in MB/s?
scp says: 4.6MB/s

> If you are on a fast network (Gbit Ethernet, InfiniBand ...) a disk cache
> may be remarkably slower than the network. In this case a memory cache can
> help.
I haven't tried that yet, because the file /etc/openafs/afs.conf of
my Debian Etch installation contains a comment that says:

# Using the memory cache is not recommended.  It's less stable than the disk
# cache and doesn't improve performance as much as it might sound.

 
> Another point is chunk size. The default (64 KB) is bad for reading,
> where each chunk is fetched in a separate RPC. With a disk cache, bigger
> chunks (1 MB) can be recommended anyway. For a memory cache of, say, 64
> MB you would limit the number of chunks to only 64, which is certainly
> too low.

With automatically chosen values, writing a 128 MB file into AFS takes
about 44-45 seconds (roughly 2.9 MB/s).
On that machine I have a 3 GB cache.
With the following options, which I have taken from an example in a Debian
config file, writing the 128 MB file takes about 48 seconds :(

-chunksize 20 -files 8 -dcache 1 -stat 15000 -daemons 6 -volumes 500 -rmtsys
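
For context: afsd's -chunksize argument is the base-two logarithm of the
chunk size in bytes, so -chunksize 20 requests 2^20-byte (1 MB) chunks. On
Debian such options typically end up in /etc/openafs/afs.conf, roughly like
this (a sketch; the variable name is an assumption and the values are just
the ones quoted above, not a recommendation):

  # /etc/openafs/afs.conf (sketch; values copied from above for illustration)
  # -chunksize 20 => 2^20 bytes = 1 MB chunks
  OPTIONS="-chunksize 20 -files 8 -dcache 1 -stat 15000 -daemons 6 -volumes 500 -rmtsys"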

Greetings kai


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz
> memcache is much faster than the disk cache.  memcache will not get any
> better if no one ever uses it so that the openafs developers can get some
> bug reports.
That's true, but I cannot annoy my users with machines starved of memory...
Hence, I can only run that on test machines.

Greetings kai



Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz

> * slowest: disk cache, of course.
> * medium: memory cache
> * fastest: ufs filesystem on a lofi-mounted block device hosted in /tmp
>   (which is in-RAM)
>   (I know this certainly wastes some cpu/memory resources and overhead,
>   but... it works)
 
That sounds interesting!
I will give a ramdisk a try on some test-machines and report...
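
On Linux, the analogous trick might look like this (a sketch; paths and the
512 MB size are placeholders, and the cacheinfo line assumes the Debian
layout):

  # ext2-on-RAM cache backing via tmpfs + a loop device
  # (sketch; paths and sizes are placeholders)
  dd if=/dev/zero of=/dev/shm/afscache.img bs=1M count=512
  mke2fs -F /dev/shm/afscache.img
  mount -o loop /dev/shm/afscache.img /var/cache/openafs
  # /etc/openafs/cacheinfo then points at the mount, e.g.:
  # /afs:/var/cache/openafs:400000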

Greetings Kai


Re: [OpenAFS] Resizing an ext2-partition on linux

2006-11-19 Thread Kai Moritz

Marcus Watts wrote:

> There's an easier openafs thing you could do:
>
> make a new partition
> mkfs the new partition
> mount it as /vicepb (or whatever the next free letter is)
> stop the openafs-fileserver
> start the openafs-fileserver
>
> Unless you need to create afs volumes or files on the fileserver
> that are somewhere near the total size of the volume, there's no real
> advantage to having a single really big filesystem.  Having multiple smaller
> partitions should make life easier for the salvager, fsck, etc.

Yes, that's true and I am aware of it. But at the moment I need to
create one big volume, because I have to copy a big bunch of old
user directories to the new fileserver, in order to turn off the old
nfs-server. Most of the directories will not be used in the future, so it
would be a lot of work to split them up into smaller volumes. I'm planning
to turn the old nfs-server into an additional afs-fileserver and then
move the directories still in use to smaller volumes...



Greetings Kai Moritz


[OpenAFS] Resizing an ext2-partition on linux

2006-11-18 Thread Kai Moritz

Hi folks!

My /vicepa lives on an ext2-formatted lvm partition. Now I have to
resize it. Normally I would call fsck.ext2 -f on the partition
(because resize2fs requires it), enlarge the lvm partition and call
resize2fs. But the AFS Administration Guide says that I have to use a
special AFS version of fsck. Unfortunately I can't find that binary in my
hand-compiled debian package (I just grabbed the source packages for
1.4.1 from Ubuntu Edgy and compiled them for Debian sarge). I can't find
it (or its source code) in the openafs-1.4.1 source directory either.


Greetings Kai Moritz


Re: [OpenAFS] Resizing an ext2-partition on linux

2006-11-18 Thread Kai Moritz

Chris Huebsch wrote:

> IIRC the Admin Guide says that only for inode-based fileservers (which
> manipulate inode structures themselves instead of using normal
> file operations). Name-based fileservers just create normal files and do
> not do any nasty things with inodes. For Linux there is only a
> name-based fileserver available. So you can use all fs tools of your
> distribution.


Good news :)!

But I am a little bit confused. Since installation, I have restarted my
fileserver only once. And although it looked like a clean shutdown,
there were a lot of complaints from the automatic fsck.ext2 at reboot.
Afterwards the salvager was complaining like this:

-
11/06/2006 10:46:22 STARTING AFS SALVAGER 2.4 (/usr/lib/openafs/salvager)
11/06/2006 10:46:22 Starting salvage of file system partition /vicepa
11/06/2006 10:46:22 SALVAGING FILE SYSTEM PARTITION /vicepa (device=vicepa)
11/06/2006 10:46:23 7 nVolumesInInodeFile 196
11/06/2006 10:46:23 CHECKING CLONED VOLUME 536870916.
11/06/2006 10:46:23 root.afs.readonly (536870916) updated 11/03/2006 16:50
11/06/2006 10:46:23 SALVAGING VOLUME 536870915.
11/06/2006 10:46:23 root.afs (536870915) updated 11/03/2006 16:51
11/06/2006 10:46:23 Vnode 2: version < inode version; fixed (old status)

[...]

11/06/2006 10:46:23 Vnode 338: version < inode version; fixed (old status)
11/06/2006 10:46:23 Vnode 340: version < inode version; fixed (old status)
11/06/2006 10:46:23 Vnode 342: version < inode version; fixed (old status)
11/06/2006 10:46:23 totalInodes 180
11/06/2006 10:46:23 Salvaged root.afs (536870915): 172 files, 179 blocks
11/06/2006 10:46:23 SALVAGING VOLUME 536870927.
11/06/2006 10:46:23 user.ls3.moritz (536870927) updated 11/03/2006 22:55
11/06/2006 10:46:23 Vnode 16: version < inode version; fixed (old status)
11/06/2006 10:46:23 Vnode 76: version < inode version; fixed (old status)

[...]

11/06/2006 10:46:23 Vnode 5142: version < inode version; fixed (old status)
11/06/2006 10:46:23 Vnode 5142: length incorrect; changed from 0 to 925696
11/06/2006 10:46:23 Vnode 5144: version < inode version; fixed (old status)
11/06/2006 10:46:23 Vnode 5256: version < inode version; fixed (old status)
11/06/2006 10:46:23 totalInodes 1895
11/06/2006 10:46:24 dir vnode 767: special old unlink-while-referenced file .__afsD04 is deleted (vnode 4984)
11/06/2006 10:46:24 Salvaged user.ls3.moritz (536870927): 1890 files, 1872274 blocks
11/06/2006 10:46:24 SALVAGING OF PARTITION /vicepa COMPLETED
-

So I assumed that the automatic fsck.ext2 messed up something that the
salvager had to correct again afterwards.
Perhaps the shutdown was not as clean as I thought it was. I have
to check that!


Greetings Kai Moritz


Re: [OpenAFS] Resizing an ext2-partition on linux

2006-11-18 Thread Kai Moritz

Chris Huebsch wrote:

> On Sat, 18 Nov 2006, ted tcreedon wrote:
>
>> Why not use ext3 or reiser?
>
> Ext3 is in fact possible, and I do use it on my vice partitions.
>
> [...]

Anyway, ext3 would not solve my problem. I have to check the
vice partition by hand (via fsck.ext2 -f), because resize2fs forces me
to do so (otherwise it refuses to resize the partition, because of the
possibility of data loss).


So, that's my plan:

- stop the openafs-fileserver
- unmount the vice-partition
- check it via linux-fsck
- enlarge the partition
- resize it with resize2fs
- mount it again
- restart the openafs-fileserver


That's exactly what I would do with a normal partition (except for
stopping/starting the openafs-fileserver).
Is that the correct way? Or are there any special openafs things that I
should do additionally?
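
Spelled out as commands, the plan might look like this (a sketch; the volume
group/logical volume names and the +20G increment are placeholders):

  # sketch; device names and the size increment are placeholders
  /etc/init.d/openafs-fileserver stop
  umount /vicepa
  fsck.ext2 -f /dev/vg0/vicepa       # forced check, required by resize2fs
  lvextend -L +20G /dev/vg0/vicepa   # enlarge the LVM logical volume
  resize2fs /dev/vg0/vicepa          # grow ext2 into the new space
  mount /vicepa
  /etc/init.d/openafs-fileserver start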


Greetings Kai Moritz


Re: [OpenAFS] Cache Manager drops tickets

2006-11-17 Thread Kai Moritz

Russ Allbery wrote:

> Kai Moritz [EMAIL PROTECTED] writes:
>
>> Is there a workaround, or a newer version of libpam-openafs-session?
>
> Not yet, but that's what I'm working on right now.
>
> You can build http://www.eyrie.org/~eagle/software/pam-afs-session/ by
> hand and use it as a replacement, but it's not packaged for Debian yet.
> For the etch release, I'm probably going to take the syscall layer for
> Linux out of that package and patch it into libpam-openafs-session, and
> then do the actual migration for the next Debian release.
>
> I'm hoping to have new packages uploaded to unstable by the end of the
> weekend.


Thanks for the information! I think I will give your new version a try.
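
(Building pam-afs-session by hand is presumably the usual Autoconf sequence;
this is an assumption on my part, not spelled out in the thread, so check the
package's own README:

  # assumed build steps; not taken from this thread
  ./configure
  make
  make install    # installs the PAM module, typically under /lib/security
)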

Best Regards

Kai Moritz




Re: [OpenAFS] Cache Manager drops tickets

2006-11-16 Thread Kai Moritz

Russ Allbery wrote:

>> anymore.  I am a bit confused, because the problem only arises on this
>> one other machine, which is a Debian testing/unstable mix (with current
>> openafs 1.4.2 packages from unstable). On all other machines (Debian
>> sarge and one Kubuntu edgy, all with hand-compiled openafs 1.4.1
>> debian packages, taken from Kubuntu Edgy) libpam-openafs-session works
>> well, though the version of the source for libpam-openafs-session is
>> 1.0 on all systems!  The user reports that he always loses his
>> afs-tokens if he does a su to become root. Also, the tokens get lost
>> after some time anyway...
>
> aklog -setpag stopped working in 1.4.2, which is probably the difference.


Is there a workaround, or a newer version of libpam-openafs-session?
I tried to find one via Google, but the only hits I get for
libpam-openafs-session are debian package sites. Perhaps the easiest
workaround is to downgrade that client to 1.4.1...


Greetings Kai Moritz


Re: [OpenAFS] Cache Manager drops tickets

2006-11-15 Thread Kai Moritz

Russ Allbery wrote:

> Are you using libpam-openafs-session?  If so, try removing it temporarily
> from your /etc/pam.d/common-session and see if your tokens now stick
> around.  It's a little too aggressive about deleting tokens in my
> experience.

Yes, I am using libpam-openafs-session, and your suggestion solves my 
problem, thanks :)


But now the problem arises on one other machine. Unlike brunhild, this
machine is not a test box. So I cannot just remove
libpam-openafs-session, because then the user cannot log in normally
anymore.
I am a bit confused, because the problem only arises on this one other
machine, which is a Debian testing/unstable mix (with current openafs
1.4.2 packages from unstable). On all other machines (Debian sarge and
one Kubuntu edgy, all with hand-compiled openafs 1.4.1 debian packages,
taken from Kubuntu Edgy) libpam-openafs-session works well, though the
version of the source for libpam-openafs-session is 1.0 on all systems!
The user reports that he always loses his afs-tokens if he does a su
to become root. Also, the tokens get lost after some time anyway...
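
(For what it's worth, AFS tokens live in a PAG, a process authentication
group, and the new PAM session opened by su is where libpam-openafs-session
can delete them. A quick way to observe the reported behaviour; the comments
describe the suspected mechanism, not a confirmed diagnosis:

  tokens    # as the user: the token for the cell should be listed
  su -      # opens a new PAM session; the session module may delete tokens here
  tokens    # after su: check whether the token survived
)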


By the way: I'm running the fileserver with version 1.4.1, but some
Linux clients with version 1.4.2 and all Windows clients with version
1.5.11. Is this legal, or may it corrupt the fileserver?


Greetings Kai Moritz


[OpenAFS] Cache Manager drops tickets

2006-11-13 Thread Kai Moritz

Hi folks!

I set up an openafs fileserver to replace our old nfs system. Everything
is running fine, but I have some strange problems while copying the old
user data to the new system:
I cannot install an openafs-client on the machine where the old data
lives (called zimt). So I simply logged into a machine with an
openafs-client (called brunhild) as root and got tickets for my
afs admin user via kinit afsadmin and aklog. Then I start an rsync on
zimt like this: rsync -av /home [EMAIL PROTECTED]:/afs/cellname/user. All
works fine in the beginning, but after some time (5-30 min) the rsync
command is interrupted because of access problems. Issuing tokens on
brunhild shows that the cache manager has dropped the token, although it
was not expired.
I cannot figure out why... There are no error messages in the
openafs log files. Time is synchronized, and another aklog fetches a new
token without any problems.
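
For reference, the sequence is roughly this (a sketch; root@brunhild stands
in for the redacted rsync target, and cellname for the real cell):

  # on brunhild: obtain a Kerberos ticket and an AFS token
  kinit afsadmin
  aklog
  tokens    # verify the token is present and note its expiry time

  # on zimt: push the old home directories into AFS
  rsync -av /home root@brunhild:/afs/cellname/user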

I am running openafs 1.4.1 with MIT Kerberos on a Debian Sarge system.

Greetings Kai Moritz
