[OpenAFS] Re: Setting up new cell on RHEL4 - some help needed

2007-08-23 Thread Dr A V Le Blanc
On Wed, 22 Aug 2007, Robert Sturrock wrote:
 In former times (when Linux was just born), the content of /afs was
 delivered by a volume itself (named root.afs). All this volume contained
 was mount points to the root.cell volumes of other cells. An admin had to
 maintain those mount points so that the users of the cell could browse to
 other cells.

 Later it became obvious that maintaining root.afs manually is a lot of
 work if you want to stay up to date. At that time dynroot was invented.

I'm still using a nice tool called ucsdb, which I presume came from
the University of California at San Diego.  It collects cell definitions
from a number of places, combines them, mounts root.afs read-write,
adds or deletes cell mount points, unmounts, and then releases the
volume.  Since you need about the same amount of effort to maintain the
CellServDB file, it's quite painless to maintain root.afs as well.
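For readers who haven't done this by hand, the cycle such a tool automates can be sketched with standard AFS commands. This is only a hedged illustration: the /afs/.root path and cell names are examples, not anything from the thread.

```shell
# Sketch of one root.afs update cycle (paths and cell names are examples).
fs mkmount /afs/.root root.afs -rw                 # temporary RW mount of root.afs
fs mkmount /afs/.root/grand.central.org root.cell -cell grand.central.org
                                                   # add a mount point for a new cell
fs rmmount /afs/.root/some.retired.cell            # drop a stale mount point
fs rmmount /afs/.root                              # remove the temporary RW mount
vos release root.afs                               # publish to the read-only replicas
```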

 -- Owen
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz
Hi folks!

I would like to try tuning the speed of my openafs installation, but the only 
information I could find by googling is this rather old thread
(http://www.openafs.org/pipermail/openafs-info/2003-June/009753.html) and the 
hint to use a big cache partition.

For comparison I've created files of random data in different sizes (1 MB, 
2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB and 128 MB) on my local disk.
I copied them into AFS, and then I copied them to the same disk on the same host 
via scp (without compression). I did that 10 times and computed the average.
For the 1 MB file AFS is slightly faster than scp (factor 0.89). For the 2 and 
the 4 MB files AFS needs about 1.4 times as long as scp. For the 8, 16 and 32 MB 
files the factor is about 2.7, and for the 64 and 128 MB files it is about 3.3.
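The comparison described above could be scripted roughly as follows. This is an illustrative sketch, not Kai's actual script: the AFS path, the target host, and the use of `time` are all assumptions (and the scp leg needs a reachable sshd).

```shell
# Rough sketch of the benchmark: create random files of doubling sizes,
# then time a copy into AFS versus an uncompressed scp of the same file.
for size in 1 2 4 8 16 32 64 128; do
    dd if=/dev/urandom of=/tmp/rand-${size}M bs=1M count=${size} 2>/dev/null
    time cp  /tmp/rand-${size}M /afs/example.com/tmp/rand-${size}M
    time scp -o Compression=no /tmp/rand-${size}M localhost:/tmp/scp-${size}M
done
```

Repeating the loop and averaging the wall-clock times would reproduce the factors quoted in the message.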
 
I've already tried bigger cache partitions, but it doesn't make a difference. 
Are there tuning parameters that tell the system a file-size threshold beyond 
which data won't be written to the cache?

Greetings Kai Moritz


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Hartmut Reuter

Kai Moritz wrote:

Hi folks!

I would like to try tuning the speed of my openafs installation, but
the only information I could google is this rather old thread 
(http://www.openafs.org/pipermail/openafs-info/2003-June/009753.html)

and the hint to use a big cache-partition.

For comparison I've created files of random data in different sizes
(1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB and 128 MB) on my local disk. I
copied them into AFS, and then I copied them to the same disk on the
same host via scp (without compression). I did that 10 times and
computed the average. For the 1 MB file AFS is slightly faster than
scp (factor 0.89). For the 2 and the 4 MB files AFS needs about 1.4
times as long as scp. For the 8, 16 and 32 MB files the factor is
about 2.7, and for the 64 and 128 MB files it is about 3.3.

I've already tried bigger cache partitions, but it doesn't make a
difference. Are there tuning parameters that tell the system a
file-size threshold beyond which data won't be written to the cache?

Greetings Kai Moritz


What are your data rates in MB/s?
If you are on a fast network (Gbit Ethernet, InfiniBand ...) a disk cache
may be noticeably slower than the network. In this case a memory cache can
help.


Another point is chunk size. The default (64 KB) is bad for reading, 
where each chunk is fetched in a separate RPC. With a disk cache, bigger 
chunks (1 MB) are recommended anyway. For a memory cache of, say, 64 
MB you would limit the number of chunks to only 64, which is certainly 
too low.
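The arithmetic behind these numbers is easy to check. Note that afsd's -chunksize flag takes the base-2 logarithm of the chunk size in bytes, so 16 means 64 KB (the default) and 20 means 1 MB:

```shell
# afsd -chunksize N selects chunks of 2^N bytes.
echo $((1 << 16))                            # 65536 bytes  = 64 KB (default)
echo $((1 << 20))                            # 1048576 bytes = 1 MB
# A 64 MB memory cache split into 1 MB chunks holds only 64 chunks:
echo $(( (64 * 1024 * 1024) / (1 << 20) ))   # 64
```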


Here ramdisks can help, because many of the chunks are filled with short 
contents such as directories and symbolic links. The additional 
overhead of going through the filesystem layer may be less than what you 
gain from bigger chunks. With a ramdisk, 1 MB chunks aren't too bad.


Hartmut

--
-
Hartmut Reuter   e-mail [EMAIL PROTECTED]
   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)   fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-


Re: [OpenAFS] Re: Setting up new cell on RHEL4 - some help needed

2007-08-23 Thread Derrick Brashear
On 8/23/07, Dr A V Le Blanc [EMAIL PROTECTED] wrote:

 On Wed, 22 Aug 2007, Robert Sturrock wrote:
  In former times (when Linux was just born), the content of /afs was
  delivered by a volume itself (named root.afs). All this volume contained
  was mount points to the root.cell volumes of other cells. An admin had to
  maintain those mount points so that the users of the cell could browse to
  other cells.
 
  Later it became obvious that maintaining root.afs manually is a lot of
  work if you want to stay up to date. At that time dynroot was invented.

  I'm still using a nice tool called ucsdb,

update cellservdb.  Tobias Schaefer wrote it.

  which I presume came from
  the University of California at San Diego.

Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz
 What are your data rates in MB/s?
scp says: 4.6MB/s

 If you are on a fast network (Gbit Ethernet, Inifiband ...) a disk cache
 may be remarkably slower than the network. In this case memory cache can 
 help.
I haven't tried that yet, because in the file /etc/openafs/afs.conf of
my Debian Etch installation there is a comment that says:

# Using the memory cache is not recommended.  It's less stable than the disk
# cache and doesn't improve performance as much as it might sound.

 
 Another point is chunk size. The default (64 KB) is bad for reading 
 where each chunk is fetched in a separate RPC. with disk cache bigger 
 chunks (1 MB) can be recommanded, anyway. For memory cache of, say, 64 
 MB you would limit the number of chunks to only 64 which is certainly 
 too low.

With automatically chosen values, writing a 128 MB file into AFS takes
about 44-45 seconds.
On that machine I have a 3 GB cache.
With the following options, which I have taken from an example in a Debian 
config file, writing the 128 MB file takes about 48 seconds :(

-chunksize 20 -files 8 -dcache 1 -stat 15000 -daemons 6 -volumes 500 -rmtsys

Greetings kai


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread chas williams - CONTRACTOR
In message [EMAIL PROTECTED], Kai Moritz writes:
I haven't tried that yet, because in the file /etc/openafs/afs.conf of
my Debian Etch installation there is a comment that says:

# Using the memory cache is not recommended.  It's less stable than the disk
# cache and doesn't improve performance as much as it might sound.

memcache is much faster than the disk cache.  memcache will also never get
any better if no one ever uses it and the openafs developers never get any
bug reports.  i think memcache has improved quite a bit over the last couple
of years (but it could be better; i need to submit some patches).

i use '-memcache -chunksize 15 -dcache 1024'.  if your system is memory
starved this might be an issue.
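The rough RAM footprint of those settings can be worked out: -chunksize 15 means 2^15 = 32 KB chunks, and -dcache 1024 keeps 1024 chunk entries resident, so the cache data alone is about 32 MB (plus per-chunk bookkeeping overhead):

```shell
# -memcache -chunksize 15 -dcache 1024: cache data footprint in bytes.
echo $(( (1 << 15) * 1024 ))   # 33554432 bytes = 32 MB
```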


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Robert Banz


memcache is much faster than the disk cache.  memcache will not get any
better if no one ever uses it so the openafs developers can get some
bug reports.  i think memcache has improved quite a bit (but it could
be better, i need to submit some patches) over the last couple years.

i use '-memcache -chunksize 15 -dcache 1024'.  if your system is memory
starved this might be an issue.


I did a whole bunch of testing regarding cache performance while
we've been moving all of our users off of AFS-hosted mailspools, and
here's what I've found -- this is on Sol 10 x86...

* slowest: disk cache, of course.
* medium: memory cache
* fastest: ufs filesystem on a lofi-mounted block device hosted in /tmp
  (which is in-RAM)
  (I know this certainly wastes some cpu/memory resources and
  overhead, but... it works)
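The lofi arrangement might look roughly like this on Solaris 10. This is a hedged sketch only: the size, file name, lofi device number, and cache mount point are assumptions, not the poster's actual values.

```shell
# Sketch: a real UFS filesystem on a lofi device backed by a file in
# swap/RAM-backed /tmp, used as the AFS cache partition.
mkfile 512m /tmp/afscache.img         # backing file lives in in-RAM /tmp
lofiadm -a /tmp/afscache.img          # attach; prints e.g. /dev/lofi/1
newfs /dev/rlofi/1 < /dev/null        # put UFS on the raw lofi device
mount /dev/lofi/1 /usr/vice/cache     # mount as the AFS cache directory
```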








Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Hartmut Reuter

Kai Moritz wrote:

What are your data rates in MB/s?


scp says: 4.6MB/s


That isn't great either. So maybe you have some other problems in your
network?

When I do an scp of a 100 MB file to my laptop I get ~8 MB/s, and in
parallel a remote rsync is running with about another 0.7 MB/s in both
directions (rsyncd and AFS).

So I normally get the full 100 Mbit/s bandwidth when I write into AFS or
read from AFS, i.e. 10 MB/s.







If you are on a fast network (Gbit Ethernet, InfiniBand ...) a disk cache
may be noticeably slower than the network. In this case a memory cache can 
help.


I haven't tried that yet, because in the file /etc/openafs/afs.conf of
my Debian Etch installation there is a comment that says:

# Using the memory cache is not recommended.  It's less stable than the disk
# cache and doesn't improve performance as much as it might sound.


We are using memcache without problems here in our Linux clusters and on
the high-performance AIX Power 4/5 machines.

It's my special OpenAFS 1.4.4 with OSD support, which is expected to
arrive soon in the OpenAFS CVS. But I suppose the normal OpenAFS 1.4.4
should also work without problems with memcache.


Hartmut



Another point is chunk size. The default (64 KB) is bad for reading, 
where each chunk is fetched in a separate RPC. With a disk cache, bigger 
chunks (1 MB) are recommended anyway. For a memory cache of, say, 64 
MB you would limit the number of chunks to only 64, which is certainly 
too low.



With automatically chosen values, writing a 128 MB file into AFS takes
about 44-45 seconds.
On that machine I have a 3 GB cache.
With the following options, which I have taken from an example in a Debian 
config file, writing the 128 MB file takes about 48 seconds :(

-chunksize 20 -files 8 -dcache 1 -stat 15000 -daemons 6 -volumes 500 -rmtsys

Greetings kai



--
-
Hartmut Reuter   e-mail [EMAIL PROTECTED]
   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)   fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz
 memcache is much faster than the disk cache.  memcache will not get any
 better if no one ever uses it so the openafs developers can get some
 bug reports.
That's true, but I can't annoy my users with memory-starved machines... Hence 
I can only run that on test machines.

Greetings kai



Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Kai Moritz

 * slowest: disk cache, of course.
 * medium: memory cache
 * fastest: ufs filesystem on a lofi-mounted block device hosted in /tmp
   (which is in-RAM)
   (I know this certainly wastes some cpu/memory resources and
   overhead, but... it works)
 
That sounds interesting! 
I will give a ramdisk a try on some test machines and report back...

Greetings Kai


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Russ Allbery
chas williams - CONTRACTOR [EMAIL PROTECTED] writes:
 In message [EMAIL PROTECTED],Kai Moritz writes:

 I haven't tried that yet, because in the file /etc/openafs/afs.conf of
 my Debian Etch installation there is a comment that says:
 
 # Using the memory cache is not recommended.  It's less stable than the disk
 # cache and doesn't improve performance as much as it might sound.

 memcache is much faster than the disk cache.  memcache will not get any
 better if no one ever uses it so the openafs developers can get some
 bug reports.  i think memcache has improved quite a bit (but it could
 be better, i need to submit some patches) over the last couple years.

Sounds like that comment is obsolete.  I'll drop it from the Debian
packages.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/


Re: [OpenAFS] Tuning openafs write speed

2007-08-23 Thread Robert Banz


On Aug 23, 2007, at 10:49, Kai Moritz wrote:




* slowest: disk cache, of course.
* medium: memory cache
* fastest: ufs filesystem on a lofi-mounted block device hosted in /tmp
  (which is in-RAM)
  (I know this certainly wastes some cpu/memory resources and
  overhead, but... it works)


That sounds interesting!
I will give a ramdisk a try on some test machines and report back...


Make sure you do it with a real filesystem.  The AFS cache code won't
work on top of most 'tmpfs' filesystems, hence the ufs filesystem on
the block device...
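On Linux the same trick can be approximated with a real filesystem on a loop device whose backing file lives in RAM. This is a hedged sketch not taken from the thread; size, paths, and the ext2 choice are assumptions.

```shell
# Linux analogue of the Solaris lofi trick: ext2 on a loop device whose
# backing file lives in RAM-backed /dev/shm, mounted as the AFS cache.
dd if=/dev/zero of=/dev/shm/afscache.img bs=1M count=512
mke2fs -F -q /dev/shm/afscache.img                 # a real (non-tmpfs) filesystem
mount -o loop /dev/shm/afscache.img /var/cache/openafs
```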





[OpenAFS] klog with sites using fakeka against MIT1.6.2 broken?

2007-08-23 Thread Matt Elliott
We just discovered a problem with our KDC, now running MIT 1.6.2 with  
patches.  When a user changes their password and then tries klog, it no  
longer grants tokens (previous keys, created with our old KDC version  
1.4.3, still work).  klog returns "Unable to authenticate to AFS because  
password was incorrect."  kinit and a subsequent aklog still work.  Has  
anyone else seen this or have a fix?


Thanks,



Matt ElliottProduction Systems Infrastructure
217-265-0257mailto:[EMAIL PROTECTED]




Re: [OpenAFS] klog with sites using fakeka against MIT1.6.2 broken?

2007-08-23 Thread Russ Allbery
Matt Elliott [EMAIL PROTECTED] writes:

 We just discovered a problem with our KDC, now running MIT 1.6.2 with
 patches.  When a user changes their password and then tries klog, it no
 longer grants tokens (previous keys, created with our old KDC version
 1.4.3, still work).  klog returns "Unable to authenticate to AFS because
 password was incorrect."  kinit and a subsequent aklog still work.  Has
 anyone else seen this or have a fix?

I suspect referrals broke something, mostly because almost everything that
breaks after upgrading to 1.6.2 is because of referrals.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/


Re: [OpenAFS] klog with sites using fakeka against MIT1.6.2 broken?

2007-08-23 Thread Jeffrey Altman
Matt Elliott wrote:
 We just discovered a problem with our KDC, now running MIT 1.6.2 with
 patches.  When a user changes their password and then tries klog, it no
 longer grants tokens (previous keys, created with our old KDC version
 1.4.3, still work).  klog returns "Unable to authenticate to AFS because
 password was incorrect."  kinit and a subsequent aklog still work.  Has
 anyone else seen this or have a fix?

What keys are you generating in the KDC for principals at password changes?





Re: [OpenAFS] klog with sites using fakeka against MIT1.6.2 broken?

2007-08-23 Thread Mike Dopheide

Number of keys: 5
Key: vno 30, AES-256 CTS mode with 96-bit SHA-1 HMAC, no salt
Key: vno 30, Triple DES cbc mode with HMAC/sha1, no salt
Key: vno 30, DES cbc mode with CRC-32, no salt
Key: vno 30, DES cbc mode with CRC-32, Version 4
Key: vno 30, DES cbc mode with CRC-32, AFS version 3
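One thing worth checking here (a suggestion, not something stated in the thread): klog via fakeka depends on a usable single-DES key with the AFS salt, like the "AFS version 3" key in the list above. The enctype/salt lists can be inspected with kadmin; the principal names below are examples.

```shell
# Inspect the key list (enctypes and salts) for a user principal and for
# the afs service principal after a password change.
kadmin.local -q "getprinc someuser"
kadmin.local -q "getprinc afs/example.com"
# For the AFS-salted DES key to be generated at password change, kdc.conf's
# supported_enctypes would need an entry like:
#   des-cbc-crc:afs3
```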

-Mike

Jeffrey Altman wrote:

Matt Elliott wrote:

We just discovered a problem with our KDC, now running MIT 1.6.2 with
patches.  When a user changes their password and then tries klog, it no
longer grants tokens (previous keys, created with our old KDC version
1.4.3, still work).  klog returns "Unable to authenticate to AFS because
password was incorrect."  kinit and a subsequent aklog still work.  Has
anyone else seen this or have a fix?


What keys are you generating in the KDC for principals at password changes?




Re: [OpenAFS] Re: Setting up new cell on RHEL4 - some help needed

2007-08-23 Thread Jason Edgecombe

Dr A V Le Blanc wrote:

On Wed, 22 Aug 2007, Robert Sturrock wrote:
  

In former times (when Linux was just born), the content of /afs was
delivered by a volume itself (named root.afs). All this volume contained
were mountpoints to root.cell-volumes of other cells. An admin had to
maintain those mountpoints so that the users of the cell can browse to
other cells.

Later it became obvious, that maintaining root.afs manually is a lot of
work if you want to be up to date. At that time dynroot was invented.



I'm still using a nice tool called ucsdb, which I presume came from
the University of California at San Diego.  It collects cell definitions
from a number of places, combines them, mounts root.afs read-write,
adds or deletes cell mount points, unmounts, and then releases the
volume.  Since you need about the same amount of effort to do the
CellServDB file, it's quite painless to maintain root.afs as well.

   
Don't you still have to maintain the CellServDB file and the root.afs 
volume?


Jason


[OpenAFS] Windows client username problems

2007-08-23 Thread Karl M. Davis
Hello all,

 

I have a user whose domain logon name is "ronald.carlsten".  When he tries
to log on to a computer with the AFS and Kerberos clients installed he gets
the error message "Integrated login failed: client not found in Kerberos
database".  I have another user "john.holguin" with the same problem.  Other
users such as "karl" and "conrad" can log in and access AFS just fine on the
Windows machines.

 

Two questions:

1.   Is it the "." in the usernames causing the problem, or the length?

2.   Anything I can do other than renaming their accounts?

 

Thanks!

Karl M. Davis



Re: [OpenAFS] Windows client username problems

2007-08-23 Thread Jeffrey Altman
I don't know why the "." would be a problem for Kerberos, but it is
currently a problem for AFS.  See the discussion on this list within
the last month.

Karl M. Davis wrote:
 Hello all,
 
  
 
 I have a user whose domain logon name is “ronald.carlsten”.  When he
 tries to logon to a computer with the AFS and Kerberos clients installed
 he gets the error message “Integrated login failed: client not found in
 Kerberos database”.  I have another user “john.holguin” with the same
 problem.  Other users such as “karl” and “conrad” can login and access
 AFS just fine on the Windows machines.
 
  
 
 Two questions:
 
 1.   Is it the “.” in the usernames causing the problem, or the length?
 
 2.   Anything I can do other than renaming their accounts?
 
  
 
 Thanks!
 
 Karl M. Davis
 


