Re: [OpenAFS] openafs upgrade from 1.4.1 to 1.5.7

2007-11-29 Thread Harald Barth

 What's the underlying filesystem? AFS passes through the semantics of
 metadata operations of the underlying filesystem, and ext* for instance is
 poor at it.

For example xfs on Linux has worked well for us.

  We are running an old version of AFS.. 1.4.1.   Are there any
  configuration differences between 1.4.1 and 1.5.7?
 
 
 Lots. Of course, we recommend 1.4.5, and not some random 1.5, especially not
 an old one. 1.5.7 is similarly old to 1.4.1.

As Derrick said, use 1.4.x for servers.


  Can I have a mixed environment of versions?

 Unless you have pts supergroups enabled, yes, though there is a pending bug
 regarding moving volumes between current 1.5.x and 1.4.x.

Better than that. You can mix even if you have supergroups enabled,
but don't create any supergroups until all your servers are upgraded.
There is no need for a complete cell shutdown to perform an upgrade.
There has not been such a need for years.

Harald.
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS-Doc] Re: [OpenAFS] Quick Start Guide updated for Kerberos v5

2007-11-29 Thread Harald Barth

 inode is still recommended for Solaris.  namei is recommended in all other
 cases, and generally is the only possible method.

I would recommend namei for all new installations. Is there any reason against
that which I'm not aware of?

Harald.


[OpenAFS] What's the problem with reiser

2007-11-29 Thread Martin Ginkel

Hi,

We use OpenAFS clients on a lot of machines. The local filesystems are
usually reiser. But for the disk cache we have to install one partition
with ext2.

Why is that? What's the problem with reiser as a cache FS?

Just curious,
Martin


[OpenAFS] MacOS X 1.4 (tiger) PPC built of 1.5.27?

2007-11-29 Thread Lars Schimmer

Hi!

I've updated our PowerPC Mac with the latest available OpenAFS version for 
PPC Mac OS X Tiger, which was 1.5.26.
Unfortunately, Mac OS X now goes bad on every shutdown, and OpenAFS seems 
to be the reason: Mac OS X won't shut down cleanly and ends in a kernel 
panic.


Is there any 1.5.27 for Mac OS X Tiger PPC available? Maybe that bug is fixed.
Btw: I clicked to send that bug report to Apple; does OpenAFS get these 
error reports just like on Windows?


MfG,
Lars Schimmer
--
-
TU Graz, Institut für ComputerGraphik & WissensVisualisierung
Tel: +43 316 873-5405   E-Mail: [EMAIL PROTECTED]
Fax: +43 316 873-5402   PGP-Key-ID: 0x4A9B1723


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Harald Barth
 
 We use openafs clients on a lot of machines. The local Filesystems are
 usually reiser. But for the DiskCache we have to install one partition
 with ext2.

In all my experience, reiserfs is broken. I recommend NOT using that
file system. At all. As a cache file system, ext2 is fine: it is fast,
most kernels have the drivers, and if it breaks because of a system
crash, so what, it was just a cache.

Harald.


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Dirk Heinrichs
Am Donnerstag, 29. November 2007 schrieb ext Harald Barth:
  We use openafs clients on a lot of machines. The local Filesystems are
  usually reiser. But for the DiskCache we have to install one partition
  with ext2.

 To all my experience, reiserfs is broken. I recommend NOT to use that
 file system. At all.

OK, replace reiser with xfs, jfs, whatever. I guess the real question was: 
What's the reason why one should not use filesystems other than ext2 for 
the cache partition on a Linux client?

Bye...

Dirk
-- 
Dirk Heinrichs  | Tel:  +49 (0)162 234 3408
Configuration Manager   | Fax:  +49 (0)211 47068 111
Capgemini Deutschland   | Mail: [EMAIL PROTECTED]
Wanheimerstraße 68  | Web:  http://www.capgemini.com
D-40468 Düsseldorf  | ICQ#: 110037733
GPG Public Key C2E467BB | Keyserver: www.keyserver.net




Re: [OpenAFS-Doc] Re: [OpenAFS] Quick Start Guide updated for Kerberos v5

2007-11-29 Thread Douglas E. Engert



Russ Allbery wrote:

Jason Edgecombe [EMAIL PROTECTED] writes:


chapter1:
* about upgrading OS:
**Should the namei fileserver be mentioned? Is namei the
recommended way?


inode is still recommended for Solaris.  namei is recommended in all other
cases, and generally is the only possible method.



You should consider recommending namei on Solaris too. inode only
works on ufs and you must have logging turned off. If you want to
use zfs then you must use namei.




chapter 2:
  * getting started on solaris:
** it still mentions copying files from cd-rom (grep for CD-ROM)


Yeah, this needs to get fixed throughout the manual, replaced with
instructions about how to start from the downloaded binary build or to
build from source.


**only mentions solaris 7, it should mention 8, 9 & 
10/opensolaris or just say 7 & later versions


Yup.


   ** about fsck: does solaris use inode, namei or both? Is
clarification needed?


Solaris can use either, so yes, clarification is needed.  I'm fairly sure
you don't need the custom fsck if you use namei.



Correct. It makes me feel more comfortable using the vendor's fsck
rather than a modified fsck.




The entry for AFS server processes, called either *afs* or
*afs//cell/*. No user logs in under this identity, but it is used to
encrypt the server tickets that granted to AFS clients for presentation
to server processes during mutual authentication.

should that be ...that are granted to AFS clients...?


Yup.

The source for this is in DocBook in the OpenAFS CVS head, and patches are
certainly welcome.



--

 Douglas E. Engert  [EMAIL PROTECTED]
 Argonne National Laboratory
 9700 South Cass Avenue
 Argonne, Illinois  60439
 (630) 252-5444


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Rob Banz


On Nov 29, 2007, at 07:41, Dirk Heinrichs wrote:


Am Donnerstag, 29. November 2007 schrieb ext Harald Barth:
 We use openafs clients on a lot of machines. The local Filesystems are
 usually reiser. But for the DiskCache we have to install one partition
 with ext2.

 To all my experience, reiserfs is broken. I recommend NOT to use that
 file system. At all.

OK. replase reiser with xfs, jfs, whatever. I guess the real question was:
What's the reason why one should not use other filesystems than ext2 for
the cache partition on a Linux client?


For a cache partition, at least on other *ixes, the cache partition has
always needed special attention because of the way it's used by the AFS
kernel module.  Care has to be taken to do operations in such a way that
kernel deadlocks and the like are avoided.  For example, on Solaris you
use ufs; however, you can't use logging ufs because of known deadlock
problems.


I'd assume that the use of ext2 on Linux is for a similar reason.

-rob


[OpenAFS] Automatically get AFS-token

2007-11-29 Thread Lara Lloret Iglesias

Hello!

I've just installed three AFS clients, and everything seems to work fine! The 
only problem is that I would like to get my AFS token on login, without having 
to type klog.
Any idea?

Thank you very much,


Lara


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Rob Banz




 For a cache partition, at least on other *ixes, the cache partition
 has always needed special attention because of the way its used by the
 AFS kernel module.  Certain care has to be taken as to do operations
 in such a way that kernel deadlocks and such are avoided.  For
 example, on Solaris you use ufs, however, you can't use logging ufs
 because of known deadlock problems.

 I'd assume that the use of ext2 on Linux is for a similar reason.

 -rob

Fascinating. I did not know of UFS logging issue on the cache partition.
Strangely, I haven't heard of any issues. does ext3 have this issue as well?


I had used logging ufs as a cache partition for years without a
problem as well -- but in the past couple years ran into deadlocks.  I
remember reliably seeing them under Solaris 10x86 on a Dell 2650 where
it'd lock up right after AFS started and some automated processes were
busy trying to access it.


-rob


Re: [OpenAFS] MacOS X 1.4 (tiger) PPC built of 1.5.27?

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 5:58 AM, Lars Schimmer [EMAIL PROTECTED] wrote:

 Hi!

 I' ve updated our PowerPC Mac with latest available OpenAFS version for
 PPC Mac OS X Tiger, which was 1.5.26.
 For bad sake, MacOS X just went bad every shutdown and OpenAFS seems to
 be the reason because of which MacOS X won't shutdown clear and ends in
 a kernel panic.

 Is there any 1.5.27 for MacOS Tiger PPC available? Maybe that bu is fixed.


It is; however, no build is available currently. What are you using that's a
1.5 feature?


 Btw: I clicked send that bug to apple, does OpenAFS get these error
 reports just like on win ?

We don't.


Re: [OpenAFS-Doc] Re: [OpenAFS] Quick Start Guide updated for Kerberos v5

2007-11-29 Thread Rob Banz


On Nov 29, 2007, at 08:39, chas williams - CONTRACTOR wrote:


In message [EMAIL PROTECTED],Russ Allbery writes:

  ** about fsck: does solaris use inode, namei or both? Is
clarification needed?


Solaris can use either, so yes, clarification is needed.  I'm  
fairly sure

you don't need the custom fsck if you use namei.


you do not need the custom fsck for namei.  further, namei only works
for ufs nonlogging filesystems.  if you have say zfs perhaps, namei
is your only choice.  given this, i think its reasonable to suggest
that people just use namei only


It's inode that only works on non-logging ufs; namei works on logging,
non-logging, zfs, etc. ;)


-rob


Re: [OpenAFS-Doc] Re: [OpenAFS] Quick Start Guide updated for Kerberos v5

2007-11-29 Thread chas williams - CONTRACTOR
In message [EMAIL PROTECTED],Russ Allbery writes:
** about fsck: does solaris use inode, namei or both? Is
 clarification needed?

Solaris can use either, so yes, clarification is needed.  I'm fairly sure
you don't need the custom fsck if you use namei.

you do not need the custom fsck for namei.  further, namei only works
for ufs nonlogging filesystems.  if you have say zfs perhaps, namei
is your only choice.  given this, i think its reasonable to suggest
that people just use namei only.


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread chas williams - CONTRACTOR
In message [EMAIL PROTECTED],Harald Barth writes:
 We use openafs clients on a lot of machines. The local Filesystems are
 usually reiser. But for the DiskCache we have to install one partition
 with ext2.

To all my experience, reiserfs is broken. I recommend NOT to use that
file system. At all. As a cache file system ext2 is fine, because it

as i recall reiserfs doesn't work because it doesn't keep a fixed
mapping between file objects and what afs would consider the inode.
i believe people have been lucky with using a reiserfs cache filesystem
but it had to be on a separate partition.  normally, it's journaling
that creates trouble for caching filesystems.  personally, unless you
have a need for massive amounts of cache, use memcache.

search through the list archives for better answers about this.

this info doesn't appear to be in the wiki, so perhaps it needs an entry.
(and one that is more correct than my vague ramblings.)
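As a sketch of the memcache suggestion above (flag names as in afsd; the cache size here is purely illustrative):

```shell
# Client-side: run afsd with a memory cache instead of a disk cache.
# -blocks is in 1K units, so this is a 64 MB cache.
afsd -memcache -blocks 65536
```

With -memcache the underlying-filesystem question goes away entirely, at the cost of losing the cache contents across reboots.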


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Jason Edgecombe
Rob Banz wrote:

 On Nov 29, 2007, at 07:41, Dirk Heinrichs wrote:

 Am Donnerstag, 29. November 2007 schrieb ext Harald Barth:
 We use openafs clients on a lot of machines. The local Filesystems are
 usually reiser. But for the DiskCache we have to install one partition
 with ext2.

 To all my experience, reiserfs is broken. I recommend NOT to use that
 file system. At all.

 OK. replase reiser with xfs, jfs, whatever. I guess the real question
 was:
 What's the reason why one should not use other filesystems than ext2 for
 the cache partition on a Linux client?

 For a cache partition, at least on other *ixes, the cache partition
 has always needed special attention because of the way its used by the
 AFS kernel module.  Certain care has to be taken as to do operations
 in such a way that kernel deadlocks and such are avoided.  For
 example, on Solaris you use ufs, however, you can't use logging ufs
 because of known deadlock problems.

 I'd assume that the use of ext2 on Linux is for a similar reason.

 -rob 
Fascinating. I did not know of the UFS logging issue on the cache partition.
Strangely, I haven't heard of any issues. Does ext3 have this issue as well?

Thanks,
Jason


Re: [OpenAFS] MacOS X 1.4 (tiger) PPC built of 1.5.27?

2007-11-29 Thread Lars Schimmer

Derrick Brashear wrote:

On Nov 29, 2007 5:58 AM, Lars Schimmer [EMAIL PROTECTED] wrote:


Hi!

I' ve updated our PowerPC Mac with latest available OpenAFS version for
PPC Mac OS X Tiger, which was 1.5.26.
For bad sake, MacOS X just went bad every shutdown and OpenAFS seems to
be the reason because of which MacOS X won't shutdown clear and ends in
a kernel panic.

Is there any 1.5.27 for MacOS Tiger PPC available? Maybe that bu is fixed.



It is, however, no build is available currently. What are you using tht's a
1.5 feature?


I just had a good experience with the 1.5.27 Windows build and thought I'd 
install the latest Mac client to get the latest useful features.
If no public 1.5.27 is available, should I go back to the latest stable 
build, or which one should be preferred?



Btw: I clicked send that bug to apple, does OpenAFS get these error
reports just like on win ?


We don't.




MfG,
Lars Schimmer
--
-
TU Graz, Institut für ComputerGraphik & WissensVisualisierung
Tel: +43 316 873-5405   E-Mail: [EMAIL PROTECTED]
Fax: +43 316 873-5402   PGP-Key-ID: 0x4A9B1723


Re: [OpenAFS] MacOS X 1.4 (tiger) PPC built of 1.5.27?

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 9:01 AM, Lars Schimmer [EMAIL PROTECTED] wrote:

 Derrick Brashear wrote:
  On Nov 29, 2007 5:58 AM, Lars Schimmer [EMAIL PROTECTED] wrote:
 
  Hi!
 
  I' ve updated our PowerPC Mac with latest available OpenAFS version for
  PPC Mac OS X Tiger, which was 1.5.26.
  For bad sake, MacOS X just went bad every shutdown and OpenAFS seems to
  be the reason because of which MacOS X won't shutdown clear and ends in
  a kernel panic.
 
  Is there any 1.5.27 for MacOS Tiger PPC available? Maybe that bu is
 fixed.
 
 
  It is, however, no build is available currently. What are you using
 tht's a
  1.5 feature?

 I just made good experiece with 1.5.27 win build and thought install
 latest mac client to get latest usefull features.
 If no public 1.5.27 is available, I should go back to latest stable
 built, or which one should be preferred?

In general the preferred build is listed at
http://www.openafs.org/macos.html

A Tiger 1.5.28 build will be issued when that is released.


Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 8:57 AM, Rob Banz [EMAIL PROTECTED] wrote:

 
 
  For a cache partition, at least on other *ixes, the cache partition
  has always needed special attention because of the way its used by
  the
  AFS kernel module.  Certain care has to be taken as to do operations
  in such a way that kernel deadlocks and such are avoided.  For
  example, on Solaris you use ufs, however, you can't use logging ufs
  because of known deadlock problems.
 
  I'd assume that the use of ext2 on Linux is for a similar reason.
 
  -rob
  Fascinating. I did not know of UFS logging issue on the cache
  partition.
  Strangely, I haven't heard of any issues. does ext3 have this issue
  as well?

 I had used logging ufs as a cache partition for years without a
 problem as well -- but in the past couple years ran into deadlocks.  I
 remember reliably seeing them under Solaris 10x86 on a Dell 2650 where
 it'd lock up right after AFS started and some automated processes were
 busy trying to access it.


For documentation purposes it might be interesting to get kernel backtraces
of those if you ever get bored.


Re: [OpenAFS-Doc] Re: [OpenAFS] Quick Start Guide updated for Kerberos v5

2007-11-29 Thread Jeffrey Altman
chas williams - CONTRACTOR wrote:
 In message [EMAIL PROTECTED],Russ Allbery writes:
** about fsck: does solaris use inode, namei or both? Is
 clarification needed?
 Solaris can use either, so yes, clarification is needed.  I'm fairly sure
 you don't need the custom fsck if you use namei.
 
 you do not need the custom fsck for namei.  further, namei only works
 for ufs nonlogging filesystems.  if you have say zfs perhaps, namei
 is your only choice.  given this, i think its reasonable to suggest
 that people just use namei only.

Except that when you use memcache you lose the benefits of the cache
between restarts.   There are many organizations that use cache sizes
large enough that 90+% of the data needed for the operating system and
applications comes from the cache.

After a restart all that is then required is for a series of FetchStatus
calls to be performed.

Jeffrey Altman




[OpenAFS] openafs partition - how to increase

2007-11-29 Thread Helmut Jarausch
Hi,

I'd like to resize (enlarge) the ext2-partition on which e.g.
/vicepa is mounted.

I know how to enlarge an ext2 partition using Linux tools,
but how does OpenAFS react to this situation?

Does it use the additional space or is there any danger
OpenAFS gets confused?

Many thanks for a hint,

Helmut Jarausch

Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany


Re: [OpenAFS] openafs partition - how to increase

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 10:31 AM, Helmut Jarausch [EMAIL PROTECTED]
wrote:

 Hi,

 I'd like to resize (enlarge) the ext2-partition on which e.g.
 /vicepa is mounted.

 I know how to enlarge an ext2 partition using Linux tools,
 but how does OpenAFS react to this situtation?

 Does it use the additional space or is there any danger
 OpenAFS gets confused?

 Many thanks for a hint,

It doesn't care, and won't use it unless you increase the number in the
cacheinfo file anyway.
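For reference, the cacheinfo file mentioned here (conventionally /usr/vice/etc/cacheinfo) is a single colon-separated line: AFS mount point, cache directory, then cache size in 1K blocks. A sketch with illustrative paths and size:

```
/afs:/usr/vice/cache:100000
```

Only the third field controls how much of the partition the client cache will actually use.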


Re: [OpenAFS] openafs partition - how to increase

2007-11-29 Thread Brandon S. Allbery KF8NH


On Nov 29, 2007, at 10:38 , Derrick Brashear wrote:

On Nov 29, 2007 10:31 AM, Helmut Jarausch [EMAIL PROTECTED] wrote:

 I'd like to resize (enlarge) the ext2-partition on which e.g.
 /vicepa is mounted.
It doesn't care, and won't use it unless you increase the number in
the cacheinfo file anyway.


Aroo?  That'd be /var/vice/cache, not e.g. /vicepa?

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH




Re: [OpenAFS] openafs partition - how to increase

2007-11-29 Thread Steve Devine

Derrick Brashear wrote:



On Nov 29, 2007 10:31 AM, Helmut Jarausch [EMAIL PROTECTED] wrote:


Hi,

I'd like to resize (enlarge) the ext2-partition on which e.g.
/vicepa is mounted.

I know how to enlarge an ext2 partition using Linux tools,
but how does OpenAFS react to this situtation?

Does it use the additional space or is there any danger
OpenAFS gets confused?

Many thanks for a hint,

It doesn't care, and won't use it unless you increase the number in 
the cacheinfo file anyway.
 


I don't think he is asking about a cache partition.

--
Steve Devine
Network Storage & Printing
Academic Computing & Network Services
Michigan State University

506 Computer Center
East Lansing, MI 48824-1042
1-517-432-7327

Baseball is ninety percent mental; the other half is physical.
- Yogi Berra



Re: [OpenAFS] Automatically get AFS-token

2007-11-29 Thread Douglas E. Engert



Lara Lloret Iglesias wrote:

Hello!

I´ve just installed three AFS clients, and everything seems to work fine! The 
only problem is that I would like to get my afs-token on login, without having 
to type klog.
Any idea?


Yes, use pam_krb5 and pam_afs_session. I am assuming that your site has
Kerberos 5 set up and can use aklog instead of klog. Check with your AFS
admins.

Try googling these:

   site:cern.ch aklog

   site:cern.ch pam_krb5

   site:cern.ch pam_afs2

pam_afs2 is what we use, but we will eventually convert to pam_afs_session.



Thank you very much,


Lara




--

 Douglas E. Engert  [EMAIL PROTECTED]
 Argonne National Laboratory
 9700 South Cass Avenue
 Argonne, Illinois  60439
 (630) 252-5444


Re: [OpenAFS] MS Word crashing when opening files, 1.5.27 client

2007-11-29 Thread Jeffrey Altman
Jeffrey Altman wrote:
 If the files are truly intended for read-only use, store them in a
 directory that provides only 'rl' access to the end users or store them
 in a .readonly volume.   In both of those cases the AFS Cache Manager
 knows that the user cannot obtain a lock on the file and will issue one
 locally.
 
 Jeffrey Altman

Now that I am back in my own timezone, let me take the time to explain a
bit more about locking and Microsoft Office applications.  Office
applications will obtain an exclusive lock on a file even when the file
is being opened in read-only mode.  OAFW will translate "file opened for
read (not write, not delete) with a request for an exclusive lock" as
meaning "obtain a read lock on the file."

The problem that you are experiencing is that while the Office
application is requesting a lock on a very small subset of the file,
AFS only implements full-file locks.  If Office applications on two
machines attempt to open the same file and the first one has the file
open for write, the second one won't be able to open it for read,
because lock requests that would otherwise be non-intersecting byte
ranges collide when translated into full-file locks.

Therefore, if you are providing files to be used simply as read-only
templates, they should be stored in AFS in a manner that indicates to
the AFS client that they are in fact read-only, so that the cache manager
knows it is safe to fake the locks locally.
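The collision can be illustrated with flock(), which (like AFS) locks whole files. This is a sketch of the semantics only, not OpenAFS code:

```python
import fcntl
import tempfile

def whole_file_locks_collide(path):
    """Two openers of the same file: the second exclusive whole-file
    lock request fails even though the openers conceptually wanted
    disjoint byte ranges -- mirroring AFS's full-file lock semantics."""
    a = open(path, "r+b")
    b = open(path, "r+b")
    try:
        # First opener (think: Word with the file open for write)
        # holds an exclusive lock on the entire file.
        fcntl.flock(a, fcntl.LOCK_EX | fcntl.LOCK_NB)
        try:
            # Second opener wanted what would be a non-intersecting
            # byte range, but whole-file locking makes the request
            # cover everything -- so it is refused.
            fcntl.flock(b, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return False
        except BlockingIOError:
            return True
    finally:
        a.close()
        b.close()

with tempfile.NamedTemporaryFile() as tf:
    print(whole_file_locks_collide(tf.name))  # True on Linux
```

With byte-range locks the second request would have succeeded; with full-file locks it cannot, which is exactly the read-only-template failure described above.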

Jeffrey Altman




Re: [OpenAFS] Automatically get AFS-token

2007-11-29 Thread Michael C Garrison

On Thu, 29 Nov 2007, Lara Lloret Iglesias wrote:


Hello!

I´ve just installed three AFS clients, and everything seems to work fine! The 
only problem is that I would like to get my afs-token on login, without having 
to type klog.
Any idea?


Use a pam module, such as Russ' pam-afs-session: 
http://www.eyrie.org/~eagle/software/pam-afs-session/
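A sketch of the PAM stack this implies (module names and the aklog path are assumptions; paths and ordering vary by distribution):

```
# /etc/pam.d/system-auth (fragment)
auth     required  pam_krb5.so
auth     optional  pam_afs_session.so program=/usr/bin/aklog
session  required  pam_krb5.so
session  optional  pam_afs_session.so program=/usr/bin/aklog
```

pam_krb5 obtains the Kerberos ticket at login; pam_afs_session then creates a PAG and runs aklog to turn the ticket into an AFS token.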


--
Mike Garrison

Re: [OpenAFS] openafs partition - how to increase

2007-11-29 Thread Dirk Heinrichs
Am Donnerstag 29 November 2007 schrieb Helmut Jarausch:

 I'd like to resize (enlarge) the ext2-partition on which e.g.
 /vicepa is mounted.

 Does it use the additional space or is there any danger
 OpenAFS gets confused?

It will happily use the additional space.
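A hedged sketch of the usual offline procedure, assuming /vicepa lives on /dev/sdb1 and `<server>` is a placeholder for your fileserver:

```shell
bos shutdown <server> -localauth   # stop the fileserver processes first
umount /vicepa
e2fsck -f /dev/sdb1                # resize2fs requires a clean filesystem
resize2fs /dev/sdb1                # grow ext2 to fill the enlarged partition
mount /vicepa
bos restart <server> -all -localauth
```

After remount, `vos partinfo` should report the larger partition size.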

Bye...

Dirk




RE: [OpenAFS] openafs upgrade from 1.4.1 to 1.5.7

2007-11-29 Thread Jerry Normandin
Thanks everybody for your input! I just started my position here and
inherited this AFS deployment.  It was great that the previous people
chose AFS, but they deployed it wrong.  I decided on using the XFS file
system. Also, I will be upgrading to 1.4.7.

The very first day I was here I got queries about improving AFS
performance.
First I plan on making the fixes, then adding some redundancy, as there
is none here.  At my previous employers I had plenty of nodes. Here
there is no redundancy in the volume servers.  Sure, there are multiple
volume servers, but only one for AFS HOME and one for TOOLS.  So... I
have my work cut out for me.

Once again... Thanks everybody!

-Original Message-
From: Harald Barth [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 29, 2007 4:29 AM
To: Jerry Normandin
Cc: openafs-info@openafs.org
Subject: Re: [OpenAFS] openafs upgrade from 1.4.1 to 1.5.7


 What's the underlying filesystem? AFS passes through the semantics of
 metadata operations of the underlying filesystem, and ext* for
instance is
 poor at it.

For example xfs on Linux has worked well for us.

  We are running an old version of AFS.. 1.4.1.   Are there any
  configuration differences between 1.4.1 and 1.5.7?
 
 
 Lots. Of course, we recommend 1.4.5, and not some random 1.5,
especially not
 an old one. 1.5.7 is similarly old to 1.4.1.

As Derrick said, use 1.4.x for servers.


  Can I have a mixed environment of versions?

 Unless you have pts supergroups enabled, yes, though there is a
pending bug
 regarding moving volumes between current 1.5.x and 1.4.x.

Better than that. You can mix even if you have supergroups enabled,
but don't create any supergroups until all your servers are upgraded.
There is no need for a complete cell shutdown to perform an upgrade.
There has not been such a need for years.

Harald.


Re: [OpenAFS] openafs partition - how to increase

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 10:42 AM, Steve Devine [EMAIL PROTECTED] wrote:

 Derrick Brashear wrote:
 
 
  On Nov 29, 2007 10:31 AM, Helmut Jarausch
  [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
  wrote:
 
  Hi,
 
  I'd like to resize (enlarge) the ext2-partition on which e.g.
  /vicepa is mounted.
 
  I know how to enlarge an ext2 partition using Linux tools,
  but how does OpenAFS react to this situtation?
 
  Does it use the additional space or is there any danger
  OpenAFS gets confused?
 
  Many thanks for a hint,
 
  It doesn't care, and won't use it unless you increase the number in
  the cacheinfo file anyway.
 
 
 I don't think he is asking about a cache partition.


He's not, I'm just an idiot.


Re: [OpenAFS] openafs upgrade from 1.4.1 to 1.5.7

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 12:52 PM, Jerry Normandin [EMAIL PROTECTED] wrote:

 Thanks everybody for your input! I just started my postion here and
 inherited this AFS deployment.  It was great that the previous people
 chose AFS, but they deployed it wrong.  I decided on using the XFS file
 system, Also I will be upgrading to 1.4.7


We released 1.4.5 recently.

What do you know that I don't?


[OpenAFS] File systems on Linux, again.

2007-11-29 Thread Smith, Matt
After the recent thread "openafs upgrade from 1.4.1 to 1.5.7", and a
review of a thread[1] from July, I'm wondering if there is a definitive
recommendation for which file system to use on Linux AFS file servers:
ext3, XFS, JFS, something else?

Thanks all,
-Matt

[1] http://www.openafs.org/pipermail/openafs-info/2007-July/026798.html
-- 
Matt Smith
[EMAIL PROTECTED]
University Information Technology Services (UITS)
University of Connecticut
PGP Key ID: 0xE9C5244E




Re: [OpenAFS] What's the problem with reiser

2007-11-29 Thread Russ Allbery
Jason Edgecombe [EMAIL PROTECTED] writes:

 Fascinating. I did not know of UFS logging issue on the cache partition.
 Strangely, I haven't heard of any issues. does ext3 have this issue as
 well?

No, ext3 is fine.  UFS logging is also fine provided that nothing else is
writing to the same partition.  afsd prints out a warning about this when
using UFS with logging on Solaris.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/


Re: [OpenAFS] File systems on Linux, again.

2007-11-29 Thread Russ Allbery
Smith, Matt [EMAIL PROTECTED] writes:

 After the recent thread openafs upgrade from 1.4.1 to 1.5.7, and a
 review of a thread[1] from July, I'm wondering if there is a definitive
 recommendation for which file system to use on Linux AFS file servers.
 Ext3, XFS, JFS, something else?

It shouldn't make much of a difference, so I think you're safe choosing
your file system on whatever basis you'd choose a file system for any
other file server.  We use ext3 because of its stability, reliability, and
center-of-the-mainstream support in the kernel, which we always
considered more important than a bit of additional speed, but your mileage
may vary.

XFS is probably the next most common choice.

I would be very leery of ReiserFS.  It has nice features, but the recovery
tools are fairly horrific.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/


Re: [OpenAFS] Slow AFS performance

2007-11-29 Thread Jason Edgecombe
Dale Ghent wrote:
 On Nov 28, 2007, at 9:19 AM, Jason Edgecombe wrote:

 I'm experiencing poor AFS performance on under Sparc solaris 9 09/05HW
 running Openafs server 1.4.1 on a Sun StorageTeck 3511 Fibre channel  to
 SATA array

 At first, I thought that having UFS logging disabled was the culprit,
 but I have enabled UFS logging and I am using the namei server, but
 performance still stinks. It took 1.5 hours to move a 3.2GB volume to
 the server. Things seem fine except on the Fibre channel disks.

 ...

 My bonnie++ performance numbers for vicepa are here:
 http://www.coe.uncc.edu/~jwedgeco/bonnie.html

 What could AFS be doing that causes the performance to stink?

 Well first, your bonnie run tested sequential IO, so of course you're
 going to get stellar numbers with sequential large writes and reads.
 Arrays and disks in general eat that stuff right on up.

 AFS namei does writes in small batches, up to 64k, and all of it
 random (think: lots of users accessing lots of small files). It would
 be advantageous to tune to this environment.

 Check out your 3511s first.

 1) Make sure write-back caching is on. This is a global as well as a
 per-Logical Drive setting on the 3510/3511.

 2) Optimized for Random I/O (be sure the latest 3511 fw is applied, it
 betters the performance of this setting)

 3) You didn't mention the type of RAID config you have on the 3511s.
 RAID5? If so, is the stripe size low (64k?)

 Some 'iostat -nx 1' output would be helpful, too. The wsvc_t and %b
 columns would be telling.

 /dale
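Dale's point about sequential versus random I/O can be checked directly: bonnie++'s small-file creation phase approximates the namei workload much better than the default sequential test. A hedged sketch follows; the partition path, file counts, and user name are illustrative, and the exact -n syntax should be checked against your bonnie++ version's man page:

```shell
# Skip the sequential large-file test (-s 0) and run only the small-file
# phase: 16*1024 files between 1 KB and 64 KB, spread over 16 directories.
bonnie++ -d /vicepa/bonnie-test -s 0 -n 16:65536:1024:16 -u afsadmin

# While it runs, watch per-device service times and utilization;
# the wsvc_t and %b columns are the interesting ones.
iostat -nx 5
```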

ok, I'm now getting some better performance out of my FC array. vos move of a
4.4GB volume from one disk in the FC array to another disk in the array
only took 16 minutes (4.6MB/s).

Derrick's suggestion of upgrading to 1.4.5 with namei did the trick.

Unfortunately, I don't think I can upgrade the firmware or tune the 3511
array. It's the 3511 expansion unit without the raid controller. It's a
JBOD.


As requested, here are the statistics from iostat -nx 1:
r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.0   13.00.08.0  0.0  0.10.09.8   0   2 c1t0d0
  481.00.0 14905.10.0  0.0  0.60.01.3   0  50 c0t0d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t1d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t3d0
   13.0  133.0  104.0 12082.3  0.0  3.60.0   25.0   0  86 c0t2d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t5d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t8d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t7d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t4d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t6d0

...
extended device statistics 
r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.00.00.00.0  0.0  0.00.00.0   0   0 c1t0d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t0d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t1d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t3d0
0.0  333.00.0  581.0 384.2 19.0 1153.7   57.1 100 100 c0t2d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t5d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t8d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t7d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t4d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t6d0
extended device statistics 
r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.00.00.00.0  0.0  0.00.00.0   0   0 c1t0d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t0d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t1d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t3d0
0.0  339.00.0  773.0 371.3 19.0 1095.4   56.0 100 100 c0t2d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t5d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t8d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t7d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t4d0
0.00.00.00.0  0.0  0.00.00.0   0   0 c0t6d0


Jason


[OpenAFS] 1.4.5 namei on solaris 9 sparc requires AlwaysAttach for vice partitions

2007-11-29 Thread Jason Edgecombe
Hi all,

In my sordid saga to get a Sun fibre channel array working well with
AFS, I found the following:

When I upgraded the server to 1.4.5 namei, the fileserver would not
mount the /vicep? partitions without doing a "touch
/vicep?/AlwaysAttach" first. These are dedicated partitions on separate
hard drives.

I'm using a source-compiled openafs on solaris 9 sparc. openafs was
compiled with the following options:
CC=/opt/SUNWspro/bin/cc YACC=yacc -vd ./configure \
  --enable-transarc-paths \
  --enable-largefile-fileserver \
  --enable-supergroups \
  --enable-namei-fileserver \
  --with-krb5-conf=/usr/local/krb5/bin/krb5-config

We're using MIT Kerberos 1.4.1 on the clients & fileservers with a 1.6.x KDC

# mount | grep vicep
/vicepa on /dev/dsk/c0t0d0s6
read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80006
on Thu Nov 29 13:03:15 2007
/vicepd on /dev/dsk/c0t3d0s6
read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80016
on Thu Nov 29 13:03:15 2007
/vicepc on /dev/dsk/c0t2d0s6
read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d8001e
on Thu Nov 29 13:03:15 2007
/vicepb on /dev/dsk/c0t1d0s6
read/write/setuid/intr/largefiles/xattr/onerror=panic/dev=1d8000e on Thu
Nov 29 13:03:15 2007

# grep vicep /etc/vfstab
/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /vicepa ufs 3  
yes -
/dev/dsk/c0t1d0s6   /dev/rdsk/c0t1d0s6  /vicepb ufs 3  
yes -
/dev/dsk/c0t2d0s6   /dev/rdsk/c0t2d0s6  /vicepc ufs 3  
yes -

#cat SalvageLog
@(#) OpenAFS 1.4.5 built  2007-11-28
11/29/2007 09:52:59 STARTING AFS SALVAGER 2.4 (/usr/afs/bin/salvager)
11/29/2007 09:52:59 No file system partitions named /vicep* found; not
salvaged

Does anyone know why this would be happening?

Thanks,
Jason


Re: [OpenAFS] MS Word crashing when opening files, 1.5.27 client

2007-11-29 Thread Rodney M. Dyer

At 10:59 AM 11/29/2007, Jeffrey Altman wrote:

Therefore, if you are providing files to be used simply as read only
templates, they should be stored in AFS in a manner that indicates to
the AFS client that they are in fact readonly so that the cache manager
knows it is safe to fake the locks locally.


One small question.  Historically, in virtually all DOS/Win PC networking 
environments, the file attribute "r" was also recognized by applications as 
meaning read-only (whole file), even if it is just advisory to the 
network client.  What does AFS do in this situation, if anything at all, or 
is that still the application's responsibility to recognize the "r" attribute?


Rodney



Re: [OpenAFS] Slow AFS performance

2007-11-29 Thread Rob Banz


ok, I'm now getting some better performance out of my FC array. vos move
of a 4.4GB volume from one disk in the FC array to another disk in the
array only took 16 minutes (4.6MB/s).

Derrick's suggestion of upgrading to 1.4.5 with namei did the trick.

Unfortunately, I don't think I can upgrade the firmware or tune the 3511
array. It's the 3511 expansion unit without the raid controller. It's a
JBOD.


I think 1.4.5 and above have the no-fsync stuff enabled by default,
which really speeds up operations that do a lot of file
creations/deletions, such as volume moves.
(If it doesn't, head over to the OpenAFS RT issue tracker and find the
patch.)


-rob


Re: [OpenAFS] 1.4.5 namei on solaris 9 sparc requires AlwaysAttach for vice partitions

2007-11-29 Thread Derrick Brashear
On Nov 29, 2007 3:34 PM, Jason Edgecombe [EMAIL PROTECTED] wrote:

 Hi all,

 In my sordid saga to get a Sun fibre channel array working well with
 AFS, I found the following:

 When I upgraded the server to 1.4.5 namei, the fileserver would not
 mount the /vicep? partitions without doing a touch
 /vicep?/AlwaysAttach first. These are dedicated partitions on separate
 hard drives.

 I'm using a source-compiled openafs on solaris 9 sparc. openafs was
 compiled with the following options:
 CC=/opt/SUNWspro/bin/cc YACC=yacc -vd ./configure \
  --enable-transarc-paths \
  --enable-largefile-fileserver \
  --enable-supergroups \
  --enable-namei-fileserver \
  --with-krb5-conf=/usr/local/krb5/bin/krb5-config

 We're using MIT Kerberos 1.4.1 on the clients & fileservers with a 1.6.x KDC

 # mount | grep vicep
 /vicepa on /dev/dsk/c0t0d0s6
 read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80006
 on Thu Nov 29 13:03:15 2007
 /vicepd on /dev/dsk/c0t3d0s6
 read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80016
 on Thu Nov 29 13:03:15 2007
 /vicepc on /dev/dsk/c0t2d0s6
 read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d8001e
 on Thu Nov 29 13:03:15 2007
 /vicepb on /dev/dsk/c0t1d0s6
 read/write/setuid/intr/largefiles/xattr/onerror=panic/dev=1d8000e on Thu
 Nov 29 13:03:15 2007

 # grep vicep /etc/vfstab
 /dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /vicepa ufs 3
 yes -
 /dev/dsk/c0t1d0s6   /dev/rdsk/c0t1d0s6  /vicepb ufs 3
 yes -
 /dev/dsk/c0t2d0s6   /dev/rdsk/c0t2d0s6  /vicepc ufs 3
 yes -

 #cat SalvageLog
 @(#) OpenAFS 1.4.5 built  2007-11-28
 11/29/2007 09:52:59 STARTING AFS SALVAGER 2.4 (/usr/afs/bin/salvager)
 11/29/2007 09:52:59 No file system partitions named /vicep* found; not
 salvaged

 Does anyone know why this would be happening?

Probably a bug in the "what's acceptable as a vice partition" logic... which
I thought I fixed before 1.4.5; I bet I committed the wrong thing (because I
tested it)


Re: [OpenAFS] 1.4.5 namei on solaris 9 sparc requires AlwaysAttach for vice partitions

2007-11-29 Thread Jason Edgecombe
Derrick Brashear wrote:


 On Nov 29, 2007 3:34 PM, Jason Edgecombe [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 Hi all,

 In my sordid saga to get a Sun fibre channel array working well with
 AFS, I found the following:

 When I upgraded the server to 1.4.5 namei, the fileserver would not
 mount the /vicep? partitions without doing a touch
 /vicep?/AlwaysAttach first. These are dedicated partitions on
 separate
 hard drives.

 I'm using a source-compiled openafs on solaris 9 sparc. openafs was
 compiled with the following options:
 CC=/opt/SUNWspro/bin/cc YACC=yacc -vd ./configure \
  --enable-transarc-paths \
  --enable-largefile-fileserver \
  --enable-supergroups \
  --enable-namei-fileserver \
  --with-krb5-conf=/usr/local/krb5/bin/krb5-config

 We're using MIT Kerberos 1.4.1 on the clients & fileservers with a
 1.6.x KDC

 # mount | grep vicep
 /vicepa on /dev/dsk/c0t0d0s6
 read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80006
 on Thu Nov 29 13:03:15 2007
 /vicepd on /dev/dsk/c0t3d0s6
 read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80016
 on Thu Nov 29 13:03:15 2007
 /vicepc on /dev/dsk/c0t2d0s6
 read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d8001e

 on Thu Nov 29 13:03:15 2007
 /vicepb on /dev/dsk/c0t1d0s6
 read/write/setuid/intr/largefiles/xattr/onerror=panic/dev=1d8000e
 on Thu
 Nov 29 13:03:15 2007

 # grep vicep /etc/vfstab
 /dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /vicepa ufs 3
 yes -
 /dev/dsk/c0t1d0s6   /dev/rdsk/c0t1d0s6  /vicepb ufs 3
 yes -
 /dev/dsk/c0t2d0s6   /dev/rdsk/c0t2d0s6  /vicepc ufs 3
 yes -

 #cat SalvageLog
 @(#) OpenAFS 1.4.5 built  2007-11-28
 11/29/2007 09:52:59 STARTING AFS SALVAGER 2.4 (/usr/afs/bin/salvager)
 11/29/2007 09:52:59 No file system partitions named /vicep* found; not
 salvaged

 Does anyone know why this would be happening?

 Probably a bug in the "what's acceptable as a vice partition" logic...
 which I thought I fixed before 1.4.5; I bet I committed the wrong
 thing (because I tested it)
  

Is it safe to run like this?

Should I file a bug?

Jason


Re: [OpenAFS] 1.4.5 namei on solaris 9 sparc requires AlwaysAttach for vice partitions

2007-11-29 Thread Derrick Brashear

  Does anyone know why this would be happening?
 
  Probably a bug in the "what's acceptable as a vice partition" logic...
  which I thought I fixed before 1.4.5; I bet I committed the wrong
  thing (because I tested it)
 
 
 Is it safe to run like this?


yup



 Should I file a bug?


probably


Re: [OpenAFS] 1.4.5 namei on solaris 9 sparc requires AlwaysAttach for vice partitions

2007-11-29 Thread Hartmut Reuter

I was also surprised today when I started the freshly compiled 1.4.5
fileserver on Solaris and it didn't attach any partitions.

There was a change between 1.4.4 and 1.4.5 in favour of zfs, but
unfortunately a broken one:

    /* Ignore non ufs or non read/write partitions */
    if ((strcmp(mnt.mnt_fstype, "ufs") != 0)
        || (strncmp(mnt.mnt_mntopts, "ro,ignore", 9) == 0))
        continue;

was changed to

    /* Ignore non ufs or non read/write partitions */
    /* but allow zfs too if we're in the NAMEI environment */
    if (
#ifdef AFS_NAMEI_ENV
        (((!strcmp(mnt.mnt_fstype, "ufs") &&
           strcmp(mnt.mnt_fstype, "zfs")))
#else
        (strcmp(mnt.mnt_fstype, "ufs") != 0)
#endif
        || (strncmp(mnt.mnt_mntopts, "ro,ignore", 9) == 0))
        continue;



The "!" in front of the first strcmp in the new version makes it ignore
exactly the ufs partitions. Just remove it!

Hartmut

Jason Edgecombe wrote:

Hi all,

In my sordid saga to get a Sun fibre channel array working well with
AFS, I found the following:

When I upgraded the server to 1.4.5 namei, the fileserver would not
mount the /vicep? partitions without doing a touch
/vicep?/AlwaysAttach first. These are dedicated partitions on separate
hard drives.

I'm using a source-compiled openafs on solaris 9 sparc. openafs was
compiled with the following options:
CC=/opt/SUNWspro/bin/cc YACC=yacc -vd ./configure \
  --enable-transarc-paths \
  --enable-largefile-fileserver \
  --enable-supergroups \
  --enable-namei-fileserver \
  --with-krb5-conf=/usr/local/krb5/bin/krb5-config

We're using MIT Kerberos 1.4.1 on the clients & fileservers with a 1.6.x KDC

# mount | grep vicep
/vicepa on /dev/dsk/c0t0d0s6
read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80006
on Thu Nov 29 13:03:15 2007
/vicepd on /dev/dsk/c0t3d0s6
read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d80016
on Thu Nov 29 13:03:15 2007
/vicepc on /dev/dsk/c0t2d0s6
read/write/setuid/intr/largefiles/logging/xattr/onerror=panic/dev=1d8001e
on Thu Nov 29 13:03:15 2007
/vicepb on /dev/dsk/c0t1d0s6
read/write/setuid/intr/largefiles/xattr/onerror=panic/dev=1d8000e on Thu
Nov 29 13:03:15 2007

# grep vicep /etc/vfstab
/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /vicepa ufs 3  
yes -
/dev/dsk/c0t1d0s6   /dev/rdsk/c0t1d0s6  /vicepb ufs 3  
yes -
/dev/dsk/c0t2d0s6   /dev/rdsk/c0t2d0s6  /vicepc ufs 3  
yes -


#cat SalvageLog
@(#) OpenAFS 1.4.5 built  2007-11-28
11/29/2007 09:52:59 STARTING AFS SALVAGER 2.4 (/usr/afs/bin/salvager)
11/29/2007 09:52:59 No file system partitions named /vicep* found; not
salvaged

Does anyone know why this would be happening?

Thanks,
Jason



--
-
Hartmut Reuter   e-mail [EMAIL PROTECTED]
   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)   fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-



Re: [OpenAFS] MS Word crashing when opening files, 1.5.27 client

2007-11-29 Thread Jeffrey Altman
Rodney M. Dyer wrote:
 At 10:59 AM 11/29/2007, Jeffrey Altman wrote:
 Therefore, if you are providing files to be used simply as read only
 templates, they should be stored in AFS in a manner that indicates to
 the AFS client that they are in fact readonly so that the cache manager
 knows it is safe to fake the locks locally.
 
 One small question.  Historically, in virtually all DOS/Win PC
 networking environments, the file attribute "r" was also recognized by
 applications as meaning read-only (whole file), even if it is just
 advisory to the network client.  What does AFS do in this situation,
 if anything at all, or is that still the application's responsibility to
 recognize the "r" attribute?
 
 Rodney

The Windows/DOS Read-only attribute is interpreted by the application
and is separate from the AFS "r" permission.   When it is set, Office
applications open documents in shared read-only mode, which means that
they still obtain locks on the file.

Jeffrey Altman




Re: [OpenAFS] File systems on Linux, again.

2007-11-29 Thread Hartmut Reuter

Smith, Matt wrote:

After the recent thread openafs upgrade from 1.4.1 to 1.5.7, and a
review of a thread[1] from July, I'm wondering if there is a definitive
recommendation for which file system to use on Linux AFS file servers.
Ext3, XFS, JFS, something else?

Thanks all,
-Matt

[1] http://www.openafs.org/pipermail/openafs-info/2007-July/026798.html


We have been using xfs exclusively for many years. It performs well, and
you can enlarge partitions on the fly with lvextend and xfs_growfs.
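For reference, growing a vice partition this way looks roughly like the following; the volume-group and partition names are placeholders for your own setup:

```shell
# Extend the logical volume backing /vicepa by 50 GB...
lvextend -L +50G /dev/vg0/vicepa

# ...then grow the mounted xfs filesystem online to fill the new space.
xfs_growfs /vicepa
```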


Hartmut
-
Hartmut Reuter   e-mail [EMAIL PROTECTED]
   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)   fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-


Re: [OpenAFS] MS Word crashing when opening files, 1.5.27 client

2007-11-29 Thread Hans Melgers

Thanks Jeffrey,

snip

Therefore, if you are providing files to be used simply as read only
templates, they should be stored in AFS in a manner that indicates to
the AFS client that they are in fact readonly so that the cache manager
knows it is safe to fake the locks locally.

Would this also be achieved when we vos release this volume to a ro
clone? (not necessarily on another fileserver?)


I wonder what common practice is for this kind of shared volume; this
one is a "projects" folder where everybody stores his or her additions
to several projects, so a lot of files are opened as templates to make
new ones that are also stored in the same folder or subfolders.
If a read-only copy could solve these locking problems (or at least
bring the number of occurrences down), we'll have to do a lot of vos
release commands throughout the day to avoid the locking problems as
much as possible. How do you handle this?


Hans



Jeffrey Altman wrote:

Jeffrey Altman wrote:
  

If the files are truly intended for read-only use, store them in a
directory that provides only 'rl' access to the end users or store them
in a .readonly volume.   In both of those cases the AFS Cache Manager
knows that the user cannot obtain a lock on the file and will issue one
locally.

Jeffrey Altman



Now that I am back in my own timezone let me take the time to explain a
bit more about locking and Microsoft Office applications.  Office
applications will obtain an exclusive lock on a file even when the file
is being opened in read-only mode.  OAFW will translate "file open for
read and not write and not delete" plus "request for exclusive lock" as
meaning "obtain a read lock on the file".

The problem that you are experiencing is that while the Office
application is requesting a lock for a very small subset of the file,
AFS only implements full-file locks.  If Office applications on two
machines attempt to open the same file and the first one has the file
open for write, the second one won't be able to open it for read, because
lock requests that otherwise would be non-intersecting byte ranges
collide when translated into full-file locks.

Therefore, if you are providing files to be used simply as read only
templates, they should be stored in AFS in a manner that indicates to
the AFS client that they are in fact readonly so that the cache manager
knows it is safe to fake the locks locally.

Jeffrey Altman
  


Re: [OpenAFS] MS Word crashing when opening files, 1.5.27 client

2007-11-29 Thread Jeffrey Altman
Hans Melgers wrote:
 Thanks Jeffrey,
 
 snip
 
 Therefore, if you are providing files to be used simply as read only
 templates, they should be stored in AFS in a manner that indicates to
 the AFS client that they are in fact readonly so that the cache manager
 knows it is safe to fake the locks locally.
 
 Would this also be achieved when we vos release this volume to a ro
 clone ? (not necessarily on another fileserver?)

yes.  that is a .readonly volume.
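A minimal sketch of setting that up follows; the server, partition, volume, and cell names are placeholders:

```shell
# Add a read-only site for the volume (it can live on the same
# server/partition as the RW copy) and push the current contents
# out to the .readonly clone.
vos addsite fs1.example.com a projects.templates
vos release projects.templates

# A plain (non -rw) mount point in the read-only path resolves to the
# RO clone once one exists, so clients opening templates through it
# can fake the locks locally.
fs mkmount /afs/example.com/templates projects.templates
```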

 I wonder what common practice is for this kind of shared volume; this
 one is a "projects" folder where everybody stores his or her additions
 to several projects, so a lot of files are opened as templates to make
 new ones that are also stored in the same folder or subfolders.
 If a read-only copy could solve these locking problems (or at least
 bring the number of occurrences down), we'll have to do a lot of vos
 release commands throughout the day to avoid the locking problems as
 much as possible. How do you handle this?

I can very easily imagine a "publish as template" button that gets
implemented in Office applications as a button on a custom toolbar.  The
button uses the macro language to save the file locally, copy the file
to a drop-box location in AFS, and then post a message to a web form with
the name of the file that instructs a privileged service to move the
copy into a templates directory that normal users only have 'rl' on.

There is nothing wrong with users saving files in the same directory.
It's just that only one of them at a time can open them.

This is just one of the limitations that the gatekeepers would like to
address but for which we have no resources.  Contributions are welcome.

Jeffrey Altman