Re: [OpenAFS] New to OpenAFS

2008-03-20 Thread Jason Edgecombe
billbaird3 wrote:
 Jason,

 Thanks so much for clarifying things; for the majority of users, I think
 this will solve a lot of file-sharing issues. Most of our locations are
 dedicated to a specific department, so traffic will stay on the local
 network. I do have a few additional questions if you don't mind...

 - Our accounting group is distributed, with an equal number of people in
 NY, NC & LA. Right now they all connect to our server (WebDAV, Oracle
 Content Services) in NY. It is great for the NY users, but slow for the
 others. Is there any way to make an accounting folder available for fast
 access in multiple locations? Is there any way to have read/write clones?
 In your example, you say that the read/write copy is accessible via
 /afs/.example.com/shared/procedures. This is different from
 /afs/example.com/shared/procedures. Just wanted to confirm that it isn't a
 typo...

 - Have you integrated your OpenAFS server into an LDAP or directory
 server? We are planning to run Samba with an OpenLDAP backend for our
 domain. Is this possible? I haven't been able to find much documentation
 about this. We are actually starting fresh, so we are open to any
 directory system as long as we can have other apps authenticate via LDAP.

 - Lastly, when users are connecting to AFS, do they only need to be able
 to contact one AFS server, or do they contact others? For example, do the
 LA users need to talk to the NY server? Or do they talk to the LA server,
 which handles the interaction with NY?
Glad to help.

Currently, OpenAFS cannot do read/write replicas; only read-only data
can be replicated. The multi-office accounting folder would have to be
hosted at one office. Read-only copies could be kept at all offices, but
the read-write copy can only be on one server. /afs/.example.com was
not a typo; that's called the "dotted path" in AFS jargon. It is the
path to use to reach the read-write copy of data. /afs/example.com will
prefer read-only data when certain rules are met. If you like, you can
get more fine-grained than I showed. Each directory could be on a
different volume, which could be on a different server in a different
office. You could set up project directories under the departmental
shared folders. For example, /afs/example.com/accounting/payroll could
be in New York and /afs/example.com/accounting/auditing could be in
Chicago. A user cannot tell which server a file is on, except possibly by
the delay incurred from a distant office. Keep in mind that AFS clients
have good caching. If you save a file in AFS, then immediately open it,
there will be no delay because the file is still cached. AFS caches
chunks of files. You can have a 1GB or larger disk cache on Windows
(check the release notes), so the client won't have to re-fetch the file
unless it has changed.
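
For concreteness, the commands behind this look roughly like the
following (server, partition, and volume names are invented for
illustration; a sketch, not a recipe):

    # Replicate a shared volume read-only at a second office:
    vos addsite la-fs1 /vicepa shared.procedures   # define an RO site in LA
    vos release shared.procedures                  # push the RW contents to the RO sites

    # Writes go through the dotted (read-write) path, then get published:
    cp handbook.doc /afs/.example.com/shared/procedures/
    vos release shared.procedures

    # Per-project volumes hosted in different offices, mounted side by side:
    vos create ny-fs1 /vicepa acct.payroll
    vos create chi-fs1 /vicepa acct.auditing
    fs mkmount /afs/.example.com/accounting/payroll acct.payroll
    fs mkmount /afs/.example.com/accounting/auditing acct.auditing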

At my work, we are not using a directory server; we use a script to
distribute /etc/passwd. You will need to set up a Kerberos server for
AFS. OpenLDAP can serve the /etc/passwd data to Linux/UNIX machines. I
don't know how Samba+OpenLDAP would work. Since AFS requires Kerberos,
Kerberos will be your primary password store for AFS. You either need to
have OpenLDAP/Samba bounce password requests to Kerberos or do some type
of password synchronization. I have heard of people using Kerberos with
OpenLDAP, though.
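
With MIT Kerberos, the account-creation flow is roughly this (the
username is hypothetical; just a sketch of the moving parts):

    # Create the principal in Kerberos -- this is where the password lives:
    kadmin -q "addprinc bill"
    # Create the matching entry in the AFS protection database:
    pts createuser bill
    # At login, a Kerberos ticket is converted into an AFS token:
    kinit bill && aklog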

The client must be able to contact the fileserver holding the files AND the
DB servers. Each client must find the DB servers for your AFS cell via
the CellServDB file or DNS. The client contacts one of the DB servers to
find out which server contains the file it wants. After that, the client
talks to the fileserver containing the files. DB servers are a critical
part of the AFS infrastructure. Three is the recommended minimum number
of DB servers, so that you still have full functionality if one goes down.
You still have partial functionality and normal file serving with two
down, but you can't do some admin operations like moving volumes. With
multiple offices, I highly recommend putting the three DB servers in
different offices. Using the fs setserverprefs command, you can set
clients to contact the DB server nearest to them to avoid WAN traffic.
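
For reference, a client-side CellServDB entry looks something like this
(addresses and hostnames invented for the example; DNS AFSDB records are
the alternative):

    >example.com            #Example Corp cell
    192.0.2.10              #afsdb1-ny.example.com
    192.0.2.20              #afsdb2-nc.example.com
    192.0.2.30              #afsdb3-la.example.com

and on an LA client you could rank the local DB server as most preferred
(lower rank = more preferred):

    fs setserverprefs -vlservers 192.0.2.30 1 192.0.2.10 40000 192.0.2.20 40000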

I forgot to mention that a single machine can be both a fileserver and a
DB server. DB and fileserver are just different processes that define
roles in the infrastructure.
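
For example, the usual bos commands from the Quick Start Guide would start
both sets of processes on one host (hostname and cell name hypothetical):

    # Database server processes:
    bos create ny-afs1 ptserver simple /usr/afs/bin/ptserver -cell example.com
    bos create ny-afs1 vlserver simple /usr/afs/bin/vlserver -cell example.com
    # Fileserver trio on the same machine:
    bos create ny-afs1 fs fs /usr/afs/bin/fileserver \
        /usr/afs/bin/volserver /usr/afs/bin/salvager -cell example.com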

Any more questions?

Sincerely,
Jason
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] New to OpenAFS

2008-03-20 Thread Christopher D. Clausen
billbaird3 [EMAIL PROTECTED] wrote:
 I'm new to OpenAFS and was hoping the community could help me
 determine if it would be a good fit for my company. We are approx. 150
 people, with 50 home users and the rest in small offices of about
 10-15 people. I would like to have a main file server that everyone
 can access, but also departmental servers in offices that would allow
 people to save files quickly (without going over the WAN).

What operating system are these users running?  If you are running
nearly all Microsoft Windows machines, you probably want to at least look
at Microsoft's Distributed File System (Dfs). It allows for multi-master
read-write replicas and a user-defined site topology to optimize
replication and allow clients to find the closest replica. Be aware
that Dfs is not encrypted and is not a true WAN filesystem; Microsoft
recommends using IPsec to secure connections. There is also no caching
by default (one would need to set up the Offline Folders functionality to
cache files locally on the computer).

While AFS is very useful in heterogeneous environments, there may be a 
better choice if only a single operating system is in use.

CDC


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] New to OpenAFS

2008-03-20 Thread billbaird3


What operating system are these users running?  If you are running
 nearly all Microsoft Windows machines, you probably want to at least look
 at Microsoft's Distributed File System (Dfs). It allows for multi-master
 read-write replicas and a user-defined site topology to optimize
 replication and allow clients to find the closest replica. Be aware
 that Dfs is not encrypted and is not a true WAN filesystem; Microsoft
 recommends using IPsec to secure connections. There is also no caching
 by default (one would need to set up the Offline Folders functionality to
 cache files locally on the computer).

Thanks for replying! I had looked at DFS, but we currently run almost
entirely on Linux servers and I would like to keep it that way. With the
licensing fees, I doubt it would even get approved.

Also, with our 50 home users, using CIFS over a VPN would be pretty
painful. From what I understand, OpenAFS is a more efficient WAN protocol
(we're using WebDAV now, which is much more efficient than CIFS). We may
eventually use a terminal server or remote desktop solution, but using
CIFS over a WAN would almost make that a necessity.

--Bill

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] AFS namei file servers, SAN, any issues elsewhere? We've had some. Can AFS _cause_ SAN issues?

2008-03-20 Thread Kim Kimball

Thanks, Jason.

Is the hardware the same as what you tested last year?

Kim


Jason Edgecombe wrote:

Is this what you need?

   PKGINST:  SUNWsan
      NAME:  SAN Foundation Kit
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  1.0
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  This package provides a support for the SAN Foundation Kit.
    PSTAMP:  sanserve-a20031029172438
  INSTDATE:  Jan 15 2008 10:37
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:   22 installed pathnames
               4 shared pathnames
               1 linked files
              11 directories
               2 executables
             239 blocks used (approx)


Running Solaris 9 9/05 HW (SPARC) with the Sun SAN Foundation kit.

Jason

Kim Kimball wrote:

Hi Jason,

Thanks!

Can you tell me which flavor of SAN you're using?

Kim


Jason Edgecombe wrote:

Robert Banz wrote:



AFS can't really cause SAN issues, in that it's just another
application using your filesystem.  In some cases, it can be quite
a heavy user of such, but since it's only interacting through the
fs, it's not going to know anything about your underlying storage
fabric, or have any way of targeting it for any more badness than
any other filesystem user.


One of the big differences affecting the filesystem IO load
between 1.4.1 & 1.4.6 was the removal of functions that made
copious fsync operations.  These operations were called in
fileserver/volserver functions that modified various in-volume
structures, specifically file creations and deletions, and would
lead to rather underwhelming performance when doing vos restores,
deleting, or copying large file trees.  In many configurations,
this causes the OS to pass on a call to the underlying storage to
verify that all changes have been written to *disk*,
causing the storage controller to flush its write cache.  Since
this defeats many of the benefits (wrt I/O scheduling) of having a
cache on your storage hardware, this could lead to overloaded
storage.


Some storage devices have the option to ignore these calls from
hosts, assuming your write cache is reliable.


Under UFS, I would suggest that you run in 'logging' mode
when using the namei fileserver on Solaris, as yes, fsck is rather
horrible to run.  Performance on reasonably recent versions of ZFS
was quite acceptable as well.
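
For reference, UFS logging is enabled per filesystem via the mount-options
field in /etc/vfstab; a sketch with hypothetical device names:

    /dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /vicepa  ufs  2  yes  logging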


I can confirm Robert's observations. I recently tested OpenAFS 1.4.1
inode vs. 1.4.6 namei on Solaris 9 SPARC with a Sun StorEdge 3511
expansion tray fibre channel device. The difference is staggering
with vos move and such. We have been using the 1.4.6 namei config on
a SAN for a few months now with no issues.


Jason
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info






[OpenAFS] kvibille as nautilus-extension?

2008-03-20 Thread Dave Botsch
So, I compiled kvibille 0.2 for RHEL4 and the nautilus extension libs were
generated. lsof shows nautilus is using
/usr/lib/.../libkvibille-nautilus.so.0.0.0,

but no additional tab shows up in the properties dialog for files/folders
in AFS.

The standalone program does seem to work.

Thoughts?

-- 

David William Botsch
Programmer/Analyst
CNF Computing
[EMAIL PROTECTED]

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] AFS namei file servers, SAN, any issues elsewhere? We've had some. Can AFS _cause_ SAN issues?

2008-03-20 Thread Jason Edgecombe
Yes. We only have one fibre channel HBA and one fibre channel disk pack, 
a Sun StorEdge 3511 expansion tray with SATA disks.


For what it's worth, we just tested the 1.4.6 inode fileserver
(non-logging UFS) on an old-style direct-attached SCSI disk pack and saw
vos performance similarly sluggish to what we saw on our SAN disk pack
running the 1.4.1 inode fileserver.
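
For reference, the vos operation in question is a volume move between
servers, e.g. (volume, server, and partition names hypothetical):

    # Moving a volume exercises the create/delete paths where the
    # fsync cost shows up:
    time vos move -id home.bill -fromserver afs1 -frompartition /vicepa \
        -toserver afs2 -topartition /vicepb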


Jason

Kim Kimball wrote:

Thanks, Jason.

Is the hardware the same as what you tested last year?

Kim


Jason Edgecombe wrote:

Is this what you need?

   PKGINST:  SUNWsan
      NAME:  SAN Foundation Kit
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  1.0
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  This package provides a support for the SAN Foundation Kit.
    PSTAMP:  sanserve-a20031029172438
  INSTDATE:  Jan 15 2008 10:37
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:   22 installed pathnames
               4 shared pathnames
               1 linked files
              11 directories
               2 executables
             239 blocks used (approx)


Running Solaris 9 9/05 HW (SPARC) with the Sun SAN Foundation kit.

Jason

Kim Kimball wrote:

Hi Jason,

Thanks!

Can you tell me which flavor of SAN you're using?

Kim


Jason Edgecombe wrote:

Robert Banz wrote:



AFS can't really cause SAN issues, in that it's just another
application using your filesystem.  In some cases, it can be quite
a heavy user of such, but since it's only interacting through the
fs, it's not going to know anything about your underlying storage
fabric, or have any way of targeting it for any more badness than
any other filesystem user.


One of the big differences affecting the filesystem IO load
between 1.4.1 & 1.4.6 was the removal of functions that made
copious fsync operations.  These operations were called in
fileserver/volserver functions that modified various in-volume
structures, specifically file creations and deletions, and would
lead to rather underwhelming performance when doing vos restores,
deleting, or copying large file trees.  In many configurations,
this causes the OS to pass on a call to the underlying storage to
verify that all changes have been written to *disk*,
causing the storage controller to flush its write cache.  Since
this defeats many of the benefits (wrt I/O scheduling) of having a
cache on your storage hardware, this could lead to overloaded
storage.


Some storage devices have the option to ignore these calls from
hosts, assuming your write cache is reliable.


Under UFS, I would suggest that you run in 'logging' mode
when using the namei fileserver on Solaris, as yes, fsck is rather
horrible to run.  Performance on reasonably recent versions of ZFS
was quite acceptable as well.


I can confirm Robert's observations. I recently tested OpenAFS 1.4.1
inode vs. 1.4.6 namei on Solaris 9 SPARC with a Sun StorEdge 3511
expansion tray fibre channel device. The difference is staggering
with vos move and such. We have been using the 1.4.6 namei config on
a SAN for a few months now with no issues.


Jason 


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] AFS namei file servers, SAN, any issues elsewhere? We've had some. Can AFS _cause_ SAN issues?

2008-03-20 Thread Dale Ghent


Note that you only need SUNWsan if you're running Solaris < 10.

Why one would run Solaris < 10 these days is beyond me, but...

/dale


On Mar 20, 2008, at 3:40 PM, Kim Kimball wrote:


Thanks, Jason.

Is the hardware the same as what you tested last year?

Kim


Jason Edgecombe wrote:

Is this what you need?

   PKGINST:  SUNWsan
      NAME:  SAN Foundation Kit
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  1.0
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  This package provides a support for the SAN Foundation Kit.
    PSTAMP:  sanserve-a20031029172438
  INSTDATE:  Jan 15 2008 10:37
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed
     FILES:   22 installed pathnames
               4 shared pathnames
               1 linked files
              11 directories
               2 executables
             239 blocks used (approx)


Running Solaris 9 9/05 HW (SPARC) with the Sun SAN Foundation kit.

Jason

Kim Kimball wrote:

Hi Jason,

Thanks!

Can you tell me which flavor of SAN you're using?

Kim


Jason Edgecombe wrote:

Robert Banz wrote:



AFS can't really cause SAN issues, in that it's just another
application using your filesystem.  In some cases, it can be
quite a heavy user of such, but since it's only interacting
through the fs, it's not going to know anything about your
underlying storage fabric, or have any way of targeting it for
any more badness than any other filesystem user.


One of the big differences affecting the filesystem IO
load between 1.4.1 & 1.4.6 was the removal of functions that
made copious fsync operations.  These operations were called in
fileserver/volserver functions that modified various in-volume
structures, specifically file creations and deletions, and would
lead to rather underwhelming performance when doing vos restores,
deleting, or copying large file trees.  In many configurations,
this causes the OS to pass on a call to the underlying storage to
verify that all changes have been written to *disk*, causing the
storage controller to flush its write cache.  Since this defeats
many of the benefits (wrt I/O scheduling) of having a cache on
your storage hardware, this could lead to overloaded storage.


Some storage devices have the option to ignore these calls from
hosts, assuming your write cache is reliable.


Under UFS, I would suggest that you run in 'logging' mode
when using the namei fileserver on Solaris, as yes, fsck is
rather horrible to run.  Performance on reasonably recent
versions of ZFS was quite acceptable as well.


I can confirm Robert's observations. I recently tested OpenAFS
1.4.1 inode vs. 1.4.6 namei on Solaris 9 SPARC with a Sun StorEdge
3511 expansion tray fibre channel device. The difference is
staggering with vos move and such. We have been using the 1.4.6
namei config on a SAN for a few months now with no issues.


Jason
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info



