Re: [OpenAFS] order of directory entries re: creat() vs readdir()

2009-09-01 Thread Jeffrey Altman
Todd M. Lewis wrote:

> Thoughts? Comments?

I wouldn't count on the order of directory entries in the enumeration.
As slots at the beginning of the raw directory structure are freed,
they will be reused.  Moreover, the client can present the directory
entries to the application in any order that is convenient for it.
On Windows, for example, directory enumeration is performed by walking
the leaf nodes of a B+ tree constructed from the raw data.  As a
result, the entries are always alphabetized.
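
If you need a deterministic order, impose one on the client side
rather than trusting the enumeration.  A minimal sketch (plain POSIX,
not AFS-specific) that reads the whole directory and sorts the
entries itself:

    /* Read all entries and sort them ourselves, since readdir()
     * order is not guaranteed.  scandir() applies the comparator we
     * supply; alphasort() gives lexicographic order. */
    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        struct dirent **names;
        int n = scandir(path, &names, NULL, alphasort);

        if (n < 0) {
            perror("scandir");
            return 1;
        }
        for (int i = 0; i < n; i++) {
            printf("%s\n", names[i]->d_name);
            free(names[i]);
        }
        free(names);
        return 0;
    }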





[OpenAFS] order of directory entries re: creat() vs readdir()

2009-09-01 Thread Todd M. Lewis
We have a process that may run on any number of clients at various
times; each run drops a small file into a common directory. These
files represent queued work requests that get done in batches about
once per hour by another process running on a single server. That
process deletes the files it has processed.

The clients include a time stamp in the file names. However, because
of client clock skew (which was more of an issue many years ago when
this was set up), rather than sorting the files by time stamp, the
server process uses other heuristics to determine the order in which
to process requests, based on the nature of the requests themselves --
creates, mods, deletes, etc. falling into a hierarchy that almost
always does the Right Thing. The preferred order would be the actual
order in which the files were created.

My question, then, is about the order of entries as files are created
in AFS directories. It occurs to me that even though the clients'
clocks may be out of sync, the files' actual creation order may be
reflected in the order directory entries are returned from readdir(),
since that ordering is established on the server. A few tests seem to
show that this is the case, but... are there any guarantees about the
order directory entries are created and returned from readdir(), all
else being equal (by which I mean, starting from an empty directory)?
If there are no guarantees, what about emergent properties of the
current implementation? That's subject to change, of course, but these
aren't life-or-death critical operations, and they are already
questionably ordered in some circumstances under our current scheme.
So, would this be a reasonable way to improve our processing order?
Obviously we'd need to pay attention to how we delete these files to
reduce create/delete ordering issues.
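
(For the curious, my tests amounted to something like the sketch
below, run in a scratch directory in AFS: create a batch of files in
a known order, then see whether readdir() hands them back in the same
order.  The file names are made up for illustration.)

    /* Create files in a known order, then enumerate the directory
     * to see whether readdir() reflects creation order. */
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char name[32];
        int i;

        for (i = 0; i < 10; i++) {
            snprintf(name, sizeof(name), "req-%03d", i);
            int fd = creat(name, 0644);      /* creation order */
            if (fd >= 0)
                close(fd);
        }

        DIR *d = opendir(".");
        if (d == NULL)
            return 1;
        struct dirent *de;
        while ((de = readdir(d)) != NULL)    /* enumeration order */
            printf("%s\n", de->d_name);
        closedir(d);
        return 0;
    }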

Thoughts? Comments?
--
todd_le...@unc.edu
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] symantec netbackup on Linux

2009-09-01 Thread Gary Gatling


Greetings,

Has anyone ever gotten NetBackup backup and restore to work with
OpenAFS servers running on Linux? I was able to get restores to work
with 6.0 and 5.1 clients, but not backups.


In netbackup I have a file selection for my AFS policy that reads 
/vicep[a-z]


I have been able to back up and restore volumes using NetBackup 5.1
MP7, 6.0 MP4, and 6.5.4 clients running on SPARC Solaris OpenAFS
servers (OpenAFS version 1.4.11, because we just upgraded our
servers). But when I try with Linux, it just backs up the /vicepa
directory as if it were using a standard policy, instead of using vos
commands on volumes like it should.
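
(To sanity-check what the backup client should be seeing, I list the
volumes on a partition directly with something like

    vos listvol <server> a

which is how I confirm what actually lives on /vicepa.)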


So it backs up all the oddly named files under /vicepa/*, and when you
go to look at the list of what it backed up, it opens a dialog saying
no files match the criteria. The same version of the software on
Solaris handles AFS backups just fine.


For example, on Solaris it finds two files on my fileserver and backs
up one of them, which is a volume. (I guess the other is the .backup
volume.) But on Linux, with the same basic setup, it finds 24 files to
back up, and as they run they are listed as things like:


/vicepa/AFSIDat/c=/cJc1U/+/+/=2

(So maybe there is some bug in the Linux version of NetBackup's file
selection mechanism...)


I am using version 6.5.4 on my test NetBackup server.

People here want us to get off of SPARC Solaris, but since we are
stuck with NetBackup as a backup solution, it looks like we might be
stuck on that platform... Has anyone else run into this or found a
workaround?


The NetBackup System Administrator's Guide, Volume 2, for 6.0 says
that AFS backups are only supported on Solaris 7, HP-UX 11.0, and IBM
AIX 4.3.3 platforms. It does seem to work fine with Solaris 10, even
though that is no longer officially supported by Symantec.


We are running a mix of SPARC Solaris 10 and RHEL 5 in our
environment, so it would be awesome if there were some way to get AFS
backups to work on RHEL 5. We already have several Linux AFS file
servers, but they only hold read-only volumes, so there is no need to
back those up.


I was actually surprised that NetBackup 6.5.4 on Solaris worked with
OpenAFS. Our instructor at the NetBackup course told us that only old
versions of the software would be likely to work.


Thanks,

Gary Gatling  | ITECS Systems
ITECS, BOX 7901   | Operations and Systems Analyst
NCSU, Raleigh, NC | Email: gsgatlin at eos.ncsu.edu
27695-7901| (5C Page Hall)
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Snow Leopard (OS X 10.6) and kerberos ssh logins

2009-09-01 Thread Adeyemi Adesanya


BTW, the following message is logged to /var/log/secure.log each time
I attempt to perform a Kerberos ssh login:


"in pam_sm_authenticate(): Failed to determine Kerberos principal name."

I should mention that I can perform Kerberos logins from the console
without any problems.
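
(Given the pam_sm_authenticate() reference, that error presumably
comes from the pam_krb5 module in the sshd PAM stack; the relevant
line in /etc/pam.d/sshd is something like

    auth       optional       pam_krb5.so

though I may not have the exact options right.)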


---
Yemi

On Sep 1, 2009, at 10:12 AM, Adeyemi Adesanya wrote:



I've installed the official release of Snow Leopard and I'm running
OpenAFS 1.4.11 without any trouble. Apple have bundled a more recent
version of OpenSSH (5.2p1). Apple support actually sent me a message
claiming that the Kerberos credentials cache issue in 10.5 is now
fixed in 10.6, but I'll believe it when I see it.  What's interesting
is that the sshd PAM stack (/etc/pam.d/sshd) now includes pam_krb5.
Has anyone successfully logged into a Snow Leopard system via ssh
using Kerberos authentication? I haven't got it working yet...



---
Yemi


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Snow Leopard (OS X 10.6) and kerberos ssh logins

2009-09-01 Thread Adeyemi Adesanya


I've installed the official release of Snow Leopard and I'm running
OpenAFS 1.4.11 without any trouble. Apple have bundled a more recent
version of OpenSSH (5.2p1). Apple support actually sent me a message
claiming that the Kerberos credentials cache issue in 10.5 is now
fixed in 10.6, but I'll believe it when I see it.  What's interesting
is that the sshd PAM stack (/etc/pam.d/sshd) now includes pam_krb5.
Has anyone successfully logged into a Snow Leopard system via ssh
using Kerberos authentication? I haven't got it working yet...



---
Yemi
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: Support for large volumes

2009-09-01 Thread Andrew Deason
On Tue, 01 Sep 2009 12:46:20 -0400
Jeffrey Altman  wrote:

> For public interface changes, proposals will need to be developed
> and submitted to the AFS3 Standardization mailing list for review
> and to obtain consensus.  The result should be an Internet-Draft
> style document that can be used as an AFS3 standard by
> implementers.

The wire protocol changes for using 64-bit-sized volumes have already
been listed[1], if I'm not mistaken (or at least some of them have, if
I missed any). It's not an Internet-Draft, but it does have the
relevant fields if you want to work from there.

I don't believe those changes are necessary to use "large" volumes,
though; they only need to exist to get correct reporting of volume
usage and for setting volume quotas above the 31-bit mark.
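
(For scale -- if I have the units right, volume sizes and quotas are
counted in 1K blocks, so a signed 32-bit counter tops out just under
2 TB:

    /* Back-of-the-envelope for the 31-bit volume size ceiling. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t max_blocks = INT32_MAX;          /* 2^31 - 1 blocks */
        int64_t max_bytes  = max_blocks * 1024;  /* 1K per block */

        printf("%lld blocks = %lld bytes\n",
               (long long)max_blocks, (long long)max_bytes);
        return 0;
    }

which prints 2147483647 blocks = 2199023254528 bytes.)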

[1] The ones under the sections '64-bit unsigned values for volume
quotas' and '64-bit unsigned values for volume blocks in use, partition
blocks in use, and maximum partition blocks' in:


-- 
Andrew Deason
adea...@sinenomine.net

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Support for large volumes

2009-09-01 Thread Jeffrey Altman
Mauro:

No one is working on this at the moment.  The first thing that needs
to be done is to create a list of all of the places in the public
interfaces (wire protocols, pioctls, dump formats, etc) where there
will be an impact as well as local or internal changes.

For public interface changes, proposals will need to be developed and
submitted to the AFS3 Standardization mailing list for review and to
obtain consensus.  The result should be an Internet-Draft style
document that can be used as an AFS3 standard by implementers.

For the local and internal changes, they need to be submitted for
review on the OpenAFS developers list prior to implementation.  Tools
providing for forward and backward migration should accompany on-disk
format changes.

Jeffrey Altman


Mauricio Villarroel wrote:
> Thanks, Jeffrey, for the quick response.
> 
> Is someone currently working on this? I would love to give a hand
> if there is a list somewhere of the changes and tasks needed.
> 
> Mauro
> 
> On Mon, Aug 31, 2009 at 7:18 PM, Jeffrey Altman
> <jalt...@secure-endpoints.com> wrote:
> 
> Mauricio Villarroel wrote:
> 
> > What is the limit for a volume? I read online that one possible
> > source of the limitation is some data structures that use 32-bit
> > ints, and that changing them would mean changing the AFS
> > communication protocol. If this is the case, is there a plan to
> > increase it?
> 
> The volume limit is 2^31 (max signed int).
> 
> Raising this limit requires introducing a new set of RPCs, modifying the
> on-disk format, creating disk format conversion tools, etc.   Yes: we
> want to do it.  No: there is no scheduled timeline for doing so.
> However, we would prefer it to be sooner rather than later.
> 
> Jeffrey Altman
> 
> 
> 
> 


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Support for large volumes

2009-09-01 Thread Mauricio Villarroel
Thanks, Jeffrey, for the quick response.

Is someone currently working on this? I would love to give a hand if
there is a list somewhere of the changes and tasks needed.

Mauro

On Mon, Aug 31, 2009 at 7:18 PM, Jeffrey Altman <
jalt...@secure-endpoints.com> wrote:

> Mauricio Villarroel wrote:
>
> > What is the limit for a volume? I read online that one possible
> > source of the limitation is some data structures that use 32-bit
> > ints, and that changing them would mean changing the AFS
> > communication protocol. If this is the case, is there a plan to
> > increase it?
>
> The volume limit is 2^31 (max signed int).
>
> Raising this limit requires introducing a new set of RPCs, modifying the
> on-disk format, creating disk format conversion tools, etc.   Yes: we
> want to do it.  No: there is no scheduled timeline for doing so.
> However, we would prefer it to be sooner rather than later.
>
> Jeffrey Altman
>
>
>
>


[OpenAFS] OpenAFS in LTSP environment

2009-09-01 Thread Joerg Herzinger
I've been using OpenAFS for three years now and am really impressed by
its possibilities and performance. I am now at a new company with
about 40 clients, all running LTSP. We are still using NFS for our
users' homes and shared directories, and it is slowly becoming a real
pain in the ass. I really miss ACLs, useful quotas, and the
possibility to make things publicly available.
My thinking is this: one of the great things about OpenAFS is the
cache. It heavily reduces disk load on the server, which is currently
our main problem. With LTSP this could become very interesting:
because most applications run on one server, disk IO could again be
the bottleneck. There are just some applications, like Firefox,
Thunderbird, and OpenOffice.org, that run on the local machines with
their own OpenAFS client and could really take advantage of the cache.
One thought that came to me would be to use a ramfs as the cache.
Would that be possible?
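
(From the afsd documentation it looks like there is also a -memcache
switch that keeps the cache entirely in memory, which might achieve
the same effect without an explicit ramfs -- something like

    afsd -memcache -blocks 131072

where -blocks gives the cache size in 1K blocks, if I am reading the
options right.)
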
I hoped to get some input on that topic here. What are your thoughts?
OpenAFS definitely has some huge advantages over NFS, but I am really
concerned that it won't work out that well with LTSP.


so long,
   Jörg
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info