Re: rpc.lockd and true NFS locks?

2000-12-17 Thread Axel Thimm

On Sat, Dec 16, 2000 at 04:27:20PM -0600, Dan Nelson wrote:
> In the last episode (Dec 16), Axel Thimm said:
> > Wouldn't that mean that you might cause data corruption if, say, I were to
> > read my mail from a FreeBSD box over an NFS-mounted spool directory
> > (running under OSF1 in our case), and I decided to write back the mbox to
> > the spool dir at the same moment new mail is delivered?
> That's why dotlocking is recommended for locking mail spools.  Both procmail
> and mutt will dotlock your mail file while it's being accessed.

That was just one test case. Not all programs are kind enough to let you
control their locking strategy. What about Samba accessing NFS volumes
transparently, or plain sendmail without procmail? Especially if your mail
server is already under heavy load serving O(1000) users, forcing every
incoming mail to be passed through procmail would most certainly increase the
load too much. (Maybe sendmail and Samba can also be built with dotlocking
methods; these are also just examples.) Also, not all our users want to switch
to mutt; we have to support a wide range of mail readers.
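For concreteness, the dotlocking scheme Dan mentions can be sketched in a few
lines (this is my illustration, not procmail's or mutt's actual code; function
names are made up). Because O_EXCL was historically not atomic over NFSv2, the
classic recipe is to create a uniquely named temp file and hard-link it to
`<mailbox>.lock`, trusting the resulting link count rather than the return
value of link(2):

```python
import os
import socket

def acquire_dotlock(mailbox):
    """Try to take a dotlock on `mailbox`. Returns True on success.
    link(2) is atomic even over NFS where O_EXCL was not, so we
    check st_nlink instead of relying on link()'s return value."""
    lockfile = mailbox + ".lock"
    # Unique temp name: hostname + pid keeps concurrent clients apart.
    tmp = "%s.%s.%d" % (mailbox, socket.gethostname(), os.getpid())
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    os.close(fd)
    try:
        try:
            os.link(tmp, lockfile)
        except OSError:
            pass                        # someone else may hold the lock
        # Two links (tmp + lockfile) means the link succeeded and we own it.
        return os.stat(tmp).st_nlink == 2
    finally:
        os.unlink(tmp)

def release_dotlock(mailbox):
    os.unlink(mailbox + ".lock")
```

The point is that this works with no cooperation from rpc.lockd at all, which
is why mail tools fall back on it.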

Axel.
-- 



To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: rpc.lockd and true NFS locks?

2000-12-16 Thread Axel Thimm

Thanks for the fast reply.

On Thu, Dec 14, 2000 at 05:45:15PM -0500, David E. Cross wrote:
> As for "client" vs. "server", that is quite tricky since WRT NFS locking
> they are both client and server.  The "server" side is done and requires no
> modifications to the kernel.  However a FreeBSD kernel is still unable to
> acquire an NFS lock.  This latter case is quite likely what your users are
> seeing the effects of.

Just to make sure I understand: the current rpc.lockd neither requests locks
when FreeBSD is an NFS client to any NFS server, nor serves such requests when
FreeBSD is an NFS server to any client.

Your (David Cross's) uncommitted code does implement NFS locks for a FreeBSD
NFS server. It may still be at a development stage, but that is better than
having no locks at all.

Now I am quite surprised to learn that FreeBSD apparently is not able to
request locks over NFS. Am I right?

Wouldn't that mean that you might cause data corruption if, say, I were to
read my mail from a FreeBSD box over an NFS-mounted spool directory (running
under OSF1 in our case), and I decided to write back the mbox to the spool dir
at the same moment new mail is delivered?

I can't quite believe that; I must have misunderstood something, most probably
the role of the client side of NFS locks. Could someone clarify? If I were
right, FreeBSD would only be good for read-only NFS access, yet we have been
using FreeBSD as NFS clients in our department since before 2.2.x.
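To make the scenario concrete, here is roughly what the locking step looks
like from the application's point of view (a sketch of mine; the function name
is illustrative). On a local filesystem the fcntl(2) lock is honored; on an
NFS mount the same F_SETLKW has to be forwarded to the server's lockd, which
is exactly the client-side piece under discussion:

```python
import fcntl
import os

def rewrite_mbox(path, data):
    """Rewrite a mailbox under an exclusive POSIX record lock.
    Locally this is safe; over NFS it is only safe if the client
    kernel can actually forward the lock request to the server."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until the lock is granted
        os.ftruncate(fd, 0)
        os.write(fd, data)
        os.fsync(fd)                     # flush before releasing the lock
    finally:
        fcntl.lockf(fd, fcntl.LOCK_UN)
        os.close(fd)
```

If the client cannot issue the lock request at all, the delivery agent and the
mail reader race on the same bytes with no arbitration.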

> In the end:  the code is there and available for those who want to test and
> play with it.  It has not been committed because it is still "broken". 
> I could do it to -current or make it a port, if someone were to tell me that
> it would be "ok" to do so.

I would vote for port.

Thanks, Axel.
P.S. please reply not only to freebsd-hackers, but also Cc: me, as I am only
subscribed to freebsd-current and freebsd-stable.
-- 
[EMAIL PROTECTED]





rpc.lockd and true NFS locks?

2000-12-14 Thread Axel Thimm

Dear all,

rpc.lockd in FreeBSD suffers from a public servant's laziness --- it says it
has done the job, but never did anything besides talking...

Searching through the lists turns up different stories. Some say that NFS
locking isn't really necessary, but what about lock-critical situations like
delivering mail over NFS to FreeBSD home directories? Procmail & fcntl made
our computing department especially unhappy, and we are wondering whether we
can keep our migration strategy (moving our home directories to backed-up
FreeBSD boxes).

Some of the following quoted mails (consider this mail a review, if you like)
give hope that some people have been working on this (though obviously without
having committed anything, as one can check in cvsweb).

Is this true? Does anyone have server-side patches for FreeBSD? Is he/she
looking for guinea pigs? Anything is better than the current situation. Our
users are running away from our otherwise very comfortable FreeBSD homes. :(

On Mon, Apr 03, 2000 at 02:07:54PM +0200, Brad Knowles wrote:
> [...]
>   Besides, file locking becomes impossible in -STABLE once you've
> mounted it with NFS (we don't have a working lockd, although work in this
> area is progressing in -CURRENT), and NFS writes generally suck when
> compared to local writes.
> [...]

On Fri, Apr 07, 2000 at 08:07:40PM -0400, David E. Cross wrote:
> I apologize profusely for the delay of this, but lockd-0.2 is out.
> The URL is: http://www.cs.rpi.edu/~crossd/FreeBSD/lockd-0.2.tar.gz
> [...]
> 5) this does not add the code to FreeBSD's kernel to request the NFS locks
>(that is a job for people more skilled than I ;)
> [...]
On Sat, Apr 08, 2000 at 12:23:14AM -0400, David E. Cross wrote:
> [...]
> http://www.cs.rpi.edu/~crossd/FreeBSD/lockd-0.2a.tar.gz
> [...]

On Fri, Apr 07, 2000 at 08:44:33PM -0400, Andrew Gallatin wrote:
> This might be a bit touchy, but I'm rather curious -- how will the BSDI
> merger affect your lockd work?  It seems like your work on lockd
> (esp. client side & statd interoperation issues) could be speeded up if you
> had access to the BSDI sources..

On Tue, Sep 19, 2000 at 12:38:51PM +0200, Roman Shterenzon wrote:
> Quoting Andrew Gordon <[EMAIL PROTECTED]>:
> > On Mon, 4 Sep 2000, Roman Shterenzon wrote:
> > > The rpc.lockd(8) is marked as broken in /etc/defaults/rc.conf in 4.1-R
> > > My question is - how bad is it broken?
> > The rpc.lockd in 4.x simply answers "Yes" to all locking requests, and
> > does not maintain any state.  This means that if your programs actually
> > need locking, running rpc.lockd will cause problems (file corruption etc).
> > 
> > On the other hand, if your programs don't need locking and are just making
> > the locking calls for the hell of it, rpc.lockd will allow these programs
> > to run rather than just hanging up.
> > 
> > There was talk a few months ago about someone having implemented NFS
> > locking properly, but I haven't heard any more since - check the mailing
> > list archives.
> > 
> > [I wrote the existing 'hack' implementation].
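Andrew's description of the "hack" implementation can be modelled in a few
lines (a toy model of mine, not the actual rpc.lockd code): a server that
grants every request without recording state will happily hand the same
exclusive lock to two clients, which is precisely the corruption risk he
describes, whereas a real NLM server must track who holds what.

```python
class DummyLockd:
    """Toy model of the 4.x stub: grant everything, remember nothing."""
    def lock(self, client, path, start, length):
        return "granted"                 # no state consulted or recorded

class StatefulLockd:
    """Toy model of what a real NLM server must do: track lock holders."""
    def __init__(self):
        self.held = {}                   # (path, start, length) -> client
    def lock(self, client, path, start, length):
        key = (path, start, length)
        if key in self.held and self.held[key] != client:
            return "denied"              # conflicting holder exists
        self.held[key] = client
        return "granted"
```

With the dummy, two different clients both get "granted" for the same range;
with the stateful version, the second request is denied.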

On Wed, Sep 20, 2000 at 09:28:36AM -0500, Scot W. Hetzel wrote:
> From: "Roman Shterenzon" <[EMAIL PROTECTED]>
> > On Tue, 19 Sep 2000 [EMAIL PROTECTED] wrote:
> > > [...] Someone (from something.edu, perhaps rpi.edu) posted a URL to one
> > > of the lists of a working but untested rpc.lockd. [...]

I believe Andrew means "David E. Cross" <[EMAIL PROTECTED]>, but his citation
some lines above shows that he hadn't worked in that direction.

> I kind of remember reading about it on the current mailing list.
> Current-Users: Has a working rpc.lockd been imported into CURRENT?  If it
> has, is there a possibility of getting it MFC'd to STABLE?

On Thu, Sep 21, 2000 at 11:02:25AM +0930, Greg Lewis wrote:
> Look through the freebsd-hackers archive.  There was an rpc.lockd
> implementation announced there looking for testers about a month or so
> before the 4.0 release.  The person who wrote it is David Cross who is now a
> FreeBSD committer I believe.
> 
> That's my recollection anyway.  Unfortunately I haven't seen any recent
> followups.  At the time it was deemed too close to the 4.0 release.  If you
> do test it maybe you can prod David with the results and get it committed to
> -current.

On Wed, Nov 08, 2000 at 05:45:21PM +0100, Erik Trulsson wrote:
> On Wed, Nov 08, 2000 at 02:53:47PM +0100, cam wrote:
> > I have to use rpc.lockd on my NFS server (FreeBSD 4.0-STABLE) and I've
> > noticed that it is marked broken by this line in /etc/defaults/rc.conf:
> > 113: rpc_lockd_enable="NO"   # Run NFS rpc.lockd (*broken!*) if nfs_server.
> 
> You can't have looked that hard. This question did come up earlier this
> year on -questions and it wasn't difficult to find the answer searching
> through the list-archives.
> 
> Anyway, the answer is that lockd is just a dummy implementation. When the
> client requests a lock rpc.lockd will just say "A lock? Sure, here you have
> one." without actually locking anything. 
> 
> The only reason for r