Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Joel Becker
On Sun, Sep 04, 2005 at 09:37:15AM +0100, Alan Cox wrote: > I am curious why a lock manager uses open to implement its locking > semantics rather than using the locking API (POSIX locks etc) however. Because it is simple (how do you fcntl(2) from a shell fd?), has no ranges (what do you do
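A sketch of what that buys: the lock is open(2) and the unlock is close(2), so anything that can open a file (a shell redirection included) can take a lock, and O_NONBLOCK naturally expresses trylock. The path follows the dlmfs layout; the mount point and the errno on a failed trylock are assumptions.

```c
/*
 * open(2)-as-lock, per the dlmfs convention under discussion:
 * O_RDONLY asks for a shared lock, O_RDWR for an exclusive one,
 * and O_NONBLOCK turns the request into a trylock.
 * Path and errno details are illustrative.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dlm/mydomain/lock1", O_RDWR | O_NONBLOCK);

	if (fd < 0) {
		if (errno == EAGAIN || errno == EWOULDBLOCK)
			fprintf(stderr, "lock1 is held elsewhere\n");
		else
			perror("open");
		return 1;
	}

	/* ... critical section: the lock is held while the fd is open ... */

	close(fd);		/* close(2) drops the lock */
	return 0;
}
```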

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
Alan Cox <[EMAIL PROTECTED]> wrote: > > On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote: > > > - How are they ref counted > > > - What are the cleanup semantics > > > - How do I pass a lock between processes (AF_UNIX sockets wont work now) > > > - How do I poll on a lock coming free.

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Alan Cox
On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote: > > - How are they ref counted > > - What are the cleanup semantics > > - How do I pass a lock between processes (AF_UNIX sockets wont work now) > > - How do I poll on a lock coming free. > > - What are the semantics of lock ownership >

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
Alan Cox <[EMAIL PROTECTED]> wrote: > > On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote: > > > create_lockspace() > > > release_lockspace() > > > lock() > > > unlock() > > > > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone > > is likely to object i

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread kurt . hackel
On Mon, Sep 05, 2005 at 05:24:33PM +0800, David Teigland wrote: > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > > David Teigland <[EMAIL PROTECTED]> wrote: > > > > > > We export our full dlm API through read/write/poll on a misc device. > > > > > > > inotify did that for a whil

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Alan Cox
On Sad, 2005-09-03 at 21:46 -0700, Andrew Morton wrote: > Actually I think it's rather sick. Taking O_NONBLOCK and making it a > lock-manager trylock because they're kinda-sorta-similar-sounding? Spare > me. O_NONBLOCK means "open this file in nonblocking mode", not "attempt to > acquire a clust

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Alan Cox
On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote: > > create_lockspace() > > release_lockspace() > > lock() > > unlock() > > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone > is likely to object if we reserve those slots. If the locks are not file descript

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Stephen C. Tweedie
Hi, On Sun, 2005-09-04 at 21:33, Pavel Machek wrote: > > - read-only mount > > - "spectator" mount (like ro but no journal allocated for the mount, > > no fencing needed for failed node that was mounted as spectator) > > I'd call it "real-read-only", and yes, that's very useful > mount. Cou

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread David Teigland
On Mon, Sep 05, 2005 at 02:19:48AM -0700, Andrew Morton wrote: > David Teigland <[EMAIL PROTECTED]> wrote: > > Four functions: > > create_lockspace() > > release_lockspace() > > lock() > > unlock() > > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone > is likely t
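For a sense of scale, the whole surface fits in four calls. Only the names below come from the thread; the signatures are assumptions, and the bodies back them with dlmfs-style directory and file operations purely so the sketch runs. The actual proposal was a syscall interface.

```c
/*
 * The four calls from the thread, given concrete shape.  Names from
 * the mail; signatures and the dlmfs backing are illustration only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define DLMFS "/dlm"		/* assumed mount point */

/* A lockspace is a directory under the dlmfs mount. */
static int create_lockspace(const char *name)
{
	char path[256];

	snprintf(path, sizeof(path), DLMFS "/%s", name);
	return mkdir(path, 0755);
}

static int release_lockspace(const char *name)
{
	char path[256];

	snprintf(path, sizeof(path), DLMFS "/%s", name);
	return rmdir(path);
}

/* A held lock is an open fd: O_RDWR exclusive, O_RDONLY shared. */
static int lock(const char *ls, const char *res, int exclusive)
{
	char path[256];

	snprintf(path, sizeof(path), DLMFS "/%s/%s", ls, res);
	return open(path, (exclusive ? O_RDWR : O_RDONLY) | O_CREAT, 0600);
}

static int unlock(int fd)
{
	return close(fd);
}

int main(void)
{
	int fd;

	create_lockspace("demo");
	fd = lock("demo", "lock1", 1);
	if (fd >= 0)
		unlock(fd);
	release_lockspace("demo");
	return 0;
}
```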

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Daniel Phillips
On Monday 05 September 2005 05:19, Andrew Morton wrote: > David Teigland <[EMAIL PROTECTED]> wrote: > > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > > > David Teigland <[EMAIL PROTECTED]> wrote: > > > > We export our full dlm API through read/write/poll on a misc device. > > >

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
David Teigland <[EMAIL PROTECTED]> wrote: > > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > > David Teigland <[EMAIL PROTECTED]> wrote: > > > > > > We export our full dlm API through read/write/poll on a misc device. > > > > > > > inotify did that for a while, but we ended up g

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread David Teigland
On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > David Teigland <[EMAIL PROTECTED]> wrote: > > > > We export our full dlm API through read/write/poll on a misc device. > > > > inotify did that for a while, but we ended up going with a straight syscall > interface. > > How fat is

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
David Teigland <[EMAIL PROTECTED]> wrote: > > We export our full dlm API through read/write/poll on a misc device. > inotify did that for a while, but we ended up going with a straight syscall interface. How fat is the dlm interface? ie: how many syscalls would it take?
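The pattern at issue, sketched below: a kernel API exported as read/write/poll on a misc-device node, with write(2) submitting requests and poll(2)/read(2) collecting completions. Everything here is hypothetical; the node name and record format are invented to show the shape of such an interface, not the real dlm device protocol.

```c
/* Hypothetical misc-device lock interface: write a request, poll for
 * the asynchronous completion, read the result record. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/example_lockdev", O_RDWR);	/* hypothetical */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Submit a request by writing a record describing it... */
	const char request[] = "lock resource1 EX";
	write(fd, request, sizeof(request));

	/* ...then wait for the asynchronous completion with poll(2). */
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
		char reply[64];
		ssize_t n = read(fd, reply, sizeof(reply));

		if (n > 0)
			printf("completion: %.*s\n", (int)n, reply);
	}
	close(fd);
	return 0;
}
```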

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread David Teigland
On Sat, Sep 03, 2005 at 10:41:40PM -0700, Andrew Morton wrote: > Joel Becker <[EMAIL PROTECTED]> wrote: > > > > > What happens when we want to add some new primitive which has no > > > posix-file analog? > > > > The point of dlmfs is not to express every primitive that the > > DLM has. dlm

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Daniel Phillips
On Sunday 04 September 2005 03:28, Andrew Morton wrote: > If there is already a richer interface into all this code (such as a > syscall one) and it's feasible to migrate the open() tricksies to that API > in the future if it all comes unstuck then OK. That's why I asked (thus > far unsuccessfully

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Hua Zhong
>takelock domainxxx lock1 >do stuff >droplock domainxxx lock1 > > When someone kills the shell, the lock is leaked, because droplock isn't > called. Why not open the lock resource (or the lock space) instead of individual locks as file? It then looks like this: open lock
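The cleanup property can be demonstrated with any descriptor-backed lock. A sketch, using flock(2) on a plain file as a stand-in for a cluster lock: the kernel closes every descriptor of a SIGKILLed process, so "close == unlock" means the lock cannot leak the way a takelock/droplock script pair can.

```c
/* Demonstrates that an fd-backed lock dies with its holder: the child
 * takes the lock, gets SIGKILL, and the parent reacquires at once. */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/file.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {		/* child: take the lock, then hang */
		int fd = open("/tmp/lockfile", O_CREAT | O_RDWR, 0644);

		flock(fd, LOCK_EX);
		pause();
		_exit(0);
	}

	sleep(1);		/* let the child grab the lock */
	kill(pid, SIGKILL);	/* "someone kills the shell" */
	waitpid(pid, NULL, 0);

	int fd = open("/tmp/lockfile", O_RDWR);
	if (flock(fd, LOCK_EX | LOCK_NB) == 0)
		printf("not leaked: reacquired after SIGKILL\n");
	return 0;
}
```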

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Joel Becker
On Sun, Sep 04, 2005 at 02:18:36AM -0700, Andrew Morton wrote: > take-and-drop-lock -d domainxxx -l lock1 -e "do stuff" Ahh, but then you have to have lots of scripts somewhere in the path, or do massive inline scripts, especially if you want to take another lock in there somewhere.

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Andrew Morton
Joel Becker <[EMAIL PROTECTED]> wrote: > > I can't see how that works easily. I'm not worried about a > tarball (eventually Red Hat and SuSE and Debian would have it). I'm > thinking about this shell: > > exec 7 do stuff > exec 7 > If someone kills the shell while stu

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Joel Becker
On Sun, Sep 04, 2005 at 01:18:05AM -0700, Andrew Morton wrote: > > I thought I stated this in my other email. We're not intending > > to extend dlmfs. > > Famous last words ;) Heh, of course :-) > I don't buy the general "fs is nice because we can script it" argument, > really. You

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Andrew Morton
Mark Fasheh <[EMAIL PROTECTED]> wrote: > > On Sun, Sep 04, 2005 at 12:23:43AM -0700, Andrew Morton wrote: > > > What would be an acceptable replacement? I admit that O_NONBLOCK -> > > > trylock > > > is a bit unfortunate, but really it just needs a bit to express that - > > > nobody over here care

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Andrew Morton
Joel Becker <[EMAIL PROTECTED]> wrote: > > On Sun, Sep 04, 2005 at 12:28:28AM -0700, Andrew Morton wrote: > > If there is already a richer interface into all this code (such as a > > syscall one) and it's feasible to migrate the open() tricksies to that API > > in the future if it all comes unstuck

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Mark Fasheh
On Sun, Sep 04, 2005 at 12:23:43AM -0700, Andrew Morton wrote: > > What would be an acceptable replacement? I admit that O_NONBLOCK -> trylock > > is a bit unfortunate, but really it just needs a bit to express that - > > nobody over here cares what it's called. > > The whole idea of reinterpretin

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Joel Becker
On Sun, Sep 04, 2005 at 12:28:28AM -0700, Andrew Morton wrote: > If there is already a richer interface into all this code (such as a > syscall one) and it's feasible to migrate the open() tricksies to that API > in the future if it all comes unstuck then OK. > That's why I asked (thus far unsucces

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Andrew Morton
Daniel Phillips <[EMAIL PROTECTED]> wrote: > > If the only user is their tools I would say let it go ahead and be cute, even > sickeningly so. It is not supposed to be a general dlm api, at least that > is > my understanding. It is just supposed to be an interface for their tools. > Of co

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Andrew Morton
Mark Fasheh <[EMAIL PROTECTED]> wrote: > > On Sat, Sep 03, 2005 at 09:46:53PM -0700, Andrew Morton wrote: > > Actually I think it's rather sick. Taking O_NONBLOCK and making it a > > lock-manager trylock because they're kinda-sorta-similar-sounding? Spare > > me. O_NONBLOCK means "open this file

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Daniel Phillips
On Sunday 04 September 2005 00:46, Andrew Morton wrote: > Daniel Phillips <[EMAIL PROTECTED]> wrote: > > The model you came up with for dlmfs is beyond cute, it's downright > > clever. > > Actually I think it's rather sick. Taking O_NONBLOCK and making it a > lock-manager trylock because they're k

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Mark Fasheh
On Sat, Sep 03, 2005 at 09:46:53PM -0700, Andrew Morton wrote: > Actually I think it's rather sick. Taking O_NONBLOCK and making it a > lock-manager trylock because they're kinda-sorta-similar-sounding? Spare > me. O_NONBLOCK means "open this file in nonblocking mode", not "attempt to > acquire

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sun, Sep 04, 2005 at 01:52:29AM -0400, Daniel Phillips wrote: > You do have ->release and ->make_item/group. ->release is like kobject release. It's a free callback, not a callback from close. > If I may hand you a more substantive argument: you don't support user-driven > creation of
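The two callbacks in play, sketched against the modern configfs API (2005-era signatures differed): mkdir(2) in the group's directory reaches ->make_item, and ->release is the free callback that runs when the item's refcount drops, like kobject release rather than a callback from close(2).

```c
/* Minimal configfs skeleton: mkdir -> make_item, refcount 0 -> release. */
#include <linux/configfs.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/slab.h>

struct my_item {
	struct config_item item;
};

static void my_release(struct config_item *item)
{
	kfree(container_of(item, struct my_item, item));
}

static struct configfs_item_operations my_item_ops = {
	.release = my_release,		/* free callback, not close */
};

static const struct config_item_type my_item_type = {
	.ct_item_ops	= &my_item_ops,
	.ct_owner	= THIS_MODULE,
};

/* Invoked for each userspace mkdir in our group's directory. */
static struct config_item *my_make_item(struct config_group *group,
					const char *name)
{
	struct my_item *mi = kzalloc(sizeof(*mi), GFP_KERNEL);

	if (!mi)
		return ERR_PTR(-ENOMEM);
	config_item_init_type_name(&mi->item, name, &my_item_type);
	return &mi->item;
}

static struct configfs_group_operations my_group_ops = {
	.make_item = my_make_item,	/* wired into the group's type */
};
```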

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sat, Sep 03, 2005 at 10:41:40PM -0700, Andrew Morton wrote: > Are you saying that the posix-file lookalike interface provides access to > part of the functionality, but there are other APIs which are used to > access the rest of the functionality? If so, what is that interface, and > why cannot

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Daniel Phillips
On Sunday 04 September 2005 01:00, Joel Becker wrote: > On Sun, Sep 04, 2005 at 12:51:10AM -0400, Daniel Phillips wrote: > > Clearly, I ought to have asked why dlmfs can't be done by configfs. It > > is the same paradigm: drive the kernel logic from user-initiated vfs > > methods. You already hav

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Andrew Morton
Joel Becker <[EMAIL PROTECTED]> wrote: > > > What happens when we want to add some new primitive which has no posix-file > > analog? > > The point of dlmfs is not to express every primitive that the > DLM has. dlmfs cannot express the CR, CW, and PW levels of the VMS > locking scheme.

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sun, Sep 04, 2005 at 12:51:10AM -0400, Daniel Phillips wrote: > Clearly, I ought to have asked why dlmfs can't be done by configfs. It is > the > same paradigm: drive the kernel logic from user-initiated vfs methods. You > already have nearly all the right methods in nearly all the right pl

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sat, Sep 03, 2005 at 09:46:53PM -0700, Andrew Morton wrote: > It would be much better to do something which explicitly and directly > expresses what you're trying to do rather than this strange "lets do this > because the names sound the same" thing. So, you'd like a new flag name? Tha

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Andrew Morton
Daniel Phillips <[EMAIL PROTECTED]> wrote: > > The model you came up with for dlmfs is beyond cute, it's downright clever. Actually I think it's rather sick. Taking O_NONBLOCK and making it a lock-manager trylock because they're kinda-sorta-similar-sounding? Spare me. O_NONBLOCK means "open t

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Daniel Phillips
On Sunday 04 September 2005 00:30, Joel Becker wrote: > You asked why dlmfs can't go into sysfs, and I responded. And you got me! In the heat of the moment I overlooked the fact that you and Greg haven't agreed to the merge yet ;-) Clearly, I ought to have asked why dlmfs can't be done by confi

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sun, Sep 04, 2005 at 12:22:36AM -0400, Daniel Phillips wrote: > It is 640 lines. It's 450 without comments and blank lines. Please, don't tell me that comments to help understanding are bloat. > I said "configfs" in the email to which you are replying. To wit: > Daniel Phillips said

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Daniel Phillips
On Saturday 03 September 2005 23:06, Joel Becker wrote: > dlmfs is *tiny*. The VFS interface is less than his claimed 500 > lines of savings. It is 640 lines. > The few VFS callbacks do nothing but call DLM > functions. You'd have to replace this VFS glue with sysfs glue, and > probably save

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sat, Sep 03, 2005 at 06:32:41PM -0700, Andrew Morton wrote: > If there's duplicated code in there then we should seek to either make the > code multi-purpose or place the common or reusable parts into a library > somewhere. Regarding sysfs and configfs, that's a whole 'nother conversati

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Andrew Morton
Joel Becker <[EMAIL PROTECTED]> wrote: > > On Sat, Sep 03, 2005 at 06:21:26PM -0400, Daniel Phillips wrote: > > that fit the configfs-nee-sysfs model? If it does, the payoff will be > about > > 500 lines saved. > > I'm still awaiting your merge of ext3 and reiserfs, because you > can s

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Joel Becker
On Sat, Sep 03, 2005 at 06:21:26PM -0400, Daniel Phillips wrote: > that fit the configfs-nee-sysfs model? If it does, the payoff will be about > 500 lines saved. I'm still awaiting your merge of ext3 and reiserfs, because you can save probably 500 lines having a filesystem that can creat

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-03 Thread Daniel Phillips
On Saturday 03 September 2005 02:46, Wim Coekaerts wrote: > On Sat, Sep 03, 2005 at 02:42:36AM -0400, Daniel Phillips wrote: > > On Friday 02 September 2005 20:16, Mark Fasheh wrote: > > > As far as userspace dlm apis go, dlmfs already abstracts away a large > > > part of the dlm interaction... > >

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-02 Thread Wim Coekaerts
On Sat, Sep 03, 2005 at 02:42:36AM -0400, Daniel Phillips wrote: > On Friday 02 September 2005 20:16, Mark Fasheh wrote: > > As far as userspace dlm apis go, dlmfs already abstracts away a large part > > of the dlm interaction... > > Dumb question, why can't you use sysfs for this instead of rolli

RE: [Linux-cluster] Re: GFS, what's remaining

2005-09-01 Thread Hua Zhong \(hzhong\)
On Thu, Sep 01, 2005 at 04:28:30PM +0100, Alan Cox wrote: > > > That's GFS. The submission is about a GFS2 that's > on-disk incompatible > > > to GFS. >

Re: [Linux-cluster] Re: [PATCH 1/3] dlm: use configfs

2005-08-19 Thread Joel Becker
On Thu, Aug 18, 2005 at 02:07:47PM -0700, Joel Becker wrote: > On Wed, Aug 17, 2005 at 11:22:18PM -0700, Andrew Morton wrote: > > Fair enough. This really means that the configfs patch should be split out > > of the ocfs2 megapatch... > > Easy to do, it's a separate commit in the ocfs2.git

Re: [Linux-cluster] GFS - updated patches

2005-08-11 Thread Pekka Enberg
On 8/11/05, Michael <[EMAIL PROTECTED]> wrote: > Hi, Dave, > > I quickly applied the gfs2 and dlm patches to kernel 2.6.12.2; it compiled > but produced some warnings, see the attached log. Maybe helpful to > you. kzalloc is not in Linus' tree yet. Try with 2.6.13-rc5-mm1.

Re: [Linux-cluster] GFS - updated patches

2005-08-11 Thread Michael
f patches that > incorporates the suggestions we've received. > > http://redhat.com/~teigland/gfs2/20050811/gfs2-full.patch > http://redhat.com/~teigland/gfs2/20050811/broken-out/ > > Dave

Re: [Linux-cluster] GFS - updated patches

2005-08-11 Thread Michael
> incorporates the suggestions we've received. > > http://redhat.com/~teigland/gfs2/20050811/gfs2-full.patch > http://redhat.com/~teigland/gfs2/20050811/broken-out/ > > Dave

Re: [Linux-cluster] Re: [PATCH 00/14] GFS

2005-08-10 Thread Kyle Moffett
On Aug 10, 2005, at 09:26:26, AJ Lewis wrote: On Wed, Aug 10, 2005 at 12:11:10PM +0100, Christoph Hellwig wrote: On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote: So for every directory hierarchy on a shared filesystem, each user needs to have the complete list of bindmount

Re: [Linux-cluster] Re: [PATCH 00/14] GFS

2005-08-10 Thread AJ Lewis
On Wed, Aug 10, 2005 at 12:11:10PM +0100, Christoph Hellwig wrote: > On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote: > > So for every directory hierarchy on a shared filesystem, each user needs > > to have the complete list of bindmounts needed, and automatically resync > > that a

Re: [Linux-cluster] Re: [PATCH 00/14] GFS

2005-08-05 Thread Mike Christie
r into a > bio. It does not have to be page granularity. Can something like that be > used in these places?

Re: [Linux-cluster] Re: [PATCH 00/14] GFS

2005-08-05 Thread Mike Christie
David Teigland wrote: > On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote: > >>* Why are you using bufferheads extensively in a new filesystem? > > > bh's are used for metadata, the log, and journaled data which need to be > written at the block granularity, not page. > In a scs
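The block-granularity point, illustrated with the standard bufferhead helpers: a buffer_head maps exactly one disk block, whatever the page size, so a single metadata block can be read, modified, and marked dirty on its own. A minimal sketch; 'sb' and the block number are assumed to come from the filesystem's own context.

```c
/* Read-modify-write one metadata block via buffer_heads. */
#include <linux/buffer_head.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/string.h>

static int update_metadata_block(struct super_block *sb, sector_t blkno,
				 const void *data, size_t len)
{
	struct buffer_head *bh = sb_bread(sb, blkno);	/* read one block */

	if (!bh)
		return -EIO;
	memcpy(bh->b_data, data, len);		/* len <= sb->s_blocksize */
	mark_buffer_dirty(bh);			/* queue block for writeback */
	brelse(bh);
	return 0;
}
```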

Re: [Linux-cluster] Re: [Ocfs2-devel] [RFC] nodemanager, ocfs2, dlm

2005-07-21 Thread Daniel Phillips
On Thursday 21 July 2005 02:55, Walker, Bruce J (HP-Labs) wrote: > Like Lars, I too was under the wrong impression about this configfs > "nodemanager" kernel component. Our discussions in the cluster meeting > Monday and Tuesday were assuming it was a general service that other > kernel components

Re: [Clusters_sig] RE: [Linux-cluster] Re: [Ocfs2-devel] [RFC] nodemanager, ocfs2, dlm

2005-07-21 Thread Lars Marowsky-Bree
On 2005-07-20T11:39:38, Joel Becker <[EMAIL PROTECTED]> wrote: > In turn, let me clarify a little where configfs fits in to > things. Configfs is merely a convenient and transparent method to > communicate configuration to kernel objects. It's not a place for > uevents, for netlink sockets

Re: [Clusters_sig] RE: [Linux-cluster] Re: [Ocfs2-devel] [RFC] nodemanager, ocfs2, dlm

2005-07-20 Thread Joel Becker
On Wed, Jul 20, 2005 at 08:09:18PM +0200, Lars Marowsky-Bree wrote: > On 2005-07-20T09:55:31, "Walker, Bruce J (HP-Labs)" <[EMAIL PROTECTED]> wrote: > > > Like Lars, I too was under the wrong impression about this configfs > > "nodemanager" kernel component. Our discussions in the cluster > > mee

Re: [Clusters_sig] RE: [Linux-cluster] Re: [Ocfs2-devel] [RFC] nodemanager, ocfs2, dlm

2005-07-20 Thread Lars Marowsky-Bree
On 2005-07-20T09:55:31, "Walker, Bruce J (HP-Labs)" <[EMAIL PROTECTED]> wrote: > Like Lars, I too was under the wrong impression about this configfs > "nodemanager" kernel component. Our discussions in the cluster > meeting Monday and Tuesday were assuming it was a general service that > other ke

RE: [Linux-cluster] Re: [Ocfs2-devel] [RFC] nodemanager, ocfs2, dlm

2005-07-20 Thread Walker, Bruce J (HP-Labs)
On 2005-07-20T11:35:46, David Teigland <[EMAIL PROTECTED]> wrote: > > Also, eventually we obviously need to have state for the nodes - > > up/down et cetera. I think the node

Re: [Linux-cluster] [RFC] nodemanager, ocfs2, dlm

2005-07-19 Thread David Teigland
On Tue, Jul 19, 2005 at 05:48:26PM -0700, Mark Fasheh wrote: > For OCFS2 that would mean that an ocfs2_nodemanager would still exist, > but as a much smaller module sitting on top of 'nodemanager'. Yep, factoring out the common bits. > So no port attribute. The OCFS2 network code normally takes p

Re: [Linux-cluster] [RFC] nodemanager, ocfs2, dlm

2005-07-19 Thread Mark Fasheh
Hi David, On Mon, Jul 18, 2005 at 02:15:53PM +0800, David Teigland wrote: > Some of the comments about the dlm concerned how it's configured (from > user space.) In particular, there was interest in seeing the dlm and > ocfs2 use common methods for their configuration. > > The first area I'm loo
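For reference, this is how a configfs-backed nodemanager gets driven from userspace: mkdir(2) instantiates a kernel object, and writes to its attribute files set its properties. The directory layout and attribute names below are hypothetical, chosen only to show the mechanism.

```c
/* Configure a hypothetical configfs nodemanager object from userspace. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void set_attr(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd >= 0) {
		write(fd, val, strlen(val));
		close(fd);
	}
}

int main(void)
{
	/* Creating the directory creates the node object in-kernel. */
	mkdir("/sys/kernel/config/nodemanager/node1", 0755);

	/* Attribute writes configure it (names are illustrative). */
	set_attr("/sys/kernel/config/nodemanager/node1/ipaddr", "10.0.0.1");
	set_attr("/sys/kernel/config/nodemanager/node1/nodenum", "1");
	return 0;
}
```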

Re: [Linux-cluster] [RFC] nodemanager, ocfs2, dlm

2005-07-19 Thread Daniel Phillips
On Monday 18 July 2005 16:15, David Teigland wrote: > I've taken a stab at generalizing ocfs2_nodemanager so the dlm could use > it (removing ocfs-specific stuff). It still needs some work, but I'd > like to know if this appeals to the ocfs group and to others who were > interested in seeing some

Re: Linux Cluster using shared scsi

2001-05-04 Thread Eddie Williams
Doug Ledford wrote: > > ... > > If told to hold a reservation, then resend your reservation request once every > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a > deal as requesting a reservation every 2 seconds might sound). The first time > the reservation is refused,
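A sketch of the hold-by-reissue scheme Doug describes: re-send RESERVE(6) every 2 seconds through the sg driver's SG_IO ioctl. The device node is assumed, and a real tool would also react to RESERVATION CONFLICT status and to bus resets rather than just logging.

```c
/* Hold a SCSI-2 reservation by reissuing RESERVE(6) every 2 seconds. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>
#include <unistd.h>

int main(void)
{
	unsigned char cdb[6] = { 0x16, 0, 0, 0, 0, 0 };	/* RESERVE(6) */
	unsigned char sense[32];
	int fd = open("/dev/sg0", O_RDWR);		/* assumed node */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (;;) {
		struct sg_io_hdr io;

		memset(&io, 0, sizeof(io));
		io.interface_id = 'S';
		io.dxfer_direction = SG_DXFER_NONE;	/* no data phase */
		io.cmd_len = sizeof(cdb);
		io.cmdp = cdb;
		io.mx_sb_len = sizeof(sense);
		io.sbp = sense;
		io.timeout = 5000;			/* milliseconds */

		if (ioctl(fd, SG_IO, &io) < 0)
			perror("SG_IO");
		else if (io.status)			/* e.g. 0x18: conflict */
			fprintf(stderr, "reserve refused, status 0x%x\n",
				io.status);

		sleep(2);	/* minimal cost, keeps the claim fresh */
	}
}
```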

Re: Linux Cluster using shared scsi

2001-05-03 Thread Jonathan Lundell
At 3:57 PM -0400 2001-05-03, Eric Z. Ayers wrote: >However distasteful it sounds, there is precedent for the >behavior that Doug is proposing in commercial clustering >implementations. My recollection is that both Compaq TruCluster and >HP Service Guard have logic that will panic the kernel when a

Re: Linux Cluster using shared scsi

2001-05-03 Thread Eric Z. Ayers
Pavel Machek writes: > Hi! > > > > > ... > > > > > > > > If told to hold a reservation, then resend your reservation request once every > > > > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a > > > > deal as requesting a reservation every 2 seconds might sound).

Re: Linux Cluster using shared scsi

2001-05-03 Thread Pavel Machek
Hi! > > > ... > > > > > > If told to hold a reservation, then resend your reservation request once every > > > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a > > > deal as requesting a reservation every 2 seconds might sound). The first time > > > the reservation is r

Re: Linux Cluster using shared scsi

2001-05-03 Thread James Bottomley
There is another nasty in multi-port arrays that I should perhaps point out: a bus reset isn't supposed to drop the reservation if it was taken on another port. A device or LUN reset will drop reservations on all ports. This behaviour, although clearly mandated by the SCSI-3-SPC, is rather p
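The distinction James draws can be exercised from userspace through the sg driver: a device (LUN) reset, which per the mail drops reservations on all ports, versus a bus reset, which need not. A minimal sketch; the device path is assumed.

```c
/* Issue a device (LUN) reset via the sg driver's SG_SCSI_RESET ioctl. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>
#include <unistd.h>

int main(void)
{
	int op = SG_SCSI_RESET_DEVICE;		/* LUN reset: all ports */
	int fd = open("/dev/sg0", O_RDWR);	/* assumed device node */

	if (fd < 0 || ioctl(fd, SG_SCSI_RESET, &op) < 0) {
		perror("SG_SCSI_RESET");
		return 1;
	}
	close(fd);
	return 0;
}
```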

Re: Linux Cluster using shared scsi

2001-05-03 Thread James Bottomley
[EMAIL PROTECTED] said: > Correct, if you hold a reservation on a device for which you have > multiple paths, you have to use the correct path. As far as multi-path scsi reservations go, the SCSI-2 standards (and this includes the completion in the SCSI-3 SPC) is very malleable. The standard i

Re: Linux Cluster using shared scsi

2001-05-02 Thread Doug Ledford
Max TenEyck Woodbury wrote: > > Doug Ledford wrote: > > > > Max TenEyck Woodbury wrote: > >> > >> Umm. Reboot? What do you think this is? Windoze? > > > > It's the *only* way to guarantee that the drive is never touched by more > > than one machine at a time (notice, I've not been talking about a

Re: Linux Cluster using shared scsi

2001-05-02 Thread Max TenEyck Woodbury
Doug Ledford wrote: > > Max TenEyck Woodbury wrote: >> >> Umm. Reboot? What do you think this is? Windoze? > > It's the *only* way to guarantee that the drive is never touched by more > than one machine at a time (notice, I've not been talking about a shared > use drive, only one machine in the

Re: Linux Cluster using shared scsi

2001-05-02 Thread Doug Ledford
Mike Anderson wrote: > > Doug, > > I guess I worded my question poorly. My question was around multi-path > devices in combination with SCSI-2 reserve vs SCSI-3 persistent reserve, which > has not always been easy, but is more difficult if you use a name space that > can slip or can have multiple

Re: Linux Cluster using shared scsi

2001-05-02 Thread Mike Anderson
Doug, I guess I worded my question poorly. My question was around multi-path devices in combination with SCSI-2 reserve vs SCSI-3 persistent reserve, which has not always been easy, but is more difficult if you use a name space that can slip or can have multiple entries for the same physical dev

Re: Linux Cluster using shared scsi

2001-05-02 Thread Doug Ledford
Mike Anderson wrote: > > Doug, > > A question on clarification. > > Does the configuration you are testing have both FC adapters going to the same > port of the storage device (multi-path) or to different ports of the storage > device (multi-port)? > > The reason I ask is that I thought if you a

Re: Linux Cluster using shared scsi

2001-05-02 Thread Doug Ledford
Max TenEyck Woodbury wrote: > > Doug Ledford wrote: > > > > ... > > > > If told to hold a reservation, then resend your reservation request once every > > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a > > deal as requesting a reservation every 2 seconds might sound).

Re: Linux Cluster using shared scsi

2001-05-02 Thread Max TenEyck Woodbury
Doug Ledford wrote: > > ... > > If told to hold a reservation, then resend your reservation request once every > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a > deal as requesting a reservation every 2 seconds might sound). The first time > the reservation is refuse

Re: Linux Cluster using shared scsi

2001-05-02 Thread Mike Anderson
Doug, A question on clarification. Does the configuration you are testing have both FC adapters going to the same port of the storage device (multi-path) or to different ports of the storage device (multi-port)? The reason I ask is that I thought if you are using SCSI-2 reserves that the reserve

Re: Linux Cluster using shared scsi

2001-05-02 Thread Eddie Williams
Hi Doug, Great to hear your progress on this. As I had not heard anything about this effort since this time last year I had assumed you put this project on the shelf. I will be happy to test these interfaces when they are ready. Eddie > "Eric Z. Ayers" wrote: > > > > Doug Ledford writes:

Re: Linux Cluster using shared scsi

2001-05-02 Thread Doug Ledford
"Eric Z. Ayers" wrote: > > Doug Ledford writes: > (James Bottomley commented about the need for SCSI reservation kernel patches) > > > > I agree. It's something that needs fixed in general, your software needs it > > as well, and I've written (about 80% done at this point) some open source >

Re: Linux Cluster using shared scsi

2001-05-01 Thread Alan Cox
> reserved. But if you did such a hot swap you would have "bigger > fish to fry" in a HA application... I mean, none of your data would be > there! You need to realise this has happened and do the right thing. Since it could be an md raid array the hotswap is not fatal.

Re: Linux Cluster using shared scsi

2001-05-01 Thread Eric Z. Ayers
Alan Cox writes: > > Does this package also tell the kernel to "re-establish" a > > reservation for all devices after a bus reset, or at least inform a > > user level program? Finding out when there has been a bus reset has > > been a stumbling block for me. > > You cannot rely on a bus re

Re: Linux Cluster using shared scsi

2001-05-01 Thread James Bottomley
[EMAIL PROTECTED] said: > Does this package also tell the kernel to "re-establish" a reservation > for all devices after a bus reset, or at least inform a user level > program? Finding out when there has been a bus reset has been a > stumbling block for me. [EMAIL PROTECTED] said: > You cannot

Re: Linux Cluster using shared scsi

2001-05-01 Thread Alan Cox
> Does this package also tell the kernel to "re-establish" a > reservation for all devices after a bus reset, or at least inform a > user level program? Finding out when there has been a bus reset has > been a stumbling block for me. You cannot rely on a bus reset. Imagine hot swap disks on an F

Re: Linux Cluster using shared scsi

2001-05-01 Thread Eric Z. Ayers
Doug Ledford writes: (James Bottomley commented about the need for SCSI reservation kernel patches) > > I agree. It's something that needs fixed in general, your software needs it > as well, and I've written (about 80% done at this point) some open source > software geared towards getting/ho

Re: Linux Cluster using shared scsi

2001-05-01 Thread Doug Ledford
James Bottomley wrote: > > [EMAIL PROTECTED] said: > > So, will Linux ever support the scsi reservation mechanism as standard? > > That's not within my gift. I can merely write the code that corrects the > behaviour. I can't force anyone else to accept it. I think it will be standard before n

Re: Linux Cluster using shared scsi

2001-05-01 Thread James Bottomley
[EMAIL PROTECTED] said: > So, will Linux ever support the scsi reservation mechanism as standard? That's not within my gift. I can merely write the code that corrects the behaviour. I can't force anyone else to accept it. [EMAIL PROTECTED] said: > Isn't there a standard that says if you scsi

RE: Linux Cluster using shared scsi

2001-05-01 Thread Roets, Chris
I've copied linux SCSI and quoted the entire message below so they can follow. Your assertion that this works in 2.2.16 is inc

Re: Linux Cluster using shared scsi

2001-04-27 Thread James Bottomley
I've copied linux SCSI and quoted the entire message below so they can follow. Your assertion that this works in 2.2.16 is incorrect, the patch to fix the linux reservation conflict handler has never been added to the official tree. I suspect you actually don't have vanilla 2.2.16 but instead

Linux Cluster using shared scsi

2001-04-27 Thread Roets, Chris
> Problem : > install two Linux systems with a shared SCSI bus and storage on that shared > bus. > suppose : > system one : SCSI ID 7 > system two : SCSI ID 6 > shared disk : SCSI ID 4 > > By default, you can mount the disk on both systems. This is normal > behavior, but > may impose data corrupti

linux cluster

2001-03-20 Thread Tomasiewicz, William R
I once saw an article on how to install Linux on four separate 486's to form a cluster server. I have not seen any other info on the subject since then. I have multiple Gateway 166's that I would like to use to experiment with Linux in a clustered environment. Any advice? Bill Tomasiewicz Compu

[ANNOUNCE] linux-cluster list

2001-02-28 Thread Rik van Riel
could share, or ... Since I agree with you that we need such a place, I've just created a mailing list: [EMAIL PROTECTED] To subscribe to the list, send an email with the text "subscribe linux-cluster" to: [EMAIL PROTECTED] I hope that we'll be able to spli