On Sun, Sep 04, 2005 at 09:37:15AM +0100, Alan Cox wrote:
> I am curious why a lock manager uses open to implement its locking
> semantics rather than using the locking API (POSIX locks etc) however.
Because it is simple (how do you fcntl(2) from a shell fd?), has no
ranges (what do you do
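For reference, here is a minimal C sketch of the POSIX fcntl(2) locking call being contrasted here; the file path is illustrative. It also shows the practical objection: a shell can hold an fd open, but it has no built-in way to issue this fcntl() on it, whereas an open()/close()-based lock file can be driven from a script.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct flock fl = {
		.l_type   = F_WRLCK,	/* exclusive lock */
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,		/* 0 = lock the whole file */
	};
	int fd = open("/tmp/lockfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0)
		return 1;
	if (fcntl(fd, F_SETLKW, &fl) < 0) {	/* F_SETLK would be a trylock */
		perror("fcntl");
		return 1;
	}
	/* ... critical section ... */
	close(fd);	/* closing any fd on the file drops the lock */
	return 0;
}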
Alan Cox <[EMAIL PROTECTED]> wrote:
>
> On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote:
> > > - How are they ref counted
> > > - What are the cleanup semantics
> > > - How do I pass a lock between processes (AF_UNIX sockets won't work now)
> > > - How do I poll on a lock coming free.
On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote:
> > - How are they ref counted
> > - What are the cleanup semantics
> > - How do I pass a lock between processes (AF_UNIX sockets won't work now)
> > - How do I poll on a lock coming free.
> > - What are the semantics of lock ownership
>
Alan Cox <[EMAIL PROTECTED]> wrote:
>
> On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote:
> > > create_lockspace()
> > > release_lockspace()
> > > lock()
> > > unlock()
> >
> > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone
> > is likely to object i
On Mon, Sep 05, 2005 at 05:24:33PM +0800, David Teigland wrote:
> On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> > David Teigland <[EMAIL PROTECTED]> wrote:
> > >
> > > We export our full dlm API through read/write/poll on a misc device.
> > >
> >
> > inotify did that for a whil
On Sad, 2005-09-03 at 21:46 -0700, Andrew Morton wrote:
> Actually I think it's rather sick. Taking O_NONBLOCK and making it a
> lock-manager trylock because they're kinda-sorta-similar-sounding? Spare
> me. O_NONBLOCK means "open this file in nonblocking mode", not "attempt to
> acquire a clust
On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote:
> > create_lockspace()
> > release_lockspace()
> > lock()
> > unlock()
>
> Neat. I'd be inclined to make them syscalls then. I don't suppose anyone
> is likely to object if we reserve those slots.
If the locks are not file descript
Hi,
On Sun, 2005-09-04 at 21:33, Pavel Machek wrote:
> > - read-only mount
> > - "specatator" mount (like ro but no journal allocated for the mount,
> > no fencing needed for failed node that was mounted as specatator)
>
> I'd call it "real-read-only", and yes, that's very useful
> mount. Cou
On Mon, Sep 05, 2005 at 02:19:48AM -0700, Andrew Morton wrote:
> David Teigland <[EMAIL PROTECTED]> wrote:
> > Four functions:
> > create_lockspace()
> > release_lockspace()
> > lock()
> > unlock()
>
> Neat. I'd be inclined to make them syscalls then. I don't suppose anyone
> is likely t
On Monday 05 September 2005 05:19, Andrew Morton wrote:
> David Teigland <[EMAIL PROTECTED]> wrote:
> > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> > > David Teigland <[EMAIL PROTECTED]> wrote:
> > > > We export our full dlm API through read/write/poll on a misc device.
> > >
David Teigland <[EMAIL PROTECTED]> wrote:
>
> On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> > David Teigland <[EMAIL PROTECTED]> wrote:
> > >
> > > We export our full dlm API through read/write/poll on a misc device.
> > >
> >
> > inotify did that for a while, but we ended up g
On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> David Teigland <[EMAIL PROTECTED]> wrote:
> >
> > We export our full dlm API through read/write/poll on a misc device.
> >
>
> inotify did that for a while, but we ended up going with a straight syscall
> interface.
>
> How fat is
David Teigland <[EMAIL PROTECTED]> wrote:
>
> We export our full dlm API through read/write/poll on a misc device.
>
inotify did that for a while, but we ended up going with a straight syscall
interface.
How fat is the dlm interface? ie: how many syscalls would it take?
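For a sense of scale, a purely hypothetical userspace view of those four calls follows; the names come from the thread, but the argument lists, types, and flags are assumptions for illustration and are not the real dlm/libdlm API.

typedef void *dlm_lockspace_t;

/* create/release a lockspace (typically one per cluster application) */
dlm_lockspace_t *create_lockspace(const char *name, unsigned int flags);
int release_lockspace(dlm_lockspace_t *ls, int force);

/* acquire a lock on a named resource; completion is asynchronous,
 * e.g. delivered through poll() on an fd or a callback */
int lock(dlm_lockspace_t *ls, const char *resource, int requested_mode,
	 unsigned int flags, void (*complete)(void *arg), void *arg);

/* drop a previously granted lock */
int unlock(dlm_lockspace_t *ls, int lock_id);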
On Sat, Sep 03, 2005 at 10:41:40PM -0700, Andrew Morton wrote:
> Joel Becker <[EMAIL PROTECTED]> wrote:
> >
> > > What happens when we want to add some new primitive which has no
> > > posix-file analog?
> >
> > The point of dlmfs is not to express every primitive that the
> > DLM has. dlm
On Sunday 04 September 2005 03:28, Andrew Morton wrote:
> If there is already a richer interface into all this code (such as a
> syscall one) and it's feasible to migrate the open() tricksies to that API
> in the future if it all comes unstuck then OK. That's why I asked (thus
> far unsuccessfully
>takelock domainxxx lock1
>do stuff
>droplock domainxxx lock1
>
> When someone kills the shell, the lock is leaked, because droplock isn't
> called.
Why not open the lock resource (or the lock space) instead of
individual locks as file? It then looks like this:
open lock
On Sun, Sep 04, 2005 at 02:18:36AM -0700, Andrew Morton wrote:
> take-and-drop-lock -d domainxxx -l lock1 -e "do stuff"
Ahh, but then you have to have lots of scripts somewhere in
path, or do massive inline scripts, especially if you want to take
another lock in there somewhere.
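Below is a minimal C sketch of the fd-lifetime model being argued for, assuming a dlmfs-style mount where each lock appears as a file under /dlm/<domain>/; the path and the errno used for a failed trylock are assumptions. The point is that if the holder is killed, the kernel closes the descriptor on exit and the lock manager releases the lock, so nothing is leaked.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* O_NONBLOCK as "trylock": fail immediately instead of blocking */
	int fd = open("/dlm/domainxxx/lock1", O_RDWR | O_NONBLOCK);

	if (fd < 0) {
		if (errno == EAGAIN)	/* lock held elsewhere (assumed errno) */
			fprintf(stderr, "lock busy\n");
		else
			perror("open");
		return 1;
	}

	/* ... do stuff while holding the lock ... */

	/*
	 * No explicit droplock step: even if this process is killed here,
	 * the fd is closed on exit and the lock is released.
	 */
	close(fd);
	return 0;
}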
Joel Becker <[EMAIL PROTECTED]> wrote:
>
> I can't see how that works easily. I'm not worried about a
> tarball (eventually Red Hat and SuSE and Debian would have it). I'm
> thinking about this shell:
>
> exec 7< lockfile
> do stuff
> exec 7<&-
> If someone kills the shell while stu
On Sun, Sep 04, 2005 at 01:18:05AM -0700, Andrew Morton wrote:
> > I thought I stated this in my other email. We're not intending
> > to extend dlmfs.
>
> Famous last words ;)
Heh, of course :-)
> I don't buy the general "fs is nice because we can script it" argument,
> really. You
Mark Fasheh <[EMAIL PROTECTED]> wrote:
>
> On Sun, Sep 04, 2005 at 12:23:43AM -0700, Andrew Morton wrote:
> > > What would be an acceptable replacement? I admit that O_NONBLOCK -> trylock
> > > is a bit unfortunate, but really it just needs a bit to express that -
> > > nobody over here care
Joel Becker <[EMAIL PROTECTED]> wrote:
>
> On Sun, Sep 04, 2005 at 12:28:28AM -0700, Andrew Morton wrote:
> > If there is already a richer interface into all this code (such as a
> > syscall one) and it's feasible to migrate the open() tricksies to that API
> > in the future if it all comes unstuck
On Sun, Sep 04, 2005 at 12:23:43AM -0700, Andrew Morton wrote:
> > What would be an acceptable replacement? I admit that O_NONBLOCK -> trylock
> > is a bit unfortunate, but really it just needs a bit to express that -
> > nobody over here cares what it's called.
>
> The whole idea of reinterpretin
On Sun, Sep 04, 2005 at 12:28:28AM -0700, Andrew Morton wrote:
> If there is already a richer interface into all this code (such as a
> syscall one) and it's feasible to migrate the open() tricksies to that API
> in the future if it all comes unstuck then OK.
> That's why I asked (thus far unsucces
Daniel Phillips <[EMAIL PROTECTED]> wrote:
>
> If the only user is their tools I would say let it go ahead and be cute, even
> sickeningly so. It is not supposed to be a general dlm api, at least that is
> my understanding. It is just supposed to be an interface for their tools.
> Of co
Mark Fasheh <[EMAIL PROTECTED]> wrote:
>
> On Sat, Sep 03, 2005 at 09:46:53PM -0700, Andrew Morton wrote:
> > Actually I think it's rather sick. Taking O_NONBLOCK and making it a
> > lock-manager trylock because they're kinda-sorta-similar-sounding? Spare
> > me. O_NONBLOCK means "open this file
On Sunday 04 September 2005 00:46, Andrew Morton wrote:
> Daniel Phillips <[EMAIL PROTECTED]> wrote:
> > The model you came up with for dlmfs is beyond cute, it's downright
> > clever.
>
> Actually I think it's rather sick. Taking O_NONBLOCK and making it a
> lock-manager trylock because they're k
On Sat, Sep 03, 2005 at 09:46:53PM -0700, Andrew Morton wrote:
> Actually I think it's rather sick. Taking O_NONBLOCK and making it a
> lock-manager trylock because they're kinda-sorta-similar-sounding? Spare
> me. O_NONBLOCK means "open this file in nonblocking mode", not "attempt to
> acquire
On Sun, Sep 04, 2005 at 01:52:29AM -0400, Daniel Phillips wrote:
> You do have ->release and ->make_item/group.
->release is like kobject release. It's a free callback, not a
callback from close.
> If I may hand you a more substantive argument: you don't support user-driven
> creation of
On Sat, Sep 03, 2005 at 10:41:40PM -0700, Andrew Morton wrote:
> Are you saying that the posix-file lookalike interface provides access to
> part of the functionality, but there are other APIs which are used to
> access the rest of the functionality? If so, what is that interface, and
> why cannot
On Sunday 04 September 2005 01:00, Joel Becker wrote:
> On Sun, Sep 04, 2005 at 12:51:10AM -0400, Daniel Phillips wrote:
> > Clearly, I ought to have asked why dlmfs can't be done by configfs. It
> > is the same paradigm: drive the kernel logic from user-initiated vfs
> > methods. You already hav
Joel Becker <[EMAIL PROTECTED]> wrote:
>
> > What happens when we want to add some new primitive which has no posix-file
> > analog?
>
> The point of dlmfs is not to express every primitive that the
> DLM has. dlmfs cannot express the CR, CW, and PW levels of the VMS
> locking scheme.
On Sun, Sep 04, 2005 at 12:51:10AM -0400, Daniel Phillips wrote:
> Clearly, I ought to have asked why dlmfs can't be done by configfs. It is the
> same paradigm: drive the kernel logic from user-initiated vfs methods. You
> already have nearly all the right methods in nearly all the right pl
On Sat, Sep 03, 2005 at 09:46:53PM -0700, Andrew Morton wrote:
> It would be much better to do something which explicitly and directly
> expresses what you're trying to do rather than this strange "lets do this
> because the names sound the same" thing.
So, you'd like a new flag name? Tha
Daniel Phillips <[EMAIL PROTECTED]> wrote:
>
> The model you came up with for dlmfs is beyond cute, it's downright clever.
Actually I think it's rather sick. Taking O_NONBLOCK and making it a
lock-manager trylock because they're kinda-sorta-similar-sounding? Spare
me. O_NONBLOCK means "open t
On Sunday 04 September 2005 00:30, Joel Becker wrote:
> You asked why dlmfs can't go into sysfs, and I responded.
And you got me! In the heat of the moment I overlooked the fact that you and
Greg haven't agreed to the merge yet ;-)
Clearly, I ought to have asked why dlmfs can't be done by confi
On Sun, Sep 04, 2005 at 12:22:36AM -0400, Daniel Phillips wrote:
> It is 640 lines.
It's 450 without comments and blank lines. Please, don't tell
me that comments to help understanding are bloat.
> I said "configfs" in the email to which you are replying.
To wit:
> Daniel Phillips said
On Saturday 03 September 2005 23:06, Joel Becker wrote:
> dlmfs is *tiny*. The VFS interface is less than his claimed 500
> lines of savings.
It is 640 lines.
> The few VFS callbacks do nothing but call DLM
> functions. You'd have to replace this VFS glue with sysfs glue, and
> probably save
On Sat, Sep 03, 2005 at 06:32:41PM -0700, Andrew Morton wrote:
> If there's duplicated code in there then we should seek to either make the
> code multi-purpose or place the common or reusable parts into a library
> somewhere.
Regarding sysfs and configfs, that's a whole 'nother
conversati
Joel Becker <[EMAIL PROTECTED]> wrote:
>
> On Sat, Sep 03, 2005 at 06:21:26PM -0400, Daniel Phillips wrote:
> > that fit the configfs-nee-sysfs model? If it does, the payoff will be about
> > 500 lines saved.
>
> I'm still awaiting your merge of ext3 and reiserfs, because you
> can s
On Sat, Sep 03, 2005 at 06:21:26PM -0400, Daniel Phillips wrote:
> that fit the configfs-nee-sysfs model? If it does, the payoff will be about
> 500 lines saved.
I'm still awaiting your merge of ext3 and reiserfs, because you
can save probably 500 lines having a filesystem that can creat
On Saturday 03 September 2005 02:46, Wim Coekaerts wrote:
> On Sat, Sep 03, 2005 at 02:42:36AM -0400, Daniel Phillips wrote:
> > On Friday 02 September 2005 20:16, Mark Fasheh wrote:
> > > As far as userspace dlm apis go, dlmfs already abstracts away a large
> > > part of the dlm interaction...
> >
On Sat, Sep 03, 2005 at 02:42:36AM -0400, Daniel Phillips wrote:
> On Friday 02 September 2005 20:16, Mark Fasheh wrote:
> > As far as userspace dlm apis go, dlmfs already abstracts away a large part
> > of the dlm interaction...
>
> Dumb question, why can't you use sysfs for this instead of rolli
> Subject: [Linux-cluster] Re: GFS, what's remaining
>
> On Thu, Sep 01, 2005 at 04:28:30PM +0100, Alan Cox wrote:
> > > That's GFS. The submission is about a GFS2 that's on-disk incompatible
> > > to GFS.
> >
On Thu, Aug 18, 2005 at 02:07:47PM -0700, Joel Becker wrote:
> On Wed, Aug 17, 2005 at 11:22:18PM -0700, Andrew Morton wrote:
> > Fair enough. This really means that the configfs patch should be split out
> > of the ocfs2 megapatch...
>
> Easy to do, it's a separate commit in the ocfs2.git
On 8/11/05, Michael <[EMAIL PROTECTED]> wrote:
> Hi, Dave,
>
> I quickly applied gfs2 and dlm patches in kernel 2.6.12.2, it passed
> compiling but has some warning log, see attachment. maybe helpful to
> you.
kzalloc is not in Linus' tree yet. Try with 2.6.13-rc5-mm1.
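A minimal compatibility fallback, assuming the problem is just the missing kzalloc() symbol on kernels that do not have it yet; this mirrors what kzalloc() does (kmalloc plus memset) and could sit in a private header until the real helper is merged. The guard macro is illustrative.

#ifndef HAVE_KZALLOC	/* illustrative guard */
#include <linux/slab.h>
#include <linux/string.h>

static inline void *kzalloc(size_t size, unsigned int flags)
{
	void *p = kmalloc(size, flags);

	if (p)
		memset(p, 0, size);
	return p;
}
#endif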
f patches that
> incorporates the suggestions we've received.
>
> http://redhat.com/~teigland/gfs2/20050811/gfs2-full.patch
> http://redhat.com/~teigland/gfs2/20050811/broken-out/
>
> Dave
On Aug 10, 2005, at 09:26:26, AJ Lewis wrote:
On Wed, Aug 10, 2005 at 12:11:10PM +0100, Christoph Hellwig wrote:
On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote:
So for every directory hierarchy on a shared filesystem, each user needs
to have the complete list of bindmount
On Wed, Aug 10, 2005 at 12:11:10PM +0100, Christoph Hellwig wrote:
> On Wed, Aug 10, 2005 at 01:09:17PM +0200, Lars Marowsky-Bree wrote:
> > So for every directory hierarchy on a shared filesystem, each user needs
> > to have the complete list of bindmounts needed, and automatically resync
> > that a
r into a
> bio. It does not have to be page granularity. Can something like that be
> used in these places?
>
David Teigland wrote:
> On Tue, Aug 02, 2005 at 09:45:24AM +0200, Arjan van de Ven wrote:
>
>>* Why are you using bufferheads extensively in a new filesystem?
>
>
> bh's are used for metadata, the log, and journaled data which need to be
> written at the block granularity, not page.
>
In a scs
On Thursday 21 July 2005 02:55, Walker, Bruce J (HP-Labs) wrote:
> Like Lars, I too was under the wrong impression about this configfs
> "nodemanager" kernel component. Our discussions in the cluster meeting
> Monday and Tuesday were assuming it was a general service that other
> kernel components
On 2005-07-20T11:39:38, Joel Becker <[EMAIL PROTECTED]> wrote:
> In turn, let me clarify a little where configfs fits in to
> things. Configfs is merely a convenient and transparent method to
> communicate configuration to kernel objects. It's not a place for
> uevents, for netlink sockets
On Wed, Jul 20, 2005 at 08:09:18PM +0200, Lars Marowsky-Bree wrote:
> On 2005-07-20T09:55:31, "Walker, Bruce J (HP-Labs)" <[EMAIL PROTECTED]> wrote:
>
> > Like Lars, I too was under the wrong impression about this configfs
> > "nodemanager" kernel component. Our discussions in the cluster
> > mee
On 2005-07-20T09:55:31, "Walker, Bruce J (HP-Labs)" <[EMAIL PROTECTED]> wrote:
> Like Lars, I too was under the wrong impression about this configfs
> "nodemanager" kernel component. Our discussions in the cluster
> meeting Monday and Tuesday were assuming it was a general service that
> other ke
Subject: [Linux-cluster] Re: [Ocfs2-devel] [RFC] nodemanager, ocfs2, dlm
On 2005-07-20T11:35:46, David Teigland <[EMAIL PROTECTED]> wrote:
> > Also, eventually we obviously need to have state for the nodes -
> > up/down et cetera. I think the node
On Tue, Jul 19, 2005 at 05:48:26PM -0700, Mark Fasheh wrote:
> For OCFS2 that would mean that an ocfs2_nodemanager would still exist,
> but as a much smaller module sitting on top of 'nodemanager'.
Yep, factoring out the common bits.
> So no port attribute. The OCFS2 network code normally takes p
Hi David,
On Mon, Jul 18, 2005 at 02:15:53PM +0800, David Teigland wrote:
> Some of the comments about the dlm concerned how it's configured (from
> user space.) In particular, there was interest in seeing the dlm and
> ocfs2 use common methods for their configuration.
>
> The first area I'm loo
On Monday 18 July 2005 16:15, David Teigland wrote:
> I've taken a stab at generalizing ocfs2_nodemanager so the dlm could use
> it (removing ocfs-specific stuff). It still needs some work, but I'd
> like to know if this appeals to the ocfs group and to others who were
> interested in seeing some
Doug Ledford wrote:
>
> ...
>
> If told to hold a reservation, then resend your reservation request once every
> 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a
> deal as requesting a reservation every 2 seconds might sound). The first time
> the reservation is refused,
At 3:57 PM -0400 2001-05-03, Eric Z. Ayers wrote:
>However distasteful it sounds, there is precedent for the
>behavior that Doug is proposing in commercial clustering
>implementations. My recollection is that both Compaq TruCluster and
>HP Service Guard have logic that will panic the kernel when a
Pavel Machek writes:
> Hi!
>
> > > > ...
> > > >
> > > > If told to hold a reservation, then resend your reservation request once every
> > > > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a
> > > > deal as requesting a reservation every 2 seconds might sound).
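A minimal userspace sketch of periodically re-asserting a SCSI-2 reservation through the Linux SG_IO ioctl, in the spirit of the resend-every-2-seconds scheme quoted above; the device path, timeout, and error handling are illustrative, and this is not the actual reservation package being discussed.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

static int scsi_reserve(int fd)
{
	unsigned char cdb[6] = { 0x16, 0, 0, 0, 0, 0 };	/* RESERVE(6) */
	unsigned char sense[32];
	struct sg_io_hdr io;

	memset(&io, 0, sizeof(io));
	io.interface_id    = 'S';
	io.cmd_len         = sizeof(cdb);
	io.cmdp            = cdb;
	io.dxfer_direction = SG_DXFER_NONE;	/* no data phase */
	io.sbp             = sense;
	io.mx_sb_len       = sizeof(sense);
	io.timeout         = 5000;		/* milliseconds */

	if (ioctl(fd, SG_IO, &io) < 0)
		return -1;
	return io.status;	/* 0x18 = RESERVATION CONFLICT */
}

int main(void)
{
	int fd = open("/dev/sg1", O_RDWR);	/* illustrative device */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (;;) {
		if (scsi_reserve(fd) == 0x18)
			fprintf(stderr, "reservation conflict\n");
		sleep(2);	/* re-assert every 2 seconds */
	}
}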
Hi!
> > > ...
> > >
> > > If told to hold a reservation, then resend your reservation request once every
> > > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a
> > > deal as requesting a reservation every 2 seconds might sound). The first time
> > > the reservation is r
There is another nasty in multi-port arrays that I should perhaps point out:
a bus reset isn't supposed to drop the reservation if it was taken on another
port. A device or LUN reset will drop reservations on all ports. This
behaviour, although clearly mandated by the SCSI-3-SPC, is rather p
[EMAIL PROTECTED] said:
> Correct, if you hold a reservation on a device for which you have
> multiple paths, you have to use the correct path.
As far as multi-path scsi reservations go, the SCSI-2 standards (and this
includes the completion in the SCSI-3 SPC) is very malleable. The standard i
Max TenEyck Woodbury wrote:
>
> Doug Ledford wrote:
> >
> > Max TenEyck Woodbury wrote:
> >>
> >> Umm. Reboot? What do you think this is? Windoze?
> >
> > It's the *only* way to guarantee that the drive is never touched by more
> > than one machine at a time (notice, I've not been talking about a
Doug Ledford wrote:
>
> Max TenEyck Woodbury wrote:
>>
>> Umm. Reboot? What do you think this is? Windoze?
>
> It's the *only* way to guarantee that the drive is never touched by more
> than one machine at a time (notice, I've not been talking about a shared
> use drive, only one machine in the
Mike Anderson wrote:
>
> Doug,
>
> I guess I worded my question poorly. My question was around multi-path
> devices in combination with SCSI-2 reserve vs SCSI-3 persistent reserve which
> has not always been easy, but is more difficult if you use a name space that
> can slip or can have multiple
Doug,
I guess I worded my question poorly. My question was around multi-path
devices in combination with SCSI-2 reserve vs SCSI-3 persistent reserve which
has not always been easy, but is more difficult if you use a name space that
can slip or can have multiple entries for the same physical dev
Mike Anderson wrote:
>
> Doug,
>
> A question on clarification.
>
> Does the configuration you are testing have both FC adapters going to the same
> port of the storage device (multi-path) or to different ports of the storage
> device (multi-port)?
>
> The reason I ask is that I thought if you a
Max TenEyck Woodbury wrote:
>
> Doug Ledford wrote:
> >
> > ...
> >
> > If told to hold a reservation, then resend your reservation request once every
> > 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a
> > deal as requesting a reservation every 2 seconds might sound).
Doug Ledford wrote:
>
> ...
>
> If told to hold a reservation, then resend your reservation request once every
> 2 seconds (this actually has very minimal CPU/BUS usage and isn't as big a
> deal as requesting a reservation every 2 seconds might sound). The first time
> the reservation is refuse
Doug,
A question on clarification.
Does the configuration you are testing have both FC adapters going to the same
port of the storage device (multi-path) or to different ports of the storage
device (multi-port)?
The reason I ask is that I thought if you are using SCSI-2 reserves that the
reserve
Hi Doug,
Great to hear your progress on this. As I had not heard anything about this
effort since this time last year I had assumed you put this project on the
shelf. I will be happy to test these interfaces when they are ready.
Eddie
> "Eric Z. Ayers" wrote:
> >
> > Doug Ledford writes:
"Eric Z. Ayers" wrote:
>
> Doug Ledford writes:
> (James Bottomley commented about the need for SCSI reservation kernel patches)
> >
> > I agree. It's something that needs fixed in general, your software needs it
> > as well, and I've written (about 80% done at this point) some open source
>
> reserved. But if you did such a hot swap you would have "bigger
> fish to fry" in a HA application... I mean, none of your data would be
> there!
You need to realise this has happened and do the right thing. Since
it could be an md raid array the hotswap is not fatal.
Alan Cox writes:
> > Does this package also tell the kernel to "re-establish" a
> > reservation for all devices after a bus reset, or at least inform a
> > user level program? Finding out when there has been a bus reset has
> > been a stumbling block for me.
>
> You cannot rely on a bus re
[EMAIL PROTECTED] said:
> Does this package also tell the kernel to "re-establish" a reservation
> for all devices after a bus reset, or at least inform a user level
> program? Finding out when there has been a bus reset has been a
> stumbling block for me.
[EMAIL PROTECTED] said:
> You cannot
> Does this package also tell the kernel to "re-establish" a
> reservation for all devices after a bus reset, or at least inform a
> user level program? Finding out when there has been a bus reset has
> been a stumbling block for me.
You cannot rely on a bus reset. Imagine hot swap disks on an F
Doug Ledford writes:
(James Bottomley commented about the need for SCSI reservation kernel patches)
>
> I agree. It's something that needs fixed in general, your software needs it
> as well, and I've written (about 80% done at this point) some open source
> software geared towards getting/ho
James Bottomley wrote:
>
> [EMAIL PROTECTED] said:
> > So, will Linux ever support the scsi reservation mechanism as standard?
>
> That's not within my gift. I can merely write the code that corrects the
> behaviour. I can't force anyone else to accept it.
I think it will be standard before n
[EMAIL PROTECTED] said:
> So, will Linux ever support the scsi reservation mechanism as standard?
That's not within my gift. I can merely write the code that corrects the
behaviour. I can't force anyone else to accept it.
[EMAIL PROTECTED] said:
> Isn't there a standard that says if you scsi
I've copied linux SCSI and quoted the entire message below so they can follow.
Your assertion that this works in 2.2.16 is incorrect, the patch to fix the
linux reservation conflict handler has never been added to the official tree.
I suspect you actually don't have vanilla 2.2.16 but instead
> Problem :
> install two Linux systems with a shared scsi-bus and storage on that shared
> bus.
> suppose :
> system one : SCSI ID 7
> system two : SCSI ID 6
> shared disk : SCSI ID 4
>
> By default, you can mount the disk on both systems. This is normal
> behavior, but may impose data corrupti
I once saw an article on how to install Linux on four separate 486's to
form a cluster server. I since then have not seen any other info on the
subject.
I have multiple Gateway 166's that I would like to try and experiment with
Linux in a clustered environment. Any Advise?
Bill Tomasiewicz
Compu
could share, or ...
Since I agree with you that we need such a place, I've just
created a mailing list:
[EMAIL PROTECTED]
To subscribe to the list, send an email with the text
"subscribe linux-cluster" to:
[EMAIL PROTECTED]
I hope that we'll be able to spli