Re: [PATCH] gfs: no need to check return value of debugfs_create functions

2019-01-23 Thread Andreas Gruenbacher
Greg, On Tue, 22 Jan 2019 at 16:24, Greg Kroah-Hartman wrote: > When calling debugfs functions, there is no need to ever check the > return value. The function can work or not, but the code logic should > never do something different based on this. > > There is no need to save the dentries for

[PATCH] gfs: no need to check return value of debugfs_create functions

2019-01-22 Thread Greg Kroah-Hartman
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. There is no need to save the dentries for the debugfs files, so drop those variables to save a bit of space and
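
A minimal sketch of the pattern this patch describes, assuming illustrative names rather than the actual gfs2 hunks: the caller neither saves nor checks the dentry that debugfs_create_*() returns.

    #include <linux/debugfs.h>

    static void example_debugfs_init(void)
    {
            /* No saved dentry and no error check: debugfs copes with
             * failure internally, and the caller's logic must never
             * change based on the result. */
            debugfs_create_dir("gfs2", NULL);
    }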

Re: [PATCH 0/3] gfs: More logging neatening

2014-03-07 Thread Steven Whitehouse
Hi, On Thu, 2014-03-06 at 12:10 -0800, Joe Perches wrote: > Joe Perches (3): > gfs2: Use pr_<level> more consistently > gfs2: Use fs_<level> more often > gfs2: Convert gfs2_lm_withdraw to use fs_err > > fs/gfs2/dir.c| 14 > fs/gfs2/glock.c | 8 +++-- > fs/gfs2/lock_dlm.c | 9

[PATCH 0/3] gfs: More logging neatening

2014-03-06 Thread Joe Perches
Joe Perches (3): gfs2: Use pr_<level> more consistently gfs2: Use fs_<level> more often gfs2: Convert gfs2_lm_withdraw to use fs_err fs/gfs2/dir.c| 14 fs/gfs2/glock.c | 8 +++-- fs/gfs2/lock_dlm.c | 9 +++-- fs/gfs2/main.c | 2 ++ fs/gfs2/ops_fstype.c | 25 ++---

[PATCH 20/22] [PATCH] gfs: check nlink count

2007-02-09 Thread Dave Hansen
--- lxc-dave/fs/gfs2/inode.c |1 + 1 file changed, 1 insertion(+) diff -puN fs/gfs2/inode.c~gfs-check-nlink-count fs/gfs2/inode.c --- lxc/fs/gfs2/inode.c~gfs-check-nlink-count 2007-02-09 14:26:59.0 -0800 +++ lxc-dave/fs/gfs2/inode.c2007-02-09 14:26:59.0 -0800

Re: GFS, what's remaining

2005-09-07 Thread David Teigland
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote: > +static inline void glock_put(struct gfs2_glock *gl) > +{ > + if (atomic_read(&gl->gl_count) == 1) > + gfs2_glock_schedule_for_reclaim(gl); > + gfs2_assert(gl->gl_sbd, atomic_read(&gl->gl_count) > 0,); > +
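
For contrast, a minimal sketch of the conventional kernel idiom for a final-reference check — decrement and test in one atomic step rather than reading the counter first. This is a generic illustration, not the gfs2 implementation; only gfs2_glock_schedule_for_reclaim() is taken from the quoted code.

    static inline void glock_put_sketch(struct gfs2_glock *gl)
    {
            /* atomic_dec_and_test() returns true only for the thread that
             * drops the last reference, avoiding the race inherent in a
             * separate read-then-act sequence. */
            if (atomic_dec_and_test(&gl->gl_count))
                    gfs2_glock_schedule_for_reclaim(gl);
    }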

Re: GFS, what's remainingh

2005-09-06 Thread Dmitry Torokhov
On 9/6/05, Daniel Phillips <[EMAIL PROTECTED]> wrote: > On Tuesday 06 September 2005 02:55, Dmitry Torokhov wrote: > > On Tuesday 06 September 2005 01:48, Daniel Phillips wrote: > > > On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote: > > > > do you think it is a bit premature to dismiss

Re: GFS, what's remainingh

2005-09-06 Thread Alan Cox
On Maw, 2005-09-06 at 02:48 -0400, Daniel Phillips wrote: > On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote: > > do you think it is a bit premature to dismiss something even without > > ever seeing the code? > > You told me you are using a dlm for a single-node application, is there >

Re: GFS, what's remaining

2005-09-06 Thread Suparna Bhattacharya
On Fri, Sep 02, 2005 at 11:17:08PM +0200, Andi Kleen wrote: > Andrew Morton <[EMAIL PROTECTED]> writes: > > > > > > > - Why GFS is better than OCFS2, or has functionality which OCFS2 cannot > > > > possibly gain (or vice versa) > > >

Re: GFS, what's remainingh

2005-09-06 Thread Daniel Phillips
On Tuesday 06 September 2005 02:55, Dmitry Torokhov wrote: > On Tuesday 06 September 2005 01:48, Daniel Phillips wrote: > > On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote: > > > do you think it is a bit premature to dismiss something even without > > > ever seeing the code? > > > > You

Re: GFS, what's remainingh

2005-09-06 Thread Dmitry Torokhov
On Tuesday 06 September 2005 01:48, Daniel Phillips wrote: > On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote: > > do you think it is a bit premature to dismiss something even without > > ever seeing the code? > > You told me you are using a dlm for a single-node application, is there >

Re: GFS, what's remainingh

2005-09-06 Thread Daniel Phillips
On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote: > do you think it is a bit premature to dismiss something even without > ever seeing the code? You told me you are using a dlm for a single-node application, is there anything more I need to know? Regards, Daniel - To unsubscribe from

Re: GFS, what's remaining

2005-09-05 Thread Daniel Phillips
On Monday 05 September 2005 19:37, Joel Becker wrote: > OCFS2, the new filesystem, is fully general purpose. It > supports all the usual stuff, is quite fast... So I have heard, but isn't it time to quantify that? How do you think you would stack up here:

Re: GFS, what's remainingh

2005-09-05 Thread Dmitry Torokhov
On Monday 05 September 2005 23:58, Daniel Phillips wrote: > On Tuesday 06 September 2005 00:07, Dmitry Torokhov wrote: > > On Monday 05 September 2005 23:02, Daniel Phillips wrote: > > > By the way, you said "alpha server" not "alpha servers", was that just a > > > slip? Because if you don't have

Re: GFS, what's remainingh

2005-09-05 Thread Daniel Phillips
On Tuesday 06 September 2005 00:07, Dmitry Torokhov wrote: > On Monday 05 September 2005 23:02, Daniel Phillips wrote: > > By the way, you said "alpha server" not "alpha servers", was that just a > > slip? Because if you don't have a cluster then why are you using a dlm? > > No, it is not a slip.

Re: GFS, what's remainingh

2005-09-05 Thread Dmitry Torokhov
On Monday 05 September 2005 23:02, Daniel Phillips wrote: > > By the way, you said "alpha server" not "alpha servers", was that just a > slip? > Because if you don't have a cluster then why are you using a dlm? > No, it is not a slip. The application is running on just one node, so we do not

Re: GFS, what's remaining

2005-09-05 Thread Daniel Phillips
On Monday 05 September 2005 22:03, Dmitry Torokhov wrote: > On Monday 05 September 2005 19:57, Daniel Phillips wrote: > > On Monday 05 September 2005 12:18, Dmitry Torokhov wrote: > > > On Monday 05 September 2005 10:49, Daniel Phillips wrote: > > > > On Monday 05 September 2005 10:14, Lars

Re: GFS, what's remaining

2005-09-05 Thread Dmitry Torokhov
On Monday 05 September 2005 19:57, Daniel Phillips wrote: > On Monday 05 September 2005 12:18, Dmitry Torokhov wrote: > > On Monday 05 September 2005 10:49, Daniel Phillips wrote: > > > On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote: > > > > On 2005-09-03T01:57:31, Daniel Phillips

Re: GFS, what's remaining

2005-09-05 Thread Daniel Phillips
On Monday 05 September 2005 12:18, Dmitry Torokhov wrote: > On Monday 05 September 2005 10:49, Daniel Phillips wrote: > > On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote: > > > On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote: > > > > The only current users of dlms are

Re: GFS, what's remaining

2005-09-05 Thread Joel Becker
On Mon, Sep 05, 2005 at 10:24:03PM +0200, Bernd Eckenfels wrote: > The whole point of the Oracle cluster filesystem as it was described in old > papers was about pfiles, control files and software, because you can easily > use direct block access (with ASM) for tablespaces. OCFS, the

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Joel Becker
On Sun, Sep 04, 2005 at 09:37:15AM +0100, Alan Cox wrote: > I am curious why a lock manager uses open to implement its locking > semantics rather than using the locking API (POSIX locks etc) however. Because it is simple (how do you fcntl(2) from a shell fd?), has no ranges (what do you

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
Alan Cox <[EMAIL PROTECTED]> wrote: > > On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote: > > > - How are they ref counted > > > - What are the cleanup semantics > > > - How do I pass a lock between processes (AF_UNIX sockets wont work now) > > > - How do I poll on a lock coming

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Alan Cox
On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote: > > - How are they ref counted > > - What are the cleanup semantics > > - How do I pass a lock between processes (AF_UNIX sockets wont work now) > > - How do I poll on a lock coming free. > > - What are the semantics of lock ownership >

Re: GFS, what's remaining

2005-09-05 Thread Kurt Hackel
On Mon, Sep 05, 2005 at 10:24:03PM +0200, Bernd Eckenfels wrote: > On Mon, Sep 05, 2005 at 04:16:31PM +0200, Lars Marowsky-Bree wrote: > > That is the whole point why OCFS exists ;-) > > The whole point of the Oracle cluster filesystem as it was described in old > papers was about pfiles,

Re: GFS, what's remaining

2005-09-05 Thread Bernd Eckenfels
On Mon, Sep 05, 2005 at 04:16:31PM +0200, Lars Marowsky-Bree wrote: > That is the whole point why OCFS exists ;-) The whole point of the Oracle cluster filesystem as it was described in old papers was about pfiles, control files and software, because you can easily use direct block access (with

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
Alan Cox <[EMAIL PROTECTED]> wrote: > > On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote: > > > create_lockspace() > > > release_lockspace() > > > lock() > > > unlock() > > > > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone > > is likely to object

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread kurt . hackel
functions in the "full" spec(1) that we didn't even attempt, either because we didn't require direct user<->kernel access or we just didn't need the function. As for the rather thick set of parameters expected in dlm calls, we managed to get dlmlock down to *ahem* eight, and the rest are fairly slim. Looking at the misc device that gfs uses, it seems like there is pretty much complete interface to the same calls you have

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Alan Cox
On Sad, 2005-09-03 at 21:46 -0700, Andrew Morton wrote: > Actually I think it's rather sick. Taking O_NONBLOCK and making it a > lock-manager trylock because they're kinda-sorta-similar-sounding? Spare > me. O_NONBLOCK means "open this file in nonblocking mode", not "attempt to > acquire a
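
For reference, a userspace sketch of the open()-as-lock pattern under discussion, assuming a dlmfs-style mount where open() blocks until the lock is granted and O_NONBLOCK turns the open into a trylock; the path is illustrative, not an actual dlmfs layout.

    #include <fcntl.h>
    #include <unistd.h>

    int trylock_example(void)
    {
            /* O_NONBLOCK here acts as "trylock", per the thread */
            int fd = open("/dlm/mydomain/lock1", O_RDWR | O_NONBLOCK);
            if (fd < 0)
                    return -1;      /* lock busy, or a real error */
            /* ... critical section: the lock is held while fd is open ... */
            close(fd);              /* dropping the fd releases the lock */
            return 0;
    }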

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Alan Cox
On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote: > > create_lockspace() > > release_lockspace() > > lock() > > unlock() > > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone > is likely to object if we reserve those slots. If the locks are not file

Re: GFS, what's remaining

2005-09-05 Thread Dmitry Torokhov
On Monday 05 September 2005 10:49, Daniel Phillips wrote: > On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote: > > On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote: > > > The only current users of dlms are cluster filesystems. There are zero > > > users of the userspace

Re: GFS, what's remaining

2005-09-05 Thread Daniel Phillips
On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote: > On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote: > > The only current users of dlms are cluster filesystems. There are zero > > users of the userspace dlm api. > > That is incorrect... Application users Lars, sorry

Re: GFS, what's remaining

2005-09-05 Thread Lars Marowsky-Bree
On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote: > The only current users of dlms are cluster filesystems. There are zero users > of the userspace dlm api. That is incorrect, and you're contradicting yourself here: > What does have to be resolved is a common API for node

Re: GFS, what's remaining

2005-09-05 Thread Lars Marowsky-Bree
On 2005-09-03T09:27:41, Bernd Eckenfels <[EMAIL PROTECTED]> wrote: > Oh thats interesting, I never thought about putting data files (tablespaces) > in a clustered file system. Does that mean you can run supported RAC on > shared ocfs2 files and anybody is using that? That is the whole point why

Re: GFS, what's remaining

2005-09-05 Thread Theodore Ts'o
On Mon, Sep 05, 2005 at 12:09:23AM -0700, Mark Fasheh wrote: > Btw, I'm curious to know how useful folks find the ext3 mount options > errors=continue and errors=panic. I'm extremely likely to implement the > errors=read-only behavior as default in OCFS2 and I'm wondering whether the > other two

Re: real read-only [was Re: GFS, what's remaining]

2005-09-05 Thread Theodore Ts'o
On Mon, Sep 05, 2005 at 10:27:35AM +0200, Pavel Machek wrote: > > There's a better reason, too. I do swsusp. Then I'd like to boot with > / mounted read-only (so that I can read my config files, some > binaries, and maybe suspended image), but I absolutely may not write > to disk at this point,

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Stephen C. Tweedie
Hi, On Sun, 2005-09-04 at 21:33, Pavel Machek wrote: > > - read-only mount > > - "spectator" mount (like ro but no journal allocated for the mount, > > no fencing needed for failed node that was mounted as spectator) > > I'd call it "real-read-only", and yes, that's very useful > mount.

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread David Teigland
On Mon, Sep 05, 2005 at 02:19:48AM -0700, Andrew Morton wrote: > David Teigland <[EMAIL PROTECTED]> wrote: > > Four functions: > > create_lockspace() > > release_lockspace() > > lock() > > unlock() > > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone > is likely
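
The four entry points named above, written out as hypothetical syscall prototypes — only the names come from the thread; the argument lists here are invented for illustration.

    int create_lockspace(const char *name, unsigned int flags);
    int release_lockspace(int lockspace, int force);
    int lock(int lockspace, const char *resource, int mode, unsigned int flags);
    int unlock(int lockspace, int lock_id);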

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Daniel Phillips
On Monday 05 September 2005 05:19, Andrew Morton wrote: > David Teigland <[EMAIL PROTECTED]> wrote: > > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > > > David Teigland <[EMAIL PROTECTED]> wrote: > > > > We export our full dlm API through read/write/poll on a misc device. > > >

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
David Teigland <[EMAIL PROTECTED]> wrote: > > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > > David Teigland <[EMAIL PROTECTED]> wrote: > > > > > > We export our full dlm API through read/write/poll on a misc device. > > > > > > > inotify did that for a while, but we ended up

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread David Teigland
On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote: > David Teigland <[EMAIL PROTECTED]> wrote: > > > > We export our full dlm API through read/write/poll on a misc device. > > > > inotify did that for a while, but we ended up going with a straight syscall > interface. > > How fat is

Re: GFS, what's remaining

2005-09-05 Thread David Teigland
On Mon, Sep 05, 2005 at 10:58:08AM +0200, Jörn Engel wrote: > #define gfs2_assert(sdp, assertion) do { \ > if (unlikely(!(assertion))) { \ > printk(KERN_ERR "GFS2: fsid=%s\n", (sdp)->sd_fsname); \ > BUG();

Re: GFS, what's remaining

2005-09-05 Thread Jörn Engel
On Mon, 5 September 2005 11:47:39 +0800, David Teigland wrote: > > Joern already suggested moving this out of line and into a function (as it > was before) to avoid repeating string constants. In that case the > function, file and line from BUG aren't useful. We now have this, does it > look

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-05 Thread Andrew Morton
David Teigland <[EMAIL PROTECTED]> wrote: > > We export our full dlm API through read/write/poll on a misc device. > inotify did that for a while, but we ended up going with a straight syscall interface. How fat is the dlm interface? ie: how many syscalls would it take? - To unsubscribe from
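
For background, a sketch of the general mechanism Teigland describes — exporting an API through a misc character device whose file_operations carry requests (.write), results (.read), and completions (.poll). Names here are illustrative, not the actual dlm device code.

    #include <linux/fs.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>

    static const struct file_operations dlm_sketch_fops = {
            .owner = THIS_MODULE,
            /* .read, .write and .poll would implement the lock API */
    };

    static struct miscdevice dlm_sketch_dev = {
            .minor = MISC_DYNAMIC_MINOR,
            .name  = "dlm-sketch",
            .fops  = &dlm_sketch_fops,
    };

    static int __init dlm_sketch_init(void)
    {
            return misc_register(&dlm_sketch_dev);
    }

    static void __exit dlm_sketch_exit(void)
    {
            misc_deregister(&dlm_sketch_dev);
    }

    module_init(dlm_sketch_init);
    module_exit(dlm_sketch_exit);
    MODULE_LICENSE("GPL");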

Re: GFS, what's remaining

2005-09-05 Thread Pekka Enberg
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote: > > +void gfs2_glock_hold(struct gfs2_glock *gl) > > +{ > > + glock_hold(gl); > > +} > > > > eh why? On 9/5/05, David Teigland <[EMAIL PROTECTED]> wrote: > You removed the comment stating exactly why, see below. If that's not

Re: GFS, what's remaining

2005-09-05 Thread Theodore Ts'o
On Sun, Sep 04, 2005 at 10:33:44PM +0200, Pavel Machek wrote: > Hi! > > > - read-only mount > > - "spectator" mount (like ro but no journal allocated for the mount, > > no fencing needed for failed node that was mounted as spectator) > > I'd call it "real-read-only", and yes, that's very

Re: GFS, what's remaining

2005-09-05 Thread David Teigland
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote: > +static unsigned int handle_roll(atomic_t *a) > +{ > + int x = atomic_read(a); > + if (x < 0) { > + atomic_set(a, 0); > + return 0; > + } > + return (unsigned int)x; > +} > > this is just plain scary.
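
The quoted function restated with a comment on the likely objection — the read and the reset are two separate atomic operations, so the pair as a whole is not atomic. The commentary is an editorial sketch, not text from the thread.

    static unsigned int handle_roll(atomic_t *a)
    {
            int x = atomic_read(a);

            if (x < 0) {
                    /* Any atomic_inc() landing between the read above
                     * and this set is silently discarded. */
                    atomic_set(a, 0);
                    return 0;
            }
            return (unsigned int)x;
    }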

Re: GFS, what's remaining

2005-09-04 Thread David Teigland
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote: > +void gfs2_glock_hold(struct gfs2_glock *gl) > +{ > + glock_hold(gl); > +} > > eh why? You removed the comment stating exactly why, see below. If that's not a accepted technique in the kernel, say so and I'll be happy to

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread David Teigland
On Sat, Sep 03, 2005 at 10:41:40PM -0700, Andrew Morton wrote: > Joel Becker <[EMAIL PROTECTED]> wrote: > > > > > What happens when we want to add some new primitive which has no > > > posix-file analog? > > > > The point of dlmfs is not to express every primitive that the > > DLM has.

Re: GFS, what's remaining

2005-09-04 Thread David Teigland
> > > > what is gfs2_assert() about anyway? please just use BUG_ON directly > > > everywhere > > > > When a machine has many gfs file systems mounted at once it can be useful > > to know which one failed. Does the following look ok? > > >
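
A hedged reconstruction of the idea being discussed: include the per-superblock fsid in the assertion message so that, with many gfs2 filesystems mounted, the failing one is identifiable. The exact macro text is truncated above and may differ.

    #define gfs2_assert_sketch(sdp, assertion)                          \
    do {                                                                \
            if (unlikely(!(assertion))) {                               \
                    printk(KERN_ERR "GFS2: fsid=%s: assertion \"%s\" failed\n", \
                           (sdp)->sd_fsname, #assertion);               \
                    BUG();                                              \
            }                                                           \
    } while (0)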

Re: GFS, what's remaining

2005-09-04 Thread Joel Becker
On Sun, Sep 04, 2005 at 10:33:44PM +0200, Pavel Machek wrote: > > - read-only mount > > - "spectator" mount (like ro but no journal allocated for the mount, > > no fencing needed for failed node that was mounted as spectator) > > I'd call it "real-read-only", and yes, that's very useful >

Re: GFS, what's remaining

2005-09-04 Thread Pavel Machek
Hi! > - read-only mount > - "spectator" mount (like ro but no journal allocated for the mount, > no fencing needed for failed node that was mounted as spectator) I'd call it "real-read-only", and yes, that's very useful mount. Could we get it for ext3, too?

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Daniel Phillips
On Sunday 04 September 2005 03:28, Andrew Morton wrote: > If there is already a richer interface into all this code (such as a > syscall one) and it's feasible to migrate the open() tricksies to that API > in the future if it all comes unstuck then OK. That's why I asked (thus > far

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Hua Zhong
>takelock domainxxx lock1 >do stuff >droplock domainxxx lock1 > > When someone kills the shell, the lock is leaked, because droplock isn't > called. Why not open the lock resource (or the lock space) instead of individual locks as a file? It then looks like this: open lock

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Joel Becker
On Sun, Sep 04, 2005 at 02:18:36AM -0700, Andrew Morton wrote: > take-and-drop-lock -d domainxxx -l lock1 -e "do stuff" Ahh, but then you have to have lots of scripts somewhere in path, or do massive inline scripts. Especially if you want to take another lock in there somewhere.
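
A userspace sketch of the take-and-drop-lock wrapper idea: hold the lock file open across a child command so the kernel drops the lock even if the wrapper is killed. Option handling and the lock-file semantics (open blocks until granted, close releases) are simplified assumptions from the thread.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* usage: tadl <lockfile> <cmd> [args...] */
    int main(int argc, char **argv)
    {
            int fd, status = 0;

            if (argc < 3) {
                    fprintf(stderr, "usage: %s <lockfile> <cmd> [args...]\n", argv[0]);
                    return 2;
            }
            fd = open(argv[1], O_RDWR);     /* blocks until the lock is granted */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (fork() == 0) {
                    execvp(argv[2], &argv[2]);      /* run command with lock held */
                    _exit(127);
            }
            wait(&status);
            close(fd);      /* explicit drop; process death would drop it too */
            return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }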

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Andrew Morton
Joel Becker <[EMAIL PROTECTED]> wrote: > > I can't see how that works easily. I'm not worried about a > tarball (eventually Red Hat and SuSE and Debian would have it). I'm > thinking about this shell: > > exec 7 do stuff > exec 7 > If someone kills the shell while

Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-04 Thread Joel Becker
On Sun, Sep 04, 2005 at 01:18:05AM -0700, Andrew Morton wrote: > > I thought I stated this in my other email. We're not intending > > to extend dlmfs. > > Famous last words ;) Heh, of course :-) > I don't buy the general "fs is nice because we can script it" argument, > really.
