Greg,
On Tue, 22 Jan 2019 at 16:24, Greg Kroah-Hartman
wrote:
> When calling debugfs functions, there is no need to ever check the
> return value. The function can work or not, but the code logic should
> never do something different based on this.
>
> There is no need to save the dentries for
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
There is no need to save the dentries for the debugfs files, so drop
those variables to save a bit of space and
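The pattern this commit message describes can be sketched in kernel-style C; a minimal fragment, not a buildable module, with all `foo` names invented for illustration:

```c
#include <linux/debugfs.h>

static u32 foo_count;
static struct dentry *foo_dir;	/* kept only for debugfs_remove_recursive() */

static void foo_debugfs_init(void)
{
	/* No return-value checks: if debugfs is disabled or creation
	 * fails, an error pointer comes back, and every subsequent
	 * debugfs call accepts it and quietly does nothing. */
	foo_dir = debugfs_create_dir("foo", NULL);
	debugfs_create_u32("count", 0444, foo_dir, &foo_count);
}

static void foo_debugfs_exit(void)
{
	debugfs_remove_recursive(foo_dir);
}
```

Only the directory dentry needs to be kept, and only so the whole tree can be removed recursively on exit; the per-file dentries serve no purpose after creation.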
Hi,
On Thu, 2014-03-06 at 12:10 -0800, Joe Perches wrote:
> Joe Perches (3):
> gfs2: Use pr_<level> more consistently
> gfs2: Use fs_<level> more often
> gfs2: Convert gfs2_lm_withdraw to use fs_err
>
> fs/gfs2/dir.c | 14
> fs/gfs2/glock.c | 8 +++--
> fs/gfs2/lock_dlm.c | 9 +++--
> fs/gfs2/main.c | 2 ++
> fs/gfs2/ops_fstype.c | 25
---
lxc-dave/fs/gfs2/inode.c | 1 +
1 file changed, 1 insertion(+)
diff -puN fs/gfs2/inode.c~gfs-check-nlink-count fs/gfs2/inode.c
--- lxc/fs/gfs2/inode.c~gfs-check-nlink-count 2007-02-09 14:26:59.0 -0800
+++ lxc-dave/fs/gfs2/inode.c 2007-02-09 14:26:59.0 -0800
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:
> +static inline void glock_put(struct gfs2_glock *gl)
> +{
> +	if (atomic_read(&gl->gl_count) == 1)
> +		gfs2_glock_schedule_for_reclaim(gl);
> +	gfs2_assert(gl->gl_sbd, atomic_read(&gl->gl_count) > 0,);
> +
On 9/6/05, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> On Tuesday 06 September 2005 02:55, Dmitry Torokhov wrote:
> > On Tuesday 06 September 2005 01:48, Daniel Phillips wrote:
> > > On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote:
> > > > do you think it is a bit premature to dismiss
On Maw, 2005-09-06 at 02:48 -0400, Daniel Phillips wrote:
> On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote:
> > do you think it is a bit premature to dismiss something even without
> > ever seeing the code?
>
> You told me you are using a dlm for a single-node application, is there
>
On Fri, Sep 02, 2005 at 11:17:08PM +0200, Andi Kleen wrote:
> Andrew Morton <[EMAIL PROTECTED]> writes:
>
> > > > - Why GFS is better than OCFS2, or has functionality which OCFS2 cannot
> > > > possibly gain (or vice versa)
> > > > - Relative merits of the two offerings
>
> You missed the important
On Tuesday 06 September 2005 02:55, Dmitry Torokhov wrote:
> On Tuesday 06 September 2005 01:48, Daniel Phillips wrote:
> > On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote:
> > > do you think it is a bit premature to dismiss something even without
> > > ever seeing the code?
> >
> > You
On Tuesday 06 September 2005 01:48, Daniel Phillips wrote:
> On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote:
> > do you think it is a bit premature to dismiss something even without
> > ever seeing the code?
>
> You told me you are using a dlm for a single-node application, is there
>
On Tuesday 06 September 2005 01:05, Dmitry Torokhov wrote:
> do you think it is a bit premature to dismiss something even without
> ever seeing the code?
You told me you are using a dlm for a single-node application, is there
anything more I need to know?
Regards,
Daniel
-
To unsubscribe from
On Monday 05 September 2005 19:37, Joel Becker wrote:
> OCFS2, the new filesystem, is fully general purpose. It
> supports all the usual stuff, is quite fast...
So I have heard, but isn't it time to quantify that? How do you think you
would stack up here:
On Monday 05 September 2005 23:58, Daniel Phillips wrote:
> On Tuesday 06 September 2005 00:07, Dmitry Torokhov wrote:
> > On Monday 05 September 2005 23:02, Daniel Phillips wrote:
> > > By the way, you said "alpha server" not "alpha servers", was that just a
> > > slip? Because if you don't have
On Tuesday 06 September 2005 00:07, Dmitry Torokhov wrote:
> On Monday 05 September 2005 23:02, Daniel Phillips wrote:
> > By the way, you said "alpha server" not "alpha servers", was that just a
> > slip? Because if you don't have a cluster then why are you using a dlm?
>
> No, it is not a slip.
On Monday 05 September 2005 23:02, Daniel Phillips wrote:
>
> By the way, you said "alpha server" not "alpha servers", was that just a
> slip?
> Because if you don't have a cluster then why are you using a dlm?
>
No, it is not a slip. The application is running on just one node, so we
do not
On Monday 05 September 2005 22:03, Dmitry Torokhov wrote:
> On Monday 05 September 2005 19:57, Daniel Phillips wrote:
> > On Monday 05 September 2005 12:18, Dmitry Torokhov wrote:
> > > On Monday 05 September 2005 10:49, Daniel Phillips wrote:
> > > > On Monday 05 September 2005 10:14, Lars
On Monday 05 September 2005 19:57, Daniel Phillips wrote:
> On Monday 05 September 2005 12:18, Dmitry Torokhov wrote:
> > On Monday 05 September 2005 10:49, Daniel Phillips wrote:
> > > On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote:
> > > > On 2005-09-03T01:57:31, Daniel Phillips
On Monday 05 September 2005 12:18, Dmitry Torokhov wrote:
> On Monday 05 September 2005 10:49, Daniel Phillips wrote:
> > On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote:
> > > On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> > > > The only current users of dlms are
On Mon, Sep 05, 2005 at 10:24:03PM +0200, Bernd Eckenfels wrote:
> The whole point of the Oracle cluster filesystem as it was described in old
> papers was about pfiles, control files and software, because you can easily
> use direct block access (with ASM) for tablespaces.
OCFS, the
On Sun, Sep 04, 2005 at 09:37:15AM +0100, Alan Cox wrote:
> I am curious why a lock manager uses open to implement its locking
> semantics rather than using the locking API (POSIX locks etc) however.
Because it is simple (how do you fcntl(2) from a shell fd?), has no
ranges (what do you
Alan Cox <[EMAIL PROTECTED]> wrote:
>
> On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote:
> > > - How are they ref counted
> > > - What are the cleanup semantics
> > > - How do I pass a lock between processes (AF_UNIX sockets won't work now)
> > > - How do I poll on a lock coming
On Llu, 2005-09-05 at 12:53 -0700, Andrew Morton wrote:
> > - How are they ref counted
> > - What are the cleanup semantics
> > - How do I pass a lock between processes (AF_UNIX sockets won't work now)
> > - How do I poll on a lock coming free.
> > - What are the semantics of lock ownership
>
On Mon, Sep 05, 2005 at 10:24:03PM +0200, Bernd Eckenfels wrote:
> On Mon, Sep 05, 2005 at 04:16:31PM +0200, Lars Marowsky-Bree wrote:
> > That is the whole point why OCFS exists ;-)
>
> The whole point of the Oracle cluster filesystem as it was described in old
> papers was about pfiles,
On Mon, Sep 05, 2005 at 04:16:31PM +0200, Lars Marowsky-Bree wrote:
> That is the whole point why OCFS exists ;-)
The whole point of the Oracle cluster filesystem as it was described in old
papers was about pfiles, control files and software, because you can easily
use direct block access (with
Alan Cox <[EMAIL PROTECTED]> wrote:
>
> On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote:
> > > create_lockspace()
> > > release_lockspace()
> > > lock()
> > > unlock()
> >
> > Neat. I'd be inclined to make them syscalls then. I don't suppose anyone
> > is likely to object
ctions in the "full" spec(1) that we
didn't even attempt, either because we didn't require direct
user<->kernel access or we just didn't need the function. As for the
rather thick set of parameters expected in dlm calls, we managed to get
dlmlock down to *ahem* eight, and the re
On Sad, 2005-09-03 at 21:46 -0700, Andrew Morton wrote:
> Actually I think it's rather sick. Taking O_NONBLOCK and making it a
> lock-manager trylock because they're kinda-sorta-similar-sounding? Spare
> me. O_NONBLOCK means "open this file in nonblocking mode", not "attempt to
> acquire a
On Llu, 2005-09-05 at 02:19 -0700, Andrew Morton wrote:
> > create_lockspace()
> > release_lockspace()
> > lock()
> > unlock()
>
> Neat. I'd be inclined to make them syscalls then. I don't suppose anyone
> is likely to object if we reserve those slots.
If the locks are not file
On Monday 05 September 2005 10:49, Daniel Phillips wrote:
> On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote:
> > On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> > > The only current users of dlms are cluster filesystems. There are zero
> > > users of the userspace
On Monday 05 September 2005 10:14, Lars Marowsky-Bree wrote:
> On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> > The only current users of dlms are cluster filesystems. There are zero
> > users of the userspace dlm api.
>
> That is incorrect...
Application users Lars, sorry
On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> The only current users of dlms are cluster filesystems. There are zero users
> of the userspace dlm api.
That is incorrect, and you're contradicting yourself here:
> What does have to be resolved is a common API for node
On 2005-09-03T09:27:41, Bernd Eckenfels <[EMAIL PROTECTED]> wrote:
> Oh thats interesting, I never thought about putting data files (tablespaces)
> in a clustered file system. Does that mean you can run supported RAC on
> shared ocfs2 files and anybody is using that?
That is the whole point why
On Mon, Sep 05, 2005 at 12:09:23AM -0700, Mark Fasheh wrote:
> Btw, I'm curious to know how useful folks find the ext3 mount options
> errors=continue and errors=panic. I'm extremely likely to implement the
> errors=read-only behavior as default in OCFS2 and I'm wondering whether the
> other two
On Mon, Sep 05, 2005 at 10:27:35AM +0200, Pavel Machek wrote:
>
> There's a better reason, too. I do swsusp. Then I'd like to boot with
> / mounted read-only (so that I can read my config files, some
> binaries, and maybe suspended image), but I absolutely may not write
> to disk at this point,
Hi,
On Sun, 2005-09-04 at 21:33, Pavel Machek wrote:
> > - read-only mount
> > - "spectator" mount (like ro but no journal allocated for the mount,
> > no fencing needed for failed node that was mounted as spectator)
>
> I'd call it "real-read-only", and yes, that's very useful
> mount.
On Mon, Sep 05, 2005 at 02:19:48AM -0700, Andrew Morton wrote:
> David Teigland <[EMAIL PROTECTED]> wrote:
> > Four functions:
> > create_lockspace()
> > release_lockspace()
> > lock()
> > unlock()
>
> Neat. I'd be inclined to make them syscalls then. I don't suppose anyone
> is likely
On Monday 05 September 2005 05:19, Andrew Morton wrote:
> David Teigland <[EMAIL PROTECTED]> wrote:
> > On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> > > David Teigland <[EMAIL PROTECTED]> wrote:
> > > > We export our full dlm API through read/write/poll on a misc device.
> > >
David Teigland <[EMAIL PROTECTED]> wrote:
>
> On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> > David Teigland <[EMAIL PROTECTED]> wrote:
> > >
> > > We export our full dlm API through read/write/poll on a misc device.
> > >
> >
> > inotify did that for a while, but we ended up
On Mon, Sep 05, 2005 at 01:54:08AM -0700, Andrew Morton wrote:
> David Teigland <[EMAIL PROTECTED]> wrote:
> >
> > We export our full dlm API through read/write/poll on a misc device.
> >
>
> inotify did that for a while, but we ended up going with a straight syscall
> interface.
>
> How fat is
On Mon, Sep 05, 2005 at 10:58:08AM +0200, Jörn Engel wrote:
> #define gfs2_assert(sdp, assertion) do { \
> if (unlikely(!(assertion))) { \
> printk(KERN_ERR "GFS2: fsid=%s\n", (sdp)->sd_fsname); \
> BUG();
On Mon, 5 September 2005 11:47:39 +0800, David Teigland wrote:
>
> Joern already suggested moving this out of line and into a function (as it
> was before) to avoid repeating string constants. In that case the
> function, file and line from BUG aren't useful. We now have this, does it
> look ok?
David Teigland <[EMAIL PROTECTED]> wrote:
>
> We export our full dlm API through read/write/poll on a misc device.
>
inotify did that for a while, but we ended up going with a straight syscall
interface.
How fat is the dlm interface? ie: how many syscalls would it take?
-
To unsubscribe from
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:
> > +void gfs2_glock_hold(struct gfs2_glock *gl)
> > +{
> > + glock_hold(gl);
> > +}
> >
> > eh why?
On 9/5/05, David Teigland <[EMAIL PROTECTED]> wrote:
> You removed the comment stating exactly why, see below. If that's not
On Sun, Sep 04, 2005 at 10:33:44PM +0200, Pavel Machek wrote:
> Hi!
>
> > - read-only mount
> > - "spectator" mount (like ro but no journal allocated for the mount,
> > no fencing needed for failed node that was mounted as spectator)
>
> I'd call it "real-read-only", and yes, that's very
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:
> +static unsigned int handle_roll(atomic_t *a)
> +{
> +	int x = atomic_read(a);
> +	if (x < 0) {
> +		atomic_set(a, 0);
> +		return 0;
> +	}
> +	return (unsigned int)x;
> +}
>
> this is just plain scary.
just didn't need the function. As for the
rather thick set of parameters expected in dlm calls, we managed to get
dlmlock down to *ahem* eight, and the rest are fairly slim.
Looking at the misc device that gfs uses, it seems like there is pretty
much complete interface to the same calls you have
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:
> +void gfs2_glock_hold(struct gfs2_glock *gl)
> +{
> + glock_hold(gl);
> +}
>
> eh why?
You removed the comment stating exactly why, see below. If that's not an
accepted technique in the kernel, say so and I'll be happy to
On Sat, Sep 03, 2005 at 10:41:40PM -0700, Andrew Morton wrote:
> Joel Becker <[EMAIL PROTECTED]> wrote:
> >
> > > What happens when we want to add some new primitive which has no
> > > posix-file analog?
> >
> > The point of dlmfs is not to express every primitive that the
> > DLM has.
>
> > > what is gfs2_assert() about anyway? please just use BUG_ON directly
> > > everywhere
> >
> > When a machine has many gfs file systems mounted at once it can be useful
> > to know which one failed. Does the following look ok?
> >
>
Hi!
> - read-only mount
> - "spectator" mount (like ro but no journal allocated for the mount,
> no fencing needed for failed node that was mounted as spectator)
I'd call it "real-read-only", and yes, that's very useful
mount. Could we get it for ext3, too?
On Sunday 04 September 2005 03:28, Andrew Morton wrote:
> If there is already a richer interface into all this code (such as a
> syscall one) and it's feasible to migrate the open() tricksies to that API
> in the future if it all comes unstuck then OK. That's why I asked (thus
> far
>takelock domainxxx lock1
>do stuff
>droplock domainxxx lock1
>
> When someone kills the shell, the lock is leaked, becuase droplock isn't
> called.
Why not open the lock resource (or the lock space) instead of
individual locks as file? It then looks like this:
open lock
On Sun, Sep 04, 2005 at 02:18:36AM -0700, Andrew Morton wrote:
> take-and-drop-lock -d domainxxx -l lock1 -e "do stuff"
Ahh, but then you have to have lots of scripts somewhere in
path, or do massive inline scripts. especially if you want to take
another lock in there somewhere.
Joel Becker <[EMAIL PROTECTED]> wrote:
>
> I can't see how that works easily. I'm not worried about a
> tarball (eventually Red Hat and SuSE and Debian would have it). I'm
> thinking about this shell:
>
> exec 7 do stuff
> exec 7
> If someone kills the shell while
On Sun, Sep 04, 2005 at 01:18:05AM -0700, Andrew Morton wrote:
> > I thought I stated this in my other email. We're not intending
> > to extend dlmfs.
>
> Famous last words ;)
Heh, of course :-)
> I don't buy the general "fs is nice because we can script it" argument,
> really.