Re: [Linux-cluster] Re: GFS, what's remaining

2005-09-02 Thread Wim Coekaerts
On Sat, Sep 03, 2005 at 02:42:36AM -0400, Daniel Phillips wrote:
> On Friday 02 September 2005 20:16, Mark Fasheh wrote:
> > As far as userspace dlm apis go, dlmfs already abstracts away a large part
> > of the dlm interaction...
> 
> Dumb question, why can't you use sysfs for this instead of rolling your own?

Because it's totally different. Have a look at what it does.



Re: GFS, what's remaining

2005-09-02 Thread Daniel Phillips
On Friday 02 September 2005 20:16, Mark Fasheh wrote:
> As far as userspace dlm apis go, dlmfs already abstracts away a large part
> of the dlm interaction...

Dumb question, why can't you use sysfs for this instead of rolling your own?

Side note: you seem to have deleted all the 2.6.12-rc4 patches.  Perhaps you 
forgot that there are dozens of lkml archives pointing at them?

Regards,

Daniel


Re: GFS, what's remaining

2005-09-02 Thread D. Hazelton
On Saturday 03 September 2005 02:14, Arjan van de Ven wrote:
> On Sat, 2005-09-03 at 13:18 +0800, David Teigland wrote:
> > On Thu, Sep 01, 2005 at 01:21:04PM -0700, Andrew Morton wrote:
> > > Alan Cox <[EMAIL PROTECTED]> wrote:
> > > > > - Why GFS is better than OCFS2, or has functionality which
> > > > > OCFS2 cannot possibly gain (or vice versa)
> > > > >
> > > > > - Relative merits of the two offerings
> > > >
> > > > You missed the important one - people actively use it and
> > > > have been for some years. Same reason we have NTFS, HPFS,
> > > > and all the others. On that alone it makes sense to include.
> > >
> > > Again, that's not a technical reason.  It's _a_ reason, sure. 
> > > But what are the technical reasons for merging gfs[2], ocfs2,
> > > both or neither?
> > >
> > > If one can be grown to encompass the capabilities of the other
> > > then we're left with a bunch of legacy code and wasted effort.
> >
> > GFS is an established fs, it's not going away, you'd be hard
> > pressed to find a more widely used cluster fs on Linux.  GFS is
> > about 10 years old and has been in use by customers in production
> > environments for about 5 years.
>
> but you submitted GFS2 not GFS.

I'd rather not step into the middle of this mess, but you clipped out 
a good portion that explains why he talks about GFS when he submitted 
GFS2.  Let me quote the post you've pulled that partial paragraph 
from: "The latest development cycle (GFS2) has focused on improving
performance, it's not a new file system -- the "2" indicates that it's 
not ondisk compatible with earlier versions."

In other words he didn't submit the original, but the new version of 
it that is not compatible with the original GFS on-disk format.  
While it is clear that GFS2 cannot claim the large installed user 
base or the proven capacity of the original (it is, after all, a new 
version that has incompatibilities) it can claim that as its 
heritage and what it's aiming towards, the same as ext3 can (and 
does) claim the power and reliability of ext2.

In this case I've been following this thread just for the hell of it, 
and I've noticed that there are some people who seem unwilling to 
even consider having GFS2 included in a mainline kernel for personal 
rather than technical reasons. That does not describe most of the people 
on this list, many of whom have helped debug the code (among other 
things), but it does describe a few.

I'll go back to being quiet now... 

DRH




Re: GFS, what's remaining

2005-09-02 Thread Arjan van de Ven
On Sat, 2005-09-03 at 13:18 +0800, David Teigland wrote:
> On Thu, Sep 01, 2005 at 01:21:04PM -0700, Andrew Morton wrote:
> > Alan Cox <[EMAIL PROTECTED]> wrote:
> > > > - Why GFS is better than OCFS2, or has functionality which OCFS2 cannot
> > > >   possibly gain (or vice versa)
> > > > 
> > > > - Relative merits of the two offerings
> > > 
> > > You missed the important one - people actively use it and have been for
> > > some years. Same reason we have NTFS, HPFS, and all the others. On
> > > that alone it makes sense to include.
> > 
> > Again, that's not a technical reason.  It's _a_ reason, sure.  But what are
> > the technical reasons for merging gfs[2], ocfs2, both or neither?
> > 
> > If one can be grown to encompass the capabilities of the other then we're
> > left with a bunch of legacy code and wasted effort.
> 
> GFS is an established fs, it's not going away, you'd be hard pressed to
> find a more widely used cluster fs on Linux.  GFS is about 10 years old
> and has been in use by customers in production environments for about 5
> years.

but you submitted GFS2 not GFS.




Re: GFS, what's remaining

2005-09-02 Thread Daniel Phillips
On Friday 02 September 2005 17:17, Andi Kleen wrote:
> The only thing that should be probably resolved is a common API
> for at least the clustered lock manager. Having multiple
> incompatible user space APIs for that would be sad.

The only current users of dlms are cluster filesystems.  There are zero users 
of the userspace dlm api.  Therefore, the (g)dlm userspace interface actually 
has nothing to do with the needs of gfs.  It should be taken out of the gfs 
patch and merged later, when or if user space applications emerge that need 
it.  Maybe in the meantime it will be possible to come up with a userspace 
dlm api that isn't completely repulsive.

Also, note that the only reason the two current dlms are in-kernel is because 
it supposedly cuts down on userspace-kernel communication with the cluster 
filesystems.  Then why should a userspace application bother with an 
awkward interface to an in-kernel dlm?  This is obviously suboptimal.  Why 
not have a userspace dlm for userspace apps, if indeed there are any 
userspace apps that would need to use dlm-style synchronization instead of 
more typical socket-based synchronization, or Posix locking, which is already 
exposed via a standard api?

There is actually nothing wrong with having multiple, completely different 
dlms active at the same time.  There is no urgent need to merge them into the 
one true dlm.  It would be a lot better to let them evolve separately and 
pick the winner a year or two from now.  Just think of the dlm as part of the 
cfs until then.

What does have to be resolved is a common API for node management.  It is not 
just cluster filesystems and their lock managers that have to interface to 
node management.  Below the filesystem layer, cluster block devices and 
cluster volume management need to be coordinated by the same system, and 
above the filesystem layer, applications also need to be hooked into it.  
This work is, in a word, incomplete.

Regards,

Daniel
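
For context, the "standard api" referred to above is POSIX fcntl() byte-range
locking.  A minimal userspace sketch of that interface (the file path is
purely illustrative) looks roughly like this:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct flock fl = {
		.l_type   = F_WRLCK,	/* exclusive (write) lock */
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,		/* 0 = lock the whole file */
	};

	int fd = open("/shared/lockfile", O_RDWR | O_CREAT, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (fcntl(fd, F_SETLKW, &fl) < 0) {	/* block until granted */
		perror("fcntl(F_SETLKW)");
		return 1;
	}
	/* ... critical section ... */
	fl.l_type = F_UNLCK;
	fcntl(fd, F_SETLK, &fl);		/* release the lock */
	close(fd);
	return 0;
}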


Re: GFS, what's remaining

2005-09-02 Thread Greg KH
On Fri, Sep 02, 2005 at 05:44:03PM +0800, David Teigland wrote:
> On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:
> 
> > +   gfs2_assert(gl->gl_sbd, atomic_read(&gl->gl_count) > 0,);
> 
> > what is gfs2_assert() about anyway? please just use BUG_ON directly
> > everywhere
> 
> When a machine has many gfs file systems mounted at once it can be useful
> to know which one failed.  Does the following look ok?
> 
> #define gfs2_assert(sdp, assertion)   \
> do {  \
> if (unlikely(!(assertion))) { \
> printk(KERN_ERR   \
> "GFS2: fsid=%s: fatal: assertion \"%s\" failed\n" \
> "GFS2: fsid=%s:   function = %s\n"\
> "GFS2: fsid=%s:   file = %s, line = %u\n" \
> "GFS2: fsid=%s:   time = %lu\n",  \
> sdp->sd_fsname, # assertion,  \
> sdp->sd_fsname,  __FUNCTION__,\
> sdp->sd_fsname, __FILE__, __LINE__,   \
> sdp->sd_fsname, get_seconds());   \
> BUG();\

You will already get the __FUNCTION__ (and hence the __FILE__ info)
directly from the BUG() dump, as well as the time from the syslog
message (turn on printk timestamps if you want a finer-grained
timestamp), so the majority of this macro is redundant with the BUG()
macro...

thanks,

greg k-h
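
A trimmed-down version along those lines might look roughly like the
following sketch (illustrative only, not the macro that was eventually
merged): keep the per-filesystem fsid, and leave location and timing to
BUG() and the log.

#define gfs2_assert(sdp, assertion)					\
do {									\
	if (unlikely(!(assertion))) {					\
		printk(KERN_ERR						\
		       "GFS2: fsid=%s: fatal: assertion \"%s\" failed\n", \
		       (sdp)->sd_fsname, # assertion);			\
		BUG();							\
	}								\
} while (0)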


Re: GFS, what's remaining

2005-09-02 Thread David Teigland
On Thu, Sep 01, 2005 at 01:21:04PM -0700, Andrew Morton wrote:
> Alan Cox <[EMAIL PROTECTED]> wrote:
> > > - Why GFS is better than OCFS2, or has functionality which OCFS2 cannot
> > >   possibly gain (or vice versa)
> > > 
> > > - Relative merits of the two offerings
> > 
> > You missed the important one - people actively use it and have been for
> > some years. Same reason we have NTFS, HPFS, and all the others. On
> > that alone it makes sense to include.
> 
> Again, that's not a technical reason.  It's _a_ reason, sure.  But what are
> the technical reasons for merging gfs[2], ocfs2, both or neither?
> 
> If one can be grown to encompass the capabilities of the other then we're
> left with a bunch of legacy code and wasted effort.

GFS is an established fs; it's not going away, and you'd be hard pressed to
find a more widely used cluster fs on Linux.  GFS is about 10 years old
and has been in use by customers in production environments for about 5
years.  It is a mature, stable file system with many features that have
been technically refined over years of experience and customer/user
feedback.  The latest development cycle (GFS2) has focused on improving
performance; it's not a new file system -- the "2" indicates that it's not
on-disk compatible with earlier versions.

OCFS2 is a new file system.  I expect they'll want to optimize for their
own unique goals.  When OCFS appeared everyone I know accepted it would
coexist with GFS, each in its niche like every other fs.  That's good:
OCFS and GFS help each other technically, even though they may eventually
compete in some areas (which can also be good).

Dave

Here's a random summary of technical features:

- cluster infrastructure: a lot of work, perhaps as much as gfs itself,
  has gone into the infrastructure surrounding and supporting gfs
- cluster infrastructure allows for easy cooperation with CLVM
- interchangeable lock/cluster modules:  gfs interacts with the external
  infrastructure, including the lock manager, through an interchangeable
  module allowing the fs to be adapted to different environments.
- a "nolock" module can be plugged in to use gfs as a local fs
  (can be selected at mount time, so any fs can be mounted locally;
  see the sketch after this list)
- quotas, acls, cluster flocks, direct io, data journaling,
  ordered/writeback journaling modes -- all supported
- gfs transparently switches to a different locking scheme for direct io
  allowing parallel non-allocating writes with no lock contention
- posix locks -- supported, although it's being reworked for better
  performance right now
- asynchronous locking, lock prefetching + read-ahead
- coherent shared-writeable memory mappings across the cluster
- nfs3 support (multiple nfs servers exporting one gfs is very common)
- extend fs online, add journals online
- full fs quiesce to allow for block level snapshot below gfs
- read-only mount
- "specatator" mount (like ro but no journal allocated for the mount,
  no fencing needed for failed node that was mounted as specatator)
- infrastructure in place for live ondisk inode migration, fs shrink
- stuffed dinodes, small files are stored in the disk inode block
- tunable (fuzzy) atime updates
- fast, nondisruptive stat on files during non-allocating direct-io
- fast, nondisruptive statfs (df) even during heavy fs usage
- friendly handling of io errors: shut down fs and withdraw from cluster
- largest GFS cluster deployed was around 200 nodes, most are much smaller
- use many GFS file systems at once on a node and in a cluster
- customers use GFS for: scientific apps, HA, NFS serving, database,
  others I'm sure
- graphical management tools for gfs, clvm, and the cluster infrastructure
  exist and are improving quickly
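
As a rough illustration of the mount-time lock module selection mentioned
above, mounting locally with the "nolock" module might look like this from
userspace (device, mount point and the exact option spelling are assumptions,
not taken from the GFS submission):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* hypothetical device and mount point; "lockproto=lock_nolock"
	 * selects the nolock module so the fs behaves as a local fs */
	if (mount("/dev/vg0/lv_gfs", "/mnt/gfs", "gfs", 0,
		  "lockproto=lock_nolock") < 0) {
		perror("mount");
		return 1;
	}
	return 0;
}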



Re: GFS, what's remaining

2005-09-02 Thread Mark Fasheh
On Fri, Sep 02, 2005 at 11:17:08PM +0200, Andi Kleen wrote:
> The only thing that should be probably resolved is a common API
> for at least the clustered lock manager. Having multiple
> incompatible user space APIs for that would be sad.
As far as userspace dlm apis go, dlmfs already abstracts away a large part
of the dlm interaction, so writing a module against another dlm looks like
it wouldn't be too bad (startup of a lockspace is probably the most
difficult part there).
--Mark

--
Mark Fasheh
Senior Software Developer, Oracle
[EMAIL PROTECTED]
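
For readers unfamiliar with dlmfs: as documented for OCFS2, a lockspace is a
directory, a lock is a file, the open mode selects the lock level, and
close() drops the lock.  A minimal userspace sketch (the mount point,
lockspace and lock names here are illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* create (or reuse) a lockspace; /dlm is the usual dlmfs mount point */
	mkdir("/dlm/example-space", 0755);

	/* O_RDWR requests an exclusive lock, O_RDONLY a shared one;
	 * adding O_NONBLOCK turns the open into a trylock */
	int fd = open("/dlm/example-space/mylock", O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open (lock acquire)");
		return 1;
	}
	/* ... this node now holds the exclusive lock ... */
	close(fd);	/* closing the fd releases the lock */
	return 0;
}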


Re: GFS, what's remaining

2005-09-02 Thread Bryan Henderson
I have to correct an error in perspective, or at least in wording, in the 
following, because it affects how people see the big picture when trying to 
decide how the filesystem types in question fit into the world:

>Shared storage can be more efficient than network file
>systems like NFS because the storage access is often more efficient
>than network access

The shared storage access _is_ network access.  In most cases, it's a 
fibre channel/FCP network.  Nowadays, it's more and more common for it to 
be a TCP/IP network just like the one folks use for NFS (but carrying 
ISCSI instead of NFS).  It's also been done with a handful of other 
TCP/IP-based block storage protocols.

The reason the storage access is expected to be more efficient than the 
NFS access is because the block access network protocols are supposed to 
be more efficient than the file access network protocols.

In reality, I'm not sure there really is such a difference in efficiency 
between the protocols.  The demonstrated differences in efficiency, or at 
least in speed, are due to other things that are different between a given 
new shared block implementation and a given old shared file 
implementation.

But there's another advantage to shared block over shared file that hasn't 
been mentioned yet:  some people find it easier to manage a pool of blocks 
than a pool of filesystems.

>it is more reliable because it doesn't have a
>single point of failure in form of the NFS server.

This advantage isn't because it's shared (block) storage, but because it's 
a distributed filesystem.  There are shared storage filesystems (e.g. IBM 
SANFS, ADIC StorNext) that have a centralized metadata or locking server 
that makes them unreliable (or unscalable) in the same ways as an NFS 
server.

--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems



Re: GFS, what's remaining

2005-09-02 Thread Andi Kleen
Andrew Morton <[EMAIL PROTECTED]> writes:

> 
> > > - Why GFS is better than OCFS2, or has functionality which OCFS2 cannot
> > >   possibly gain (or vice versa)
> > > 
> > > - Relative merits of the two offerings
> > 
> > You missed the important one - people actively use it and have been for
> > some years. Same reason with have NTFS, HPFS, and all the others. On
> > that alone it makes sense to include.
>  
> Again, that's not a technical reason.  It's _a_ reason, sure.  But what are
> the technical reasons for merging gfs[2], ocfs2, both or neither?

There clearly seems to be a need for a shared-storage fs of some sort
for HA clusters and virtualized usage (multiple guests sharing a
partition).  Shared storage can be more efficient than network file
systems like NFS because the storage access is often more efficient
than network access, and it is more reliable because it doesn't have a
single point of failure in the form of the NFS server.

It's also a logical extension of the "failover on failure" clusters
many people run now - instead of only failing over the shared fs at
failure and keeping one machine idle, the load can be balanced between
multiple machines at any time.

One argument to merge both might be that nobody really knows yet which
shared-storage file system (GFS or OCFS2) is better. The only way to
find out would be to let the user base try out both, and that's most
practical when they're merged.

Personally I think ocfs2 has nicer & cleaner code than GFS.
It seems to be more or less a 64bit ext3 with cluster support, while
GFS seems to reinvent a lot more things and has somewhat uglier code.
On the other hand GFS's cluster support seems to be aimed more
at being a universal cluster service open to other usages too,
which might be a good thing. OCFS2's cluster support seems to be
aimed more at serving only the file system.

But which one works better in practice is really an open question.

The only thing that should be probably resolved is a common API
for at least the clustered lock manager. Having multiple
incompatible user space APIs for that would be sad.

-Andi


Re: [PATCH] ia_attr_flags - time to die

2005-09-02 Thread Daniel Phillips
On Friday 02 September 2005 15:41, Miklos Szeredi wrote:
> Already dead ;)
>
> 2.6.13-mm1: remove-ia_attr_flags.patch
>
> Miklos

Wow, the pace of Linux development really is picking up.  Now patches are 
applied before I even send them!

Regards,

Daniel


Re: [PATCH] ia_attr_flags - time to die

2005-09-02 Thread Miklos Szeredi
Already dead ;)

2.6.13-mm1: remove-ia_attr_flags.patch

Miklos


[PATCH] ia_attr_flags - time to die

2005-09-02 Thread Daniel Phillips
Struct iattr is not involved any more in such things as NOATIME inode flags.
There are no in-tree users of ia_attr_flags.

Signed-off-by: Daniel Phillips <[EMAIL PROTECTED]>

diff -up --recursive 2.6.13-rc5-mm1.clean/fs/hostfs/hostfs.h 2.6.13-rc5-mm1/fs/hostfs/hostfs.h
--- 2.6.13-rc5-mm1.clean/fs/hostfs/hostfs.h	2005-08-09 18:23:11.0 -0400
+++ 2.6.13-rc5-mm1/fs/hostfs/hostfs.h   2005-09-01 17:54:40.0 -0400
@@ -49,7 +49,6 @@ struct hostfs_iattr {
struct timespec ia_atime;
struct timespec ia_mtime;
struct timespec ia_ctime;
-   unsigned intia_attr_flags;
 };
 
 extern int stat_file(const char *path, unsigned long long *inode_out,
diff -up --recursive 2.6.13-rc5-mm1.clean/include/linux/fs.h 2.6.13-rc5-mm1/include/linux/fs.h
--- 2.6.13-rc5-mm1.clean/include/linux/fs.h	2005-08-09 18:23:31.0 -0400
+++ 2.6.13-rc5-mm1/include/linux/fs.h   2005-09-01 18:27:42.0 -0400
@@ -282,19 +282,9 @@ struct iattr {
struct timespec ia_atime;
struct timespec ia_mtime;
struct timespec ia_ctime;
-   unsigned intia_attr_flags;
 };
 
 /*
- * This is the inode attributes flag definitions
- */
-#define ATTR_FLAG_SYNCRONOUS   1   /* Syncronous write */
-#define ATTR_FLAG_NOATIME  2   /* Don't update atime */
-#define ATTR_FLAG_APPEND   4   /* Append-only file */
-#define ATTR_FLAG_IMMUTABLE8   /* Immutable file */
-#define ATTR_FLAG_NODIRATIME   16  /* Don't update atime for directory */
-
-/*
  * Includes for diskquotas.
  */
 #include <linux/quota.h>


[PATCH] document mark_inode_dirty & mark_inode_dirty_sync in fs.h

2005-09-02 Thread Dave Kleikamp
On Fri, 2005-09-02 at 10:46 -0600, Andreas Dilger wrote:
> On Sep 02, 2005  07:42 -0500, Dave Kleikamp wrote:
> > They put the inode on the superblock's dirty list and mark the inode as
> > dirty in the i_state field.  This makes sure that the inode will
> > eventually be written to disk.
> > 
> > mark_inode_dirty_sync only sets the I_DIRTY_SYNC flag, which does not
> > imply that any file data was changed.  It is called when a minor change
> > is made to an inode, such as a timestamp change.  Some sync
> > operations will only write the inode if data was written, so they can avoid
> > writing an inode that is only dirtied by I_DIRTY_SYNC.
> > 
> > mark_inode_dirty sets I_DIRTY which is I_DIRTY_SYNC | I_DIRTY_DATASYNC |
> > I_DIRTY_PAGES.  This indicates that the in-memory inode has changes to
> > the data that have not yet been written to disk.
> 
> Dave, could you consider submitting a patch to add the above as comments
> to fs.h for future reference?
> 
> Cheers, Andreas

How about this?
=
Document mark_inode_dirty and mark_inode_dirty_sync in fs.h

Signed-off-by: Dave Kleikamp <[EMAIL PROTECTED]>

diff --git a/include/linux/fs.h b/include/linux/fs.h
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1048,7 +1048,7 @@ struct super_operations {
 /* Inode state bits.  Protected by inode_lock. */
 #define I_DIRTY_SYNC   1 /* Not dirty enough for O_DATASYNC */
 #define I_DIRTY_DATASYNC   2 /* Data-related inode changes pending */
-#define I_DIRTY_PAGES  4 /* Data-related inode changes pending */
+#define I_DIRTY_PAGES  4 /* Data changes pending */
 #define __I_LOCK   3
 #define I_LOCK (1 << __I_LOCK)
 #define I_FREEING  16
@@ -1059,11 +1059,19 @@ struct super_operations {
 #define I_DIRTY (I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES)
 
 extern void __mark_inode_dirty(struct inode *, int);
+/*
+ * mark_inode_dirty indicates pending changes to the inode's data.
+ * Puts inode on superblock's dirty list.
+ */
 static inline void mark_inode_dirty(struct inode *inode)
 {
__mark_inode_dirty(inode, I_DIRTY);
 }
 
+/*
+ * mark_inode_dirty_sync indicates non-data related changes to the inode,
+ * such as a change to a timestamp.  Puts inode on superblock's dirty list.
+ */
 static inline void mark_inode_dirty_sync(struct inode *inode)
 {
__mark_inode_dirty(inode, I_DIRTY_SYNC);

-- 
David Kleikamp
IBM Linux Technology Center



Re: mark_inode_dirty vs mark_inode_dirty_sync

2005-09-02 Thread Dave Kleikamp
On Fri, 2005-09-02 at 11:55 +0200, David Sanchez wrote:
> Hi,
> Please, could somebody explain to me what the mark_inode_dirty* functions
> do, and what is the difference between mark_inode_dirty and
> mark_inode_dirty_sync?

They put the inode on the superblock's dirty list and mark the inode as
dirty in the i_state field.  This makes sure that the inode will
eventually be written to disk.

mark_inode_dirty_sync only sets the I_DIRTY_SYNC flag, which does not
imply that any file data was changed.  It is called when a minor change
is made to an inode, such as a timestamp change.  Some sync
operations will only write the inode if data was written, so they can avoid
writing an inode that is only dirtied by I_DIRTY_SYNC.

mark_inode_dirty sets I_DIRTY which is I_DIRTY_SYNC | I_DIRTY_DATASYNC |
I_DIRTY_PAGES.  This indicates that the in-memory inode has changes to
the data that have not yet been written to disk.

Shaggy
-- 
David Kleikamp
IBM Linux Technology Center
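
To make the distinction concrete, a hypothetical filesystem (names invented
for illustration) would use the two helpers roughly like this:

#include <linux/fs.h>

/* timestamp-only change: no file data involved */
static void examplefs_touch_atime(struct inode *inode)
{
	inode->i_atime = CURRENT_TIME;
	mark_inode_dirty_sync(inode);	/* sets only I_DIRTY_SYNC */
}

/* data changed, e.g. the file was extended by a write */
static void examplefs_after_write(struct inode *inode, loff_t new_size)
{
	i_size_write(inode, new_size);
	mark_inode_dirty(inode);	/* sets the full I_DIRTY mask */
}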



Re: GFS, what's remaining

2005-09-02 Thread Jörn Engel
On Fri, 2 September 2005 17:44:03 +0800, David Teigland wrote:
> On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:
> 
> > +   gfs2_assert(gl->gl_sbd, atomic_read(&gl->gl_count) > 0,);
> 
> > what is gfs2_assert() about anyway? please just use BUG_ON directly
> > everywhere
> 
> When a machine has many gfs file systems mounted at once it can be useful
> to know which one failed.  Does the following look ok?
> 
> #define gfs2_assert(sdp, assertion)   \
> do {  \
> if (unlikely(!(assertion))) { \
> printk(KERN_ERR   \
> "GFS2: fsid=%s: fatal: assertion \"%s\" failed\n" \
> "GFS2: fsid=%s:   function = %s\n"\
> "GFS2: fsid=%s:   file = %s, line = %u\n" \
> "GFS2: fsid=%s:   time = %lu\n",  \
> sdp->sd_fsname, # assertion,  \
> sdp->sd_fsname,  __FUNCTION__,\
> sdp->sd_fsname, __FILE__, __LINE__,   \
> sdp->sd_fsname, get_seconds());   \
> BUG();\
> } \
> } while (0)

That's a lot of string constants.  I'm not sure how smart current
versions of gcc are, but older ones created a new constant for each
invocation of such a macro, iirc.  So you might want to move the code
out of line.

Jörn

-- 
There's nothing better for promoting creativity in a medium than
making an audience feel "Hmm -- I could do better than that!"
-- Douglas Adams in a slashdot interview
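
Moving the slow path out of line, as suggested, might look roughly like this
sketch (function and argument names are invented for illustration; this is
not the code that was merged):

void gfs2_assert_failed(struct gfs2_sbd *sdp, const char *assertion,
			const char *function, const char *file,
			unsigned int line)
{
	printk(KERN_ERR "GFS2: fsid=%s: fatal: assertion \"%s\" failed\n"
	       "GFS2: fsid=%s:   function = %s, file = %s, line = %u\n",
	       sdp->sd_fsname, assertion,
	       sdp->sd_fsname, function, file, line);
	BUG();
}

#define gfs2_assert(sdp, assertion)					\
do {									\
	if (unlikely(!(assertion)))					\
		gfs2_assert_failed((sdp), # assertion,			\
				   __FUNCTION__, __FILE__, __LINE__);	\
} while (0)

Only one copy of the format strings ends up in the image, and each call site
shrinks to a test and a function call.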


mark_inode_dirty vs mark_inode_dirty_sync

2005-09-02 Thread David Sanchez
Hi,
Please, could somebody explain to me what the mark_inode_dirty* functions
do, and what is the difference between mark_inode_dirty and
mark_inode_dirty_sync?

Thanks


David SANCHEZ



Re: GFS, what's remaining

2005-09-02 Thread David Teigland
On Thu, Sep 01, 2005 at 01:35:23PM +0200, Arjan van de Ven wrote:

> + gfs2_assert(gl->gl_sbd, atomic_read(&gl->gl_count) > 0,);

> what is gfs2_assert() about anyway? please just use BUG_ON directly
> everywhere

When a machine has many gfs file systems mounted at once it can be useful
to know which one failed.  Does the following look ok?

#define gfs2_assert(sdp, assertion)   \
do {  \
if (unlikely(!(assertion))) { \
printk(KERN_ERR   \
"GFS2: fsid=%s: fatal: assertion \"%s\" failed\n" \
"GFS2: fsid=%s:   function = %s\n"\
"GFS2: fsid=%s:   file = %s, line = %u\n" \
"GFS2: fsid=%s:   time = %lu\n",  \
sdp->sd_fsname, # assertion,  \
sdp->sd_fsname,  __FUNCTION__,\
sdp->sd_fsname, __FILE__, __LINE__,   \
sdp->sd_fsname, get_seconds());   \
BUG();\
} \
} while (0)



Re: GFS, what's remaining

2005-09-02 Thread David Teigland
On Thu, Sep 01, 2005 at 06:56:03PM +0100, Christoph Hellwig wrote:

> Whether the gfs2 code is mergeable is a completely different question,
> and it seems at least debatable to submit a filesystem for inclusion

I actually asked what needs to be done for merging.  We appreciate the
feedback and are carefully studying and working on all of it as usual.
We'd also appreciate help, of course, if that sounds interesting to
anyone.

Thanks
Dave
