Can we conclude this discussion? In summary:
* The current comparison function causes bad IO patterns for iterate on
the dspace db. We can change it but the disk format will change in new
releases.
- If we change it, either we check a version number and provide the
right compariso
e DB_RECNUM flag,
so hopefully that will improve performance (the doc says it can really
slow things down).
Let me know how these changes look, and if someone gets a chance to
look at performance differences, that would be great.
Thanks,
-sam
On Mar 7, 2007, at 2:39 PM, Phil Carns wrote:
acl-check-assert.patch:
This is a bug fix to the server side acl handling. It replaces an
assertion with normal error handling to prevent a server from crashing
if it encounters invalid acl information.
check-group.patch:
--
This follows up on some rec
initialize-dyn.patch:
-
This is a correction to the initialize-dyn test program. It previously
hardcoded the number of mounted file systems and would crash if a
different number were mounted.
mount-mem-leaks.patch:
--
This patch corrects multiple memory
fying the response structure after decoding.
-Phil
Phil Carns wrote:
I have done some more work on this, but unfortunately I don't have a
patch that I can share yet (it might be faster for you guys to just make
the change if you want it; I can point out one little gotcha that I
fo
This patch corrects a variety of error code problems:
- several BMI error codes were not tagged with the BMI error class,
which is important to allow client state machines to retry on network errors
- ditto above for a few flow errors
- ECONNRESET was not understood by BMI or included in the er
I am sending this patch in a separate email because it may need some
discussion to hash out. Sometime in the past several months, the
pvfs2_lookup() function in namei.c changed (I think along with something
not directly related, but I don't recall exactly what happened now).
This change cause
auto-sm-tracking.patch:
---
At some point, new linked lists were added to track state machines that
are currently running within the server. When an SM completes, it is
implicitly removed from the list. However, SMs that were started
without a request (ie internal state ma
I think we talked about this some when batting around patches for the db
startup issues, but I just wanted to mention it again now that the
patches are in trunk. Can we get the HAVE_DB_BUFFER_SMALL ifdef
wrappers applied to dbpf-dspace.c? Right now it doesn't compile on
older db libraries.
Whoops, one other thing to report; apparently not all db libraries have
the get_pagesize() function either. I happen to be trying this on a box
with version 4.1.25 of db.
-Phil
Phil Carns wrote:
I think we talked about this some when batting around patches for the db
startup issues, but I
Sam Lang wrote:
acl-check-assert.patch:
It seems like it should be possible to do that format checking of the
acl when the system.posix_acl_access extended attribute is set. Does
it make sense to add a callouts framework to set-eattr to do format
checking for sp
Hi Sam,
That seems reasonable to me...
-Phil
Sam Lang wrote:
Hi Phil,
In the special remove case, could we make the error checking even
tighter? Something like:
    if (resp_p->status == -PVFS_ENOENT && index == 1)
        return 0;
    if (resp_p->status != 0)
        return resp_p->status;
Other than
5 PM, Phil Carns wrote:
Whoops, one other thing to report; apparently not all db libraries
have the get_pagesize() function either. I happen to be trying this
on a box with version 4.1.25 of db.
-Phil
Phil Carns wrote:
I think we talked about this some when batting around patches fo
I just noticed that there is a test program in the source tree called
acache-torture.c that should probably be removed. It is in the
test/client/sysint directory.
I think that a patch I submitted a long time ago actually removed this
source file, but that part may have been overlooked. At an
Sam Lang wrote:
On Mar 21, 2007, at 8:35 AM, Phil Carns wrote:
Sam Lang wrote:
acl-check-assert.patch:
It seems like it should be possible to do that format checking of
the acl when the system.posix_acl_access extended attribute is
set. Does it make sense
more recent versions? Do you guys expect to
discontinue supporting 2.4 in the near future? Would it be possible to
say that future pvfs releases only support 2.6 (maybe even > 2.6.x)?
Anyone that has an older kernel has to use an older pvfs version?
-sam
On Mar 20, 2007, at 9:36
Sam Lang wrote:
There were some bugs in that patch, yeah. I've attached another patch
that fixes them. Even without the patch, tacl_xattr.sh reports
failures, but now the failures with and without the patch are the
same. I've attached the output of the tacl script for with and without
ng. Seems to work on 2.4 and 2.6 (as yours does).
On Mar 27, 2007, at 4:33 PM, Phil Carns wrote:
Just to clarify a little bit, there is actually a problem with the
2.6 code path here as well. I went back and ran some tests with and
without the namei.patch on a RHEL4 box (2.6.9-som
It looks like if a non-null dentry is returned from lookup, dput is
called on that dentry, which decrements the usage count. If null is
returned dput isn't called. Could it be that we're actually leaking
entries in the dcache with these patches?
-sam
Maybe? I'm not sure what's supp
Hi Phil,
Good idea to look at the other file systems. My (admittedly limited)
understanding of disconnected dentries is based on the Documentation/
filesystems/Exporting doc, which may be a bit outdated. It suggests
that lookup should return whatever d_splice_alias returns (assuming the
Murali Vilayannur wrote:
Hi Phil,
FWIW, if these patches haven't been committed, it looks good :)
I am really backlogged with all my emails.
auto-sm-tracking.patch:
---
At some point, new linked lists were added to track state machines that
are currently running within the s
These are all pretty minor:
mod-parm-desc.patch:
The pvfs2 kernel module is missing the parameter description for the
op_timeout_secs option that can be set at insmod time.
server-gossip-errno.patch:
--
The pvfs2-server is trying to use errno to pro
I had not seen this, but it looks like you have it sorted out already now.
How did this bug manifest itself in terms of what the user sees? Does
this cause an error when the servers start, or does something pop up
later when you create new files?
-Phil
Sam Lang wrote:
Hi Phil,
With the n
It looks like pvfs2 does not allow you to set xattrs on the root
directory. Is this expected?
# checking the mount options:
> mount -t pvfs2
tcp://localhost:3334/pvfs2-fs on /mnt/pvfs2 type pvfs2 (rw,user_xattr)
# confirming that xattrs can be set on a normal directory:
> setfattr -n user.pv
A while back I ran into a configuration problem where the fs.conf file
accidentally listed the wrong handle range for a particular server (this
happened after the file system was created).
In this case, the trove handle management detects this problem because
the handles on disk don't seem to
This is a very small patch- it just initializes a size variable in
sys-create.sm before it gets inserted into the attribute cache. I think
this bug appeared in trunk sometime after the 1.6.3 release.
The simplest way that I found to trigger the problem was with a test
program (compiled withou
Just pinging on this one again- has anyone else bumped into this problem?
-Phil
Phil Carns wrote:
It looks like pvfs2 does not allow you to set xattrs on the root
directory. Is this expected?
# checking the mount options:
> mount -t pvfs2
tcp://localhost:3334/pvfs2-fs on /mnt/pvfs2 t
Sorry I forgot to get back to you on this..
No I don't think this would work.
It is a kernel module issue that I hadn't tracked down fully..
Do you need this feature?
thanks,
Murali
On 6/27/07, Phil Carns <[EMAIL PROTECTED]> wrote:
Just pinging on this one again- has any one else bumpe
Murali
On 6/28/07, Phil Carns <[EMAIL PROTECTED]> wrote:
I am interested in this because it would be nice to be able to set
tuning options at the root directory level without modifying the server
side configuration files. I also wonder if this issue would cause
problems for ACLs on the roo
>> > That's why we don't even have a chance to subvert those checks..
>> > What do you guys suggest? Shall we step back and make the root
>> > directory non-sticky by default? That way admins will need to
>> > explicitly create user directories etc. and chown them.
hehe; that's why I asked you guys :)
I am ok either way!
I agree with Phil that it would be nice to set an xattr on the root
dir and let it get inherited for the entire fs, but I'm not sure we've
spent much time thinking about that. Certainly we already have a
config option for the distribution
pvfs2-error-cleanup.patch:
--
This patch updates a gossip_debug() message in trove to be a
gossip_err() in order to help diagnose configuration problems that can
happen if your configuration file is out of sync with your storage
space. It also changes several gossip_ler
You might want to try repeating the test with the pvfs2-client set to
disable the ncache and acache (set the timeout to zero either in proc or
with command line arguments). I don't know if they are playing any role
or not, but it may at least simplify the debugging a little.
-Phil
Pete Wyckoff
We have run into a problem with running "rm -rf" and "ls" concurrently
on the same directory from different client nodes. In the particular
case that we are looking at, the directory has about 7000 files in it
but no subdirectories. If we do an ls on the directory while an "rm
-rf" is running
Hi Phil,
The trove layer caches the position -> name mapping for positions it
returns back to the client on a readdir. The problem is probably
related to caching those entries, where the readdir for the rm is
iterating over the directory, and so inserting position -> name entries
into
Unfortunately, it doesn't look like disabling the pcache made any
difference. I first tried disabling it by adding a "return 0" at the
top of the PINT_dbpf_keyval_pcache_insert() function. I also tried
modifying the dbpf_keyval_iterate_skip_to_position() function to not
call PINT_dbpf_k
I just stumbled on to something while trying to clean up my previous
tests. If I have the pcache disabled, then rm -rf of a large
directory never works right, even if all of the other clients are
idle. Is it possible that we have the opposite problem? That
actually the pcache works fine
We have seen something that might be related, but we aren't sure. In
our case, we had servers running 2.6.9 (with epoll enabled) and clients
running an old 2.4 kernel (no epoll).
We saw that after about a week of very heavy loading, the clients would
get extremely slow for both metadata and I
pvfs2-aio-cancel.patch
--
This patch fixes a bug in the I/O cleanup path on the server side. In
cases where a flow needed to cancel pending I/O operations, the trove
cancel function was calling aio_cancel() directly. This doesn't work
correctly if the alt-aio implementatio
Looks great as well! Nice fix!
Is it enough to just do
tmp_pos = readdir_session; instead of +=?
Maybe? Assignment was definitely the intent, so what you suggest would
certainly look more reasonable. I think I may have been trying to avoid
a warning on some systems when assigning from a 32
One thing I noticed: For an unexpected receive, we do a peek on the
socket (recv(...MSG_PEEK)) and see if a full bmi header is there. If
it is, we set the socket back to blocking mode, do a blocking recv for
the header, and then set the socket back to non-blocking mode. Rob
pointed
Sam Lang wrote:
The attached patch is the proposed fix for this problem. When the tcp
method receives a disconnect from a peer, it invokes a callback
(bmi_method_addr_forget_callback) into the bmi control layer to remove
the address reference from the list. Maybe I should also add a cou
Hi Phil,
I can add a server flag to the tcp addr struct, and only call
forget_addr in that case, but it seems like a bit of hack. Can we just
toss the ref list in the bmi control layer and force methods to manage
their addresses? The ref list is only being used to map an opaque
addres
Thanks for tracking this down! I've been out of the office for a few
days so I am a little behind on the conversation. I have a comment
on the patch, though.
I think that for the most part, it is calling
bmi_method_addr_forget_callback() on pretty much any tcp network
error, since it is
This patch adds a new configuration parameter called
"RootSquashExceptions". It has the same value syntax as the existing
RootSquash parameter, but it allows you to list hosts or subnets that
are exempt from root squashing.
This is helpful if you want to root squash all clients (or at least a
These patches add some new test cases into the test subdirectory of PVFS2:
pvfs2-test-aio.patch:
-
This set of test programs allows comparison of normal I/O (pread/pwrite)
vs. aio vs. alt-aio (as implemented in pvfs2 with the "alt-aio" trove
method). There isn't much depth, b
This patch is not a bug fix. I doubt that anyone wants this behavior,
but I am sharing the patch just in case. It causes PVFS to _not_ report
an error if a user attempts to set unsupported sticky or setuid bits on
a file or directory. It is mainly helpful if you have a 3rd party
application
Whoops- sorry. Wrong patch. The previous one that I sent had a problem
in it. The attached one is corrected.
Phil Carns wrote:
Actually, one more follow up on this specific patch to wrap up. The
problem that I suspected actually does not occur, although it may have
been by luck
Actually, one more follow up on this specific patch to wrap up. The
problem that I suspected actually does not occur, although it may have
been by luck :) There is a tcp_addr_data->bmi_addr field that gets used
as an argument to the forget_callback() function. That bmi_addr field
is set to
need right now.
On Oct 17, 2007, at 10:33 AM, Phil Carns wrote:
Actually, one more follow up on this specific patch to wrap up. The
problem that I suspected actually does not occur, although it may
have been by luck :) There is a tcp_addr_data->bmi_addr field that
gets used as
What is the status of setuid bit support in PVFS these days?
super.c indicates that there is a "suid" mount option. PVFS seems to
accept this, but the option does not show up in the mount output once
the file system is mounted.
With or without that mount option, any attempts to chmod with th
Thanks for working on the update, Sam.
I'm kinda stumped on that statecomp issue too. I have a couple of random
ideas to throw out there, but I'm not really thrilled with them either:
- If we continue on the track of modifying the state machines, it might be
good to try to get rid of the manual
Thanks Sam- that sounds good. We will eventually be re-evaluating that
crdirent case again and will let you know if we uncover anything.
-Phil
>
> Hi Phil,
>
> Thanks for the patches. I committed your patches, except for the
> last one, I think we want to be able to keep doing crdirents
> concu
>
> On Oct 12, 2007, at 4:03 PM, Phil Carns wrote:
>
>> This patch is not a bug fix. I doubt that anyone wants this
>> behavior, but I am sharing the patch just in case. It causes PVFS
>> to _not_ report an error if a user attempts to set unsupported
>> s
pvfs2-db-multi-server.patch:
This patch fixes a problem that we have seen a few times but only recently
figured out how to reproduce. The situation is that one machine is
running a pvfs2-server and then a second pvfs2-server is started on the
same machine after a heart
Here is the diff (excerpt) that added the code from 1.68 to 1.69:
Index: src/client/sysint/sys-lookup.sm
--- src/client/sysint/sys-lookup.sm 13 Apr 2007 05:14:16 - 1.68
+++ src/client/sysint/sys-lookup.sm 20 Jun 2007 06:08:51 -
@@ -378,7 +378,16 @@
cur_seg->se
I think something has changed since 2.7.0 that causes some trouble for
NFS reexporting. If I NFS mount PVFS2 from the 2.7.0 release, things
look fine as far as I can tell. If I NFS mount PVFS2 from a current
trunk build, I get this behavior:
pcarns-vm linux # cp /etc/hosts /mnt/tmp/foo
cp: c
In the case where you keep the logic that is currently in head but just
change strrchr() to strchr(), what breaks from the previously
reported bug?
I tried making that change and it looks like both cases work fine, but I
may be missing something.
Maybe it has something to do with the nu
Somewhere along the line the PVFS trunk code picked up a regression that
causes dbench to wedge the client machine. I looked at this a little
bit today and observed several indirect problems:
1) dbench somehow kills pvfs2-client-core (not sure why yet)
2) pvfs2-client-core gets restarted, but
ay, I can't vouch for the current state of dbench with pvfs2 trunk;
I think it probably still hangs the client machine just as before.
-Phil
Robert Latham wrote:
On Thu, Jan 17, 2008 at 06:37:52PM -0600, Phil Carns wrote:
1) dbench somehow kills pvfs2-client-core (not sure why yet)
Hi P
just
for an example. I think I get the same hang if the file and subdir
already exist when pvfs is mounted.
-Phil
Phil Carns wrote:
Whoops - it looks like the pvfs2-client-core crash didn't really have
much to do w/ dbench. That was a red herring (a system/compile problem
on my end was what
dir1 dir1/dir2
root@(none):/mnt/pvfs2/testdir# ls
root@(none):/mnt/pvfs2/testdir#
That "mv" command is supposed to fail. I'm not sure where dir1 went :)
-Phil
Phil Carns wrote:
It looks like dbench is hanging on a rename of a file within a
subdirectory. I can replicate it ou
s:
grep "kernel: " mv-bug.log
# for pvfs2-client messages:
grep "PVFS2: \[D\]" mv-bug.log
-Phil
Phil Carns wrote:
It looks like dbench is hanging on a rename of a file within a
subdirectory. I can replicate it outside of dbench like this:
root@(none):/mnt/pvfs2# pwd
/mnt/pv
p the same
directory twice in a row. Right now we are actually hanging within this
lock_rename() function before getting to the actual PVFS2 rename function.
-Phil
Phil Carns wrote:
It looks like dbench is hanging on a rename of a file within a
subdirectory. I can replicate it outside of
on
dcache entries staying put for us to invalidate them automatically.
To my knowledge all of the mv, simul, touch, and dbench issues related
to the dcache are now working.
This may also fix the NFS re-exporting issue in trunk, but I have not
tried it yet.
-Phil
Phil Carns wrote:
Ok, I think I
While looking for an entirely unrelated bug, I stumbled onto something
odd. I found a test system that would fail to find some directory
entries when doing a readdir if there were more than 32 files in the
directory. On other test systems it worked fine.
Looking at strace, I saw that the sys
There is a thread in progress on the user's list about this issue:
http://www.beowulf-underground.org/pipermail/pvfs2-users/2008-January/002307.html
I think the summary is that you may be able to get away with this?
- make distclean
- configure
- manually clear the HAVE_KERNEL_DEVICE_CLASSES fl
is better now, and I can safely join the users list (and handle
the large attachments). Is the best thing to go there and join into
that thread?
Cheers,
Brad
On 1/31/08, *Phil Carns* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
wrote:
There is a thread in progress o
-Phil
Robert Latham wrote:
On Fri, Jan 04, 2008 at 12:16:52PM -0500, Phil Carns wrote:
I think something has changed since 2.7.0 that causes some trouble for
NFS reexporting. If I NFS mount PVFS2 from the 2.7.0 release, things
look fine as far as I can tell. If I NFS mount PVFS2 from a curre
I think this is all good stuff. For my 2c I think when this is
committed we may as well get rid of the unused ledger files. We will
still have them in cvs history if we want to refer back.
I think your latest patch still has one stray
"#ifdef TROVE_HANDLE_LEDGER_ENABLED" running around in it
I like this cleanup too.
One minor comment/suggestion. The elements in the big
PINT_server_req_table[] array have to be in order because they are
indexed like this:
PINT_server_req_table[req->op].params...
However, when the table is defined, that op value is also included in
the ta
I don't know how many people this is useful to, but there is a new tweak
in the configure script to try to do the right thing with kernel headers
if the ARCH environment variable is present.
One example would be to build pvfs2 against a kernel source tree
configured for user mode linux. You c
Pete Wyckoff wrote:
It is possible to mount pvfs via the kernel interface read-only, but
I'm not sure if we have a way to do that via userspace apps, e.g.
pvfs2-ls, MPI. For a remote-mount metadata mirroring project we've
been doing, this would be handy.
I think the only feasible way to handle
Hunh- I'm not sure when that quit working. I think your changes look
fine, though.
-Phil
Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 19 Feb 2008 14:57 -0500:
You wouldn't have to add read-only export capability- it's already there :)
Have a look at the "ReadOnly" option and comments
Sure, I think it would be fine to move it somewhere shared.
There is kind of a larger scale problem here too, but it would be
tedious to fix. If you take the client job bmi timeout, for example,
there are actually 4 places where defaults are defined:
- client-state-machine.h
- pvfs2-server.
Sam Lang wrote:
On Feb 14, 2008, at 4:34 PM, Pete Wyckoff wrote:
Add --enable option to build a threaded pvfs2-client-core, that is,
an executable linked against pvfs2-threaded.so.
Remove the "--threaded" argument from pvfs2-client. Whatever version
of client-core that was configured is all
Ohh- good catch. I think I have it fixed in cvs now.
-Phil
Probably need padding else later 64-bit parts will not be on 8-byte
boundaries.
-- Pete
___
Pvfs2-developers mailing list
Pvfs2-developers@beowulf-underground.org
http://ww
I don't know if IB or MX need to use the
bmi_method_addr_forget_callback() function. That function makes a
little more sense in the context of a particular tcp problem:
- each time a client opens a new tcp socket to a server, the server
creates a bmi_addr corresponding to that socket (so it c
The shortest way that I see is this:
1) call PINT_cached_config_get_server_name() to get the name of the
server that owns the handle in question (ie, "tcp://host:port")
2) call get_server_config_struct() to get a pointer to your local
server's configuration values
3) do a string comparison
On Mar 12, 2008, at 8:53 AM, Phil Carns wrote:
The shortest way that I see is this:
1) call PINT_cached_config_get_server_name() to get the name of the
server that owns the handle in question (ie, "tcp://host:port")
2) call get_server_config_struct() to get a pointer to your local
I needed to make some changes to list-attr.sm recently and realized that
it probably should really be redone now that we have parallel nested
machines available. listattr is a server operation that retrieves
attributes for a list of handles at once. The old version duplicated
logic from get-a
d
negative indexes, so a -1 index would give you the parent frame, and
0..N gives the frames on top of the stack (which is actually circular).
Pretty simple mod actually, I can try to create a patch for it and you
can try it out.
Keep me up to date on that other issue.
Walt
Phil Carns wro
walt wrote:
Patch is attached - based on the top of the trunk - it is only the
changes to PINT_sm_frame
Thanks!
The way this works is when an SM does a PJMP it pushes its children's
frames. The children will each get their own SMCB, so if they run a
PJMP they will have their own stack. S
Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 25 Mar 2008 15:10 -0500:
In fact our error handling here looks to be a little wrong. When we return
0 from d_revalidate, the kernel assumes the file has been removed and
populates the dcache with a negative dentry (then tries to create the fi
jian hu wrote:
Hello,
I'm new to pvfs2. I'm reading its code and trying to modify it a
little bit to change the striping style of the file system.
Can someone give me some tips or hints about where to start? Although I
have read the pvfs documents, in fact I can't even find
I'm going to commit a change to trunk to make pvfs2-genconfig
automatically set TroveMethod to alt-aio. I think we need to get some
mileage on it and plan on it being the default in the future. We can
revert it if anyone objects or we see any problems.
I think the alt-aio method should be a
Hi Walt,
I adjusted that list-attr.sm implementation in trunk to use the
mechanism you added. It seems to work fine and it looks much cleaner.
FYI, to follow up on the theory that the parallel sm version of
list-attr.sm would be a little faster, I did some simple timing of
pvfs2-lsplus on a
There are a couple of performance bug fixes in trunk and 2.7 branch now
that I thought might be of interest to the list.
The first went in about two weeks ago and fixed a condition variable bug
inside of the trove testcontext() function. There was a race condition
where it was possible for op
Rob Ross wrote:
Should we just replace pvfs2-ls with pvfs2-lsplus? -- Rob
I don't know of any reason not to. They are feature equivalent in terms
of command line arguments and output format.
-Phil
Could you break down what the app is doing at a little bit higher level
in this time frame? (ie, how many writes is it posting, how many reads
is it posting, which are concurrent, when it calls wait for each).
From what I can tell, it looks like there are 30 total isys_io's
posted; the first
Looks good to me.
The only minor thing I see is that there are a couple of gossip_err()
calls in there that need to be updated so they don't use the "PVFS2
server:" prefix on the messages.
While you are messing with that stuff, would it be possible to add a
"starting" message to pvfs2-client
run 'gdb -p 6888' to debug
run 'gdb -p 6888' to debug
If the file already exists, and is big enough, it works just fine.
So there is some race there that is notoriously timing sensitive, and
thus has been really erratic for us to reproduce. I have suspicions
it will only
There is a new trove method available in trunk now called "null-aio".
It can be selected by putting "TroveMethod null-aio" in the
section of the file system configuration file.
This is only useful for debugging purposes, because it deliberately
skips doing any file I/O on the server side. Pl
gs up quite a bit to go that
route.
-sam
On Apr 17, 2008, at 4:10 PM, Phil Carns wrote:
There is a new trove method available in trunk now called "null-aio".
It can be selected by putting "TroveMethod null-aio" in the
section of the file system configuration file.
The dmesg output probably will help, but I think that the inode
alloc/dealloc check itself is a little trigger happy:
https://trac.mcs.anl.gov/projects/pvfs/ticket/7
I didn't think it would actually break anything other than printing out
an unnecessary warning, but I never followed up on it. T
Hi Bart,
Could you try this patch out and see if it fixes your problem? This is
checked into trunk as well. This won't eliminate the inode alloc
warning, but I think it does actually fix the umount hang.
I also suspect that this same issue may affect a few other cases as
well, but it would
On Tue, May 13, 2008 at 7:55 AM, Phil Carns <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
Hi Bart,
Could you try this patch out and see if it fixes your problem? This
is checked into trunk as well. This won't eliminate the inode alloc
warning, but
I don't think we have any globally valid fixed sized identifier for the
servers. The only option I know of would be to use a handle (for
example, the first handle in the server's mapping) to represent each
server. I think just using the string name would be safer, though.
-Phil
Walter Ligon
There is a minor bug fix in trunk now that makes the "noatime" and
"nodiratime" mount options work correctly on PVFS. We have had support
for this option for a long time, but it has not worked correctly on
recent kernels at least.
To test it out, I tried recursively grepping through about 1000 f
Hi Nick,
I just committed a fix for this in trunk that you might want to merge to
your branch if you can confirm it fixes your problem.
I think the client side function that waits for state machines to
complete was grabbing the sm frame too soon. In your case that caused
it to get the error