On Jan 3, 2011, at 1:49 PM, Rob Ross wrote:
Hi Julian,
Probably I'm being slow, just coming back from the holidays, but I think that
the issue is that your data is noncontiguous in memory? Current ROMIO doesn't
do buffering into a contiguous region prior to writing to PVFS (i.e., data
Hi Bart,
Attached is a patch that should fix the segv. It looks like there might be
multiple bugs causing this problem. Working from the top down:
1. I/O operations at the server are timing out. This could just be due to
server load or timeout values that are too small depending on the
before the cancel is called.
-sam
pint-mgmt-opid-cleanups.patch
Description: Binary data
On Oct 20, 2010, at 10:19 AM, Sam Lang wrote:
save three which were in __lll_lock_wait.
Ok, sounds like you sent the right backtrace. Those other threads are I/O
threads just waiting for something to do.
-sam
Thanks again,
Bart.
On Wed, Oct 20, 2010 at 9:58 AM, Sam Lang sl...@mcs.anl.gov wrote:
I think the reason
in the server logs besides timeouts?
-sam
I attached a text file with the results of trying to delete files that
return ENOENT and the Input/output error message. Let me know if I can
provide anything else.
Thanks for your help,
Bart.
On Tue, Jun 29, 2010 at 12:40 PM, Sam Lang sl
: Assertion `0' failed.
Aborted
Bart.
On Fri, Jun 18, 2010 at 8:15 AM, Sam Lang sl...@mcs.anl.gov wrote:
Hi Bart,
When you run the script, do you see any timeout error messages in the client
log?
-sam
On Jun 18, 2010, at 9:03 AM, Bart Taylor wrote:
Hey Phil,
Yes, it is running 2.8.2. My setup was using 3 servers with 2.6.18-194.el5
kernels and High Availability. I have not had a chance
On May 3, 2010, at 2:16 PM, Bart Taylor wrote:
Now that I have circled back around to this, I found that directory creation
isn't getting squashed either. I see two different cases when issuing a mkdir:
- In the first, the mkdir request enters the prelude state machine, but after
the
On Apr 9, 2010, at 1:28 PM, Jiri Kortus wrote:
Hello,
I'm trying to compile PVFS server and PVFS kernel module to make it work with
pNFS as described in this guide
http://www.citi.umich.edu/projects/asci/pnfs/docs/pnfspvfs2.html. I'm using
Debian 5.0.4 with kernel 2.6.33-rc6 with pNFS
On Mar 3, 2010, at 8:46 PM, Yonggang Liu wrote:
Hello all,
I'm pretty new to PVFS2, and now I'm doing a project about implementing
interposed scheduler between the PVFS2 clients and PVFS2 servers to provide
QoS. So I have read some code and doc about the PVFS2 protocols. But for a
From: Sam Lang [sl...@mcs.anl.gov]
Sent: Thursday, February 04, 2010 9:08 AM
To: Allan, Benjamin
Cc: pvfs2-developers@beowulf-underground.org
Subject: Re: [Pvfs2-developers] 2.6.32.2 vs trunk and 2.8.1 cvs branch
On Feb 4, 2010, at 10:48 AM, Allan, Benjamin wrote:
Both branches are failing to build (using the attached config, build scripts)
on 2.6.32.2 with the same error.
Just to be sure, what tag did you use to checkout PVFS?
And of course a view through lxr shows no such struct elements as those
Announcing version 2.8.2 of PVFS
===
The PVFS project has a new minor release, version 2.8.2. This release
includes important bug fixes since the 2.8.1 release. Details can be found
in the ChangeLog and at: http://www.pvfs.org/news.php#32.
You can download the release
On Jan 25, 2010, at 12:59 PM, Walter Ligon wrote:
In the current head how do you use the FUSE stuff? Just --enable-fuse, or do
you have to do anything else (like tell it which kernel or anything)?
Just --enable-fuse.
How about startup, what do you do to configure it and get it going?
Hi Kazuki,
I committed your fix. Thanks for the patch!
-sam
On Jan 1, 2010, at 8:19 AM, Kazuki Ohta wrote:
Hi, and Happy New Year!
I met the problem in using TroveMethod directio.
pvfs2-server sometimes outputs the following error, and the clients hung up.
[D 12/31 19:15] PVFS2 Server
Hi Randall,
Any messages in the client log? These operations are waiting for the client
daemon, which isn't responding. Do you see a pvfs2-client-core process on that
node?
-sam
On Dec 18, 2009, at 12:30 PM, Randall Martin wrote:
I have been seeing a lot of process hangs/loops on our
Michael,
Are you using valgrind? I'm curious what motivated this commit?
-sam
On Sep 4, 2009, at 9:27 AM, Michael Moore wrote:
Michael Moore wrote:
Pete Wyckoff wrote:
mtmo...@clemson.edu wrote on Fri, 04 Sep 2009 09:29 -0400:
Pete Wyckoff wrote:
w...@clemson.edu wrote on Thu, 03 Sep
Hi Randy,
I don't have any ideas where the problem is, but could you try running
the server in gdb? That may give you a better backtrace. The code
that generates the backtrace and writes it to the log when a segfault
occurs isn't that reliable and may not work on your system. Also, you
Hi Dave,
Does restarting the servers help? Are the client nodes getting
rebooted ever?
-sam
On Jul 1, 2009, at 4:43 PM, David Bonnie wrote:
Hello all -
I'm having trouble figuring out a problem with performance
degradation on a simple 10-node cluster. Prior runs on the cluster
On Jul 1, 2009, at 5:05 PM, David Bonnie wrote:
Rob -
Performance is down across all PVFS2 installations. The benchmark
simply creates files of a random size (between 1 and 25 MB) in a
single folder on the mounted PVFS2 partition, 16 KB at a time. It's
not anywhere near ideal, but
David,
It sounds like your initial thought (that there is a network problem)
could be correct. I would probably explore that first. What sort of
numbers do you get from netpipe runs (or even bmi_pingpong) between
client and server?
-sam
On Jul 1, 2009, at 5:15 PM, David Bonnie wrote:
of the nodes checked out fine with netpipe, still no errors on
any of the adapters.
- Dave
On Wed, Jul 1, 2009 at 4:47 PM, Sam Lang sl...@mcs.anl.gov wrote:
On Jul 1, 2009, at 5:45 PM, David Bonnie wrote:
I'll run it on each node and let you know if anything is out of
place. I believe
On Jun 24, 2009, at 9:22 AM, Nicholas Mills wrote:
Hey all,
Can someone quickly explain to me why sys-symlink.sm (in the client
code) now uses batch create with a fixed size of one? What prevents
us from using the new create code? This change was merged in by phil
with the small files
the right approach,
especially since PVFS is completely open source, and anyone can just
look at the code.
-sam
Thanks for your response,
Nick
On Wed, Jun 24, 2009 at 2:03 PM, Sam Lang sl...@mcs.anl.gov wrote:
On Jun 24, 2009, at 3:55 PM, Sam Lang wrote:
It sounds like your approach to eliminating security holes is with
security by obscurity. In other words, if the client (or some
rogue process acting as a client) does not know that the interface
is there, he can't abuse it. I don't think
On Jun 16, 2009, at 1:08 PM, Nicholas Mills wrote:
Hey all,
I seem to remember someone talking to me once about a tool that
could parse the state machine files and produce a visual graph of
all the different states. Anyone know where I can find it?
maint/pvfs2smdot.pl
-sam
Thanks,
their own data space. It might add just a little extra
load on the node. I am guessing PVFS takes some advantage of putting
metadata and data on same node. Can someone fill me in as to what they
are.
Thanks,
Sumit.
On Thu, May 28, 2009 at 5:17 PM, Sam Lang sl...@mcs.anl.gov wrote:
Hi David,
I
Hi David,
I don't see any problems with putting that in future releases once it's
ready. It would be great if you could update the developers list with
your planned changes, and possibly send along patches as you have
them, allowing everyone to provide input, etc.
-sam
On May 28, 2009,
That's not a very useful error, but it looks like there might be
something wrong with your dspace db. Can you run it in gdb, set a
breakpoint at dbpf-dspace.c:650, step to the next line and print the
result of ret? Line 650 should have:
ret = dbc_p->c_get(dbc_p, &key, &data,
Hi Jozef,
Thanks for adding IPv6 support to PVFS. Send us the patch and we'll
test it on our cluster.
-sam
On Apr 8, 2009, at 6:00 PM, Jozef Pajzinka wrote:
Hi,
I'm working on support for IPv6 in PVFS2. I've already changed the bmi
interface to be able to flexibly work over both ipv6
Hi Walt,
With the single config file in place now, the server has to be told
(with the -a command line option) or guess which server it is. It
guesses by simply using the hostname, and you may have run into a
bug. I'll play with it and see if I can make it more robust. In the
the install guide should simply warn the user that it might be
an issue.
Thanks for your help Sam!
Walt
stripped and unstripped version for
convenience - document accordingly
Opinions?
Walt
Sam Lang wrote:
Sounds good. Do you want to update the install guide? I'm not
sure what text you're referring to exactly. It's in CVS at
doc/pvfs2-quickstart.tex. If you just want to send me the paragraph
Announcing version 2.8.0 of PVFS
===
The PVFS project has a new major release, version 2.8.0. This release
includes a number of new features, improving performance for a variety
of different workloads and access patterns. It also includes bug fixes
since
the previous
timeouts to the tcp
testcontext call, but it requires a lot more smarts in the code to get
that right in general, whereas the callback option just allows you to
get completion right away.
-sam
Rob
On Jan 7, 2009, at 4:06 PM, Sam Lang wrote:
Hi All,
Right now if multiple methods
Hi All,
Right now if multiple methods are enabled in BMI, we tend to get poor
performance from the fast network, because BMI_testcontext iterates
through all the active methods calling testcontext for each one. It
tries to be smart about which methods get scheduled ;-) to prevent
in a separate thread.
Nawab, in the zoidfs init code after initializing BMI you need to call:
int check = 0;
BMI_set_info(0, BMI_TCP_CHECK_UNEXPECTED, &check);
-sam
On Dec 23, 2008, at 2:01 PM, Phil Carns wrote:
Sam Lang wrote:
Hi All,
I think Nawab has found a bug (or untested code path) in the BMI
on if we think the use case is going to be around long
enough to justify tweaking the API.
-Phil
Sam Lang wrote:
I've committed the set_info fix for this. I'm not crazy about it,
but it should work for now. In the long term, we should probably
move away from method specific hacks like
for the latter. Changing the API to be more consistent or user
friendly doesn't affect where we choose to set the priority.
-sam
Rob
On Jan 6, 2009, at 4:57 PM, Sam Lang wrote:
Changing the API as you describe would actually bring back the
original problem. As is, the BMI_tcp_testcontext
forwarding system probably ought to use the non-blocking
PVFS calls so that it can better deal with this scenario anyway,
right?
zoidfs is a blocking API.
-sam
Rob
On Jan 6, 2009, at 5:54 PM, Sam Lang wrote:
On Jan 6, 2009, at 5:03 PM, Rob Ross wrote:
I think if we had this alternative
, at 9:15 PM, Sam Lang wrote:
On Jan 6, 2009, at 7:51 PM, Rob Ross rr...@mcs.anl.gov wrote:
Hi Sam,
My take on your email was that you were combining the two issues,
so I wanted to make sure that we were in agreement that the
alternative API was preferred (not that I think we should
Hi All,
I think Nawab has found a bug (or untested code path) in the BMI tcp
method. He's running a daemon that both receives unexpected requests
(as a server), and receives expected responses (as a client).
In the BMI_testcontext call, if there aren't any completed (expected)
Brad,
This was laziness on my part. Should be fixed now.
-sam
On Dec 15, 2008, at 1:31 PM, Bradley Settlemyer wrote:
Hello,
I am having trouble building the tree due to the layout types
(newer stuff I'm not familiar with) -- probably due to the recent
compiler I am using, though I
Hi Bart,
Thanks for the patch. For users with that many files in a directory,
using pvfs2-ls is probably a good alternative.
The kernel does readdir requests 32 entries at a time, so increasing
MAX_NUM_DIRENTS won't help for ls. Long listings require getting the
size of files, which
Hi Elaine,
Yeah, I want those separate. The inode stuffing changes that Phil and
I made take advantage of the prelude_work sm. You can look at the
batch-remove state machine in the small-file-branch for an example.
The prelude state machine assumes that a server operation is being
On Aug 14, 2008, at 5:43 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Thu, 14 Aug 2008 18:11 -0400:
+struct timespec PINT_util_get_abs_timespec(int microsecs)
+{
+struct timeval now;
+struct timespec tv;
+
+gettimeofday(&now, NULL);
+tv.tv_sec = now.tv_sec + (microsecs /
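The quoted patch is cut off mid-expression. For context, a complete helper of this shape might look like the sketch below; the nanosecond conversion and rollover handling are my reconstruction, not necessarily what the actual patch did:

```c
#include <sys/time.h>
#include <time.h>

/* Sketch: convert "now + microsecs" into an absolute timespec,
 * following the split started in the fragment above. */
struct timespec PINT_util_get_abs_timespec(int microsecs)
{
    struct timeval now;
    struct timespec tv;

    gettimeofday(&now, NULL);
    tv.tv_sec = now.tv_sec + (microsecs / 1000000);
    tv.tv_nsec = (now.tv_usec + (microsecs % 1000000)) * 1000L;
    if (tv.tv_nsec >= 1000000000L) {      /* normalize any rollover */
        tv.tv_sec += tv.tv_nsec / 1000000000L;
        tv.tv_nsec %= 1000000000L;
    }
    return tv;
}
```

The normalization step matters: `now.tv_usec` plus the microsecond remainder can exceed one second, which would leave `tv_nsec` out of range for calls like `pthread_cond_timedwait`.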
Esteban,
All the I/O servers are striped across by default. If you want to
limit the number of servers striped across, you can do so by setting
an extended attribute on the _directory_ where you create the file:
setfattr -n user.pvfs2.num_dfiles -v 4 /pvfs/my-dir/
Alternatively, you
On Jul 7, 2008, at 10:57 AM, Phil Carns wrote:
Sam Lang wrote:
On Jul 7, 2008, at 10:24 AM, Phil Carns wrote:
There are a couple of options for dealing with the case where
num_groups > num_dfiles if we want to keep the current defaults:
1) Transparently reduce the num_groups within
do any tuning.
-- Rob
On Jul 4, 2008, at 2:21 PM, Sam Lang [EMAIL PROTECTED] wrote:
Having it work with default parameters doesn't necessarily mean
making those default parameters exactly like simple stripe. Our
best all around solution is to fix this bug with a simple check in
the 2d code
?
(the original default settings are worthless for my applications so I
have to change them anyhow)
Kyle
On Tue, Jul 1, 2008 at 4:04 PM, Sam Lang [EMAIL PROTECTED] wrote:
Kyle,
How is this patch useful for you guys? With a single server (and
only one
stripe), it doesn't matter what the factor
fancy tricks
to accomplish this.
Kyle
On Tue, Jul 1, 2008 at 5:16 PM, Sam Lang [EMAIL PROTECTED] wrote:
On Jul 1, 2008, at 4:15 PM, Kyle Schochenmaier wrote:
We should use simple stripe in that case, which is why I have it
defaulted to it now, someone was playing with twod-dist today
Hi Walt,
The acache (attribute cache) and ncache (name cache) are both based on
a generic timeout cache interface (tcache). It shouldn't be hard to
build up a capability cache using tcache, and will allow you to handle
timeouts for capabilities separate from the attributes, which I would
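To illustrate the tcache idea Sam describes (cache entries that expire after a per-cache timeout), here is a toy standalone version. The names and layout are hypothetical, not the real PVFS tcache interface:

```c
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Toy timeout cache: each entry records an expiration time at insert;
 * lookups past the expiry behave as misses.  Illustration only. */
struct tc_entry {
    char key[64];
    void *value;
    time_t expires;
    struct tc_entry *next;
};

struct tcache {
    struct tc_entry *head;
    int timeout_secs;      /* per-cache timeout, e.g. capability lifetime */
};

static void tc_insert(struct tcache *c, const char *key, void *value)
{
    struct tc_entry *e = malloc(sizeof(*e));
    strncpy(e->key, key, sizeof(e->key) - 1);
    e->key[sizeof(e->key) - 1] = '\0';
    e->value = value;
    e->expires = time(NULL) + c->timeout_secs;
    e->next = c->head;
    c->head = e;
}

static void *tc_lookup(struct tcache *c, const char *key)
{
    for (struct tc_entry *e = c->head; e; e = e->next)
        if (strcmp(e->key, key) == 0 && time(NULL) < e->expires)
            return e->value;
    return NULL;   /* not present, or present but expired */
}
```

The point of the suggestion in the thread is that a capability cache built this way can carry its own `timeout_secs`, independent of the acache/ncache timeouts.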
Kyle,
How is this patch useful for you guys? With a single server (and only
one stripe), it doesn't matter what the factor and num_groups are,
does it? It still behaves the same as simple-stripe in that case.
With multiple servers setting the factor and num_groups to 1 does make
the
,
maybe I'm wrong?
(the original default settings are worthless for my applications so I
have to change them anyhow)
Kyle
On Tue, Jul 1, 2008 at 4:04 PM, Sam Lang [EMAIL PROTECTED] wrote:
Kyle,
How is this patch useful for you guys? With a single server (and
only one
stripe), it doesn't matter
Hi Andrew,
Were you seeing any other problems with the PVFS volume before the
unmount? Did a directory listing hang or anything like that?
I've included a report of the description of the problem in case other
PVFS developers have some ideas. The bug message is saying that the
current
Hi Dave,
Good catch! It doesn't look like we actually use that function
anywhere, but I added it there in the hope that we would have a good
string hashing function to use instead of reinventing something ad-hoc
every time. Are you guys using it now for the security stuff?
-sam
On
On May 19, 2008, at 3:35 PM, David Bonnie wrote:
Sam -
Yup, we're using it as part of the key database. I simply set the
variable to initialize to zero for consistency. Do you see a
problem with
doing that?
Nope. Go ahead and commit to HEAD as well.
-sam
- Dave
Hi Bart,
After loading the pvfs2 kmod, can you do:
echo 1 > /proc/sys/pvfs2/debug
Then run the same test, and send the dmesg output to me? This should
show where the inode allocs/deallocs are going awry.
Thanks,
-sam
On May 2, 2008, at 4:00 PM, Bart Taylor wrote:
Hey guys,
I have been
Hi Phil,
It's good to get this functionality into the code base -- we've had a
number of attempts at this sort of thing, but none of them got
committed to HEAD, and having it in there in whatever state is better
than not. I have a concern (and overall design gripe) with the use of
AIO
Announcing version 2.7.1 of PVFS
===
The PVFS project has a new minor release, version 2.7.1. This
release includes additional support for the latest linux kernel
versions,
improved performance for a number of metadata intensive workloads, and
improved support for
I've attached two patches. The first tries to fix a basic bug where
the client-core doesn't clean up if it receives SIGHUP unless
--standalone is provided. This means that even if the parent process
forwards the SIGHUP, the client-core process just exits (doesn't let
operations
think anything is printed in the other cases.
The parent process by default prints a message to the log now. I'll
add a start message.
-sam
-Phil
Does this need to go in the next release?
-sam
On Apr 8, 2008, at 1:43 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 08 Apr 2008 13:17 -0500:
Troy and I stumbled across this bug, that at least for our
configurations, causes a double-free on the server when cleaning up
'stale'
missing something?
Nope, I just need to know whether to merge it to the release branch or
not.
-sam
Kyle
On Tue, Apr 8, 2008 at 1:47 PM, Sam Lang [EMAIL PROTECTED] wrote:
Does this need to go in the next release?
-sam
On Apr 8, 2008, at 1:43 PM, Pete Wyckoff wrote:
[EMAIL
I've been debugging a problem with simul returning EACCES errors on
open of a PVFS file. It turns out the bug was due to permissions
being (re-)set on an inode without a mutex locking the write. This
was causing the permissions checks in other opens of the same file to
fail because the
Attached patch adds the lock around the getattr for revalidate. It
also includes some cleanup to the d_revalidate function for my sanity.
-sam
fix-attr-race.patch
Description: Binary data
On Mar 25, 2008, at 12:19 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 25 Mar 2008
On Mar 24, 2008, at 2:42 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Fri, 21 Mar 2008 14:44 -0500:
Update of /projects/cvsroot/pvfs2/src/kernel/linux-2.6
In directory parlweb1:/tmp/cvs-serv7976/src/kernel/linux-2.6
Modified Files:
Tag: pvfs-2-7-branch
pvfs2-proc.c
Log
On Mar 21, 2008, at 1:11 PM, Walter B. Ligon III wrote:
I've implemented the changes we discussed to the state machine.
I've tested basic functionality and everything seems to be working
fine.
No big surprise, wasn't too big a deal.
What do you guys want? Do you want me to commit the
Fixed.
-sam
On Mar 21, 2008, at 12:31 PM, Walter B. Ligon III wrote:
Hey, I've noticed that the links in the TOC at the top of the
various online docs is broken. They all refer to index.php that
doesn't seem to be there. Is there supposed to be a sym link or
something?
Walt
--
Dr.
On Mar 14, 2008, at 2:20 PM, Pete Wyckoff wrote:
# define a few generic variables that we need to use
-DESTDIR =
srcdir = @srcdir@
-prefix = $(DESTDIR)@prefix@
-datarootdir = $(DESTDIR)@datarootdir@
-mandir = $(DESTDIR)@mandir@
-exec_prefix = $(DESTDIR)@exec_prefix@
+prefix = @prefix@
On Mar 14, 2008, at 2:23 PM, Pete Wyckoff wrote:
This one I also think is good. But it installs things like this:
titan$ ll /usr/local/pvfs-test/lib
total 2712
drwxr-xr-x 2 pw pw  4096 2008-03-14 15:16 ./
drwxr-xr-x 7 pw pw  4096 2008-03-14 15:16 ../
lrwxrwxrwx 1 pw pw    13 2008-03-14
On Mar 14, 2008, at 2:48 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Fri, 14 Mar 2008 14:32 -0500:
On Mar 14, 2008, at 2:20 PM, Pete Wyckoff wrote:
# define a few generic variables that we need to use
-DESTDIR =
srcdir = @srcdir@
-prefix = $(DESTDIR)@prefix@
-datarootdir =
On Mar 12, 2008, at 8:53 AM, Phil Carns wrote:
The shortest way that I see is this:
1) call PINT_cached_config_get_server_name() to get the name of the
server that owns the handle in question (ie, tcp://host:port)
2) call get_server_config_struct() to get a pointer to your local
On Feb 26, 2008, at 12:17 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 26 Feb 2008 10:54 -0500:
Update of /projects/cvsroot/pvfs2/src/io/job
In directory parlweb1:/tmp/cvs-serv3926/src/io/job
Modified Files:
Tag: small-file-branch
job.c
Log Message:
fix return codes
On Feb 21, 2008, at 1:58 PM, Phil Carns wrote:
Sam Lang wrote:
On Feb 14, 2008, at 4:34 PM, Pete Wyckoff wrote:
Add --enable option to build a threaded pvfs2-client-core, that is,
an executable linked against pvfs2-threaded.so.
Remove the --threaded argument from pvfs2-client. Whatever
On Feb 19, 2008, at 11:12 AM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 19 Feb 2008 01:24 -0600:
On Feb 16, 2008, at 5:46 PM, Pete Wyckoff wrote:
If anyone is excited about tracking the latest kernels,
Oh, so very.
pvfs kmod
breaks on 2.6.25-rc1 and later due to this commit.
On Feb 19, 2008, at 3:54 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 19 Feb 2008 14:57 -0500:
You wouldn't have to add read-only export capability -- it's already
there :)
Have a look at the ReadOnly option and comments that go with it in
src/common/misc/server-config.c. I've
On Feb 16, 2008, at 5:46 PM, Pete Wyckoff wrote:
If anyone is excited about tracking the latest kernels,
Oh, so very.
pvfs kmod
breaks on 2.6.25-rc1 and later due to this commit. It's a bit too
deep for me to handle.
More config checks and #ifdefs. We basically can call our read_inode
On Feb 15, 2008, at 1:17 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Thu, 14 Feb 2008 17:18 -0600:
I think the patch looks great. My only concern is that this will
make the
threaded client daemon never get used. My view is that we should
probably
have the threaded version as the
On Feb 14, 2008, at 4:34 PM, Pete Wyckoff wrote:
Add --enable option to build a threaded pvfs2-client-core, that is,
an executable linked against pvfs2-threaded.so.
Remove the --threaded argument from pvfs2-client. Whatever version
of client-core that was configured is all that is available
are about to add
a server request/state machine or 3.
Walt
Sam Lang wrote:
The attached patch tries to cleanup some of our server request
code, so that adding a new server request+state machine doesn't
mean searching for all the places where we perform a switch/case on
the server op enum
Sam Lang wrote:
Hi Walt,
Yes its against HEAD, and its already been committed. It should
apply cleanly to your branch though.
-sam
On Feb 13, 2008, at 9:09 AM, Walter B. Ligon III wrote:
Sam, is this patch against the current trunk? This looks like
something we should include in our branch
Troy,
Could you also sent the stacktrace from gdb where the segfault
occurs? That's going to be the most useful info for us.
Thanks,
-sam
On Feb 13, 2008, at 4:24 PM, Troy Benjegerdes wrote:
http://www.scl.ameslab.gov/~troy/pvfs/pvfs2-client-log-crash
or
On Feb 12, 2008, at 4:28 PM, Troy Benjegerdes wrote:
Yes, that's much better, 567 MB/sec from two servers better ;)
Cool.
-sam
Pete Wyckoff wrote:
There's one more critical-looking patch that slang checked in
recently. Without looking too much further into it, I hope this
fixes it for
and updating the two into one is the right thing.
-sam
-Phil
On Feb 9, 2008, at 4:29 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Sat, 09 Feb 2008 15:59 -0600:
I would do that if I could think of a way. Those functions have to
access
the request specific field in the request structure. delete for
example
needs to get at:
req->u.delete.handle
On Feb 7, 2008, at 10:46 AM, Walter B. Ligon III wrote:
Sam,
I've been looking more closely at the state machine stuff in prep
for the stuff we're working on and I have some questions about the
mods you made wrt the quicklist. I just want to be sure I
understand things right.
In
On Feb 7, 2008, at 3:02 PM, Walter B. Ligon III wrote:
Sam Lang wrote:
There's a specific check in the PINT_sm_start_child_frames code
that prevents this, and I'm using the pjmp to start nested state
machines a bunch without the parent being started as you describe.
Can you send me
What's thc? ;-)
-sam
On Jan 31, 2008, at 9:42 AM, Bradley Settlemyer wrote:
Blargh.
That got it to load, but it's still messed up a bit. On a mount, I
get:
[EMAIL PROTECTED] pvfs]# mount -v -t pvfs2 thc://localhost:3334/pvfs2-fs /mnt/pvfs2
mount: Protocol not available
Dmesg reports:
On Jan 29, 2008, at 1:42 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Tue, 29 Jan 2008 13:32 -0600:
On Jan 28, 2008, at 6:43 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Mon, 28 Jan 2008 16:38 -0600:
Attached patch disables the handle ledger. For those not
familiar, the
handle
Attached patch disables the handle ledger. For those not familiar,
the handle ledger is an in-memory structure that maintains allocated
handles for a given server. I'm disabling it because reading the
entire database each time the server loads is extremely expensive for
large
Yes, the bstream macro never used the coll_id.
-sam
On Jan 21, 2008, at 2:24 PM, Rob Ross wrote:
New versions will still be able to find bstream files for files
created by old servers? -- Rob
On Jan 18, 2008, at 12:42 PM, Pete Wyckoff wrote:
[EMAIL PROTECTED] wrote on Fri, 18 Jan 2008
On Jan 17, 2008, at 3:26 PM, Pete Wyckoff wrote:
I just noticed this bit in trove-dbpf/dbpf.h. The code carefully
builds ((collid << 24) | handleid) then does a modulo against
a power of 2, effectively stripping off all but the low bits of
the handleid.
The collid is not used in the hash at
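The effect Pete is pointing out is easy to reproduce in isolation: a power-of-two modulo smaller than 2^24 keeps only low bits of the handle, so the bits the collid was shifted into never reach the result. A standalone illustration (1024 buckets assumed for concreteness; this is not the trove code itself):

```c
#include <stdint.h>

/* Mirror of the construction quoted above: coll id shifted into the
 * high bits, then a power-of-two modulo that keeps only low bits. */
static uint64_t bucket(uint64_t collid, uint64_t handleid)
{
    uint64_t key = (collid << 24) | handleid;
    return key % 1024;   /* 1024 < 2^24, so the collid bits vanish */
}
```

For any handle, `bucket(c, h)` comes out the same for every `c`; e.g. `bucket(0, 12345)` and `bucket(7, 12345)` both land in bucket 57, which is exactly `12345 % 1024`.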
when we think that there are bytes there...
[..]
On Dec 7, 2007, at 4:55 PM, Sam Lang wrote:
I'm seeing recv on a socket in non-blocking mode returning EAGAIN
occasionally, even though epoll has just told us there's bytes
waiting. I
guess that's why the call was initially a blocking recv. I can
locked in the socket - say if a packet receive
handler is running - which would block the call, even though there
ARE bytes in the socket?
Walt
Sam Lang wrote:
I agree Pete -- it's messy. Just by the names of errnos, it seems
appropriate to return what's been completed if we get EWOULDBLOCK
Hi Juan,
I think you may have gotten a version of the code I committed
yesterday that caused this behavior. I've since committed fixes, but
just to be sure -- and so that you don't have to retry everything,
what version of dcache.c is listed in src/kernel/linux-2.6/CVS/Entries?
-sam
error is specific to
readdir being done in chunks of 26, not with dentry revalidate/
lookups, right?
-sam
Thanks
Murali
On Dec 5, 2007 12:35 PM, Sam Lang [EMAIL PROTECTED] wrote:
Hi Murali,
I'm trying to figure out a bug in pvfs_revalidate_common. My
understanding is that the revalidate
I'm seeing recv on a socket in non-blocking mode returning EAGAIN
occasionally, even though epoll has just told us there's bytes
waiting. I guess that's why the call was initially a blocking recv.
I can add a loop around the non-blocking recv while it returns EAGAIN,
unless someone can
at it or shall I take a stab?
I think this bug might be on the server, so if you want to look into
it keep that in mind.
-sam
thanks,
Murali
-sam
Thanks
Murali
On Dec 5, 2007 12:35 PM, Sam Lang [EMAIL PROTECTED] wrote:
Hi Murali,
I'm trying to figure out a bug in pvfs_revalidate_common
I got tired of creating a transient tabfile with one entry and then
setting PVFS2TAB_FILE to that. So I've added a PVFS2EP env that
allows setting the filesystem endpoint in the env directly:
PVFS2EP=/tmp/mnt=tcp://mynode:3334/pvfs2-fs
A hack certainly, but as something semi-secret, it