? It is presently 10 secs; if it works fine for you, we will
increase the default value in the code.
Thanks
Krishna
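For anyone who wants to change it by hand in the meantime, here is a minimal
sketch of where the knob would sit, assuming the rc builds accept a
'ping-timeout' option on the protocol/client volume the way later releases
do; the option name and the remote-host value are assumptions, not confirmed
against rc4:

volume distfs03-stripe
  type protocol/client
  option transport-type tcp
  option remote-host distfs03            # hypothetical server name
  option remote-subvolume posix-stripe
  option ping-timeout 30                 # seconds; the default discussed above is 10
end-volume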
On Tue, Mar 10, 2009 at 11:04 AM, Dan Parsons dpars...@nyip.net wrote:
I just received this error message using rc4:
2009-03-09 21:58:16 E [client-protocol.c:505:client_ping_timer_expired]
distfs03-stripe: ping timer expired! bailing transport
2009-03-09 21:58:16 N [client-protocol.c:6607:notify] distfs03-stripe:
disconnected
It happened a total of 7 times across my
I'm getting the below error messages in rc4. As with my previous email, there
doesn't seem to be any pattern as to which server/client it's happening on,
though the errors are occurring fairly frequently.
2009-03-09 17:32:26 E [unify.c:585:unify_lookup] unify: returning ESTALE for
otherwise.
2009/2/23 Gordan Bobic gor...@bobich.net
Dan Parsons wrote:
I'm having an issue with glusterfs exceeding its cache-size
value. Right now I have it set to 4000MB and I've seen it climb
as high as 4800MB. If I set it to 5000, I've seen it go
(just the file
name, not whole path) is 16 or multiple of 16??
Regards,
Amar
2009/2/25 Dan Parsons dpars...@nyip.net
Further information - I can't find any rhyme or reason as to why some files
have this problem and others don't. It's not every file on dht or every
file on stripe. I don't
After upgrading to rc2, I'm getting unify errors for a lot of files. If I
try to read one of these files, I get an I/O error. Here are the
corresponding lines from gluster log:
2009-02-25 13:42:49 E [unify.c:1239:unify_open] unify:
/bio/db/blast/blastp-nr_v9/blastp-nr.19.pni: entry_count is 1
, like it should.
Did the syntax for specifying file extensions to 'option scheduler switch'
change between rc1 and rc2?
Dan
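For comparison, this is the shape of the switch-scheduler setup as I
understand the rc1-era syntax; the 'switch.case' option name and the
pattern:subvolume form are assumptions to be checked against the rc2 docs,
and the subvolume names are taken from the configs quoted elsewhere in this
thread:

volume unify0
  type cluster/unify
  option namespace posix-unify-switch-ns
  option scheduler switch
  # *.huge and *.pni files go to the stripe brick; unmatched files
  # fall through to the remaining subvolumes
  option switch.case *.huge:posix-stripe;*.pni:posix-stripe
  subvolumes posix-stripe dht0
end-volume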
On Wed, Feb 25, 2009 at 1:44 PM, Dan Parsons dpars...@nyip.net wrote:
After upgrading to rc2, I'm getting unify errors for a lot of files. If I
try to read one
advise.
Dan
On Wed, Feb 25, 2009 at 1:47 PM, Dan Parsons dpars...@nyip.net wrote:
Upon further examination, it looks like maybe unify is expecting that .pni
file to be handled by stripe, as according to the error message, it's
expecting to find the file on more than 1 server. It's supposed
or multiple of 16??
Regards,
Amar
2009/2/25 Dan Parsons dpars...@nyip.net
Further information - I can't find any rhyme or reason as to why some files
have this problem and others don't. It's not every file on dht or every
file on stripe. I don't think I've found any broken files
In a previous email to this list, someone said:
I'm using Glusterfs RC 2.
Is rc2 actually available? I do not see it at
http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/. Please advise.
Dan
segments.
Regards,
Amar
NOTE: 'malloc_stats' will be printed to 'stdout' if we enable -DDEBUG while
compiling glusterfs, as it hits performance badly otherwise.
2009/2/23 Gordan Bobic gor...@bobich.net
Dan Parsons wrote:
I'm having an issue with glusterfs exceeding its cache-size value
...@bobich.net wrote:
Dan Parsons wrote:
I will do this today. I noticed that I already have vm.drop_caches set to
3 via sysctl.conf, based on a suggestion from you from long ago. Should I
delete this under normal usage? Is it possible that this setting, enabled by
default, is causing my
allocation segments.
Regards,
Amar
NOTE: 'malloc_stats' will be printed to 'stdout' if we enable -DDEBUG while
compiling glusterfs, as it hits performance badly otherwise.
2009/2/23 Gordan Bobic gor...@bobich.net
Dan Parsons wrote:
I'm having an issue with glusterfs exceeding its cache
Can you point me at a list of changes between rc1 and rc2?
Dan
On Tue, Feb 24, 2009 at 7:30 AM, Dan Parsons dpars...@nyip.net wrote:
In a previous email to this list, someone said:
I'm using Glusterfs RC 2.
Is rc2 actually available? I do not see it at
http://ftp.gluster.com/pub/gluster
I'm having an issue with glusterfs exceeding its cache-size value. Right now
I have it set to 4000MB and I've seen it climb as high as 4800MB. If I set
it to 5000, I've seen it go as high as 6000MB. This is a problem because it
causes me to set the value very low so that my apps don't get pushed
I'm getting the below error message on some of my gluster clients when under
high I/O load:
2009-02-23 12:43:50 E [unify.c:362:unify_lookup_cbk] unify: child(dht0):
path(/bio/data/fast-hmmsearch-all/tmp5bgb6I_fast-hmmsearch-all_job/result.tigrfam.TIGR03461.hmmhits):
No such file or directory
Yay for new release!
Can you tell me what differences there are between 2.0.0rc1 and
1.4.0tla846?
(testing your ioc patch now, btw)
Dan Parsons
On Jan 20, 2009, at 3:31 PM, Anand Babu Periasamy wrote:
GlusterFS v2.0.0rc1 Announcement
We are happy to announce the first RC release
Avati spent some time with me yesterday going through two different cores. He
says he has all the info he needs now.
Yes, it is very easy to reproduce. It takes about 15 minutes.
Dan Parsons
-Original Message-
From: Krishna Srinivas kris...@zresearch.com
Date: Fri, 16 Jan 2009 18:05
option auth.addr.posix-unify.allow 10.8.101.*
option auth.addr.posix-stripe.allow 10.8.101.*
option auth.addr.posix-unify-switch-ns.allow 10.8.101.*
subvolumes posix-unify posix-stripe posix-unify-switch-ns
end-volume
Dan Parsons
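For context, the lines above are the tail of the server volume; here is a
sketch of the full block in the 1.3/2.0-era syntax, with the transport line
filled in as an assumption (plain 'tcp' on some builds, 'tcp/server' on
older ones):

volume server
  type protocol/server
  option transport-type tcp/server       # assumed; may be just 'tcp' on 2.0
  subvolumes posix-unify posix-stripe posix-unify-switch-ns
  option auth.addr.posix-unify.allow 10.8.101.*
  option auth.addr.posix-stripe.allow 10.8.101.*
  option auth.addr.posix-unify-switch-ns.allow 10.8.101.*
end-volume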
Yes, I put a ls /glusterfs in my mount script. I tried using the
very latest fuse (2.7.4) and the problem was still there.
Dan Parsons
On Jan 13, 2009, at 11:57 PM, Anand Avati wrote:
Dan,
This is because in some cases fuse does not lookup the / of the
filesystem before proceeding
received = 2009-01-14 10:58:56. transport-timeout = 42
What do those mean? I see no errors on any of the servers.
Dan Parsons
On Jan 14, 2009, at 10:57 AM, Dan Parsons wrote:
I'm seeing a lot of messages like this, any time a directory is even
just ls'd:
2009-01-14 10:56:15 W [unify-self
.*
option auth.addr.posix-unify-switch-ns.allow 10.8.101.*
subvolumes posix-unify posix-stripe posix-unify-switch-ns
end-volume
Dan Parsons
Just to clarify, the below problems occurred on tla 846, which is, to
my knowledge, the latest.
Dan Parsons
On Jan 14, 2009, at 11:24 AM, Dan Parsons wrote:
Every time I try to do a big rsync operation, the glusterfs client
is crashing on me. First there are a ton of self-heal messages
= 42
Dan Parsons
On Jan 14, 2009, at 11:50 AM, Anand Avati wrote:
I'm seeing a lot of messages like this, any time a directory is
even just
ls'd:
2009-01-14 10:56:15 W [unify-self-heal.c:593:unify_sh_checksum_cbk]
unify:
Self-heal triggered on directory
/bio/data/gene_flagging
.*
subvolumes posix-unify posix-stripe posix-unify-switch-ns
end-volume
Dan Parsons
On Jan 14, 2009, at 11:42 AM, Anand Avati wrote:
Dan,
for the purpose of debugging, can you do 'option self-heal off' in
unify and try again (with a fresh log file)?
thanks,
avati
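That is, something like the following in the unify volume (a sketch; the
option and subvolume names besides 'self-heal off' are placeholders drawn
from configs quoted elsewhere in the thread):

volume unify0
  type cluster/unify
  option namespace posix-unify-switch-ns
  option scheduler switch
  option self-heal off          # debugging only: disables unify's directory self-heal
  subvolumes posix-unify posix-stripe
end-volume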
2009/1/15 Dan Parsons
served from all 4 gluster servers. So I renamed
the file to blastp-nr.huge (*.huge is in my switch scheduler config
line) and made a symlink of the original name to it.
Do you think this is a problem?
Dan Parsons
On Jan 14, 2009, at 11:42 AM, Anand Avati wrote:
Dan,
for the purpose
In my install of 1.4rc7, /sbin/mount.glusterfs is a simple shell
script, and glusterfs is actually a symlink to glusterfsd. So, in
short, I'd have to say that in 1.4rc7, yes, the client and server are
one binary.
Dan Parsons
On Jan 14, 2009, at 12:38 PM, Kevan Benson wrote:
Sean Davis
So far so good with this 'option self-heal off' setting. It seems to
have fixed my problem. But it should be working properly without that
option, right? Do you know what is causing this? Anything I can do to
help?
Dan Parsons
On Jan 14, 2009, at 11:42 AM, Anand Avati wrote:
Dan
Using the same disk, yes, but two separate directories. I put the conf
files in my original email to the list.
Dan Parsons
On Jan 13, 2009, at 3:16 AM, Krishna Srinivas wrote:
Dan,
Are you using the same disk for both dht and stripe at the back end? i.e.
create two directories on the disk
I have tried that and get the same problem every time.
Dan Parsons
On Jan 13, 2009, at 3:37 AM, Krishna Srinivas wrote:
Dan,
Can you remove the backend directories and re-create them and try
this again?
Krishna
On Tue, Jan 13, 2009 at 6:31 AM, Dan Parsons dpars...@nyip.net
wrote:
I'm
Nevermind, the conf files I put up were for my dht problem, from which
I stripped out my unify stuff (I was reducing conf file complexity to
focus on the dht problem). I'll get back to this once I make dht work.
Dan Parsons
On Jan 13, 2009, at 10:07 AM, Dan Parsons wrote:
Using the same
, everything works correctly, until I unmount
and remount, in which case I have to copy a regular file again.
Dan Parsons
On Jan 13, 2009, at 10:08 AM, Dan Parsons wrote:
I have tried that and get the same problem every time.
Dan Parsons
On Jan 13, 2009, at 3:37 AM, Krishna Srinivas wrote
, assuming I specify 8 threads
per sub-volume.
Dan Parsons
but forgot about it until now.
Thanks!
Dan Parsons
On Jan 13, 2009, at 9:12 PM, Harald Stürzebecher wrote:
2009/1/14 Dan Parsons dpars...@nyip.net:
I'm copying data back to my newly rebuilt glusterfs, and one of the
things
I'm doing is using unify with the switch scheduler to send certain
file
it is. Is
this the intended behavior?
My end-goal here is to be able to have some files dht'd and some
striped. This seems to be the best way to do it, if I'm wrong please
let me know.
Dan Parsons
-brick
end-volume
# END OF CLIENT-UNIFY
volume dht0
type cluster/dht
subvolumes distfs01-unify-brick distfs02-unify-brick distfs03-unify-brick
distfs04-unify-brick
end-volume
Dan Parsons
Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and
rearchitect things.
Hardware:
Gluster servers:
4 blades connected via 4gbit fc to fast, dedicated storage. Each server has two
bonded Gig-E links to the rest of my network, for 8gbit/s theoretical
throughput.
Gluster
On Jan 8, 2009, at 5:23 AM, Joe Landman wrote:
Dan Parsons wrote:
Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time
and rearchitect things.
Hardware: Gluster servers: 4 blades connected via 4gbit fc to fast,
dedicated storage. Each server has two bonded Gig-E links
to be released?
(2) I rely very heavily on the ioc and stripe xlators. I'd also like
to consider using the new HA xlator. Are there any known bugs with
them in the latest rc? If so, can you please direct me to a web page
that has details on them?
Dan Parsons
Anyone? :)
--Original Message--
From: Dan Parsons
Sender: gluster-devel-bounces+dparsons=nyip@nongnu.org
To: Gluster Developers Discussion List
Subject: [Gluster-devel] upgrading; 1.4 vs waiting for 2.x
Sent: Jan 7, 2009 10:09 AM
I really need to upgrade to 1.4 or 2.x or whatever it's
really need that io-cache leak fix now.
Dan Parsons
On Jan 5, 2009, at 3:59 AM, Anand Babu Periasamy wrote:
Dear Community,
Since the 1.3 release, we have come a long way to reach this level
of code maturity.
We are preparing to call the next release 2.0.0 instead of
1.4.0
I would very much love to have the feature described below
(authoritative repository), using gluster as basically an automatic,
real-time rsync
Dan Parsons
On Dec 30, 2008, at 9:17 PM, Martin Fick wrote:
--- On Tue, 12/30/08, Basavanagowda Kanur go...@zresearch.com wrote:
On Tue, Dec
. We wanted to run some big jobs over the
holiday break but this crash is getting in the way.
Is there *anything* that can be done?
Dan Parsons
On Dec 17, 2008, at 3:28 PM, Anand Avati wrote:
Dan,
I have a vague memory about giving a custom patch for io-cache. Was
that
you? Can you mail
Just to confirm, you're saying echo 3 to that /proc entry and then
re-run my various workloads and see if problem happens again?
Or echo 3 to that /proc entry on a node that is already using 12GB ioc
memory and see if it drops down to 2GB?
Which one?
Dan Parsons
On Dec 22, 2008, at 8
K. Will do
Dan Parsons
On Dec 22, 2008, at 8:53 AM, Anand Avati wrote:
Just to confirm, you're saying echo 3 to that /proc entry and then
re-run my
various workloads and see if problem happens again?
Or echo 3 to that /proc entry on a node that is already using 12GB
ioc
memory and see
.
Thanks again for your help, I'll write back with more info if it
happens again.
Dan Parsons
On Dec 22, 2008, at 8:47 AM, Anand Avati wrote:
Dan,
can you do 'echo 3 > /proc/sys/vm/drop_caches' and see if the usage
comes back to normal?
avati
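For reference, both forms of that setting as they come up in this thread:
the one-off form suggested here, and the persistent sysctl.conf form Dan
mentioned earlier. Note that, as far as I know, writing vm.drop_caches is a
one-shot action (caches are dropped when the value is written, e.g. when
sysctl.conf is loaded at boot), not an ongoing policy:

# one-off from a shell: drop pagecache, dentries and inodes now
echo 3 > /proc/sys/vm/drop_caches

# /etc/sysctl.conf equivalent (applied when sysctl loads the file)
vm.drop_caches = 3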
2008/12/22 Dan Parsons dpars...@nyip.net:
OK
OK, yes, the bug has come back. What do I try next?
As far as a cronjob to invalidate the cache, what would that look
like? The echo 3 thing didn't take any immediate effect
Dan Parsons
On Dec 22, 2008, at 12:08 PM, Joe Landman wrote:
Dan Parsons wrote:
OK, I did a little bit
# In this example it is 'client'; you may
# have to change it according to your spec file.
option page-size 1MB # 128KB is default
option cache-size 2048MB # 32MB is default
option force-revalidate-timeout 5 # 1 second is default
option priority *.psiblast:3,*.seq:2,*:1
end-volume
Dan Parsons
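For completeness, here is the snippet above with its volume header restored;
a sketch of the full io-cache block, where 'client' stands for whatever the
underlying protocol/client volume is called in your spec file:

volume ioc
  type performance/io-cache
  subvolumes client                       # change to match your spec file
  option page-size 1MB                    # 128KB is default
  option cache-size 2048MB                # 32MB is default
  option force-revalidate-timeout 5       # 1 second is default
  option priority *.psiblast:3,*.seq:2,*:1
end-volume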
,*.seq:2,*:1
end-volume
volume fixed
type features/fixed-id
option fixed-uid 0
option fixed-gid 900
subvolumes ioc
end-volume
Dan Parsons
On Dec 17, 2008, at 2:09 PM, Anand Avati wrote:
Dan,
Is it feasible for you to try 1.4.0pre4? We have fixed a couple of
ioc leaks in that branch
You did give me a patch (which I very much appreciate!) but that was
for an older version of the 1.3 branch, I'm using the one off the site
now. The patch was to fix a situation where ioc wouldn't cache
anything at all if it was being striped.
Dan Parsons
On Dec 17, 2008, at 3:28 PM
I think you're probably already aware of this, and I'm not sure it's
the cause of your problem, but I'd like to chime in and say that
networking between VMs and the host that runs the VM can sometimes be
very tricky in non-obvious ways.
Dan Parsons
On Nov 21, 2008, at 9:32 AM, Anand
Yes, same experience here, VMware is very good about VM-host
networking. I had lots of trouble doing the same thing with Xen,
though I did eventually make it work.
Dan Parsons
On Nov 21, 2008, at 10:13 AM, Ton van Rosmalen wrote:
Hi,
Dan Parsons schreef:
I think you're probably
Any expected release date for 1.4.0 final?
Dan Parsons
]
This box has only 4GB RAM so i killed the 'cat' process before things
got out of hand. But, there's your test.
[EMAIL PROTECTED] ~]# glusterfs --version
glusterfs 1.3.11 built on Aug 21 2008 11:26:38
Repository revision: glusterfs--mainline--2.5--patch-795
Dan Parsons
On Nov 7, 2008
bug.
Dan Parsons
On Nov 5, 2008, at 1:05 AM, Lukas Hejtmanek wrote:
Hello,
On Tue, Nov 04, 2008 at 12:37:03PM +0530, Krishna Srinivas wrote:
We want to reproduce the leak in our setup to fix it. What is your
setup on the client side? How many servers do you have? What are the
applications you
(pvfs, lustre) - it's just
this one io-cache memory bug that's been getting less-than-average
attention, and perhaps with this recent spark in attention that will
change :)
Dan Parsons
On Nov 5, 2008, at 11:20 AM, rhubbell wrote:
On Wed, 2008-11-05 at 19:55 +0100, Lukas Hejtmanek wrote
this is soon/now).
This doesn't immediately help you, I know, but I suggest maybe giving
glusterfs a bit more time/effort to do what you want.
Dan Parsons
On Nov 5, 2008, at 11:50 AM, rhubbell wrote:
Thanks Dan for the synopsis.
I wish I could experience a bug. (^;
I'm unable to even
Quick suggestion: get tla working on a linux box (it's easy) then
tarball the directory, copy it to your solaris box?
Dan Parsons
On Nov 5, 2008, at 10:38 PM, rhubbell wrote:
Unable to get tla to build on solaris. (^:
I guess I give up for now. There was a sed error during configure
Anyone have any comments on this?
Dan Parsons
On Aug 26, 2008, at 12:32 AM, Dan Parsons wrote:
I'm running glusterfs 1.3.11. I have cache-size set to '2048MB' in
my conf file, but in this particular test I'm running (catting a
6.3GB file to /dev/null), it isn't stopping at 2GB
Back when I was using glusterfs 1.3.9, io-cache did not work with striping. I
had to apply a patch to make that work. Do you know if this is still the case
with 1.3.11?
Dan Parsons
Are there any issues with having a 32-bit glusterfs client talk to a
64-bit glusterfs server?
Dan Parsons
Sorry, I replied to the wrong thread, please disregard.
Dan Parsons
On May 26, 2008, at 2:11 PM, Dan Parsons wrote:
And now, after a while of running, the cache doesn't seem to get
used at all anymore. My jobs are running fine, but glusterfs memory
usage is super low and all jobs
And now, after a while of running, the cache doesn't seem to get used
at all anymore. My jobs are running fine, but glusterfs memory usage
is super low and all jobs are pulling data directly from the network.
How does io-cache determine what files to cache and for how long?
Dan Parsons
running on 32-bit CentOS 5 with 2.6.23 kernel,
glusterfs-1.3.8, using fuse.
Any tips will be greatly appreciated.
Dan Parsons
Any luck on documenting this? I'd really like to set it up.
Dan Parsons
On May 10, 2008, at 1:10 PM, Anand Babu Periasamy wrote:
Hi Gordan,
option transport-type unix/server is available. An even better
approach is to run in single address mode (server and client merged
into a single process
Amar, quick question. I've switched to readahead but really wish I
could use io-cache. How likely do you think changing block-size from
128kb to 1MB (the same as what stripe uses, based on your advice)
would fix the crash issue?
Dan Parsons
On May 6, 2008, at 12:43 PM, Amar S. Tumballi
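For reference, bumping the read-ahead block size would look something like
the sketch below; 'page-size' and 'page-count' are the option names as I
remember them from the 1.3-era read-ahead translator, so verify both before
relying on this:

volume readahead
  type performance/read-ahead
  subvolumes client             # your protocol/client volume
  option page-size 1MB          # default is 128KB; matched to the stripe block size
  option page-count 4           # pages kept in flight (assumed name and default)
end-volume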