At 06:57 AM 3/12/2010, Marcus Bointon wrote:
I'm just wondering how gluster and iSCSI might work together. I
suspect that in many situations they could be used interchangeably,
but how about them working together? If a setup providing a gluster
AFR service was on top of remote iSCSI targets,
slower since it's constantly doing all this unnecessary extra work.
Keith Freedman wrote:
At 08:13 AM 3/6/2010, Chad wrote:
I second this question/request.
When the 1st server goes down, how do we
eliminate the hang time? Five seconds is a long time for a file system to be hung.
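If the hang is the client waiting out a dead TCP connection, the knob to look at in the 1.x/2.x volfiles is the client translator's transport timeout. A sketch with a hypothetical value - verify the option name and default against your version, since setting it too low causes spurious disconnects on a healthy but slow network:

```
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
  option transport-timeout 10   # seconds; hypothetical value
end-volume
```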
At 03:26 AM 4/5/2009, Stas Oskin wrote:
Hi.
I wanted to report another issue I found with the AFR.
It seems that while AFR'ing two servers and one is completely empty,
if the empty server is defined as favorite-child, no syncing
happens at all!
Meaning, I needed to manually change the
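For anyone reproducing this: favorite-child is an option on the AFR translator naming the subvolume whose copy wins conflicts, so per the report above it presumably has to name the populated side, not the empty one. A sketch (volume names hypothetical):

```
volume afr0
  type cluster/afr
  # favorite-child names the subvolume whose copy wins on conflict;
  # per the report, naming the empty server here prevented any syncing
  option favorite-child populated
  subvolumes populated empty
end-volume
```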
AM 3/16/2009, Keith Freedman wrote:
Also, I'm wondering if this is related to the fact that I have
single process client/server.
which used to be the recommended method and now is not.
if I split those out, will that solve my problem?
At 09:50 AM 3/16/2009, Keith Freedman wrote:
At 04:06 AM 3/16
At 01:07 AM 3/17/2009, Stas Oskin wrote:
Hi.
As far as I understand, in case one of the servers had downtime and
comes back, GlusterFS automatically synchronizes the files on it
when they are accessed.
Would such synchronization increase the time the user waits for the
file? Meaning the
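One way to avoid pushing that sync wait onto users is to walk the mount and read a byte of every file right after the server comes back, forcing self-heal up front instead of on first access. A sketch, assuming a hypothetical mount point:

```shell
# Read the first byte of every file under the mount; in AFR the
# open/read is what triggers self-heal for a stale file.
trigger_selfheal() {
  find "$1" -type f -exec head -c1 {} \; > /dev/null
}
# usage: trigger_selfheal /mnt/glusterfs   (hypothetical mount point)
```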
I'm guessing you need to build glusterfs-1.3 into a different
--prefix spot to make sure its binaries and libraries aren't
clobbering each other.
Then, I'm guessing you need to statically link the binary, instead of
dynamically linking, or the linker will try to grab the wrong one.
At 06:24 AM
At 04:06 AM 3/16/2009, Vikas Gorur wrote:
2009/3
At 10:34 AM 3/16/2009, Pathiakis, Paul wrote:
I have machine 1 with a /a/foo that is populated.
I have machine 2 with a /a/foo that is not populated.
Shouldn't
any thoughts on this one?
it seems to be causing some severe problems.
There are occasions where things block on all nodes, seemingly
waiting for one of them to get a lock that it never gets, and I've
no real way of finding out which file is the problem or why.
At 11:21 PM 3/13/2009, Keith Freedman
all of a sudden, I'm getting messages such as this:
2009-03-13 23:14:06 C [posix.c:709:pl_forget] posix-locks-home1:
Pending fcntl locks found!
and some processes are hanging, presumably waiting for the locks.
Is there any way to find out which files are locked, and to unlock them?
restarting gluster
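On the "which file is locked" question: /proc/locks on Linux lists every active lock, with the sixth field as MAJOR:MINOR:INODE, so you can pull the inode numbers and match them against the backend directory. A sketch (backend path hypothetical):

```shell
# Print the inode number of every active POSIX lock on the box.
locked_inodes() {
  awk '$2 == "POSIX" { split($6, loc, ":"); print loc[3] }' /proc/locks
}
# then match each inode to a file on the export:
#   find /home/export -xdev -inum <inode>
```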
At 05:09 AM 3/9/2009, Stas Oskin wrote:
Nope, it's actually my first setup in the lab. No errors - it just
doesn't seem to be synchronizing anything. The version I'm using is the latest one - 2.0rc2.
Perhaps I need to modify something else in addition to the GlusterFS
installation - like the file-systems
At 05:34 AM 3/9/2009, Krishna Srinivas wrote:
Do not use a single process as both server and client, as we saw issues
related to locking. Can you check whether using different processes for
server and client works fine w.r.t. replication?
This is news to me. When will this be fixed?
It used to be that
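For anyone splitting this out: the usual pattern is a standalone glusterfsd reading a server volfile, with the mount done by a separate glusterfs client process. A rough sketch in 2.x volfile style (all paths, hostnames, and volume names hypothetical):

```
# server.vol -- run with: glusterfsd -f /etc/glusterfs/server.vol
volume brick
  type storage/posix
  option directory /data/export
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

# client.vol -- mount with: glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume
```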
Make sure you have the kernel-sources rpm installed. Without it you
can't build the kernel fuse module, and I think you still need the
kernel headers to compile everything else.
At 03:46 PM 2/9/2009, Chris Maciejewski wrote:
Hi,
Could someone please let me know how can I compile fuse-2.7.3glfs10 on
Ubuntu
At 04:20 AM 1/30/2009, Barnaby Gray wrote:
I'm in the process of setting up server-side AFR with 2 servers in
separate data centres, separated by a WAN. Writes will be relatively
few, so we can live with the performance limitations of the WAN.
I noticed unexpected performance though when listing
At 09:41 PM 1/26/2009, Raghavendra G wrote:
The namespace is just a cache which holds the directory tree of unify.
Please note that the files contained in this directory tree are
zero bytes in size.
What about storage scalability in this design, for both server and
client? Can you please give one
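To make the zero-byte point concrete, this is roughly how a namespace volume hangs off unify in a volfile; the files under the namespace brick's directory are the empty placeholders that carry the tree. (Names and paths hypothetical; check the unify docs for your version.)

```
volume ns
  type storage/posix
  option directory /data/namespace   # holds only the zero-byte tree
end-volume

volume unify0
  type cluster/unify
  option namespace ns
  option scheduler rr                # round-robin file placement
  subvolumes brick1 brick2
end-volume
```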
At 09:32 AM 1/27/2009, Prabhu Ramachandran wrote:
On 01/27/09 02:21, Keith Freedman wrote:
At 10:36 AM 1/26/2009, Prabhu Ramachandran wrote:
I don't see any problems with your config.
Other than that, if your network connection is very sporadic, you'll
often be caught waiting for timeouts, which
At 10:36 AM 1/26/2009, Prabhu Ramachandran wrote:
Here, if the network connection fails and comes back up in short periods of
time, you'll always be experiencing delays, as gluster is often waiting for
timeouts; then the server is visible again, it auto-heals, then it's not
visible and it has to
we ran into this problem.
It seems related to timestamps being off by microseconds.
When someone would check their email on one machine, then hit another
whose time was off by even microseconds, it would think all the
messages are suddenly new or different from the ones it had already checked.
my guess is
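A quick way to confirm this kind of skew: compare the replicas' mtimes at whole-second granularity; if they agree to the second but the application still sees "new" messages, sub-second drift is the likely culprit. A sketch using GNU stat (file names hypothetical); keeping both servers' clocks tightly synchronized, e.g. via NTP, is the obvious mitigation:

```shell
# True when two files' mtimes match to the whole second,
# deliberately ignoring sub-second differences.
same_mtime() {
  [ "$(stat -c %Y "$1")" -eq "$(stat -c %Y "$2")" ]
}
```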
At 10:55 AM 1/21/2009, Stas Oskin wrote:
This is the bit I don't understand - shouldn't the Lustre nodes sync
the data between themselves? If a shared storage device on some
medium is needed, what do the Lustre storage nodes actually do?
I mean, what is the idea of Lustre being cluster