Hello,
We have started looking into some used InfiniBand hardware to upgrade our
network from typical gigabit Ethernet to the 10-gigabit InfiniBand
stuff. I'm curious to hear from anyone who has done this or has
been able to see how much of a performance gain there is. My primary
interest
On Fri, Jan 22, 2010 at 1:38 AM, Brandon Lamb brandonl...@gmail.com wrote:
Hello,
Not sure what, if anything, might be needed, so I'll wait for someone to
tell me (as far as logs, configs, and whatnot).
Basically I have two servers with two clients mounting a replicated
brick. I am using
Hopefully I didn't just send this three times; Safari wasn't working...
Hello, I was trying to compile mod_glusterfs for lighttpd 1.5 (rev 2393 from
svn) and the Makefile.am.diff is not working. I attempted to make the
changes manually, although I am not 100% positive I did it right. I re-ran
Hello,
I'm hoping someone can recommend commodity hardware to use in a 2-
and/or 3-server unify setup.
I am discussing with our other admins about migrating from a
monolithic 16-drive SCSI (160 drives) NFS server to a 2- or 3-server
glusterfs setup. Given the two options, what would you use for 2
Hello,
If I have two data servers that I use unify to create a single export,
what happens to a client if one of the servers goes down?
Does the client keep on working but send all traffic to the node that
is up or does it fail completely?
at 2:10 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
Hello,
If I have two data servers that I use unify to create a single export,
what happens to a client if one of the servers goes down?
Does the client keep on working but send all traffic to the node that
is up or does it fail completely
Hello,
I can't seem to find (searching the mailing list and wiki) what DHT is. I
come up with some references to it in the mailing list but no
documentation on what it actually is or does.
Also, a second question: what is the current state of AFR? Should performance/speed
be up to par yet or is that still being
I did see a post a while back about the new AFR killing write speeds;
any update on this?
I set up 2 servers (both are server/client), only mounted the AFR
brick on one of them, and attempted to untar linux-source-2.6.26.tar...
what took 5-9 seconds on the local disk took over 5 minutes on the
Hello all,
I was just reviewing the roadmap again after months and thought I
would ask if there was any kind of idea on a time frame for the
following 1.4 features
storage/bdb - distributed BerkeleyDB-based storage backend (very
efficient for small files)
binary protocol - bit-level protocol
I haven't been watching the list extremely closely lately, but I didn't
happen to see any updates on this.
Is this still a work in progress? This was related to two clients
writing to the same AFR file, the versions being the same but the contents
different.
Sort of two questions, depending on the answer to the first.
If I have 10 clients doing AFR to 2 servers, and one of the servers is
faster (20 SCSI versus 8 SATA2 drives), how does AFR pick which one to read
data from? Some kind of round-robin or something?
Question 2 would be, could I set this on my
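For what it is worth, later versions of the replicate (AFR) translator expose a read-subvolume option to pin reads to one child; whether the AFR of this era already supports it is an assumption on my part, so treat the snippet below as a sketch only (the brick names are hypothetical):
volume afr0
  type cluster/afr
# assumption: read-subvolume is available; in later cluster/replicate
# releases it directs reads to the named child instead of the default
  option read-subvolume fast-scsi-brick
  subvolumes fast-scsi-brick slow-sata-brick
end-volume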
OK, there are two threads on this and I am trying to wrap my head around it.
So, a simple 2-server, 2-client, client-side AFR setup.
The clients at the SAME time do:
client1 # echo one > file.txt
client2 # echo two > file.txt
Are the threads regarding this and the conclusion at this point saying
I just did some testing, and came to the conclusion that trying to
set up AFR using one server with pre-existing data and a blank server,
copying your data, removing the xattrs on the copied data and then
initiating AFR DOES NO GOOD.
server1 - 400 megs of data in 10 tarballs, removed all xattrs
On Mon, May 5, 2008 at 5:58 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I just did some testing, and came to the conclusion that trying to
set up AFR using one server with pre-existing data and a blank server,
copying your data, removing the xattrs on the copied data and then
initiating AFR DOES
2 servers using AFR (client-side AFR), 1 client
1) 10 file[0-9].tar.bz2 @ 45M on /mnt/raid/gfs on server1
2) scp file* on server1 to /mnt/raid/gfs on server2
3) on server1: find /mnt/raid/gfs -exec setfattr -n
trusted.glusterfs.version -v 1 {} \;
4) on server1: find /mnt/raid/gfs -exec setfattr
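As a quick sanity check after steps 3 and 4 above, the xattrs can be listed back with getfattr; a minimal sketch, reusing the path and attribute names from the steps (run as root, since trusted.* attributes are not readable by ordinary users):
# dump the glusterfs xattrs on one of the copied tarballs
getfattr -d -m trusted.glusterfs /mnt/raid/gfs/file0.tar.bz2
# or walk the whole directory
find /mnt/raid/gfs -type f -exec getfattr -d -m trusted.glusterfs {} \;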
On Sat, May 3, 2008 at 3:26 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Sat, May 3, 2008 at 2:56 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Sat, May 3, 2008 at 2:50 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I couldn't find anything related to this with a quick search.
Let's take
On Sun, May 4, 2008 at 8:33 PM, Samuel Douglas [EMAIL PROTECTED] wrote:
We have used Glusterfs fine on an amd64 version of Debian, running a
fairly recent kernel. No problems there. We have only used the TCP
transport though.
-- Samuel
On 5/5/08, Brandon Lamb [EMAIL PROTECTED] wrote:
Any
Krishna,
I am setting up 4 VMware boxes using fresh Debian installations. I
thought I would try on a completely different setup to see if there
was any difference from running on Fedora 8.
Were you able to look at the pastebin stuff with the logs and spec
files I was using to see if there was
On Sat, May 3, 2008 at 9:43 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
Krishna,
I am setting up 4 VMware boxes using fresh Debian installations. I
thought I would try on a completely different setup to see if there
was any difference from running on Fedora 8.
Were you able to look
I couldn't find anything related to this with a quick search.
Let's take a case where we have 3 data servers, so we do AFR using
unify. For unify we specify a namespace volume that exists on, let's say,
server1.
OK, so what happens when server1 goes down and there is no namespace?
What are the
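One way around this that gets suggested (a sketch, not something this particular excerpt spells out) is to make the namespace itself an AFR volume spanning two servers, so losing one server does not take the namespace with it. All volume names below are hypothetical; ns1 and ns2 would be protocol/client volumes pointing at small namespace exports on two different servers, and afr1-afr3 the data volumes:
volume ns-afr
  type cluster/afr
# ns1 and ns2 live on different servers, so the namespace survives one failure
  subvolumes ns1 ns2
end-volume

volume unify0
  type cluster/unify
  option scheduler rr
# point unify at the replicated namespace instead of a single-server one
  option namespace ns-afr
  subvolumes afr1 afr2 afr3
end-volume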
On Sat, May 3, 2008 at 2:50 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I couldn't find anything related to this with a quick search.
Let's take a case where we have 3 data servers, so we do AFR using
unify. For unify we specify a namespace volume that exists on, let's say,
server1.
OK, so what
I think it would help if we had a list of all the different use cases
for which someone might use glusterfs, then build wiki howtos based on
those. I will try to think of all I can here
No clustering translators
*) Single server. (n) client machines. (NFS clone)
AFR - unify / AFR + unify / data
On Sat, May 3, 2008 at 2:56 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Sat, May 3, 2008 at 2:50 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I couldn't find anything related to this with a quick search.
Let's take a case where we have 3 data servers, so we do AFR using
unify. For unify we
On Thu, May 1, 2008 at 11:03 PM, Krishna Srinivas [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 7:42 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
Faster interconnect hardware costs lots of $$$. Wouldn't there be fewer
servers in most cases, meaning less hardware to buy?
I just took a look
On Thu, May 1, 2008 at 10:49 PM, Krishna Srinivas [EMAIL PROTECTED] wrote:
Brandon,
$ echo hello > file.txt
-bash: file.txt: Input/output error
Can you check in the logs if there is something related to this?
It should have worked fine there.
Also for client2, the order should be brick2
glusterfs.log from client1
-
2008-05-01 16:06:46 E [protocol.c:271:gf_block_unserialize_transport]
brick2: EOF from peer (208.200.248.17:6996)
2008-05-01 16:06:49 E [tcp-client.c:190:tcp_connect] brick2:
non-blocking connect() returned: 111 (Connection refused)
2008-05-01 16:06:49
Oh yeah, and glusterfsd.log from server1
--
2008-05-01 23:22:06 E [tcp-client.c:190:tcp_connect] brick2:
non-blocking connect() returned: 111 (Connection refused)
2008-05-01 23:22:07 E [tcp-client.c:190:tcp_connect] brick2:
non-blocking connect() returned: 111 (Connection refused)
And I still get
[EMAIL PROTECTED] gfs]# cat file.txt
cat: file.txt: Input/output error
Oh, and just an FYI on the spec files: yes, the IP addresses are
correct. I have two NICs in all machines and two separate switches.
server1 is *.16 and server2 is *.17
I am running this version on all servers
[EMAIL PROTECTED] gfs]# glusterfs -V
glusterfs 1.3.8 built on Apr 30 2008 22:13:00
Repository revision: glusterfs--mainline--2.5--patch-760
And by servers I meant all machines (servers AND clients)
=P
On Thu, May 1, 2008 at 11:39 PM, Krishna Srinivas [EMAIL PROTECTED] wrote:
Can you check with the latest code from TLA?
There was a related fix that went in recently, so I am guessing it might be that.
Thanks
Krishna
On Fri, May 2, 2008 at 12:03 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
Oh
On Fri, May 2, 2008 at 3:04 AM, Daniel Maher [EMAIL PROTECTED] wrote:
On Thu, 1 May 2008 14:47:39 -0700 Brandon Lamb
[EMAIL PROTECTED] wrote:
http://www.gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_client_side_replication
Look over and make sure it is kosher?
I added
On Fri, May 2, 2008 at 8:40 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 3:04 AM, Daniel Maher [EMAIL PROTECTED] wrote:
On Thu, 1 May 2008 14:47:39 -0700 Brandon Lamb
[EMAIL PROTECTED] wrote:
http://www.gluster.org/docs/index.php
Krishna, here is my debug log. The change I made was to use a single
network instead of using the back network for the servers.
http://glusterfs.pastebin.com/m26685d78
Steps: touch the file on client2, echo hello to the file on client2, cat
the file on client2, cat the file on client1. Kill glusterfsd on
server2; cat of the file on client1 doesn't work, and cat of the file on client2
gives transport endpoint not connected
[EMAIL PROTECTED] gfs]# touch file.txt
[EMAIL PROTECTED] gfs]# echo
On Fri, May 2, 2008 at 9:12 AM, Daniel Maher [EMAIL PROTECTED] wrote:
On Fri, 2 May 2008 08:51:22 -0700 Brandon Lamb
[EMAIL PROTECTED] wrote:
A note on RRDNS: maybe my understanding is incorrect; can anyone
comment?
From the wiki page I wrote today:
http://www.gluster.org/docs/index.php
Dangit, I don't get it. It looks like my configs are almost the same as
the new wiki page Daniel posted, minus the performance translators.
Am I just getting something totally wacky going on?
On Fri, May 2, 2008 at 9:30 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 9:21 AM, Shaofeng Yang [EMAIL PROTECTED] wrote:
Can anybody share some thoughts about those cluster file systems? We are
trying to compare the pros and cons for each solution.
Thanks,
Shaofeng
On Fri, May 2, 2008 at 10:08 AM, [EMAIL PROTECTED] wrote:
On Fri, 2 May 2008, Brandon Lamb wrote:
On Fri, May 2, 2008 at 9:30 AM, Brandon Lamb [EMAIL PROTECTED]
wrote:
On Fri, May 2, 2008 at 9:21 AM, Shaofeng Yang [EMAIL PROTECTED] wrote:
Can anybody share some thoughts about
On Fri, May 2, 2008 at 10:16 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 10:08 AM, [EMAIL PROTECTED] wrote:
On Fri, 2 May 2008, Brandon Lamb wrote:
On Fri, May 2, 2008 at 9:30 AM, Brandon Lamb [EMAIL PROTECTED]
wrote:
On Fri, May 2, 2008 at 9:21 AM, Shaofeng
On Fri, May 2, 2008 at 9:21 AM, Shaofeng Yang [EMAIL PROTECTED] wrote:
Can anybody share some thoughts about those cluster file systems? We are
trying to compare the pros and cons for each solution.
Thanks,
Shaofeng
Tough question, as it depends on what you are needing. Myself, I have
messed
Would I be correct in this thinking?
2 servers as data nodes, multiple clients doing client-side AFR. This
is a new setup where server1 has 100 gigs of data and server2 has an
empty directory.
So by doing client-side AFR, does the client have to read and transfer
data FROM server1, and then copy it
On Wed, Apr 30, 2008 at 10:35 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 2:09 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 2:04 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I can't find the mail; I noticed it a few days ago. Someone mentioned
On Wed, Apr 30, 2008 at 11:21 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 10:35 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 2:09 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 2:04 PM, Brandon Lamb [EMAIL PROTECTED] wrote
So in my case, where I have one directory with existing data and
another that is empty, should I set the trusted version to 1 on the
pre-existing data and 3 on the empty directory? Or am I totally
missing what this does?
I agree starting with a clean slate is much easier/cleaner. But when
you have
On Thu, May 1, 2008 at 11:01 AM, Christopher Hawkins
[EMAIL PROTECTED] wrote:
I think a little documentation there would be fantastic. I am also starting
with a full set of files that cannot be easily copied (a shared root... It
kind of has to be there already, by definition!).
Personally I
I think I accidentally butchered this thread, because this was actually
a question on client versus server-side AFR, not setting up with pre-existing
data...
HOWEVER.
I just had success. This time I tried with a TEST directory rather
than live data... /genius
Server1
/mnt/raid/gfs - contains 4
OK, I got it all up:
http://www.gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_pre-existing_data
Does it all look right?
Krishna replied with the suggestion of removing file attributes; any
preference between that and setting the version to a lower value? I
guess the end result is
On Thu, May 1, 2008 at 12:30 PM, Krishna Srinivas [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 12:51 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
OK, updated the wiki, and I took "There is no need to set the
trusted.glusterfs.createtime xattr." to mean there is no reason to
include
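For reference, the xattr-removal route Krishna suggested would look roughly like the following; a sketch only, reusing the export path and attribute names that appear in these threads (run as root on the backend directory, not on the mounted glusterfs):
# strip the AFR bookkeeping xattrs from pre-existing data
find /mnt/raid/gfs -exec setfattr -x trusted.glusterfs.version {} \;
find /mnt/raid/gfs -exec setfattr -x trusted.glusterfs.createtime {} \;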
Replies are inline, hopefully I'm right here ;-)
spec example snipped
Note: it is assumed that host-a:/data/a, host-b:/data/b,
host-ns:/data/ns exist.
Q. Does gluster have a problem with server spec volumes which do
not exist on a given host?
I believe you are asking what happens when
On Thu, May 1, 2008 at 1:49 PM, John Marshall [EMAIL PROTECTED] wrote:
Brandon Lamb wrote:
Replies are inline, hopefully I'm right here ;-)
spec example snipped
Note: it is assumed that host-a:/data/a, host-b:/data/b,
host-ns:/data/ns exist.
Q. Does gluster have
On Thu, May 1, 2008 at 2:22 PM, John Marshall [EMAIL PROTECTED] wrote:
Brandon Lamb wrote:
On Thu, May 1, 2008 at 1:49 PM, John Marshall [EMAIL PROTECTED]
wrote:
My question comes from wanting to export multiple, differently named,
directories from different hosts (assume 1 dir
http://www.gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_client_side_replication
Look over and make sure it is kosher?
I added a section at the bottom for gotchas; can you take a quick
look to make sure they are accurate statements?
=P
On Thu, May 1, 2008 at 2:47 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
http://www.gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_client_side_replication
Look over and make sure it is kosher?
I added a section at the bottom for gotchas; can you take a quick
look to make sure
http://www.gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_server_side_replication
Most importantly, please check out the "Breaking things" section I
added. I will have to go look through the mailing list for
conversations regarding this.
There is no way around losing the cluster by
On Thu, May 1, 2008 at 3:36 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
http://www.gluster.org/docs/index.php/Setting_up_AFR_on_two_servers_with_server_side_replication
Most importantly please check out the Breaking things section I
added. I will have to go look through the mailing list
From wiki:
If '/mnt/sda1' is your export disk, it is nice if you export
'/mnt/sda1/export/' through glusterfs, instead of exporting /mnt/sda1
itself
Question:
Why?
=P
* Make sure that the underlying server filesystems have equal
available disk space. If one runs out, files will still be written to
the other server(s), but you may end up with fewer copies of your data
than you think you have.
This IS correct, right? You will still be able to perform writes on
the
Was there an option added somewhere to be able to choose whether to
display the least or the most amount of disk space? I added info on this
to the AFR "things to know" on the wiki, but I was wondering if maybe
there was actually an option for how glusterfs displays that info.
On Thu, May 1, 2008 at 5:59 PM, Amar S. Tumballi [EMAIL PROTECTED] wrote:
On Thu, May 1, 2008 at 5:37 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
Was there an option added somewhere to be able to choose whether to
display the least or the most amount of disk space? I added info on this
to the AFR
I see option transport-type ib-verbs/server and there was something
in the documentation about using ib-verbs for faster performance.
Can I use it interchangeably with option transport-type tcp/server?
If I don't have InfiniBand, is this still some kind of communications
library I can use over
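As a rough illustration of where that option lives, here is a sketch in the 1.3-era spec file style used elsewhere in these threads; the volume names and the 6996 port are just the usual defaults, not taken from this message:
volume server
  type protocol/server
# tcp/server works over ordinary Ethernet; ib-verbs/server only makes
# sense if you actually have InfiniBand HCAs and the ib-verbs transport built
  option transport-type tcp/server
  option listen-port 6996
  option auth.ip.brick.allow *
  subvolumes brick
end-volume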
Faster interconnect hardware costs lots of $$$. Wouldn't there be fewer
servers in most cases, meaning less hardware to buy?
I just took a look at InfiniBand hardware; it's expensive. If I wanted
to upgrade my network, I would much rather upgrade my server machines
at 2-4 computers instead of 10 mail
On Wed, Apr 30, 2008 at 2:04 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I can't find the mail; I noticed it a few days ago. Someone mentioned
something about setting the file attributes to 3? Does that make
sense?
Anyway, it sounded like what may have been giving me some grief last
time I set
I can't find the mail; I noticed it a few days ago. Someone mentioned
something about setting the file attributes to 3? Does that make
sense?
Anyway, it sounded like what may have been giving me some grief last
time I set up a 2-server AFR config. On one server I had my web data,
and on the other
http://www.gluster.org/docs/index.php/Install_and_run_GlusterFS_v1.3_in_10mins
I was reading through this, and I know this is totally an opinion
thing, so I am only offering this as my opinion.
I believe it would be nicer and cleaner for this page to either not
include a spec file example, and
On Wed, Apr 30, 2008 at 2:59 PM, Amar S. Tumballi [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 2:51 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
http://www.gluster.org/docs/index.php/Install_and_run_GlusterFS_v1.3_in_10mins
I was reading through this, and I know this is totally an opinion
http://www.gluster.org/docs/index.php/GlusterFS_User_Guide_v1.3#GlusterFS_Installation
This should maybe instead link to the installation section?
http://www.gluster.org/docs/index.php/GlusterFS_User_Guide_v1.3#Configuring_GlusterFS_as_Network_Filesystem
More redundancy with spec file examples; move it to its own section or
link to the already existing one?
and do
that, I can see the diff of what changes you did, and just moderate it if
required).
On Wed, Apr 30, 2008 at 5:26 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
http://www.gluster.org/docs/index.php/GlusterFS_User_Guide_v1.3#Configuring_GlusterFS_as_Network_Filesystem
More redundancy
From the best practices section of the wiki:
One process for one export works best with the current codebase.
(Sharing a namespace export in the same server is not a problem
And from the FAQ from the mailing list, first question:
Q: So the question is, what makes this different from running two
From the FAQ page:
What about deletion self/auto healing?
With auto healing or self healing, only file creation is healed. If a
brick is missing because of a disk crash, re-creation of files is OK,
but if it's a temporary network problem, synchronizing deletion is
mandatory.
Q: How do I
On Wed, Apr 30, 2008 at 2:09 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Wed, Apr 30, 2008 at 2:04 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
I can't find the mail; I noticed it a few days ago. Someone mentioned
something about setting the file attributes to 3? Does that make
sense
Hello,
So I have seen this suggested as the way to trigger self-heal:
find /mnt/webcluster2 -depth -type f -exec head -n 1 {} \; > /dev/null
I am using Fedora 8. According to the man pages, -n 1 will tell head to
print the first line of every file.
There is also the -c flag:
find /mnt/webcluster2
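The second command above is cut off, but here is a minimal sketch of both variants (same mount point as above; the -c form reads a single byte per file instead of a whole line, which is a slightly lighter touch):
# first-line variant, as suggested on the list
find /mnt/webcluster2 -depth -type f -exec head -n 1 {} \; > /dev/null
# first-byte variant; still enough to make glusterfs open each file
find /mnt/webcluster2 -depth -type f -exec head -c 1 {} \; > /dev/null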
On Thu, Apr 10, 2008 at 9:49 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
Hello,
So I have seen this suggested as the way to trigger self-heal
find /mnt/webcluster2 -depth -type f -exec head -n 1 {} \; > /dev/null
I am using Fedora 8. According to the man pages, -n 1 will tell head to
print
On Wed, Apr 9, 2008 at 7:43 AM, Daniel Maher [EMAIL PROTECTED] wrote:
On Wed, 9 Apr 2008 18:50:31 +0530 Krishna Srinivas
[EMAIL PROTECTED] wrote:
OK, we are trying to reproduce the setup and fix the problem...
btw, can you try without unify and see if failover happens cleanly?
Correct the
Yet another AFR question... ha ha. I'm a snowflake...
OK, so I was just reading the glusterfs.org wiki and there are quite a
few more docs / howtos / examples than there were previously (a very big
thank you to whoever has contributed to that).
It brought up a question: do I need to use unify with AFR?
On Jan 17, 2008 11:35 PM, Sascha Ottolski [EMAIL PROTECTED] wrote:
On Friday, 18 January 2008 at 04:00:06, Anand Avati wrote:
Brandon,
which fuse kernel module version are you using? Please try with
the fuse kernel module from -
On Jan 17, 2008 9:56 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
http://ezopg.com/gfs/
I uploaded my client config and the server configs for the 2 servers;
3 separate machines.
I can get to mounting, then do cd /mnt/gfs (the mounted gfs dir), and then
I typed find . -type f -exec head -c 1
http://ezopg.com/gfs/
I uploaded my client config and the server configs for the 2 servers;
3 separate machines.
I can get to mounting, then do cd /mnt/gfs (the mounted gfs dir), and then
I typed find . -type f -exec head -c 1 {} \; > /dev/null and got
find: ./test: Transport endpoint is not connected
on the client itself would be even better.
3. try removing write-behind.
We're interested in knowing your results from those changes.
avati
2008/1/17, Brandon Lamb [EMAIL PROTECTED]:
On Jan 17, 2008 9:56 AM, Brandon Lamb [EMAIL PROTECTED] wrote:
http://ezopg.com/gfs/
I uploaded my
files while write-behind is
on. can you try with an untar of large files, with write-behind on?
thanks,
avati
2008/1/18, Brandon Lamb [EMAIL PROTECTED]:
On Jan 17, 2008 6:15 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Jan 17, 2008 6:13 PM, Anand Avati [EMAIL PROTECTED] wrote
On Jan 17, 2008 7:01 PM, Brandon Lamb [EMAIL PROTECTED] wrote:
On Jan 17, 2008 7:00 PM, Anand Avati [EMAIL PROTECTED] wrote:
Brandon,
which is the fuse kernel module version you are using? please with the fuse
kernel module from -
http://ftp.zresearch.com/pub/gluster/glusterfs/fuse
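Checking which fuse module and library are currently in place, before swapping in the one from the URL above, might look like this (standard commands, nothing specified by the thread itself):
# version information for the installed fuse kernel module
modinfo fuse | grep -i version
# version of the userspace fusermount/libfuse tools
fusermount -V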
I have been reading through old mails for this list and the wiki and I
am confused about which machine does the writes for an AFR setup.
Say I have two servers (192.168.0.10, 192.168.0.11) with configs
-
volume locks
type features/posix-locks
subvolumes brick
end-volume
volume
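For the client-side case, a minimal sketch of what the matching client spec might look like; the volume names are hypothetical, the IPs are the two servers mentioned above, and it assumes each server exports a volume named locks as in the fragment above:
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.10
  option remote-subvolume locks
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.11
  option remote-subvolume locks
end-volume

volume afr0
  type cluster/afr
# with AFR loaded here on the client, the client itself performs both writes
  subvolumes brick1 brick2
end-volume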
On Jan 7, 2008 8:41 PM, matthew zeier [EMAIL PROTECTED] wrote:
Anand Avati wrote:
Brandon,
Who does the copy is decided by where the AFR translator is loaded. If you
have AFR loaded on the client side, then the client does the two writes. You
can also have AFR loaded on the server side, and
On Jan 7, 2008 8:48 PM, matthew zeier [EMAIL PROTECTED] wrote:
I'm working on trying to figure out how to do this right now, stumbling
my way through a server config. Using NFS, my 2 servers sit at under
0.40 load most of the time; I would rather add extra load to them than
to my client
On Jan 7, 2008 9:11 PM, Anand Avati [EMAIL PROTECTED] wrote:
Here is a very nice tutorial from Paul England which gives a use case for
server-side replication and high availability -
http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
The configuration
On Jan 7, 2008 9:53 PM, Anand Avati [EMAIL PROTECTED] wrote:
I got some very odd failover problems when using the RR DNS failover
config (when testing failures mid-write, etc.), but that was a few
versions back. I highly recommend using AFR on the client side as a
failover solution. It's
I am just now getting back to being able to spend some time looking
into glusterfs, and I have been half paying attention to the mailing
list.
My question comes down to this: with 2 servers and 10 or so client machines,
should I re-export the glusterfs mounts on the 2 servers via NFS to
the 10 clients
Uh oh. Maybe I missed reading something.
Does NFS currently not work with glusterfs? I was planning on having a
3-server cluster that re-exported the mounted glusterfs directory via NFS
on each of the 3 servers.
On 6/14/07, Brent A Nelson [EMAIL PROTECTED] wrote:
Since many people interested in
, at the
moment (at least with nfs-kernel-server).
On Thu, 14 Jun 2007, Brandon Lamb wrote:
Uh oh. Maybe I missed reading something.
Does NFS currently not work with glusterfs? I was planning on having a
3 server cluster that exported the mounted glusterfs directory to NFS
on each of the 3 servers
Haha, OK, now I have something screwy. Below is my client.vol file.
Now when I mount the cluster and try to create a directory, it comes back with
[EMAIL PROTECTED] glusterfs]# ls
one two
[EMAIL PROTECTED] glusterfs]# mkdir three
mkdir: cannot create directory `three': File exists
[EMAIL
, Brandon Lamb [EMAIL PROTECTED] wrote:
Haha ok now I have something screwy. Below is my client.vol file.
Now when I mount the cluster and try to create a directory it comes back
with
[EMAIL PROTECTED] glusterfs]# ls
one two
[EMAIL PROTECTED] glusterfs]# mkdir three
mkdir: cannot create directory
If I have 3 data servers, and I want to have 2 copies of every file
across the 3 servers using AFR, what would the configs look like,
assuming /home/export is where I want to store my data on all 3
servers?
To get a RAID 1 setup, do I need as many copies as I have servers, or can I
say only 2 copies
replicates to the first
'n' servers if you need a replica 'n'. Currently you need to set up unify
over AFR, and export 2 different volumes from each server, to get the
required functionality. We are hoping to provide this functionality soon.
-bulde
On 6/6/07, Brandon Lamb [EMAIL PROTECTED] wrote
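A sketch of the layout bulde describes (2 copies of every file across 3 servers by layering unify over three AFR pairs). Everything here is hypothetical naming; each server is assumed to export two data directories plus, on one of them, a namespace directory, e.g. under /home/export:
# one protocol/client volume per exported directory
volume s1a
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume export-a
end-volume

volume s1b
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume export-b
end-volume

volume s2a
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume export-a
end-volume

volume s2b
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume export-b
end-volume

volume s3a
  type protocol/client
  option transport-type tcp/client
  option remote-host server3
  option remote-subvolume export-a
end-volume

volume s3b
  type protocol/client
  option transport-type tcp/client
  option remote-host server3
  option remote-subvolume export-b
end-volume

volume ns
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume export-ns
end-volume

# each AFR pair spans two different servers, so every file gets 2 copies
volume pair1
  type cluster/afr
  subvolumes s1a s2b
end-volume

volume pair2
  type cluster/afr
  subvolumes s2a s3b
end-volume

volume pair3
  type cluster/afr
  subvolumes s3a s1b
end-volume

# unify over the three pairs presents one tree to the client
volume unify0
  type cluster/unify
  option scheduler rr
  option namespace ns
  subvolumes pair1 pair2 pair3
end-volume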
Are there plans for a command to check the client config for AFR and
make sure you actually have the right number of copies for whatever
the config specifies?
So if I say *:2 copies and on a 3-server cluster 1 goes down for a
day, it's now likely I only have 1 copy of my files; when I bring back
AWESOME!
=D
On 6/4/07, Anand Avati [EMAIL PROTECTED] wrote:
the 'self heal' coming in for 1.3-STABLE solves exactly what you are
talking about.
thanks,
avati
2007/6/4, Brandon Lamb [EMAIL PROTECTED]:
Are there plans for a command to check the client config for AFR and
make sure you
Hopefully one of the devs can answer this.
I just set up a 2-server cluster. server1 has 144 gigs of free space,
server2 has 78 gigs.
I have both set to do the AFR thing and told the config to replicate *
(all files, right?) with 2 copies.
So what happens when I get to 80 gigs, will I start
I was wondering if there was any input on best practices for setting up
a 2- or 3-server cluster.
My question has to do with where to run glusterfsd (the server) and where
to run glusterfs (mounting as a client).
Should I keep the servers that are actually handling the drives and
exporting the
On 6/3/07, James Porter [EMAIL PROTECTED] wrote:
that is a good question, and how would you compile glusterfs and
glusterfsd?
On 6/3/07, Brandon Lamb [EMAIL PROTECTED] wrote:
I was wondering if there was any input on best practices of setting up
a 2 or 3 server cluster.
My question has
I am a sysadmin for an ISP and we currently have a single-server,
20-SCSI-drive RAID that stores our maildir-format mail data; we have
about 105 gigs of mail.
Our problem is that we have a single point of failure. We have backups,
sure, but if we lost our drive that's over 5 hours to copy from a