Hi all,
I'm working on an analysis of distributed storage systems, including GlusterFS.
Based on information found online and some testing, I found that when a
split-brain or network partition occurs, both sides of the system can accept
new write requests. It seems Gluster cannot guarantee consistency i
Hi all,
I'm attempting to create a 4-node cluster on EC2. I'm fairly new to this
and so may not be seeing something obvious.
- Established passwordless SSH between nodes.
- Edited /etc/sysconfig/network (HOSTNAME=node#.ec2) to satisfy the FQDN requirement.
- Mounted an XFS filesystem: /dev/xvdh on /mnt/brick1.
- Stopped iptables
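Given the steps above, the usual next steps are to probe the other peers into the trusted pool and create the volume. A minimal sketch, assuming the poster's `node#.ec2` hostnames and `/mnt/brick1` bricks; the volume name `gv0` is hypothetical:

```shell
# Run on node1: add the other nodes to the trusted storage pool
gluster peer probe node2.ec2
gluster peer probe node3.ec2
gluster peer probe node4.ec2
gluster peer status   # each peer should show "Peer in Cluster (Connected)"

# Create and start a plain distributed volume across the four bricks
gluster volume create gv0 \
  node1.ec2:/mnt/brick1 node2.ec2:/mnt/brick1 \
  node3.ec2:/mnt/brick1 node4.ec2:/mnt/brick1
gluster volume start gv0
```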
Does anyone have experience using readdir-ahead with Gluster DHT? I need to
do something to improve ls performance; it's so slow it makes the filesystem
almost unusable.
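For reference, the readdir-ahead translator is toggled per volume with `gluster volume set`. A minimal sketch; the volume name `gv0` is hypothetical:

```shell
# Enable the readdir-ahead translator on the volume
gluster volume set gv0 performance.readdir-ahead on

# Confirm it appears under "Options Reconfigured"
gluster volume info gv0
```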
On 02/14/2014 11:56 AM, Florent Bautista wrote:
On 02/14/2014 05:11 PM, Kaleb S. KEITHLEY wrote:
>
> With 4 bricks (brick = node + volume) and "replica 3" you'd only have
> a replica of one of your bricks, the other two would not be protected.
I really don't understand what you call "replica" so...
"replica" is a replica of what? Of a brick? of
On 02/14/2014 10:44 AM, Florent Bautista wrote:
Hi all,
I would like to try GlusterFS, but I really don't understand how
striping and replication are handled.
I have 3 nodes, with 1 brick each.
What can I do with that?
I would like files striped across the 3 servers and also files
replicated on the 3 servers. Why can't I do that?
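For what it's worth, with 3 nodes of 1 brick each, a 3-way replicated volume is the straightforward option; combining striping with replication needs replica-count × stripe-count bricks, which 3 bricks cannot satisfy. A minimal sketch, assuming hypothetical hostnames, brick paths, and volume name:

```shell
# 3-way replication: every file stored on all three nodes
gluster volume create gv0 replica 3 \
  node1:/export/brick1 node2:/export/brick1 node3:/export/brick1
gluster volume start gv0
```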
Then,
3 million small files (many files below 1k in size).
-Original Message-
From: Vijay Bellur [mailto:vbel...@redhat.com]
Sent: Friday, February 14, 2014 10:18 AM
To: Prasad, Nirmal; gluster-users@gluster.org
Subject: Re: [Gluster-users] BDB support
On 02/14/2014 06:44 AM, Prasad, Nirmal wrote:
In 3.4.2 – can I use BDB as the storage backend? Is there a specific
reason this has been removed, and is it planned to be added back?
BDB has not been maintained for a while and hence is not in sync with
the latest code base. What is your use case to h
Could you try again after changing the log-level to DEBUG using:
# gluster volume geo-replication config log-level DEBUG
Also, logs from both master and slave would help.
Thanks,
-venky
On Wed, Feb 12, 2014 at 4:44 PM, John Ewing wrote:
> No, its the latest 3.3 series release.
>
> 3.3.2 on
Thanks for the response. I have been having this problem for a while, but
now that I test again I can't reproduce it... If it comes back up I will
run these commands. Thanks again.
On 2/11/14, 9:22 PM, Venky Shankar wrote:
Could you provide this information from the server (also the
client/server
On 13/02/14 12:39, Vijay Bellur wrote:
On 02/11/2014 08:48 PM, teg...@renget.se wrote:
Hi,
I have a system consisting of 4 bricks, distributed, 3.4.1, and I have
noticed that some of the files are stored on three of the bricks.
Typically a listing can look something like this:
brick1:
12 matches
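To check which brick(s) actually back a given file on a distributed volume, one option is to query the `trusted.glusterfs.pathinfo` xattr through a FUSE mount. A minimal sketch; the mount path and file name are hypothetical:

```shell
# Prints the brick path(s) holding the file's data
getfattr -n trusted.glusterfs.pathinfo /mnt/gv0/somefile
```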