Hi,
I'm relatively new to GlusterFS and was wondering if someone may be able to
help me with my design.
I'm in AU and am in the process of moving a website footprint into AWS in
the next few weeks. As part of this I need a highly available shared
filesystem to be used by various Windows and Linux
I've mounted a gluster 1x2 replica through NFS in oVirt. The NFS share
holds the qcow images of the VMs.
I recently nuked a whole replica brick in a 1x2 array (for numerous other
reasons, including split-brain); the brick self-healed and restored itself to
the same state as its partner.
4 days late
Hello everyone,
very interesting.
I switched to the XFS filesystem, but the situation is the same. For my
purposes both filesystems are perfectly fast.
Then I ran a benchmark with the 'dd' command:
'dd if=/dev/zero of=/share/mounted_glustered_dir/test.img bs=1M count=1000'.
Perfect write speed! Also at
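A side note on the benchmark itself: dd from /dev/zero with default flags largely measures the client's page cache. A variant that forces the data out before reporting a rate (same path as above) gives a more honest number for a network filesystem; this is just a sketch:

  dd if=/dev/zero of=/share/mounted_glustered_dir/test.img bs=1M count=1000 conv=fdatasync
  # or bypass the page cache entirely:
  dd if=/dev/zero of=/share/mounted_glustered_dir/test.img bs=1M count=1000 oflag=direct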
Hello,
I looked at iotop.
One file is written (average speed 250 k/s). After the file is written,
nothing happens for 1 or 2 seconds. Then the next write starts.
The CPU is idle and very bored.
Does anyone have any clue for me?
Hopeful greetings,
Michael
On 27
Hello everyone,
I installed Gluster (version glusterfs 3.4.0qa2) in an openSUSE 12.3
environment. On an ext4 filesystem I created a two-peer (geo-replication)
setup.
When I copy some files, the performance is very slow.
For 500 KB the system needs between 14 and 20 seconds.
This ca
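For checking on a session like that, a rough sketch of the geo-replication status and config commands; the master volume name and slave spec below are hypothetical:

  gluster volume geo-replication myvol slavehost:/data/geo-rep status
  gluster volume geo-replication myvol slavehost:/data/geo-rep config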
How does the new command set achieve this?
old layout (2x2):
rep=2: (h1:/b1 h2:/b1) (h1:/b2 h2:/b2)
new layout (3x2):
rep=2: (h1:/b1 h2:/b1) (h1:/b2 h3:/b1) (h2:/b2 h3:/b2)
The purpose of the new layout is to make sure there is no single point of
failure, as I cannot simply add h3:/b1 and h3:/b2 as a pair.
With replace-bric
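For reference, a minimal sketch of the replace-brick sequence as it exists in 3.3/3.4, using bricks from the layouts above and a hypothetical volume name; whether it handles this particular re-pairing cleanly is exactly the open question:

  # migrate one brick of the second pair onto the new host
  gluster volume replace-brick myvol h2:/b2 h3:/b1 start
  gluster volume replace-brick myvol h2:/b2 h3:/b1 status
  gluster volume replace-brick myvol h2:/b2 h3:/b1 commit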
On Wed, Jul 24, 2013 at 11:00 PM, Vijay Bellur wrote:
> Hi All,
>
> We are considering a 4 month release cycle for GlusterFS 3.5. The
> tentative dates are as under:
>
> 14th Aug, 2013 - Feature proposal freeze
>
> 4th Oct, 2013 - Feature freeze & Branching
>
>
Considering the Feature freeze date
Yes, I’ve seen your blog post, but I don’t understand why I get so many
problems with symlinks. Our use cases are insert and read-only. We don’t update
our files, and we don’t move them either, and only 1 server out of 8 generates
missing links like these. I understand the mappings with the gfid
Have you read,
http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
Jocelyn Hotte wrote:
>Hello, I have a Gluster cluster with the following layout: 4 x 2
>I have one brick whose .glusterfs directory is full of dangling links.
>We've tried cleaning them out and triggering a heal
Also, you mention a changelog; is there anywhere I can view it?
I've stumbled upon the issue because I was trying to do a rebalance of my data,
and it didn't seem to work (it scanned 262 files when I have millions of
files). In the rebalance logs, I’ve seen things like these:
[20
> Hello all,
> DHT's remove-brick + rebalance has been enhanced in the last couple of
> releases to be quite sophisticated. It can handle graceful decommissioning
> of bricks, including open file descriptors and hard links.
>
>
Last set of patches for this should be reviewed and accepted before we
I'm using glusterfs 3.3.2, built on Jul 21 2013 16:38:56
Repository revision: git://git.gluster.com/glusterfs.git
To clear the symlinks, we used the following command in the .glusterfs folder:
symlinks -dr .
We did the healing last week, and my servers all have an uptime of 14 days
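In case it is useful, a rough sketch for inspecting those dangling links by hand instead of deleting them outright; the brick path is hypothetical:

  # list symlinks under .glusterfs whose targets no longer exist
  find /export/brick1/.glusterfs -type l ! -exec test -e {} \; -print
  # cross-check a real file's gfid against the link names
  getfattr -n trusted.gfid -e hex /export/brick1/path/to/some/file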
From: John Mark
What version are you using?
Don't know if there's a command to automatically clean out the directory, but
Gluster seems to think that there's a changelog of files that need healing. Did
you have a replicated server go offline for a bit?
-JM
- Original Message -
> Hello, I have a Gl
Hello, I have a Gluster cluster with the following layout: 4 x 2
I have one brick whose .glusterfs directory is full of dangling links. We've
tried cleaning them out and triggering a heal, but they keep coming back. Is
there something I could try?
- Jocelyn
- Original Message -
> RE: [Gluster-users] Cluster with NFS
> Thanks Marcus.
> I understand ... the native Gluster client has very poor performance ...
I would be curious what your use case is and why performance is bad.
> If I use NFS I could mount on an IP failover address. If one of the fil
So I have figured out my problem: the version of glusterfs on the
hypervisor node was 3.2.7 (from EPEL) and I hadn't realized this.
Installing the gluster-fuse packages for 3.4.0-8 made this all
work just fine. I see that Nova takes care of the mounting of the
volume for you.
Sorry for th
Thanks Marcus.
I understand ... the native Gluster client has very poor performance ...
If I use NFS I could mount on an IP failover address. If one of the file
servers is down, the IP failover switches to the second file server (not
managed by the Gluster client, but only by the network).
Next test, I will in
On 9/27/13 7:34 AM, John Mark Walker wrote:
> This is for Havana, which has the libgfapi-nova integration. In which
> case, it shouldn't need to mount volumes but you're correct that it
> doesn't do iscsi, either.
>
> I haven't tried this personally, so I'll ping someone who has and see
> what the
On 09/27/2013 07:42 PM, Derek Yarnell wrote:
Just checking if the pre-requisites are covered:
1. "setting storage.owner-gid: 165" and "storage.owner-uid: 165" on the
gluster volume for cinder.
2. The appropriate SELinux booleans[1] are set.
[1] Ref: https://bugzilla.redhat.com/show_bug.cgi?id=995139#c0
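For completeness, a sketch of what setting those two options looks like; the volume name "cinder-vol" is hypothetical, and the SELinux boolean shown is my assumption of the relevant one, so check the referenced bug for the exact booleans:

  gluster volume set cinder-vol storage.owner-uid 165
  gluster volume set cinder-vol storage.owner-gid 165
  # assumed boolean: allow qemu/nova to use the FUSE-mounted gluster volume
  setsebool -P virt_use_fusefs on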
> Just checking if the pre-requisites are covered:
> 1. "setting storage.owner-gid: 165" and "storage.owner-uid: 165" on the
> gluster volume for cinder.
>
> 2. The appropriate SELinux booleans[1] are set.
>
> [1] Ref: https://bugzilla.redhat.com/show_bug.cgi?id=995139#c0
Yes I have set the volu
On 27 Sep 2013, at 15:49, DUBOURG Kevin wrote:
> A silly question ... I would like to use NFS on my client instead of the
> native Gluster client.
>
> If I do that, will I not be able to use the cluster functionality?
>
> I think that the data will be replicated between fileserver 1 and the
Hello,
A silly question ... I would like to use NFS on my client instead of the
native Gluster client.
If I do that, will I not be able to use the cluster functionality?
I think that the data will be replicated between fileserver 1 and
fileserver 2, but on my client I can specify only on
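A rough sketch of the two mount styles, with hypothetical hostnames and volume name. The native client learns about both servers from the volfile, so failover is automatic, while an NFS mount is pinned to the single address you give it, which is why people put a floating IP in front of it:

  # native (FUSE) client
  mount -t glusterfs fileserver1:/myvol /mnt/myvol
  # Gluster NFS (NFSv3 only), mounted via a failover/floating IP
  mount -t nfs -o vers=3,mountproto=tcp nfs-vip:/myvol /mnt/myvol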
Hi list,
I'm trying to deploy a GlusterFS cluster to use as the base filesystem
storage for VMware ESXi. I want to test the GlusterFS performance for storing
the virtual machines.
I've created two VMs with only two replicas; both VMs have 16 GB of RAM and 8
CPUs at 2 GHz. I've configured the GlusterFS vo
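For reference, a minimal sketch of what a two-node replica-2 test volume usually looks like; hostnames, brick paths, and the volume name are all hypothetical:

  gluster volume create vmstore replica 2 gfs1:/export/brick1 gfs2:/export/brick1
  gluster volume start vmstore
  gluster volume info vmstore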
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.1/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.1.tar.gz
This release is made off jenkins-release-45
-- Gluster Build System
This is for Havana, which has the libgfapi-nova integration. In which case,
it shouldn't need to mount volumes but you're correct that it doesn't do
iscsi, either.
I haven't tried this personally, so I'll ping someone who has and see what
they say.
-jm
On Sep 27, 2013 1:50 AM, "Maciej Gałkiewicz"
Hello All,
I installed GlusterFS 3.2.7 on Debian Squeeze.
I created a volume named "clients". I am able to mount this volume with NFS or
the glusterfs native client on my client.
I tried to activate quota on the volume, and it's OK:
root@fs2:~# gluster volume info
Volume Name: clients
Type: Distribute
S
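For reference, a short sketch of the quota commands on that volume; the directory path and the limit are hypothetical:

  gluster volume quota clients enable
  # cap a directory inside the volume at 10 GB
  gluster volume quota clients limit-usage /projects 10GB
  gluster volume quota clients list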
On Fri, 2013-09-27 at 00:35 -0700, Anand Avati wrote:
> Hello all,
Hey,
Interesting timing for this post...
I've actually started working on automatic brick addition/removal. (I'm
planning to add this to puppet-gluster of course.) I was hoping you
could help out with the algorithm. I think it's a
Hello all,
DHT's remove-brick + rebalance has been enhanced in the last couple of
releases to be quite sophisticated. It can handle graceful decommissioning
of bricks, including open file descriptors and hard links.
This in a way is a feature overlap with replace-brick's data migration
functionali
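For the curious, a minimal sketch of the decommissioning sequence described above; the volume name and brick pair are hypothetical:

  # start draining data off the bricks being decommissioned
  gluster volume remove-brick myvol h1:/b2 h2:/b2 start
  # watch migration progress
  gluster volume remove-brick myvol h1:/b2 h2:/b2 status
  # once the drain completes, detach the bricks for good
  gluster volume remove-brick myvol h1:/b2 h2:/b2 commit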