Hi Tomo,
That is correct. These are harmless, and if you still want to migrate these
issues, then use the force option.
With regards,
Shishir
- Original Message -
From: "Tomoaki Sato"
To: "Shishir Gowda"
Cc: gluster-users@gluster.org, "Amar Tumballi"
Sent: Thursday, June 28, 2012 10:
Shishir,
Thank you for providing the requested information.
I saw a non-zero 'failures' count in the output of 'gluster volume rebalance
status' and saw the following log messages.
Let me confirm that these are harmless.
/var/log/glusterfs/vol1-rebalance.log:[2012-06-28 12:49:06.381196] W
[dht-rebalan
Hi Tomo,
gluster volume rebalance will re-distribute data only when the destination
server/brick has higher available space (also taking into account the file to
be migrated) than the source server/brick. This is done to maintain a balance
across the bricks/servers.
A force option would bypass
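The space check described above can be sketched as follows. This is an illustrative sketch, not GlusterFS source; the exact comparison in the code base may differ:

```shell
# Illustrative sketch of the rebalance space check: a file is migrated only
# if the destination brick would still have more free space than the source
# brick after the move; "force" bypasses the check entirely.
should_migrate() {
  local src_free=$1 dst_free=$2 file_size=$3 force=${4:-no}
  if [ "$force" = "force" ]; then
    echo yes   # force skips the space comparison
    return
  fi
  if [ $((dst_free - file_size)) -gt "$src_free" ]; then
    echo yes
  else
    echo no
  fi
}

should_migrate 100 500 50          # yes: destination has plenty of room
should_migrate 100 120 50          # no: move would leave destination below source
should_migrate 100 120 50 force    # yes: force bypasses the check
```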
Shishir,
Thank you for your prompt reply.
When should I specify the 'force' option for 'gluster volume rebalance
start'?
-- excerpt from manual page --
8.5.2. Rebalancing Volume to Fix Layout and Migrate Data
After expanding or shrinking a volume (using the add-brick and remove-brick
commands r
Hello there
Our gluster client crashed twice last night. The gluster partition
was unavailable for a long time and I had to remount it manually. In
the gluster log, I saw things like these:
/pending frames:
frame : type(1) op(LOOKUP)
frame : type(1) op(LOOKUP)
patchset: git://git.glus
which OS are you using? I believe 3.3 will install but won't run on
older CentOSs (5.7/5.8) due to libc skew.
and you did 'modprobe fuse' before you tried to mount it...?
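For reference, the check described above would look something like this (server, volume, and mount-point names are assumptions, not from the original report):

```shell
# Load the FUSE kernel module, confirm it is present, then mount the
# volume. Names below are examples only.
modprobe fuse
lsmod | grep fuse
mount -t glusterfs server1:/testvol /mnt/gluster
```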
hjm
On Wed, Jun 27, 2012 at 12:46 PM, Robin, Robin wrote:
> Hi,
>
> Just updated to Gluster-3.3; I can't seem to mount my in
Why don't you have KVM running on the Gluster bricks as well?
We have a 4 node cluster (each with 4x 300GB 15k SAS drives in RAID10), 10
gigabit SFP+ Ethernet (with redundant switching). Each node participates in
a distribute+replicate Gluster namespace and runs KVM. We found this to be
the most e
On Wed, 27 Jun 2012, Brian Candler wrote:
For a 16-disk array, your IOPS is not bad. But are you actually storing a
VM image on it, and then doing lots of I/O within that VM (as opposed to
mounting the volume from within the VM)? If so, can you specify your exact
configuration, including OS an
Hey all, does anyone know if there has been a barclamp module (for the Dell
Crowbar project) which has been built for Gluster? I'm guessing if there is
not one that RedHat might not be such a fan of automating this step, but I'm
crossing my fingers!
Justice London
On Wed, Jun 27, 2012 at 03:07:21PM -0500, Nathan Stratton wrote:
> >I've made a test setup like this, but unfortunately I haven't yet been able
> >to get half-decent performance out of glusterfs 3.3 as a KVM backend. It
> >may work better if you use local disk for the VM images, and within the VM
On Wed, 27 Jun 2012, Brian Candler wrote:
I've made a test setup like this, but unfortunately I haven't yet been able
to get half-decent performance out of glusterfs 3.3 as a KVM backend. It
may work better if you use local disk for the VM images, and within the VM
mount the glusterfs volume fo
On Wed, Jun 27, 2012 at 10:06:30AM +0200, Nicolas Sebrecht wrote:
> We are going to try glusterfs for our new HA servers.
>
> To get full HA, I'm thinking of building it this way:
>
> [ASCII diagram, truncated]
Hi,
Just updated to Gluster-3.3; I can't seem to mount my initial test volume. I
did the mount on the gluster server itself (which works on Gluster-3.2).
# rpm -qa | grep -i gluster
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-server-3.3.0-1.el6.x86_64
glusterfs-3.3.0-1.el6.x86_64
# gluster volu
If you do decide to use 2 switches:
1) For KVM hosts, use 2 NICs, bridge them, and run KVM on the bridge (usually
br0); link each NIC to a different switch.
2) Interlink those switches.
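A minimal sketch of step 1 using bridge-utils (interface names are assumptions; note that putting two NICs on one bridge across interlinked switches forms a loop, so STP must be enabled on the bridge):

```shell
# Hypothetical sketch: bridge br0 over eth0/eth1, one NIC per switch.
# STP on the bridge prevents a forwarding loop across the interlinked
# switches. Interface names are examples only.
brctl addbr br0
brctl stp br0 on
brctl addif br0 eth0
brctl addif br0 eth1
ip link set br0 up
```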
Dan
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org]
On Wed, Jun 27, 2012 at 12:23 AM, Brian Candler wrote:
> On Tue, Jun 26, 2012 at 02:08:42PM -0700, Simon Blackstein wrote:
> >Thanks Brian.
> >Yes, got rid of the .glusterfs and .vSphereHA directory that VMware
> >makes. Rebooted, so yes it was remounted and used a different mount
> >
The 27/06/12, Gerald Brandt wrote:
> Hi,
>
> If your switch breaks, you are done. Put each Gluster server on its own
> switch.
Right. Handling switch failures isn't what I'm most worried about, but I
guess I'll need to add a network link between the KVM hypervisors, too.
Thanks for this tip,
- Original Message -
> From: "Nicolas Sebrecht"
> To: "gluster-users"
> Sent: Wednesday, June 27, 2012 3:06:30 AM
> Subject: [Gluster-users] about HA infrastructure for hypervisors
>
> Hi,
>
> We are going to try glusterfs for our new HA servers.
>
> To get full HA, I'm thinking of b
Hi Tomo,
That is correct. gluster volume rebalance is no longer dependent on the
gluster-fuse package.
With regards,
Shishir
- Original Message -
From: "Tomoaki Sato"
To: "Amar Tumballi"
Cc: gluster-users@gluster.org
Sent: Wednesday, June 27, 2012 1:36:44 PM
Subject: [Gluster-users] glu
Amar,
Let me confirm that the gluster-fuse package is not required for NFS-server use of
GlusterFS-3.3.0, as 'gluster volume rebalance' does not use FUSE.
Regards,
Tomo
(2011/08/29 15:09), Amar Tumballi wrote:
1. The fuse module is required for 'gluster volume rebalance'.
2. 'gluster volume r
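For an NFS-only client in this setup, the mount needs no gluster packages at all. A hedged example, with server, volume, and mount-point names assumed (Gluster's built-in NFS server speaks NFSv3):

```shell
# Mount a Gluster volume via its built-in NFS (v3) server; no
# glusterfs-fuse package is needed on the client. Names are examples.
mount -t nfs -o vers=3,nolock server1:/testvol /mnt/gluster
```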
Hi,
We are going to try glusterfs for our new HA servers.
To get full HA, I'm thinking of building it this way:
[ASCII diagram, truncated: two interconnected KVM hypervisors]
On Tue, Jun 26, 2012 at 02:08:42PM -0700, Simon Blackstein wrote:
>Thanks Brian.
>Yes, got rid of the .glusterfs and .vSphereHA directory that VMware
>makes. Rebooted, so yes it was remounted and used a different mount
>point name. Also got rid of attribute I found set on the root: