> gluster volume add-brick vmware2 replica 2 gluster01:/mnt/disk11/vmware2
> gluster03:/mnt/disk11/vmware2 gluster02:/mnt/disk11/vmware2
> gluster04:/mnt/disk11/vmware2
>
>
> Starting fix-layout:
> gluster volume rebalance vmware2 fix-layout start
>
> Starting rebalance:
>
something you'll need !
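The quoted sequence is the usual expansion pattern: add bricks, fix the layout, then rebalance. A sketch reusing the hostnames and brick paths from the quote (adjust for your own cluster; this needs a live gluster cluster to run, and note that later in this thread adding bricks to a sharded VM volume is reported to cause corruption on 3.7.x):

```shell
# Add two new replica-2 pairs of bricks to the volume:
gluster volume add-brick vmware2 replica 2 \
    gluster01:/mnt/disk11/vmware2 gluster03:/mnt/disk11/vmware2 \
    gluster02:/mnt/disk11/vmware2 gluster04:/mnt/disk11/vmware2

# Spread the directory layout over the new bricks without moving data:
gluster volume rebalance vmware2 fix-layout start

# Then migrate existing files onto the new bricks:
gluster volume rebalance vmware2 start

# Check progress:
gluster volume rebalance vmware2 status
```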
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Apparently that bug was fixed recently, so latest versions should be
pretty stable, yeah.
things like hosting a single
huge file that would be bigger than one of your replicas.
We use it for VM disks, as it decreases heal times a lot.
keep clearing caches
and stuff like that all the time, it's a whole lot of small files load).
Thanks
> We fixed this (thanks to Satheesaran for recreating the issue and to
> Raghavendra G and Pranith for the RCA) as recently as last week.
> The bug was in DHT-shard interaction.
Ah, that's a great news !
I'll give the next releases a try for our next cluster then, thanks for
t
>
> Sure sounds like what corrupted everything for me a few months ago :).
Not quite though, the corruption occurred before the rebalance in my case.
Maybe you just didn't realise before starting the rebalance ?
Not ideal, but not a big deal, at least yet.
Since VMs are
easy enough to live migrate from one volume to another, it seemed like the
easiest solution.
> One more actually I don't know how you are handling HA but with glusterfs
> but I believe that if you are not using NFS Ganesha you have single point of
> failure everytime , isn't it ?
Not if they are using the gluster fuse client, which is probably the case.
>
>
> I really hope this is not true.
>
It is true if you are using the fuse client / libgfapi.
I guess you could use NFS then, which only talks to
the one server. But having that do any HA is a bit of a
nightmare :/
Don't hesitate to post the solution here if you do find it, I think
you're
neither the first nor the last person to ask that question here.
you need to
find a way on oVirt to redirect the output of the qemu process somewhere.
That's where you'll find the libgfapi logs.
Never used oVirt so I can't really help on that :/
VM, which can be annoying to get depending on what
you are using.
ving the disk and before erasing the snapshots.
> Otherwise they’d crash when removing the snapshot, something in qemu not
> quite right I imagine.
>
All of this sounds weird, I've never had a problem like this. But I'm not using
ovirt, maybe it's a problem with that.
able to add the two arbiters at once.
If you can afford downtime creating a new volume is clearly a lot simpler :)
ng you have access to.
I'm betting you'll see your error there.
uses some qemu command behind the scenes so you can do it even
if you aren't using proxmox though, just need to figure out the syntax.
> So, is the setup wrong or does gluster not provide high availability?
How exactly is it setup ?
libgfapi ? fuse ? NFS mount ?
It should work, we're using proxmox at work (which uses KVM) with gluster
and it does work well. What version of gluster are you using ?
n 3.7.12 so maybe it's better in newer releases.
> - or change the shard setting inplace? (I don't think that would work)
It will only affect new files, so you'll need to copy the current images
to new names. That's why I was speaking about live migrating the disk
to the same volume, that's how I did it last year.
ly hours depending on the size. That's just not acceptable. Unless that's
related to the bug in the heal algo and it's been fixed ? Not sure
type: auto
>
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
>
ay
of choosing the defaults is very very dangerous, and
should probably be written in big fat red letters in the doc.
Maybe it is and I just missed it though, but I'm pretty sure
I'm not the only one.
re the VM will work a lot better during the live migration
using nfs than sshfs :)
> They have different ovirt mgmt.
I don't use ovirt, but when I need to migrate
a VM between two versions I just mount the other
cluster using NFS instead of fuse / libgfapi.
Then I just use the live disk migration of KVM
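For plain KVM/libvirt, the move described above can be sketched like this (the domain name, volume name, and paths are hypothetical, and depending on your libvirt version `blockcopy` may need extra flags or a transient domain):

```shell
# Mount the destination cluster over Gluster's built-in NFS (NFSv3):
mount -t nfs -o vers=3 newcluster01:/vmstore /mnt/newcluster

# Live-copy the running VM's disk onto the NFS mount, then pivot to it:
virsh blockcopy myvm vda /mnt/newcluster/myvm.qcow2 --wait --pivot
```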
Well, I was wrong, /etc/init.d/glusterfs-server stop then start did solve
the problem. Guess it's really not the same problem as the one my colleague
on 3.8 is having :)
On Sun, Jan 15, 2017 at 03:06:54PM +0100, Kevin Lemonnier wrote:
> Hi,
>
> Our monitoring has been alerting a
down the leak ? It's 3.7.12, maybe
it's already been solved ? I know a colleague is having a similar problem
on other servers with up to date 3.8, but they aren't doing VM hosting
so maybe it's unrelated ..
o all the bricks at once. I'll lose 1/3 of the
bandwidth for the machines running on those new proxmox nodes, but that's
really not a big deal, bandwidth never was the problem, latency is, and
this solution won't change latency at all.
Or am I missing something key here ?
and address of the different bricks, not to
access the data. So if the node was peer probed, it should have the
volume information (gluster volume info would show the volume) even
if no bricks are actually hosted on localhost, am I wrong ?
I'm pretty sure I already tested this a while a
o through the trouble
of setting all of this up.
(I also have one of the original servers as backup volfile, but who knows
if that'll always be up, I like using localhost since if localhost is down
I have other problems anyway :D)
Thanks !
n, to allow a non-root user to run
"/usr/sbin/gluster volume profile info cumulative" ? Or any other way
to get the total byte read and written on a volume for a user maybe ?
Thanks
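One hedged way to do this, assuming sudo is acceptable (the user name "monitoring" and the drop-in file path are made up for the example):

```shell
# /etc/sudoers.d/gluster-profile
# Allow an unprivileged monitoring user to read profile counters only:
monitoring ALL=(root) NOPASSWD: /usr/sbin/gluster volume profile * info cumulative
```

The total bytes read and written can then be scraped from that command's output by the non-root user.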
e will always have a way to know how to heal,
and should never go RO.
> I've configured two test gluster servers (RHEL7) running glusterfs 3.7.18.
> [...]
> Any ideas what I'm doing wrong?
I'd say you need 3 servers. GlusterFS goes RO without a quorum, and one server
isn't a quorum. That's to avoid split brains.
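If a third full server isn't available, a lightweight arbiter node is enough to form that quorum. A sketch with hypothetical hostnames and brick paths:

```shell
# The third replica holds only metadata, so the arbiter box needs
# very little disk compared to the two data servers:
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/testvol server2:/bricks/testvol arbiter1:/bricks/testvol
gluster volume start testvol
```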
onfig
somewhere
but that's it :)
?
>
Hi,
Haven't used Solaris in years, but as far as I know it doesn't support FUSE,
which makes it impossible to run gluster's client, sorry.
ith my config,
and since I have no use for the new feature why take the risk to update ?
Interesting comments on MooseFS, I've seen it but never tried it yet
because of the single server managing the cluster, seems like a huge
risk. Guess there must be ways to have that role failover or
need normal master / slave replication, it's great
for that, but to run VMs I really don't like it. One of our clients has a
proxmox 3 cluster with DRBD 8, every time there is a little problem with
the network it's horrible to fix, compared to gluster.
data.
Not really shocked there. Guess the cli should warn you when you try re-setting
the option though, that would be nice.
corruption I'll lose everything
We've had a lot of problems in the past, but at least for us 3.7.12 (and 3.7.15)
seems to be working pretty well as long as you don't add bricks. We started doing
multiple little clusters and abandoned the idea of one big cluster, had no
issues si
't matter, we used to be running VMs off that.
ywhere it makes sense to
use it, one less thing to maintain.
Two processes accessing the same file won't end up well I think.
Dovecot has its own replication mechanism, you should probably
take a look at dsync.
autofs/gluster-data /gluster-data
That way you'd end up with pretty much what you want.
tofs
In /etc/auto.map.d/master.autofs :
applicatif -fstype=glusterfs,defaults,_netdev,backupvolfile-server=other_node localhost:/applicatif
With this, /mnt/autofs/applicatif will automatically mount the /applicatif volume.
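For completeness, a sketch of both halves of that autofs setup; only the map file is given above, so the auto.master entry here is an assumption:

```shell
# /etc/auto.master -- mount entries from master.autofs under /mnt/autofs:
#   /mnt/autofs  /etc/auto.map.d/master.autofs
#
# /etc/auto.map.d/master.autofs -- one line per gluster volume:
#   applicatif  -fstype=glusterfs,defaults,_netdev,backupvolfile-server=other_node  localhost:/applicatif
```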
but sleep 10 won't.
Now if sleep 10 works for you great, but I guess you wouldn't be posting
here if it did :D
workaround.
ox unfortunately, you need to start qemu by hand
to get those as far as I know.
ok.
>
Ha, glad you could reproduce this ! (Well, all things considered)
Looks very much like what I had indeed. So it's still a problem
in recent versions, glad I didn't try again then.
Thanks for taking the time, let's hope that'll help them :)
gluster volume add-brick VMs host1:/brick host2:/brick host3:/brick force
(I have the same without force just before that, so I assume force is needed)
you start a
rebalance, but I didn't even get to that point :(
That's the only bug I've experienced so far in 3.7.12, everything else
(including increasing the replica count) seems to be working perfectly fine.
That's why I'm still installing that version, even though it&
ied the rebalance when everything came crashing down, hoping that
might fix it, but it didn't.
3.7.12. It took a while to finally get a version that worked for us,
so we stayed on it once we got it. Maybe that problem has already been fixed in
later versions.
't really mind if it heals every time there is a little lag ..
ble test it
on a non-production environment first. Still, hard to replicate the
same load for tests ..
un into the problem someday too, we're just having
a lot more crashes than the average user for reasons I mentioned in
another mail.
ice it, but when you have hundreds of servers the tiniest
little problem on their side has to kill a few of them, so you can't miss it.
ered,
they belong to different clients.
It's basically like a VPS service, with the clients running whatever
they want on their VMs, and 95% of the time that's a very classic LAMP.
t VMs, but that doesn't seem like an ideal sales speech :
"You can do whatever you want on the VM, but try to avoid MySQL, it crashes".
to test and give us feedback, that
>would be very helpful and we can move very quickly to get it in GA state.
>On Tue, Sep 27, 2016 at 5:13 PM, André Bauer wrote:
>
> Dito...
> Am 24.09.2016 um 17:29 schrieb Kevin Lemonnier:
> > On Sat, Sep 24, 2016 a
dn't
work with you :). We are using 3.7.12 and 3.7.15 though, didn't try 3.8 yet.
support things like prestashop or wordpress is to either run a VM on top of
glusterFS, which isn't always possible, or just use something else. I really
hope the improvements Pranith mentioned will help in that regard, I'd love
to use GlusterFS for more than just hosting VMs.
At least that
proves VM hosting works pretty well now though !
Now I can't try tiering, unfortunately I don't have the option of having hardware for
that, but maybe that would indeed solve it if it makes looking up lots of tiny files
quicker.
case as I was saying I ended up blocking everything with iptables, that
works for this cluster but doesn't for others,
so that's not a good fix for me. I wish I could just tell gluster to bind on a
specific IP.
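glusterd itself can be pinned to one address in /etc/glusterfs/glusterd.vol; whether the brick processes honour it too depends on the version, so treat this as an untested sketch:

```shell
# /etc/glusterfs/glusterd.vol -- add inside the "volume management" block
# (10.0.0.1 is a hypothetical private address):
#   option transport.socket.bind-address 10.0.0.1
```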
ivate addresses, is that doable ?
A quick google search shows people doing it by editing the volfile, but I suspect
that's an old method right ? There must be a way to tell gluster to just not listen
on the public IP.
s ?
>
No, looks like reject * does reject all without checking the allow ..
be authorized by default, but looks
like it wasn't !
I don't need to authorize the domains right, just the IPs ?
e
network.remote-dio: enable
cluster.server-quorum-type: server
cluster.quorum-type: auto
performance.readdir-ahead: on
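The option set shown above maps to plain `gluster volume set` calls; a sketch with a hypothetical volume name:

```shell
gluster volume set vmstore network.remote-dio enable
gluster volume set vmstore cluster.server-quorum-type server
gluster volume set vmstore cluster.quorum-type auto
gluster volume set vmstore performance.readdir-ahead on
```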
t have much space on this volume,
but for one of the VM we'd need to recover that disk, see if we may be able
to extract some data from it with some time.
lso provide output of `gluster volume info`.
>-Krutika
>On Tue, Sep 6, 2016 at 4:29 AM, Kevin Lemonnier
>wrote:
>
> > - What was the original (and current) geometry? (status and info)
>
> It was a 1x3 that I was trying to bump to 2x3.
> >
so long, no choice ..
>On 6/09/2016 8:00 AM, Kevin Lemonnier wrote:
>
> I tried a fix-layout, and since that didn't work I removed the brick (start
> then commit when it showed
> completed). Not better, the volume is now running on the 3 original bricks
> (replica 3) but
those shards do exist
(and are bigger) on the "live" volume. I don't understand why now that I have
removed the new bricks
everything isn't working like before ..
On Mon, Sep 05, 2016 at 11:06:16PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> I just added 3 bricks to a vo
Hi,
I just added 3 bricks to a volume and all the VMs are doing I/O errors now.
I rebooted a VM to see and it can't start again, am I missing something ? Is
the rebalance required
to make everything run ?
That's urgent, thanks.
plica 2, removing a brick from
the first replica,
then add 2 of the new servers as disperse (at that point the volume would be
2x2),
then go up to replica 3 adding the third one plus the one I removed earlier.
That should work, right ? There is no other "better" way of doing it ?
Th
t'd be okay.
during the volume creation.
It is, but existing shards won't be touched : You'll need to move the file out
and back
in to apply the new shard size.
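A sketch of that procedure; volume, mount point, and image names are hypothetical (and per the exchange elsewhere in this thread, "64MB" is the accepted unit format):

```shell
# Only files created after this will use the new shard size:
gluster volume set vmstore features.shard-block-size 64MB

# Re-shard an existing image by writing it out as a new file, then renaming:
cp /mnt/vmstore/vm1.qcow2 /mnt/vmstore/vm1.qcow2.resharded
mv /mnt/vmstore/vm1.qcow2.resharded /mnt/vmstore/vm1.qcow2
```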
ter.server-quorum-type: server
cluster.quorum-type: auto
performance.readdir-ahead: on
Thanks
s the same bug.
ster.org/pub/gluster/glusterfs/3.7/3.7.12/Debian/jessie/apt
jessie main
Should include the patch, right ?
-06-29 11:26:52.485443] C [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-5: server 172.16.0.50:49153 has not responded in the last 42 seconds, disconnecting.
anymore, at least not in the latest proxmox.
>
> Which NFS server are you using? the std one built into proxmox/debian?
> how do you handle redundancy?
I mean the one in gluster, I added the gluster volume as NFS in proxmox
instead of as gluster.
On Wed, Jun 29, 2016 at 03:40:35PM +0530, Krutika Dhananjay wrote:
>Try 64MB?
>
Yep, that was it ! Thanks.
ber format "64M" in option "shard-block-size"
Was there a change to the format of the shard sizes I missed ?
> cluster.shd-max-threads:4
> cluster.locking-scheme:granular
So you had no problems before setting that ? I'm currently re-installing my test
servers, as you can imagine really really hoping 3.7.12 fixes the corruption
problem,
I hope there isn't a new horrible bug ..
most identical
> to the plain qemu (this looks pretty strange to me)
I doubt that, it works well but I do see a difference when I rsync stuff in the VMs.
Not a huge deal though, the bandwidth isn't of great interest to me, the latency is.
esting, it's a lot better
and the client was happy.
I don't know how much the update contributed, but I assume the ping is
playing a big part in this.
I could send you the bonnie++ results from those two tests tomorrow if you want,
kept that at work.
I'll probably test that again with
irm that it's been solved !
Should be out hopefully in the next few days last I heard.
>Just a thought - do you have bitrot detection enabled? (I don't)
Yes, I did configure it to do a daily scrub when I reinstalled last time,
when I was wondering if maybe it was hardware. Doesn't seem like it detected
anything.
error
as usual, just attached a screen of the VM's console, might help.
I can see that every time the VM powers down, GlusterFS complains about an inode still
active, might it be the problem ?
Thanks for the help !
On Wed, May 25, 2016 at 04:10:02PM +0200, Kevin Lemonnier wrote:
> Just
lly ran into a 'read-only filesystem' issue, then it could
> possibly because of a bug in AFR
> that Pranith recently fixed.
> To confirm if that is indeed the case, could you tell me if you saw
> the pause after a brick (single brick) was
> down wh
brick) was
>down while IO was going on?
>
>-Krutika
>On Wed, May 25, 2016 at 1:28 PM, Kevin Lemonnier
>wrote:
>
> > Whats the underlying filesystem under the bricks?
>
> I use XFS, I read that was recommended. What are you using ?
>
>Whats the underlying filesystem under the bricks?
I use XFS, I read that was recommended. What are you using ?
Since yours seems to work, I'm not opposed to changing !
directsync is better than nothing, but still doesn't solve the problem.
Really can't use this in production, the VM goes read only after a few
days because it saw too many I/O errors. Must be missing something
On Tue, May 24, 2016 at 12:24:44PM +0200, Kevin Lemonnier wrote:
> S
mailing list, did I miss some doc saying you should be using
directsync with glusterfs ?
On Tue, May 24, 2016 at 11:33:28AM +0200, Kevin Lemonnier wrote:
> Hi,
>
> Some news on this.
> I actually don't need to trigger a heal to get corruption, so the problem
> is not the hea
was using local storage would corrupt too, right ?
Could I be missing some critical configuration for VM storage on my gluster
volume ?
On Mon, May 23, 2016 at 01:54:30PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> I didn't specify it but I use "localhost" to add the st
, seems to be resolved. I hope it won't do that again
though
Thanks
On Mon, May 23, 2016 at 07:05:56PM +0200, Kevin Lemonnier wrote:
> On Mon, May 23, 2016 at 04:06:06PM +0100, Anant wrote:
> >Have you tried to stop all services of gluster ?? Like - glusterd ,
>
1 - 100 of 131 matches