0-gluster0-replicate-0: background meta-data data self-heal failed on
/some-path-here/disk0
[2014-04-23 13:02:20.253380] E
[afr-self-heal-common.c:2212:afr_self_heal_completion_cbk]
0-gluster0-replicate-1: background meta-data data self-heal failed on
/some-other-path/disk0
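A quick way to see what the self-heal daemon still has pending is the heal commands. A minimal sketch; the volume name gluster0 is read off the "0-gluster0-replicate-*" prefix in the log above, so adjust to your setup:

  # entries the self-heal daemon still considers pending
  gluster volume heal gluster0 info
  # entries where a heal attempt actually failed
  gluster volume heal gluster0 info heal-failed
  # ask for a full re-crawl and heal of the volume
  gluster volume heal gluster0 full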
--
Kind regards
at 2:57 PM, Hoggins! wrote:
> Hello,
>
> Simply stop the GlusterFS services first on the brick you intend to
> shut down, and everything will be fine.
> I also experienced issues when I rebooted a brick without stopping the
> services first.
>
> Hoggins!
>
> Le 03/12/2
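For context, "stopping the services" on a brick node in that era amounts to something like the following before the reboot. A rough sketch only; the init script is called glusterd on RHEL-style systems and glusterfs-server on Debian, so adjust accordingly:

  # stop the management daemon
  service glusterd stop
  # stop the brick, NFS and self-heal daemon processes it spawned
  killall glusterfsd
  killall glusterfs
  # now the node can be rebooted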
Hello.
We've got a 2x2 volume with two servers, holding Xen images for about
200 instances.
Yesterday we wanted to reboot one of the servers into a new kernel and
a new version of Gluster, and simply rebooted the server in question.
This caused problems for some of the Xen instances, giving I/O errors
to say that the first one can't use cache at all,
while the second one uses all the cache there is.
Try running the last one with "conv=fsync".
This will sync the file at the end of writing, ensuring that when dd
returns, the data should be on disk. This will probably even out the
run times.
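To make that concrete, the two kinds of runs being compared look roughly like this. A sketch only; the path and sizes are made up, and I'm assuming the uncached run used oflag=direct:

  # bypasses the page cache entirely, every block hits the brick before dd moves on
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 oflag=direct
  # uses the page cache, but calls fsync() before exiting, so the timing includes the flush
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fsync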
http://torbjorn-dev.trollweb.net/gluster-3.4.0alpha2-debs/.
If you want to fix the logrotate for 3.3, one option is to simply
overwrite the existing one with this one:
https://raw.github.com/torbjorntorbjorn/glusterfs-debian/3.4.0-squeeze/debian/glusterfs-common.logrotate.
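On a Debian box that overwrite would look something like this. A sketch only; I'm assuming the packaged config lives at /etc/logrotate.d/glusterfs-common, so check the path on your system before overwriting anything:

  wget -O /etc/logrotate.d/glusterfs-common \
    https://raw.github.com/torbjorntorbjorn/glusterfs-debian/3.4.0-squeeze/debian/glusterfs-common.logrotate
  # dry run to confirm the new config parses cleanly
  logrotate -d /etc/logrotate.d/glusterfs-common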
--
Kind regards
Torbjørn Thorsen
> >> supportability is at the top of the pile.
>> > >
>> > > I have to agree; GlusterFS has been in use here in production for a
>> > > while, and while it mostly works, it's been fragile and documentation
>> > > has been disappointing. Despite 3.
lpha2.tar.gz.
Some minor adjustments were made, as the versions in the original
packaging seemed to refer to packages from Debian Wheezy.
--
Kind regards
Torbjørn Thorsen
Developer / operations engineer
Trollweb Solutions AS
- Professional Magento Partner
www.trollweb.no
Daytime phone: +47 512
d see if I still see the same
behavior from write-behind.
--
Kind regards
Torbjørn Thorsen
Developer / operations engineer
Trollweb Solutions AS
- Professional Magento Partner
www.trollweb.no
Daytime phone: +47 51215300
Phone evenings/weekends: for customers with a service agreement
Visiting address: Luramyrve
On Thu, Mar 7, 2013 at 9:32 AM, Torbjørn Thorsen wrote:
> After a while the loop device and Xen instance are slow, and the profile
> output tells me it's only seeing 4kb writes.
> Toggling the write-behind translator off and on again gets me back to
> the initial transfer rate.
>
> I
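The toggle itself is just flipping the volume option; performance.write-behind is the option that controls the translator, and the volume name here is made up:

  # turn the write-behind translator off and back on for the volume
  gluster volume set gluster0 performance.write-behind off
  gluster volume set gluster0 performance.write-behind on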
On Tue, Mar 5, 2013 at 1:57 PM, Torbjørn Thorsen wrote:
> On Fri, Mar 1, 2013 at 7:01 PM, Brian Foster wrote:
>> On 03/01/2013 11:48 AM, Torbjørn Thorsen wrote:
...
> A degraded loop device without an open fd will be fast after a toggle
> of write-behind.
> However, it seems th
On Fri, Mar 1, 2013 at 7:01 PM, Brian Foster wrote:
> On 03/01/2013 11:48 AM, Torbjørn Thorsen wrote:
>> On Thu, Feb 28, 2013 at 4:54 PM, Brian Foster wrote:
>> All writes are done with sync, so I don't quite understand how cache
>> flushing comes in.
>>
>
> F
>> principle I like the idea of
>> a virtual distribution/replication system which sits on top of existing
>> local filesystems. But for storage, I need something where operational
>> supportability is at the top of the pile.
>>
>> Regards,
>>
>> Brian.
>
>
>
>
  block size    reads    writes
      ...            0       805
    65536b+          0     19086
   131072b+         22       851
The script and gluster profile output can be
downloaded at http://torbjorn-dev.trollweb.net/gluster/.
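For anyone wanting to reproduce the numbers, per-block-size read/write counts like the above come out of the volume profiling facility; roughly (the volume name is made up):

  gluster volume profile gluster0 start
  # ... run the workload against the mount ...
  gluster volume profile gluster0 info
  gluster volume profile gluster0 stop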
To me it seems that a fresh loop device does mostly 64kb writes,
and at some point during a
On Wed, Feb 27, 2013 at 9:46 PM, Brian Foster wrote:
> On 02/27/2013 10:14 AM, Torbjørn Thorsen wrote:
>> I'm seeing less-than-stellar performance on my Gluster deployment when
>> hosting VM images on the FUSE mount.
>> If we use a file on the gluster mount as backing
e my
situation isn't unique.
However, I'm under the impression that others are using a similar setup
with much better performance. [1]
[1]:
* http://www.gluster.org/pipermail/gluster-users/2012-January/032369.html
* http://www.gluster.org/pipermail/gluster-users/2012-July/033763.html
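For context, the kind of setup being discussed is roughly the following; all paths and names are made up for illustration:

  # VM image file living on the GlusterFS FUSE mount
  dd if=/dev/zero of=/mnt/gluster/images/vm01.img bs=1M count=10240
  # expose it as a block device so Xen can use it as a disk
  losetup /dev/loop0 /mnt/gluster/images/vm01.img
  # the guest config then points at the loop device, e.g. 'phy:/dev/loop0,xvda,w'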
--
Kind regards
installed
>> glusterfs-rdma.x86_64 3.3.1-1.el6 installed
>> glusterfs-server.x86_64 3.3.1-1.el6 installed
>>
>> Thanks.
>
github.com/gluster/glusterfs/blob/master/extras/glusterfs-logrotate
>
--
Kind regards
Torbjørn Thorsen
r binaries! (gluster, glusterfs, glusterd)
--
Kind regards
Torbjørn Thorsen
Developer / operations engineer
Trollweb
This is a self-reply.
I've not been able to reproduce the situation, and it just might be my
very own fault.
It's quite possible that a misunderstanding and monitoring issues were the
root causes of my problem.
On Tue, Nov 27, 2012 at 4:44 PM, Torbjørn Thorsen wrote:
> Hey, all.
>
>
web.net:/srv/gluster/brick0
Brick3: srv18.trollweb.net:/srv/gluster/brick1
Brick4: srv17.trollweb.net:/srv/gluster/brick1
--
Kind regards
Torbjørn Thorsen
Trollweb Solutions AS