Re: [Gluster-users] Reconstructing files from shards

2018-04-23 Thread Alessandro Briosi
On 22/04/2018 11:39, Gandalf Corvotempesta wrote: > On Sun 22 Apr 2018, 10:46 Alessandro Briosi <a...@metalit.com > <mailto:a...@metalit.com>> wrote: > > Imho the easiest path would be to turn off sharding on the volume and > simply do a copy o
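The "copy it back together" idea discussed in this thread can be sketched roughly as follows. This is a hedged illustration on plain stand-in files, not the exact on-disk layout: the assumption (based on how the shard translator stores pieces on a brick) is that the base file holds the first block and the remaining pieces sit under a `.shard` directory named `<GFID>.1`, `<GFID>.2`, and so on; `GFID` below is a placeholder.

```shell
#!/bin/sh
# Sketch: rebuild a sharded file by concatenating its pieces in order.
# Demonstrated on stand-in files; on a real brick the pieces would be the
# base file (first block) plus .shard/<GFID>.1, .shard/<GFID>.2, ...
mkdir -p demo/.shard
printf 'AAAA' > demo/base            # stands in for the base file (block 0)
printf 'BBBB' > demo/.shard/GFID.1   # stands in for the next shard
printf 'CC'   > demo/.shard/GFID.2   # the last shard may be shorter
# Concatenate the pieces in numeric order to reconstruct the original file.
cat demo/base demo/.shard/GFID.1 demo/.shard/GFID.2 > demo/reconstructed
```

Sparse regions (shards that were never written) would need extra care in practice; the sketch above only covers fully written pieces.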

Re: [Gluster-users] Reconstructing files from shards

2018-04-22 Thread Alessandro Briosi
On 20/04/2018 21:44, Jamie Lawrence wrote: Hello, So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (Don't know how it happened; it may have been an accidental run of an old script.) So it has been happily sharding behind our backs

[Gluster-users] Another VM crashed

2018-01-05 Thread Alessandro Briosi
Hi all, I still experience VM crashes with glusterfs. The VM I had problems with (it kept crashing) was moved away from gluster and has had no problems since. Now another VM is doing the same. It just shuts down. gluster is 3.8.13. I know now you are on 3.10 and 3.12, but I had troubles upgrading

Re: [Gluster-users] create volume in two different Data Centers

2017-10-24 Thread Alessandro Briosi
On 24/10/2017 12:45, atris adam wrote: > thanks for answering. But I have to set it up and test it myself and > record the result. Can you guide me a little more? The problem is, one > valid IP exists for each data center, and each data center has 3 > servers. How should I config the network in

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Alessandro Briosi
On 25/05/2017 15:24, Joe Julian wrote: > You'd want to see the client log. I'm not sure where proxmox > configures those to go. > > On May 24, 2017 11:57:33 PM PDT, Alessandro Briosi <a...@metalit.com> > wrote: > > On 19/05/2017 17:27, Alessandro Briosi wrote

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Alessandro Briosi
On 25/05/2017 15:24, Joe Julian wrote: > You'd want to see the client log. I'm not sure where proxmox > configures those to go. This is all the content of glusterfs/cli.log (the previous file cli.log.1 is from 5 days ago) [2017-05-25 06:21:30.736837] I [cli.c:728:main] 0-cli: Started running

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-25 Thread Alessandro Briosi
On 19/05/2017 17:27, Alessandro Briosi wrote: > On 12/05/2017 12:09, Alessandro Briosi wrote: >>> You probably should open a bug so that we have all the troubleshooting >>> and debugging details in one location. Once we find the problem we can >>> move

Re: [Gluster-users] Fwd: Re: VM going down

2017-05-19 Thread Alessandro Briosi
On 12/05/2017 12:09, Alessandro Briosi wrote: >> You probably should open a bug so that we have all the troubleshooting >> and debugging details in one location. Once we find the problem we can >> move the bug to the right component. >> https://bugzilla.redhat.c

[Gluster-users] Fwd: Re: VM going down

2017-05-12 Thread Alessandro Briosi
On 12/05/2017 11:36, Niels de Vos wrote: > On Thu, May 11, 2017 at 03:49:27PM +0200, Alessandro Briosi wrote: >> On 11/05/2017 14:09, Niels de Vos wrote: >>> On Thu, May 11, 2017 at 12:35:42PM +0530, Krutika Dhananjay wrote: >>>> Niels, >>>> >

Re: [Gluster-users] VM going down

2017-05-11 Thread Alessandro Briosi
On 11/05/2017 16:15, Pranith Kumar Karampuri wrote: > On Thu, May 11, 2017 at 7:19 PM, Alessandro Briosi <a...@metalit.com > <mailto:a...@metalit.com>> wrote: > > On 11/05/2017 14:09, Niels de Vos wrote: >> On Thu, May 11, 2017 at 12:35:42PM

Re: [Gluster-users] VM going down

2017-05-11 Thread Alessandro Briosi
On 11/05/2017 14:09, Niels de Vos wrote: > On Thu, May 11, 2017 at 12:35:42PM +0530, Krutika Dhananjay wrote: >> Niels, >> >> Alessandro's configuration does not have shard enabled. So it has >> definitely got nothing to do with shard not supporting the seek fop. > Yes, but in case sharding

Re: [Gluster-users] VM going down

2017-05-11 Thread Alessandro Briosi
On 11/05/2017 09:05, Krutika Dhananjay wrote: > Niels, > > Alessandro's configuration does not have shard enabled. So it has > definitely got nothing to do with shard not supporting the seek fop. > > Copy-pasting volume-info output from the first mail: > Hi, I know sharding is not enabled,

Re: [Gluster-users] VM going down

2017-05-10 Thread Alessandro Briosi
On 09/05/2017 23:41, Lindsay Mathieson wrote: On 10/05/2017 12:59 AM, Alessandro Briosi wrote: Also the seek errors were there before, when there was no arbiter (only 2 replicas). And finally, the seek error is triggered when the VM is started (at least the one in the logs). Could

Re: [Gluster-users] VM going down

2017-05-09 Thread Alessandro Briosi
On 09/05/2017 16:10, Niels de Vos wrote: > ... >>> client from >>> srvpve2-162483-2017/05/08-10:01:06:189720-datastore2-client-0-0-0 >>> (version: 3.8.11) >>> [2017-05-08 10:01:06.237433] E [MSGID: 113107] [posix.c:1079:posix_seek] >>> 0-datastore2-posix: seek failed on fd 18 length

Re: [Gluster-users] VM going down

2017-05-08 Thread Alessandro Briosi
On 08/05/2017 12:38, Krutika Dhananjay wrote: > The newly introduced "SEEK" fop seems to be failing at the bricks. > > Adding Niels for his inputs/help. > Don't know if this is related, though: the SEEK is done only when the VM is started, not when it's suddenly shut down. Though it's an odd

Re: [Gluster-users] VM going down

2017-05-08 Thread Alessandro Briosi
On 08/05/2017 12:57, Jesper Led Lauridsen TS Infra server wrote: > > I don't know if this has any relation to your issue. But I have seen > several times during gluster healing that my VMs fail or are marked > unresponsive in RHEV. My conclusion is that the load gluster puts on > the vm-images

[Gluster-users] VM going down

2017-05-08 Thread Alessandro Briosi
Hi all, I have sporadic VMs going down whose files are on gluster FS. If I look at the gluster logs the only events I find are: /var/log/glusterfs/bricks/data-brick2-brick.log [2017-05-08 09:51:17.661697] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-datastore2-server: disconnecting

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-12 Thread Alessandro Briosi
On 12/04/2017 14:16, ABHISHEK PALIWAL wrote: > I have done more investigation and found out that the brick dir size is > equivalent to the gluster mount point, but .glusterfs has too much > difference > You are probably using sharding? Have a good day. /Alessandro Briosi/ *METAL.it Nord

Re: [Gluster-users] Network redundancy

2017-04-06 Thread Alessandro Briosi
On 06/04/2017 14:08, Pavel Szalbot wrote: > Hi, does your stack support MC-LAG (multichassis link aggregation > group)? If so, configure LAGs with interfaces on both switches and you > should be fine. Yes it does, and I thought of using it too, but then all the ports would have to be redundant (not

[Gluster-users] Network redundancy

2017-04-06 Thread Alessandro Briosi
Hi all, a feature that would really be nice is network redundancy. In a current installation I have 3 servers with 2 switches (stacked with a 10gb uplink). The servers contain VMs. I have configured the hosts to communicate through the first switch, but what happens if the switch goes down? Probably

Re: [Gluster-users] adding arbiter

2017-04-05 Thread Alessandro Briosi
c'd to the arbiter brick. > > Please refer to http://review.gluster.org/#/c/14502/2 for more details. Perfect, thank you. Have a good day. /Alessandro Briosi/ *METAL.it Nord S.r.l.* Via Maioliche 57/C - 38068 Rovereto (TN) Tel.+39.0464.430130 - Fax +39.046

Re: [Gluster-users] adding arbiter

2017-04-04 Thread Alessandro Briosi
On 04/04/2017 16:16, Krutika Dhananjay wrote: > So the corruption bug is seen iff your VMs are online while fix-layout > and/or rebalance is going on. > Does that answer your question? > > The same issue has now been root-caused and there will be a fix for it > soon by Raghavendra G. Yes it

Re: [Gluster-users] adding arbiter

2017-04-03 Thread Alessandro Briosi
On 01/04/2017 04:22, Gambit15 wrote: > As I understand it, only new files will be sharded, but simply > renaming or moving them may be enough in that case. > > I'm interested in the arbiter/sharding bug you've mentioned. Could you > provide any more details or a link? > I think it is

[Gluster-users] adding arbiter

2017-03-30 Thread Alessandro Briosi
Hi, I need some advice. I'm currently on 3.8.10 and would like to know the following: 1. If I add an arbiter to an existing volume, should I also run a rebalance? 2. If I had sharding enabled, would adding the arbiter trigger the corruption bug? 3. What's the procedure to enable sharding on an
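For reference, converting an existing replica 2 volume to an arbiter setup is done with add-brick; this is a sketch only, and the volume name, hostname, and brick path below are placeholder examples:

```
# Add an arbiter brick to an existing replica 2 volume
# (myvol, arbiterhost and the brick path are examples)
gluster volume add-brick myvol replica 3 arbiter 1 arbiterhost:/bricks/myvol-arb
```

After the add-brick, a self-heal populates the arbiter with metadata; whether a rebalance is also needed is exactly the question this thread raises.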

Re: [Gluster-users] Sharding?

2017-03-10 Thread Alessandro Briosi
On 10/03/2017 10:28, Kevin Lemonnier wrote: >> I haven't done any tests yet, but I was under the impression that >> the sharding feature isn't so stable/mature yet. >> In the back of my mind I remember reading something about a >> bug/situation which caused data corruption. >> Can someone

Re: [Gluster-users] Sharding?

2017-03-10 Thread Alessandro Briosi
On 09/03/2017 17:17, Vijay Bellur wrote: > > > On Thu, Mar 9, 2017 at 11:10 AM, Kevin Lemonnier > wrote: > > > I've seen the term sharding pop up on the list a number of times > but I > > haven't found any documentation or

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Alessandro Briosi
On 24/02/2017 14:50, Joseph Lorenzini wrote: > 1. I want the /etc/fstab mount to be able to fail over to any one of > the three servers that I have, so if one server is down, the client > can still mount from servers 2 and 3. The *backupvolfile-server* option should do the trick, or use the
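A minimal fstab sketch of the option mentioned above; the hostnames, volume name, and mount point are placeholder examples, and on newer GlusterFS releases the plural `backup-volfile-servers=host2:host3` form is available alongside the older singular option:

```
# /etc/fstab - glusterfs mount with a fallback volfile server (names are examples)
server1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
```

Note this only affects fetching the volfile at mount time; once mounted, the client talks to all bricks and survives a single server failure on its own.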

Re: [Gluster-users] distribute replicated volume and tons of questions

2017-02-22 Thread Alessandro Briosi
On 21/02/2017 18:33, Gandalf Corvotempesta wrote: > Some questions: > > 1) can I start with a simple replicated volume and then move to a > distributed, replicated one by adding more bricks? I would like to start > with 3 disks and then add 3 more disks next month. > seems stupid but this

Re: [Gluster-users] Different network for server and client

2017-02-22 Thread Alessandro Briosi
On 22/02/2017 18:31, Deepak Naidu wrote: > I have a setup where storage nodes use network-1 & client nodes use > network-2. > > On both server and client, I use /etc/hosts entries to define the storage node > names, e.g. node1, node2, node3 etc... > > When a client uses the node1 hostname it resolves to

Re: [Gluster-users] Different network for server and client

2017-02-22 Thread Alessandro Briosi
On 22/02/2017 13:54, Gandalf Corvotempesta wrote: > I don't think it would be possible, because it is the client that writes to > all servers. > The replication is done by the client, not by the server. I really hope this is not true. Alessandro

[Gluster-users] Different network for server and client

2017-02-22 Thread Alessandro Briosi
Hi, just a quick question. Is it possible to use 2 different networks with gluster? One for the gluster servers to sync, and one for the clients to connect to the servers? Obviously the client failover should still use the same network on the failover host. I'd like to have this also for the server as
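One commonly suggested workaround for the split-network question above, and the approach described in the reply in this thread, is split-horizon name resolution via /etc/hosts: servers resolve each other's names on a backend network, while clients resolve the same names on a frontend network. All addresses and hostnames below are placeholder examples:

```
# On the storage nodes (/etc/hosts): peer names resolve to the backend network
10.0.0.1     node1
10.0.0.2     node2
10.0.0.3     node3

# On the clients (/etc/hosts): the same names resolve to the frontend network
192.168.1.1  node1
192.168.1.2  node2
192.168.1.3  node3
```

Since the volfile refers to bricks by hostname, each side ends up using its own network without any change to the gluster configuration itself.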

Re: [Gluster-users] possible gluster error causing kvm to shutdown

2017-02-21 Thread Alessandro Briosi
On 21/02/2017 16:59, Daniele Antolini wrote: > Is it possible that the self-heal process on the kvm VM runs intensively > and shuts down the vm automatically? > > D Yes, it could be that the VM is intensively used (though during the night the only thing it was probably doing was the backup, which is

Re: [Gluster-users] possible gluster error causing kvm to shutdown

2017-02-21 Thread Alessandro Briosi
On 21/02/2017 09:53, Alessandro Briosi wrote: > Hi all, > I have had a couple of times now a KVM VM which was suddenly shut down > (without any apparent reason) > > At the time this happened the only thing I can find in the logs is related > to gluster: > > Stop

[Gluster-users] possible gluster error causing kvm to shutdown

2017-02-21 Thread Alessandro Briosi
Hi all, I have had a couple of times now a KVM VM which was suddenly shut down (without any apparent reason). At the time this happened the only thing I can find in the logs is related to gluster: Stops happened at 16.19 on the 13th and at 03.34 on the 19th. (time is local time, which is GMT+1)

[Gluster-users] Gluster and balance-alb

2017-02-14 Thread Alessandro Briosi
Hi all, I'd like to have a clarification on bonding with gluster. I have a gluster deployment which uses a bond of 4 NICs. The bond is configured with balance-alb, as 2 are connected to one switch and the other 2 to another switch. This is for traffic balancing and redundancy. The switches are
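The setup described above can be sketched as a Debian-style interfaces stanza; interface names and the address are placeholder examples, and balance-alb is chosen precisely because, unlike LACP, it needs no switch-side configuration and so tolerates the two switches not being stacked:

```
# /etc/network/interfaces sketch (Debian-style; names and address are examples)
auto bond0
iface bond0 inet static
    address 10.0.0.1/24
    bond-slaves eth0 eth1 eth2 eth3   # 2 NICs per switch
    bond-mode balance-alb             # adaptive load balancing, no switch support needed
    bond-miimon 100                   # link monitoring interval in ms
```

One caveat with balance-alb is that receive balancing relies on ARP manipulation, which can interact badly with some switch setups; that is essentially what this thread goes on to discuss.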

Re: [Gluster-users] gluster and multipath

2017-01-27 Thread Alessandro Briosi
On 24/01/2017 13:58, Lindsay Mathieson wrote: On 24/01/2017 10:23 PM, Alessandro Briosi wrote: Ok, I am also going to use Proxmox. Any advice on how to configure the bricks? I plan to have a 2-node replica. Would appreciate you sharing your full setup :-) Three node replica - preferred

Re: [Gluster-users] gluster and multipath

2017-01-24 Thread Alessandro Briosi
On 24/01/2017 13:53, Cedric Lemarchand wrote: It would work with traditional LACP if both switches are manageable and in the same stack. If the switches are dumb (i.e. only L2, or not stackable), I think there is a Linux bonding mode that can do the job, but I would stay away from such