Hello Community,
I have a problem running a snapshot of a replica 3 arbiter 1 volume.
Error:
[root@ovirt2 ~]# gluster snapshot create before-423 engine description "Before upgrade of engine from 4.2.2 to 4.2.3"
snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV.
Snapshot command failed
Volume info:
Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/engine/engine
Brick2: ovirt2:/gluster_bricks/engine/engine
Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

All bricks are on thin LVM with plenty of space. The only difference I can see is that on ovirt1 & ovirt2 the brick LV is /dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter's is /dev/gluster_vg_sda3/gluster_lv_engine.
Is that the issue? Should I rename my brick's VG? If so, why is there no mention of this in the documentation?
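For reference, this is how I checked that the bricks are thin LVs on each node (a quick sketch; the mount point below is assumed from the brick paths above):

# Confirm the brick LV is thin-provisioned: for a thin LV the first
# character of the lv_attr column is 'V' and pool_lv names its thin pool.
lvs -o lv_name,vg_name,lv_attr,pool_lv

# Map the brick mount point back to its backing device
# (assuming the LV is mounted at /gluster_bricks/engine):
findmnt -no SOURCE /gluster_bricks/engine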

Best Regards,
Strahil Nikolov
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
