When hosting VMs on gluster you should be setting the following (each can be applied with "gluster volume set"; see the sketch after the list):

performance.readdir-ahead: on
cluster.data-self-heal: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
performance.strict-write-ordering: off
performance.stat-prefetch: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.granular-entry-heal: yes
cluster.locking-scheme: granular
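
A minimal sketch of applying these, assuming the volume is named "vm" as in the
output below and that your gluster version accepts all of them via volume set:

# Each option is applied to the live volume; run on any node in the pool.
gluster volume set vm performance.readdir-ahead on
gluster volume set vm cluster.data-self-heal on
gluster volume set vm cluster.quorum-type auto
gluster volume set vm cluster.server-quorum-type server
gluster volume set vm performance.strict-write-ordering off
gluster volume set vm performance.stat-prefetch on
gluster volume set vm performance.quick-read off
gluster volume set vm performance.read-ahead off
gluster volume set vm performance.io-cache off
gluster volume set vm cluster.eager-lock enable
gluster volume set vm network.remote-dio enable
gluster volume set vm cluster.granular-entry-heal yes
gluster volume set vm cluster.locking-scheme granular
# Confirm they took effect:
gluster volume info vm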

And possibly consider downgrading to 3.8.4?
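
If you do try the downgrade, a rough sketch on Debian/Proxmox follows; the
exact package names and version string are assumptions, so check what apt
actually offers first:

# See which gluster builds are available (the 3.8.4-1 below is hypothetical):
apt-cache policy glusterfs-server
# Install matching versions of the server and client packages on every node:
apt-get install glusterfs-server=3.8.4-1 glusterfs-client=3.8.4-1 glusterfs-common=3.8.4-1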

On 14/12/2016 10:05 PM, cont...@makz.me wrote:
I've tried with ide / sata & scsi. For an existing VM it crashes at the boot
screen (Windows 7 logo); for a new install it crashes when I click next at the
disk selector.

root@hvs1:/var/log/glusterfs# gluster volume info
Volume Name: vm
Type: Replicate
Volume ID: 76ff16d4-7bd3-4070-b39d-8a173c6292c3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: hvs1-gluster:/gluster/vm/brick
Brick2: hvs2-gluster:/gluster/vm/brick
Brick3: hvsquorum-gluster:/gluster/vm/brick
root@hvs1:/var/log/glusterfs# gluster volume status
Status of volume: vm
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick hvs1-gluster:/gluster/vm/brick        49153     0          Y       3565
Brick hvs2-gluster:/gluster/vm/brick        49153     0          Y       3465
Brick hvsquorum-gluster:/gluster/vm/brick   49153     0          Y       2696
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       3586
NFS Server on hvsquorum-gluster             N/A       N/A        N       N/A
Self-heal Daemon on hvsquorum-gluster       N/A       N/A        Y       4190
NFS Server on hvs2-gluster                  N/A       N/A        N       N/A
Self-heal Daemon on hvs2-gluster            N/A       N/A        Y       6212
Task Status of Volume vm
------------------------------------------------------------------------------
There are no active volume tasks
root@hvs1:/var/log/glusterfs# gluster volume heal vm info
Brick hvs1-gluster:/gluster/vm/brick
Status: Connected
Number of entries: 0
Brick hvs2-gluster:/gluster/vm/brick
Status: Connected
Number of entries: 0
Brick hvsquorum-gluster:/gluster/vm/brick
Status: Connected
Number of entries: 0
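
Since heal info reports zero entries on all three bricks, a split-brain seems
unlikely, but the split-brain variant of the same command can rule it out
explicitly:

# Lists only entries the self-heal daemon considers split-brain:
gluster volume heal vm info split-brain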

Here's the KVM log:

[2016-12-14 11:08:29.210641] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 68767331-2e73-7276-2e62-62772e62652d (0) coming up
[2016-12-14 11:08:29.210689] I [MSGID: 114020] [client.c:2356:notify] 0-vm-client-0: parent translators are ready, attempting connect on transport
[2016-12-14 11:08:29.211020] I [MSGID: 114020] [client.c:2356:notify] 0-vm-client-1: parent translators are ready, attempting connect on transport
[2016-12-14 11:08:29.211272] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-vm-client-0: changing port to 49153 (from 0)
[2016-12-14 11:08:29.211285] I [MSGID: 114020] [client.c:2356:notify] 0-vm-client-2: parent translators are ready, attempting connect on transport
[2016-12-14 11:08:29.211838] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-vm-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-14 11:08:29.211910] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-vm-client-1: changing port to 49153 (from 0)
[2016-12-14 11:08:29.211980] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-vm-client-2: changing port to 49153 (from 0)
[2016-12-14 11:08:29.212227] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-vm-client-0: Connected to vm-client-0, attached to remote volume '/gluster/vm/brick'.
[2016-12-14 11:08:29.212239] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-vm-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-14 11:08:29.212296] I [MSGID: 108005] [afr-common.c:4298:afr_notify] 0-vm-replicate-0: Subvolume 'vm-client-0' came back up; going online.
[2016-12-14 11:08:29.212316] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-vm-client-0: Server lk version = 1
[2016-12-14 11:08:29.212426] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-vm-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-14 11:08:29.212590] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-vm-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-14 11:08:29.212874] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-vm-client-2: Connected to vm-client-2, attached to remote volume '/gluster/vm/brick'.
[2016-12-14 11:08:29.212886] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-vm-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-14 11:08:29.212983] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-vm-client-1: Connected to vm-client-1, attached to remote volume '/gluster/vm/brick'.
[2016-12-14 11:08:29.212992] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-vm-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-14 11:08:29.213042] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-vm-client-2: Server lk version = 1
[2016-12-14 11:08:29.227393] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-vm-client-1: Server lk version = 1
[2016-12-14 11:08:29.228239] I [MSGID: 108031] [afr-common.c:2068:afr_local_discovery_cbk] 0-vm-replicate-0: selecting local read_child vm-client-0
[2016-12-14 11:08:29.228832] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol] 0-vm: switched to graph 68767331-2e73-7276-2e62-62772e62652d (0)
[2016-12-14 11:08:29.232505] W [MSGID: 114031] [client-rpc-fops.c:2210:client3_3_seek_cbk] 0-vm-client-0: remote operation failed [No such device or address]
kvm: block/gluster.c:1182: find_allocation: Assertion `offs >= start' failed.
<- This is when it crashes
Aborted
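
For what it's worth, the failed seek just above (client3_3_seek_cbk returning
"No such device or address", i.e. ENXIO) feeds qemu's find_allocation(), which
asserts that the lseek(SEEK_DATA)-style result lands at or past the offset it
asked for. A possible way to poke the same code path without booting a VM is
qemu-img map; the image path below is only an assumption, substitute a real
disk on the volume:

# qemu-img map walks the image's allocation map through the same
# find_allocation() path in block/gluster.c, so if the bug is
# reproducible it should abort here too. (Hypothetical path.)
qemu-img map gluster://hvs1-gluster/vm/images/101/vm-101-disk-1.qcow2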

On Dec 14 2016, at 12:55 pm, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:

On 14/12/2016 8:51 PM, cont...@makz.me wrote:
Today I upgraded to 4.4. Snapshots now work well, but since I updated all my
Windows VMs are broken: when Windows tries to write to disk the VM crashes
without error. Even when I try to reinstall Windows, the VM instantly crashes
when I partition or format.
Hmmm - I upgraded to PVE 4.4 today (rolling upgrade, 3 nodes) and am
using Gluster 3.8.4; all my Windows VMs are running fine and snapshots are
ok. Sounds like you have a gluster volume in trouble.

What virtual disk interface are you using with the Windows VMs? ide?
virtio? scsi + virtio controller?

Can you post:
gluster volume info
gluster volume status
gluster volume heal <vol name> info
--
Lindsay Mathieson


--
Lindsay Mathieson

_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
