Re: [PVE-User] Proxmox 6 loses network every 24 hours

2020-04-18 Thread Gerald Brandt

Changing Proxmox from DHCP on vmbr0 to static stopped the issue.
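
For anyone hitting the same thing, this is roughly the change involved. A minimal sketch of the static stanza in /etc/network/interfaces (address and gateway are placeholders - take the values from your current DHCP lease; the bridge port matches the enp1s0 seen in the log below):

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.254
        bridge_ports enp1s0
        bridge_stp off
        bridge_fd 0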


Gerald


On 2020-04-16 8:31 a.m., Gerald Brandt wrote:

Hi,

I have a Proxmox 6 server at SoYouStart https://www.soyoustart.com. 
I've been running a server there for years, running Proxmox 4. Instead 
of upgrading, I backed up all my VMs, did a fresh Proxmox 6 install, 
and restored my VMs. It works well, except for NFS, but that's another 
topic.



Every 24 hours, vmbr0 disappears. The SoYouStart staff can't ping the 
machine, assume it's gone down, and hard reboot the server. 24 hours 
after it's up, the cycle repeats. I'm not sure what is going on.


One weird thing is that vmbr0 isn't configured in Proxmox; SoYouStart 
has it configured some other way. Could this be part of the problem? 
It looks like vmbr0 gets its IP via DHCP. (Yup, confirmed in 
/etc/network/interfaces.)


Apr 15 21:32:32 ns500184 ifup[695]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 7
Apr 15 21:32:32 ns500184 dhclient[744]: Sending on LPF/vmbr0/00:25:90:7b:a2:b8
Apr 15 21:32:32 ns500184 dhclient[744]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 7
Apr 15 21:32:32 ns500184 dhclient[744]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 13
Apr 15 21:32:32 ns500184 ifup[695]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 13
Apr 15 21:32:32 ns500184 ifup[695]: DHCPREQUEST for x.x.x.x on vmbr0 to 255.255.255.255 port 67
Apr 15 21:32:32 ns500184 dhclient[744]: DHCPREQUEST for x.x.x.x on vmbr0 to 255.255.255.255 port 67
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | vmbr0 | True | x.x.x.x | 255.255.255.0 | global | 00:25:90:7b:a2:b8 |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | vmbr0 | True | xxx/64 | . | link | 00:25:90:7b:a2:b8 |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 0 | 0.0.0.0 | x.x.x.254 | 0.0.0.0 | vmbr0 | UG |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 1 | x.x.x.0 | 0.0.0.0 | 255.255.255.0 | vmbr0 | U |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 1 | fe80::/64 | :: | vmbr0 | U |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 3 | local | :: | vmbr0 | U |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 4 | ff00::/8 | :: | vmbr0 | U |
Apr 15 21:32:32 ns500184 ntpd[891]: Listen normally on 3 vmbr0 x.x.x.x:123
Apr 15 21:32:32 ns500184 ntpd[891]: Listen normally on 5 vmbr0 [xx]:123
Apr 15 21:32:32 ns500184 kernel: [   12.212574] vmbr0: port 1(enp1s0) entered blocking state
Apr 15 21:32:32 ns500184 kernel: [   12.212639] vmbr0: port 1(enp1s0) entered disabled state
Apr 15 21:32:32 ns500184 kernel: [   15.603713] vmbr0: port 1(enp1s0) entered blocking state
Apr 15 21:32:32 ns500184 kernel: [   15.603773] vmbr0: port 1(enp1s0) entered forwarding state
Apr 15 21:32:32 ns500184 kernel: [   15.603914] IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
Apr 15 21:32:47 ns500184 kernel: [   35.452560] vmbr0: port 2(tap5000i0) entered blocking state
Apr 15 21:32:47 ns500184 kernel: [   35.452608] vmbr0: port 2(tap5000i0) entered disabled state
Apr 15 21:32:47 ns500184 kernel: [   35.452772] vmbr0: port 2(tap5000i0) entered blocking state
Apr 15 21:32:47 ns500184 kernel: [   35.452819] vmbr0: port 2(tap5000i0) entered forwarding state


Gerald



proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-3
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





[PVE-User] Proxmox 6 loses network every 24 hours

2020-04-16 Thread Gerald Brandt

Hi,

I have a Proxmox 6 server at SoYouStart https://www.soyoustart.com. I've 
been running a server there for years, running Proxmox 4. Instead of 
upgrading, I backed up all my VMs, did a fresh Proxmox 6 install, and 
restored my VMs. It works well, except for NFS, but that's another topic.



Every 24 hours, vmbr0 disappears. The SoYouStart staff can't ping the 
machine, assume it's gone down, and hard reboot the server. 24 hours 
after it's up, the cycle repeats. I'm not sure what is going on.


One weird thing is that vmbr0 isn't configured in Proxmox; SoYouStart 
has it configured some other way. Could this be part of the problem? It 
looks like vmbr0 gets its IP via DHCP. (Yup, confirmed in 
/etc/network/interfaces.)
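
The 24-hour cadence smells like a DHCP lease lifetime, i.e. a renewal that never completes would produce exactly this pattern. Two hedged checks, assuming stock Debian dhclient paths (lease files follow the dhclient.<iface>.leases naming):

grep -E 'renew|rebind|expire' /var/lib/dhcp/dhclient.vmbr0.leases
journalctl -t dhclient --since -2d    # look for failed renewals around the outage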


Apr 15 21:32:32 ns500184 ifup[695]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 7
Apr 15 21:32:32 ns500184 dhclient[744]: Sending on LPF/vmbr0/00:25:90:7b:a2:b8
Apr 15 21:32:32 ns500184 dhclient[744]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 7
Apr 15 21:32:32 ns500184 dhclient[744]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 13
Apr 15 21:32:32 ns500184 ifup[695]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 13
Apr 15 21:32:32 ns500184 ifup[695]: DHCPREQUEST for x.x.x.x on vmbr0 to 255.255.255.255 port 67
Apr 15 21:32:32 ns500184 dhclient[744]: DHCPREQUEST for x.x.x.x on vmbr0 to 255.255.255.255 port 67
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | vmbr0 | True | x.x.x.x | 255.255.255.0 | global | 00:25:90:7b:a2:b8 |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | vmbr0 | True | xxx/64 | . | link | 00:25:90:7b:a2:b8 |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 0 | 0.0.0.0 | x.x.x.254 | 0.0.0.0 | vmbr0 | UG |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 1 | x.x.x.0 | 0.0.0.0 | 255.255.255.0 | vmbr0 | U |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 1 | fe80::/64 | :: | vmbr0 | U |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 3 | local | :: | vmbr0 | U |
Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 4 | ff00::/8 | :: | vmbr0 | U |
Apr 15 21:32:32 ns500184 ntpd[891]: Listen normally on 3 vmbr0 x.x.x.x:123
Apr 15 21:32:32 ns500184 ntpd[891]: Listen normally on 5 vmbr0 [xx]:123
Apr 15 21:32:32 ns500184 kernel: [   12.212574] vmbr0: port 1(enp1s0) entered blocking state
Apr 15 21:32:32 ns500184 kernel: [   12.212639] vmbr0: port 1(enp1s0) entered disabled state
Apr 15 21:32:32 ns500184 kernel: [   15.603713] vmbr0: port 1(enp1s0) entered blocking state
Apr 15 21:32:32 ns500184 kernel: [   15.603773] vmbr0: port 1(enp1s0) entered forwarding state
Apr 15 21:32:32 ns500184 kernel: [   15.603914] IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready
Apr 15 21:32:47 ns500184 kernel: [   35.452560] vmbr0: port 2(tap5000i0) entered blocking state
Apr 15 21:32:47 ns500184 kernel: [   35.452608] vmbr0: port 2(tap5000i0) entered disabled state
Apr 15 21:32:47 ns500184 kernel: [   35.452772] vmbr0: port 2(tap5000i0) entered blocking state
Apr 15 21:32:47 ns500184 kernel: [   35.452819] vmbr0: port 2(tap5000i0) entered forwarding state

Gerald



proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-3
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Cannot start VM - timeout waiting on systemd

2020-03-31 Thread Gerald Brandt
I get a timeout waiting on systemd when I try to start a VM. Any ideas? 
It's not CPU load or memory.



Mar 18 11:11:58 proxmox-1 pvedaemon[5035]: start VM 141: UPID:proxmox-1:13AB:0838CFFC:5E72484E:qmstart:141:root@pam:
Mar 18 11:11:58 proxmox-1 pvedaemon[2463]:  starting task UPID:proxmox-1:13AB:0838CFFC:5E72484E:qmstart:141:root@pam:
Mar 18 11:12:00 proxmox-1 systemd[1]: Starting Proxmox VE replication runner...
Mar 18 11:12:01 proxmox-1 pmxcfs[3161]: [status] notice: received log
Mar 18 11:12:02 proxmox-1 systemd[1]: Started Session 459 of user root.
Mar 18 11:12:05 proxmox-1 systemd[1]: Started Proxmox VE replication runner.
Mar 18 11:12:06 proxmox-1 pvedaemon[5035]: timeout waiting on systemd
Mar 18 11:12:06 proxmox-1 pvedaemon[2463]:  end task UPID:proxmox-1:13AB:0838CFFC:5E72484E:qmstart:141:root@pam: timeout waiting on systemd
Mar 18 11:12:07 proxmox-1 qm[5123]: VM 141 qmp command failed - VM 141 not running
Mar 18 11:12:08 proxmox-1 pmxcfs[3161]: [status] notice: received log
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284039] INFO: task kvm:5081 blocked for more than 120 seconds.
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284218]   Tainted: P   O 4.15.18-24-pve #1
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284358] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284540] kvm D    0  5081  1 0x8006
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284553] Call Trace:
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284579] __schedule+0x3e0/0x870
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284590] schedule+0x36/0x80
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284601] schedule_timeout+0x1d4/0x360
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284613]  ? call_rcu_sched+0x17/0x20
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284626]  ? __percpu_ref_switch_mode+0xd7/0x180
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284635] wait_for_completion+0xb4/0x140
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284644]  ? wake_up_q+0x80/0x80
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284657] exit_aio+0xeb/0x100
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284670] mmput+0x2b/0x130
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284696] vhost_dev_cleanup+0x382/0x3b0 [vhost]
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284711] vhost_net_release+0x53/0xb0 [vhost_net]
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284721] __fput+0xea/0x220
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284731] fput+0xe/0x10
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284742] task_work_run+0x9d/0xc0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284752] do_exit+0x2f6/0xbd0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284762]  ? __switch_to_asm+0x41/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284771]  ? __switch_to_asm+0x41/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284778]  ? __switch_to_asm+0x35/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284785]  ? __switch_to_asm+0x41/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284793]  ? __switch_to_asm+0x35/0x70
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284802] do_group_exit+0x43/0xb0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284812] get_signal+0x15a/0x7f0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284825]  ? do_futex+0x7e6/0xd10
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284838] do_signal+0x37/0x710
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284849]  ? blk_finish_plug+0x2c/0x40
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284859]  ? hrtimer_nanosleep+0xd8/0x1f0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284866]  ? SyS_futex+0x83/0x180
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284878] exit_to_usermode_loop+0x80/0xd0
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284888] do_syscall_64+0x100/0x130
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284899] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284907] RIP: 0033:0x7f6aff09c469
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284912] RSP: 002b:7f6af33fc638 EFLAGS: 0246 ORIG_RAX: 00ca
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284922] RAX: fe00 RBX: 55e6ad1894c8 RCX: 7f6aff09c469
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284927] RDX:  RSI:  RDI: 55e6ad1894c8
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284931] RBP:  R08:  R09: 
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284936] R10:  R11: 0246 R12: 
Mar 18 11:12:08 proxmox-1 kernel: [1379441.284941] R13: 7ffcf670c33f R14: 7f6af2bff000 R15: 0003
Mar 18 11:12:08 proxmox-1 pvestatd[3372]: got timeout
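
A pattern worth knowing for this error: "timeout waiting on systemd" from qmstart usually means the VM's scope unit from the previous run never went away, and the trace above shows why - the old kvm process is stuck in uninterruptible sleep tearing down its vhost/AIO state. A hedged way to confirm, assuming the usual PVE unit naming of <vmid>.scope:

systemctl status 141.scope                    # stale scope from the previous run?
ps -eo pid,stat,cmd | grep '[k]vm -id 141'    # STAT "D" = blocked on I/O

A D-state process cannot be killed; the underlying I/O (often NFS or a failing disk) has to complete, or the node has to be rebooted.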


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Can't start VM

2020-02-10 Thread Gerald Brandt



On 2020-02-10 12:06 p.m., Laurent Dumont wrote:

This does feel like a resource exhaustion issue. Do you have any external
monitoring in place for that Proxmox server?


I don't. I'll get something in place. htop showed good memory usage, and 
disk space was fine. I noticed NFS dropped off for a bit, so I'm looking 
into that. It was up when I checked out the system.



Gerald





On Mon, Feb 10, 2020 at 11:11 AM Gerald Brandt  wrote:


On 2020-02-10 10:04 a.m., leesteken--- via pve-user wrote:

Sorry, I have no experience with clusters.
Would Proxmox not do this automatically when you shutdown the node?


Only if I have HA turned on, and I do not.

Gerald




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Can't start VM

2020-02-10 Thread Gerald Brandt



On 2020-02-10 10:04 a.m., leesteken--- via pve-user wrote:

Sorry, I have no experience with clusters.
Would Proxmox not do this automatically when you shutdown the node?



Only if I have HA turned on, and I do not.

Gerald




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Can't start VM

2020-02-10 Thread Gerald Brandt

Hi,

I have a small cluster of 4 servers. One server in the cluster seems to 
be having an issue. For the last two weeks, one or two of the VMs 
pseudo-lock up over the weekend. I can still VNC in and type in a 
login, but after typing the password, the system never responds. Also, 
all services on that VM (web, version control) are non-responsive. A 
reset from the VM console doesn't work; I need to do a stop and start.



However, on start, I get this from the Proxmox server, and the VM never 
starts:


Feb 10 09:29:32 proxmox-2 pvedaemon[1142]: start VM 107: UPID:proxmox-2:0476:0311D642:5E4176DC:qmstart:107:root@pam:
Feb 10 09:29:32 proxmox-2 pvedaemon[16365]:  starting task UPID:proxmox-2:0476:0311D642:5E4176DC:qmstart:107:root@pam:
Feb 10 09:29:34 proxmox-2 systemd[1]: Started Session 266 of user root.
Feb 10 09:29:35 proxmox-2 qm[1236]: VM 107 qmp command failed - VM 107 not running
Feb 10 09:29:37 proxmox-2 pvedaemon[1142]: timeout waiting on systemd
Feb 10 09:29:37 proxmox-2 pvedaemon[16365]:  end task UPID:proxmox-2:0476:0311D642:5E4176DC:qmstart:107:root@pam: timeout waiting on systemd



I have to migrate all the VMs off the server and reboot the server. Any 
ideas?



Gerald


# pveversion --verbose
proxmox-ve: 5.4-2 (running kernel: 4.15.18-24-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-12
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-41
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-55
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NFSv4.1 multipath available?

2019-04-25 Thread Gerald Brandt

Hi,

I'm mounting my NFS server (images and backup) with vers=4.1. It works 
quite well. I currently use balance-rr across 2 GigE links to talk to my 
NFS server.


Is there a way to set up multipath with NFS instead?
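
As far as I know there is no native NFS multipath/session-trunking knob in the storage layer, so treat this as an assumption rather than a PVE feature: on kernels 5.3 and newer, the nconnect mount option at least spreads I/O over several TCP connections, which balance-rr can then distribute across the links. A sketch reusing the storage.cfg shape from my own setup (names illustrative):

nfs: storage
    export /proxmox
    server 172.23.4.16
    path /mnt/pve/storage
    options vers=4.1,nconnect=2
    content images,backup

Note this is connection spreading, not true multipath failover.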

Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NFS Storage issue

2019-04-25 Thread Gerald Brandt


On 2019-04-25 7:34 a.m., Gianni Milo wrote:

Hello

I have a Synology with High Availability serving NFS storage to Proxmox.

A couple of months ago, this started happening:

Apr 24 12:32:51 proxmox-1 pvestatd[3298]: unable to activate storage
'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable
Apr 24 13:08:00 proxmox-1 pvestatd[3298]: unable to activate storage
'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable



It appears that the NFS storage target was unreachable during that time...



It's affecting backups and VM speeds. Any idea what is causing this?


I would guess that the network link used for accessing the NFS target is
getting saturated. This sometimes can be caused by running backup jobs,
and/or other tasks like VMs demanding high i/o to that destination.

There are no errors in the Synology logs, and the storage stays mounted and

accessible.


You mean that it stays accessible after the backups are finished (or
failed) and after the i/o operations return to the normal levels ?
Do you recall if there was any of the above happening during that time ?



Any ideas what's going on?


I would set bandwidth limits either on the storage target or on the backup
jobs. See documentation for details on how to achieve this.
Even better, separate VMs from the backups if that's possible (i.e use
different storage/network links).

G.
___



Thanks for the reply. Although it does happen during backups, causing 
serious issues that require a Proxmox server restart, it also happens 
during the day. I don't see any saturation issues during the day, but 
I'll start monitoring that.


So you're saying to set network bandwidth limits on the storage network 
(I do keep a separate network for storage). I guess upgrading the 
Synology units to something with 10GbE would help as well.
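
For reference, the bandwidth-limit knobs being discussed (values illustrative; bwlimit is in KiB/s):

# /etc/vzdump.conf - applies to every backup job
bwlimit: 51200

# or per run:
vzdump 101 --bwlimit 51200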


Thanks,

Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NFS Storage issue

2019-04-24 Thread Gerald Brandt




Storage conf file:


nfs: NAS
    export /volume1/Storage
    path /mnt/pve/NAS
    server 192.168.50.100
    content images,vztmpl,rootdir,iso,backup
    maxfiles 2
    options vers=3,async,noatime,fsc,nodiratime



Sorry, that should be:


options vers=4,async,noatime,fsc,nodiratime


I'm using NFSv4


Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NFS Storage issue

2019-04-24 Thread Gerald Brandt

Hi,

I have a Synology with High Availability serving NFS storage to Proxmox. 
A couple of months ago, this started happening:


Apr 24 12:32:51 proxmox-1 pvestatd[3298]: unable to activate storage 
'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable
Apr 24 13:08:00 proxmox-1 pvestatd[3298]: unable to activate storage 
'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable


It's affecting backups and VM speeds. Any idea what is causing this? 
There are no errors in the Synology logs, and the storage stays mounted 
and accessible.


Any ideas what's going on?
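
For what it's worth, that pvestatd message usually means the periodic storage check timed out rather than the mount actually vanishing, so a stalling NFS server and a genuinely missing directory look identical in the log. A quick hedged check from the host while the errors are appearing:

time stat /mnt/pve/NAS    # should return in milliseconds; multi-second stalls trip pvestatd
pvesm status              # shows which storages PVE currently considers active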

# pveversion --verbose
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.4-1-pve: 4.13.4-26
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2


Storage conf file:

# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1

nfs: NAS
    export /volume1/Storage
    path /mnt/pve/NAS
    server 192.168.50.100
    content images,vztmpl,rootdir,iso,backup
    maxfiles 2
    options vers=3,async,noatime,fsc,nodiratime

nfs: weekly_snapshots
    export /volume2/weekly_snapshots
    path /mnt/pve/weekly_snapshots
    server 192.168.50.100
    content backup
    maxfiles 1
    options vers=3

nfs: daily_snapshots
    export /volume2/daily_snapshots
    path /mnt/pve/daily_snapshots
    server 192.168.50.100
    content backup
    maxfiles 3
    options vers=3

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Upgrade from 3.4, or reinstall?

2018-12-27 Thread Gerald Brandt

Hi,

I have an old 3.4 box. Is it worth upgrading, or should I just back up 
and reinstall?
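
If it ends up being backup-and-reinstall, a rough sketch of the round trip (VM ID and storage name hypothetical; an in-place upgrade from 3.4 means stepping through 4.x first, which is why many people just reinstall):

vzdump 101 --storage backup-nas --mode stop    # on the old 3.4 box, per VM
# fresh install, re-add the backup storage, then:
qmrestore /mnt/pve/backup-nas/dump/vzdump-qemu-101-*.vma.lzo 101
# (the extension depends on the compression setting used by vzdump)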


Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Filesystem corruption on a VM?

2018-11-15 Thread Gerald Brandt
Interesting. My XFS VM was corrupted every night when I did a snapshot 
backup. I switched to a shutdown backup and the issue went away.
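
For reference, the two vzdump modes in question (VM ID hypothetical):

vzdump 101 --mode snapshot   # live backup; the one that was corrupting the XFS guest here
vzdump 101 --mode stop       # clean shutdown, back up, restart

The usual middle ground is installing qemu-guest-agent in the guest so snapshot-mode backups can freeze and thaw the filesystem first, but the stop-mode switch is what made the corruption go away in this case.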


Gerald

On 2018-11-15 7:32 a.m., Daniel Berteaud wrote:

Le 15/11/2018 à 13:10, Gerald Brandt a écrit :

I've only had filesystem corruption when using XFS in a VM.


In my experience, XFS has been more reliable, and robust. But anyway,
99.9% of the time, FS corruption is caused by one of the underlying layers


++


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Filesystem corruption on a VM?

2018-11-15 Thread Gerald Brandt


On 2018-10-23 3:58 a.m., Marco Gaiarin wrote:

In a PVE 4.4 cluster I continue to get FS errors like:

  Oct 22 20:51:10 vdmsv1 kernel: [268329.890910] EXT4-fs error (device sda6): 
ext4_mb_generate_buddy:758: group 932, block bitmap and bg descriptor 
inconsistent: 30722 vs 32768 free clusters

and

  Oct 23 09:43:16 vdmsv1 kernel: [314655.032561] EXT4-fs error (device sdb1): 
ext4_validate_block_bitmap:384: comm kworker/u8:2: bg 12: bad block bitmap 
checksum
  Oct 23 09:43:16 vdmsv1 kernel: [314655.034265] EXT4-fs (sdb1): Delayed block 
allocation failed for inode 2632026 at logical offset 2048 with max blocks 1640 
with error 74
  Oct 23 09:43:16 vdmsv1 kernel: [314655.034335] EXT4-fs (sdb1): This should 
not happen!! Data will be lost

The host runs a 4.4.134-1-pve kernel and the guest is Debian stretch
(4.9.0-8-amd64); in the same cluster, but also in other clusters, I
have other stretch VMs running on the same host kernel without
trouble.

Googling around led me to old jessie bugs (kernels 3.16):

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1423672
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=818502#22

or to things I find hard to correlate with:

https://access.redhat.com/solutions/155873


Does anyone have some hints?! Thanks.



I've only had filesystem corruption when using XFS in a VM.


Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I lost the cluster communication in a 10 nodes cluster

2018-10-27 Thread Gerald Brandt



On 2018-10-18 11:21 a.m., Denis Morejon wrote:

I lost the cluster communication again.

I have been using Proxmox since version 1, and this is the first time 
it has bothered me so much!


- All 10 nodes have the same version

(pve-manager/5.2-9/4b30e8f9 (running kernel: 4.13.13-2-pve))

- They all have the same date/time (clock skew is one of the things 
that can break the communication)


- The environment is identical (no new switch, no new server)


And why did all these nodes lose communication at the same time? If 
there are 10, at least 5 would have to have problems before quorum, 
and then the connection, is lost. Is that right?


I think it is something related to this Proxmox version.

What to do?





That happened to me a few versions back on a 3 node cluster. I had to 
switch heartbeat from multicast to unicast to keep things stable. There 
were no changes in any of my equipment when this happened.



Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox servers don't see each other anymore

2017-06-20 Thread Gerald Brandt



On 2017-06-19 09:45 AM, Gerald Brandt wrote:

Hi,

My Proxmox servers intermittently don't see each other anymore. I keep 
losing quorum.


I have 3 Proxmox servers, and they will randomly not see each other. 
For a few seconds proxmox-2 will see proxmox-1, but not proxmox-3. 
Then it'll switch and proxmox-2 will only see proxmox-3, then it will 
see no one but itself.


I have the IP and names in /etc/hosts and in DNS.

Any idea where I can start looking?


Gerald



So, I don't know what to do, except go unicast. What has been working 
for years suddenly failed, and I am lost.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox servers don't see each other anymore

2017-06-20 Thread Gerald Brandt

I just did an omping test. It doesn't look good.

# parallel-ssh -i -H "proxmox-0 proxmox-1 proxmox-2" -A -l root -t 0 omping -c 600 -i 1 -q proxmox-2 proxmox-1 proxmox-0
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password:
[1] 10:13:49 [SUCCESS] proxmox-1
proxmox-2 : waiting for response msg
proxmox-0 : waiting for response msg
proxmox-0 : joined (S,G) = (*, 232.43.211.234), pinging
proxmox-2 : joined (S,G) = (*, 232.43.211.234), pinging
proxmox-2 : given amount of query messages was sent
proxmox-0 : given amount of query messages was sent

proxmox-2 :   unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.058/0.193/0.997/0.099
proxmox-2 : multicast, xmt/rcv/%loss = 600/467/22% (seq>=2 22%), min/avg/max/std-dev = 0.063/0.217/0.571/0.101
proxmox-0 :   unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.060/0.238/0.633/0.133
proxmox-0 : multicast, xmt/rcv/%loss = 600/467/22% (seq>=2 22%), min/avg/max/std-dev = 0.066/0.261/0.636/0.131
[2] 10:13:50 [SUCCESS] proxmox-2
proxmox-1 : waiting for response msg
proxmox-0 : waiting for response msg
proxmox-0 : joined (S,G) = (*, 232.43.211.234), pinging
proxmox-1 : waiting for response msg
proxmox-1 : joined (S,G) = (*, 232.43.211.234), pinging
proxmox-0 : given amount of query messages was sent
proxmox-1 : given amount of query messages was sent

proxmox-1 :   unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.054/0.168/0.703/0.090
proxmox-1 : multicast, xmt/rcv/%loss = 600/311/48%, min/avg/max/std-dev = 0.066/0.222/0.803/0.131
proxmox-0 :   unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.050/0.271/0.584/0.132
proxmox-0 : multicast, xmt/rcv/%loss = 600/311/48% (seq>=2 48%), min/avg/max/std-dev = 0.073/0.299/0.590/0.131
[3] 10:13:50 [SUCCESS] proxmox-0
proxmox-2 : waiting for response msg
proxmox-1 : waiting for response msg
proxmox-2 : waiting for response msg
proxmox-1 : waiting for response msg
proxmox-1 : joined (S,G) = (*, 232.43.211.234), pinging
proxmox-2 : joined (S,G) = (*, 232.43.211.234), pinging
proxmox-2 : given amount of query messages was sent
proxmox-1 : given amount of query messages was sent

proxmox-2 :   unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.078/0.339/0.735/0.124
proxmox-2 : multicast, xmt/rcv/%loss = 600/437/27%, min/avg/max/std-dev = 0.126/0.361/0.763/0.128
proxmox-1 :   unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.096/0.324/0.801/0.134
proxmox-1 : multicast, xmt/rcv/%loss = 600/437/27%, min/avg/max/std-dev = 0.103/0.376/0.807/0.158
gbr@C-0013:~$
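
So unicast is clean (0% loss) everywhere while multicast loses 22-48% - that points at IGMP snooping on the switches rather than bad links or NICs. If the switches can't be trusted to provide a querier, one hedged workaround (assuming the nodes talk over the Linux bridge vmbr0) is to have the bridge answer IGMP queries itself on each node:

echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier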



On 2017-06-20 06:54 AM, Uwe Sauter wrote:

Am 20.06.2017 um 13:53 schrieb Gerald Brandt:


On 2017-06-20 06:30 AM, Barry Flanagan wrote:

Hi Dmitry
Regarding "enable it on your servers" - is it safe to enable this on all
servers in the cluster, or must it only be on one? We recently suffered an
issue on our network where multicast stopped working (an old switch was
removed and obviously it was performing the querier function) and I had to
change to unicast just to get things running again. I want to move back to
multicast but don't want to rely on it being enabled on the switches as I
do not control them.

Thanks (and apologies for the slight thread hijack)

-Barry Flanagan


I can temporarily fix the issue by going unicast? Are there instructions on how 
to do that? A quick search didn't bring anything
up for me.

https://pve.proxmox.com/wiki/Multicast_notes#Use_unicast_.28UDPU.29_instead_of_multicast.2C_if_all_else_fails




Gerald



19.06.2017 18:07, Gerald Brandt пишет:

I can still ping each server when there is an issue. Don't want to power

cycle the switches during the workday, but may have to.

On 2017-06-19 09:57 AM, Gilberto Nunes wrote:

Probably some network trouble??
Did you check cables, switch and so on??
When the problem appears, can you ping each server?

My Proxmox servers intermittently don't see each other anymore. I keep
losing quorum.

I have 3 Proxmox servers, and they will randomly not see each other.

For a

few seconds proxmox-2 will see proxmox-1, but not proxmox-3. Then it'll
switch and proxmox-2 will only see proxmox-3, then it will see no one

but

itself.

I have the IP and names in /etc/hosts and in DNS.

Any idea where I can start looking?


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



Re: [PVE-User] Proxmox servers don't see each other anymore

2017-06-20 Thread Gerald Brandt



On 2017-06-20 06:54 AM, Uwe Sauter wrote:

Am 20.06.2017 um 13:53 schrieb Gerald Brandt:


On 2017-06-20 06:30 AM, Barry Flanagan wrote:

Hi Dmitry
Regarding "enable it on your servers" - is it safe to enable this on all
servers in the cluster, or must it only be on one? We recently suffered an
issue on our network where multicast stopped working (an old switch was
removed and obviously it was performing the querier function) and I had to
change to unicast just to get things running again. I want to move back to
multicast but don't want to rely on it being enabled on the switches as I
do not control them.

Thanks (and apologies for the slight thread hijack)

-Barry Flanagan


I can temporarily fix the issue by going unicast? Are there instructions on how 
to do that? A quick search didn't bring anything
up for me.

https://pve.proxmox.com/wiki/Multicast_notes#Use_unicast_.28UDPU.29_instead_of_multicast.2C_if_all_else_fails



Thank you! My search skills must suck this early in the morning.

Gerald
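
For the archive, a minimal sketch of what that wiki change boils down to on corosync 2.x (edit the file via the cluster filesystem and bump config_version so it propagates):

# /etc/pve/corosync.conf - add inside the existing totem { } section:
transport: udpu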




Gerald



19.06.2017 18:07, Gerald Brandt пишет:

I can still ping each server when there is an issue. Don't want to power

cycle the switches during the workday, but may have to.

On 2017-06-19 09:57 AM, Gilberto Nunes wrote:

Probably some network trouble??
Did you check cables, switch and so on??
When the problem appears, can you ping each server?

My Proxmox servers intermittently don't see each other anymore. I keep
losing quorum.

I have 3 Proxmox servers, and they will randomly not see each other.

For a

few seconds proxmox-2 will see proxmox-1, but not proxmox-3. Then it'll
switch and proxmox-2 will only see proxmox-3, then it will see no one

but

itself.

I have the IP and names in /etc/hosts and in DNS.

Any idea where I can start looking?



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox servers don't see each other anymore

2017-06-20 Thread Gerald Brandt



On 2017-06-20 06:30 AM, Barry Flanagan wrote:

Hi Dmitry
Regarding "enable it on your servers" - is it safe to enable this on all
servers in the cluster, or must it only be on one? We recently suffered an
issue on our network where multicast stopped working (an old switch was
removed and obviously it was performing the querier function) and I had to
change to unicast just to get things running again. I want to move back to
multicast but don't want to rely on it being enabled on the switches as I
do not control them.

Thanks (and apologies for the slight thread hijack)

-Barry Flanagan



I can temporarily fix the issue by going unicast? Are there instructions 
on how to do that? A quick search didn't bring anything up for me.


Gerald



19.06.2017 18:07, Gerald Brandt пишет:

I can still ping each server when there is an issue. Don't want to power

cycle the switches during the workday, but may have to.

On 2017-06-19 09:57 AM, Gilberto Nunes wrote:

Probably some network trouble??
Did you check cables, switch and so on??
When the problem appears, can you ping each server?

My Proxmox servers intermittently don't see each other anymore. I keep
losing quorum.

I have 3 Proxmox servers, and they will randomly not see each other.

For a

few seconds proxmox-2 will see proxmox-1, but not proxmox-3. Then it'll
switch and proxmox-2 will only see proxmox-3, then it will see no one

but

itself.

I have the IP and names in /etc/hosts and in DNS.

Any idea where I can start looking?



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox servers don't see each other anymore

2017-06-19 Thread Gerald Brandt

Just put a dumb $29 netgear switch in place, and the problem still occurs.

Gerald

On 2017-06-19 10:00 AM, Eneko Lacunza wrote:
Try putting a "dumb" switch in to interconnect all 3 Proxmox nodes. Put 
an uplink cable to the "original" switch.


If it works then, it's a switch problem - maybe it is broken, maybe it 
is too smart and is dropping cluster communication packets.


I have seen this twice:
- One switch was broken and lost packets -> new switch
- The other was a core 3Com switch; we seemed unable to configure it 
not to drop our cluster communication packets, so we just bought a dumb 
enough HP 1810-24g switch for the Proxmox nodes alone and the problem 
was solved.


Cheers

El 19/06/17 a las 16:45, Gerald Brandt escribió:

Hi,

My Proxmox servers intermittently don't see each other anymore. I keep 
losing quorum.


I have 3 Proxmox servers, and they will randomly not see each other. 
For a few seconds proxmox-2 will see proxmox-1, but not proxmox-3. 
Then it'll switch and proxmox-2 will only see proxmox-3, then it will 
see no one but itself.


I have the IP and names in /etc/hosts and in DNS.

Any idea where I can start looking?


Gerald







___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox servers don't see each other anymore

2017-06-19 Thread Gerald Brandt
I can still ping each server when there is an issue. Don't want to power 
cycle the switches during the workday, but may have to.


Gerald



On 2017-06-19 09:57 AM, Gilberto Nunes wrote:

Hello

Probably some network trouble??
Did you check cables, switch and so on??
When the problem appears, can you ping each server?

2017-06-19 11:45 GMT-03:00 Gerald Brandt <g...@majentis.com>:


Hi,

My Proxmox servers intermittently don't see each other anymore. I keep
losing quorum.

I have 3 Proxmox servers, and they will randomly not see each other. For a
few seconds proxmox-2 will see proxmox-1, but not proxmox-3. Then it'll
switch and proxmox-2 will only see proxmox-3, then it will see no one but
itself.

I have the IP and names in /etc/hosts and in DNS.

Any idea where I can start looking?


Gerald








___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Broken cluster

2017-03-14 Thread Gerald Brandt



On 2017-03-14 02:00 PM, Kevin Lemonnier wrote:

Hi,

I've just been given the task of maintaining an existing "cluster"
of Proxmox 4. I'm putting quotes around the word because currently,
it doesn't work. Each node seems to have been somehow added to a
cluster and can see the other nodes, and the summary tab does work
for each node, but they all show offline except the one I log on to.

Since they were all different versions, I just tried updating them
to the latest 4.4, but that didn't help. Then I took a look and
realised that they are all missing the /etc/pve/corosync.conf file.
The command "pvecm status" on some of them just returns that the
file is missing, while some others do show the status of a cluster
where they are the only node connected (with "Activity blocked"
since they don't have quorum).

Any idea what happened? How to fix it?


Looks like they can't find each other. Make sure each server has 
/etc/hosts entries for all the other servers.
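
Something like this on every node, one line per cluster member (addresses hypothetical):

# /etc/hosts
192.168.1.11  proxmox-1
192.168.1.12  proxmox-2
192.168.1.13  proxmox-3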


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Lost cluster. Resolved.

2017-03-06 Thread Gerald Brandt



On 2017-03-06 09:23 AM, Gerald Brandt wrote:

Hi,

Last night, a friend I hadn't heard from in a year called me. His 
4-server Proxmox 4.2 system was down and wasn't coming back up. Each 
node could only see itself as up, and some of the nodes didn't show 
every machine in the cluster. Bizarre.


The end result was, none of the machines in the cluster had the other 
nodes in the hosts file. I added them, and it all came up nice.


How was the system able to run before?

My guess is their DNS server provided the names, and when the cluster 
didn't have quorum, and couldn't bring up the DNS server VM, it all 
fell apart. However, that doesn't make sense either, since the DNS 
server needed to be running before the cluster and quorum worked, and 
without quorum, there was no DNS server.


Gerald



Just found out something new. The DNS server was running on an old 
XenServer system, and was recently turned off. So, Proxmox could've been 
getting node names<-->IP addresses via DNS, and starting up OK.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Lost cluster. Resolved.

2017-03-06 Thread Gerald Brandt

Hi,

Last night, a friend I hadn't heard from in a year called me. His 
4-server Proxmox 4.2 system was down and wasn't coming back up. Each 
node could only see itself as up, and some of the nodes didn't show 
every machine in the cluster. Bizarre.


The end result was, none of the machines in the cluster had the other 
nodes in the hosts file. I added them, and it all came up nice.


How was the system able to run before?

My guess is their DNS server provided the names, and when the cluster 
didn't have quorum, and couldn't bring up the DNS server VM, it all fell 
apart. However, that doesn't make sense either, since the DNS server 
needed to be running before the cluster and quorum worked, and without 
quorum, there was no DNS server.


Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM migrates then locks up

2017-02-10 Thread Gerald Brandt



On 2017-02-10 01:54 PM, Michael Rasmussen wrote:

On Fri, 10 Feb 2017 13:17:55 -0600
Gerald Brandt <g...@majentis.com> wrote:


No change. Migration from the Xeon system to the AMD works just fine.


This was an issue way back which perhaps has reemerged from the big
void.

Please provide:

 From both proxmox nodes and from the VM (if it is Linux):
lscpu

If VM is *BSD: dmesg -a |grep -i features

If VM is Windows ask somebody else.




Xeon:
root@gbr-proxmox-2:~# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):4
On-line CPU(s) list:   0-3
Thread(s) per core:1
Core(s) per socket:2
Socket(s): 2
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 15
Model name:Intel(R) Xeon(R) CPU 5148  @ 2.33GHz
Stepping:  11
CPU MHz:   2333.205
BogoMIPS:  4666.69
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  4096K
NUMA node0 CPU(s): 0-3


AMD:
root@gbr-proxmox-1:~# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):8
On-line CPU(s) list:   0-7
Thread(s) per core:2
Core(s) per socket:4
Socket(s): 1
NUMA node(s):  1
Vendor ID: AuthenticAMD
CPU family:21
Model: 1
Model name:AMD FX(tm)-8150 Eight-Core Processor
Stepping:  2
CPU MHz:   4153.955
BogoMIPS:  8307.91
Virtualization:AMD-V
L1d cache: 16K
L1i cache: 64K
L2 cache:  2048K
L3 cache:  8192K
NUMA node0 CPU(s): 0-7

VM:
gbr@vcs:~$ lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):1
On-line CPU(s) list:   0
Thread(s) per core:1
Core(s) per socket:1
Socket(s): 1
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 6
Model name:QEMU Virtual CPU version 2.5+
Stepping:  3
CPU MHz:   2333.332
BogoMIPS:  4666.66
Hypervisor vendor: KVM
Virtualization type:   full
L1d cache: 32K
L1i cache: 32K
L2 cache:  4096K
NUMA node0 CPU(s): 0
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl 
xtopology pni cx16 x2apic hypervisor lahf_lm


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM migrates then locks up

2017-02-10 Thread Gerald Brandt



On 2017-02-10 01:22 PM, Chance Ellis wrote:

Does the VM use memory ballooning?





It uses fixed memory. There was a checkmark for ballooning that was on. 
I unchecked it and got the same results.



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM migrates then locks up

2017-02-10 Thread Gerald Brandt



On 2017-02-10 01:12 PM, Michael Rasmussen wrote:

On Fri, 10 Feb 2017 12:16:07 -0600
Gerald Brandt <g...@majentis.com> wrote:


No errors on migration and processor type is KVM 64


Try qemu 64 instead.



No change. Migration from the Xeon system to the AMD works just fine.
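
For reference, this is the knob being toggled (VM ID hypothetical); kvm64 and qemu64 are the lowest-common-denominator CPU models meant exactly for mixed Intel/AMD clusters like this one:

qm set 100 --cpu qemu64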

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM migrates then locks up

2017-02-10 Thread Gerald Brandt



On 2017-02-10 11:34 AM, Chance Ellis wrote:

What is the processor type set to on the vm?

Any errors from the migration?

On 2/10/17, 11:59 AM, "pve-user on behalf of Gerald Brandt" 
<pve-user-boun...@pve.proxmox.com on behalf of g...@majentis.com> wrote:

 Hi,

 I'm running an AMD FX(tm)-8150 Eight-Core Processor. Second system is
 running an Intel(R) Xeon(R) CPU 5148 @ 2.33GHz.

 When I migrate a machine from the AMD to the Xeon, the VM locks up after
 migration. Both machines are running 4.4, and are up to date.

 Any ideas?

 Gerald



No errors on migration and processor type is KVM 64

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VM migrates then locks up

2017-02-10 Thread Gerald Brandt

Hi,

I'm running an AMD FX(tm)-8150 Eight-Core Processor. Second system is 
running an Intel(R) Xeon(R) CPU 5148 @ 2.33GHz.


When I migrate a machine from the AMD to the Xeon, the VM locks up after 
migration. Both machines are running 4.4, and are up to date.


Any ideas?

Gerald



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] A stop job is running... (xxx/no limit)

2016-11-23 Thread Gerald Brandt



On 2016-11-23 06:30 AM, Marco Gaiarin wrote:

Mandi! Gerald Brandt
   In chel di` si favelave...


I'm trying to shut down a server, and it waits on 'A stop job is
running... (xx/ no limit)'. Why is there no time limit, and how can I
set one?

NFS storage?



Yup. Why, does that make a difference?

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] A stop job is running... (xxx/no limit)

2016-11-22 Thread Gerald Brandt

Hi,

I'm trying to shut down a server, and it waits on 'A stop job is 
running... (xx/ no limit)'. Why is there no time limit, and how can I 
set one?


Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Slow speeds when KVM guest is on NFS

2016-11-15 Thread Gerald Brandt

I don't know if it helps, but I always switch to NFSv4.

nfs: storage
export /proxmox
server 172.23.4.16
path /mnt/pve/storage
options vers=4
maxfiles 1
content iso,backup,images

Gerald

On 2016-11-15 08:48 AM, Mikhail wrote:

Hello,

Please help me figure out why I'm seeing slow speeds when a KVM guest is
on NFS storage. I have a pretty standard setup, running Proxmox 4.1-1.
The storage server is on NFS connected directly (no switches/hubs,
direct NIC-to-NIC connection) via Gigabit ethernet.

I just launched a stock Debian-8.3 ISO installation on a KVM guest whose
disk resides on NFS, and I'm seeing terribly slow file copy speeds
during the Debian install procedure - about 200-600 kilobytes/second
according to "bwm-ng" output on the storage server. I also tried a
direct write from my Proxmox host via NFS using "dd" and the results
show near-1gbit speeds:

# dd if=/dev/zero of=10G bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 115.951 s, 90.4 MB/s

What could be an issue?

On Proxmox host:

# cat /proc/mounts |grep vmnf
192.168.4.1:/mnt/vmnfs /mnt/pve/vmnfs nfs
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.4.1,mountvers=3,mountport=47825,mountproto=udp,local_lock=none,addr=192.168.4.1
0 0

storage.cfg:
nfs: vmnfs
export /mnt/vmnfs
server 192.168.4.1
path /mnt/pve/vmnfs
content images
options vers=3
maxfiles 1

KVM guest config:
# cat /etc/pve/qemu-server/85103.conf
bootdisk: virtio0
cores: 1
ide2: ISOimages:iso/debian-8.3.0-amd64-CD-1.iso,media=cdrom
memory: 2048
name: WEB
net0: virtio=3A:39:66:30:63:32,bridge=vmbr0,tag=85
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=97ea543f-ca64-43ab-9d66-9d1c9cd179b0
sockets: 1
virtio0: vmnfs:85103/vm-85103-disk-1.qcow2,size=50G

Any suggestions where to start looking is greatly appreciated.

Thanks.




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel oops

2016-11-15 Thread Gerald Brandt



On 2016-11-14 03:39 AM, Emmanuel Kasper wrote:

I am on the no-subscription repository, and I just did an update
yesterday to see if it would fix the error. I'll be running a memtest
today to see if I can find anything.

I hadn't done an update in a while before that, so I'm leaning towards
a hardware issue. What do you think?

Yes, most probably the ram is the culprit. You might also check that the
RAM modules are properly seated on the motherboard.





Bad RAM is exactly what it was. 2 of the 4 DIMMs went bad after 4 years.

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel oops

2016-11-13 Thread Gerald Brandt



On 2016-11-13 08:15 AM, Gerald Brandt wrote:

Hi,

I'm getting a lot of crashes on my Proxmox box. I am running Proxmox on 
a Debian base install, but I have another box set up the same way, and 
it is fine.



Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442402] [ cut 
here ]
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442408] WARNING: CPU: 2 
PID: 0 at kernel/rcu/tree.c:2733 rcu_process_callbacks+0x5bb/0x5e0()
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442409] Modules linked 
in: nfsv3 rpcsec_gss_krb5 nfsv4 ip_set ip6table_filter ip6_tables 
iptable_filter ip_tables softdog x_tables nfsd auth_rpcgss nfs_acl nfs 
lockd grace fscache sunrpc ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad 
ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
nfnetlink_log nfnetlink xfs snd_hda_codec_hdmi nouveau eeepc_wmi 
asus_wmi kvm_amd kvm sparse_keymap irqbypass mxm_wmi crct10dif_pclmul 
snd_hda_codec_realtek crc32_pclmul video snd_hda_codec_generic ttm 
snd_hda_intel drm_kms_helper drm snd_hda_codec aesni_intel aes_x86_64 
lrw gf128mul glue_helper snd_hda_core ablk_helper cryptd snd_hwdep 
i2c_algo_bit snd_pcm fb_sys_fops syscopyarea snd_timer sysfillrect snd 
sysimgblt input_leds pcspkr serio_raw soundcore edac_mce_amd k10temp 
fam15h_power edac_core shpchp i2c_piix4 8250_fintek mac_hid wmi 
vhost_net vhost macvtap macvlan it87 hwmon_vid autofs4 btrfs raid456 
async_raid6_recov async_memcpy async_pq async_xor async_tx xor 
raid6_pq libcrc32c raid1 ses enclosure uas usb_storage firewire_ohci 
r8169 mii firewire_core crc_itu_t sata_sil24 ahci libahci fjes
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442454] CPU: 2 PID: 0 
Comm: swapper/2 Not tainted 4.4.21-1-pve #1
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442455] Hardware name: To 
be filled by O.E.M. To be filled by O.E.M./SABERTOOTH 990FX, BIOS 0901 
11/24/2011
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442457] 0086 
63ad933f85fa0f2b 88083fc83e70 813f3f83
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442459]  
81ccfadb 88083fc83ea8 81081806
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442460] 81e576c0 
88083fc97f38 0246 

Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442462] Call Trace:
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442463]  
[] dump_stack+0x63/0x90
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442469] 
[] warn_slowpath_common+0x86/0xc0
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442471] 
[] warn_slowpath_null+0x1a/0x20
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442473] 
[] rcu_process_callbacks+0x5bb/0x5e0
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442475] 
[] __do_softirq+0x10e/0x2a0
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442476] 
[] irq_exit+0x8e/0x90
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442480] 
[] smp_apic_timer_interrupt+0x42/0x50
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442481] 
[] apic_timer_interrupt+0x82/0x90
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442482]  
[] ? cpuidle_enter_state+0x10a/0x260
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442487] 
[] ? cpuidle_enter_state+0xe6/0x260
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442488] 
[] cpuidle_enter+0x17/0x20
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442491] 
[] call_cpuidle+0x3b/0x70
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442492] 
[] ? cpuidle_select+0x13/0x20
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442494] 
[] cpu_startup_entry+0x2bf/0x380
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442496] 
[] start_secondary+0x154/0x190
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442497] ---[ end trace 
8a742910926b0ed4 ]---
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.617812] BUG: unable to 
handle kernel paging request at bb00
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.618057] IP: 
[] kmem_cache_alloc+0x77/0x200
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.618662] PGD 5cb1c5067 PUD 
5cb0f2067 PMD 0

Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.619431] Oops:  [#1] SMP
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.620253] Modules linked 
in: nfsv3 rpcsec_gss_krb5 nfsv4 ip_set ip6table_filter ip6_tables 
iptable_filter ip_tables softdog x_tables nfsd auth_rpcgss nfs_acl nfs 
lockd grace fscache sunrpc ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad 
ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
nfnetlink_log nfnetlink xfs snd_hda_codec_hdmi nouveau eeepc_wmi 
asus_wmi kvm_amd kvm sparse_keymap irqbypass mxm_wmi crct10dif_pclmul 
snd_hda_codec_realtek crc32_pclmul video snd_hda_codec_generic ttm 
snd_hda_intel drm_kms_helper drm snd_hda_codec aesni_intel aes_x86_64 
lrw gf128mul glue_helper snd_hda_core ablk_helper cryptd snd_hwdep 
i2c_algo_bit snd_pcm fb_sys_fops syscopyarea snd_timer sysfillrect snd 
sysimgblt input_leds pcspkr serio_raw soundcore edac_mce_amd k10temp 
fam15h_power edac_core shpchp i2c_piix4 8250_fintek mac_hid wmi 
vhost_net vhost macvtap macvlan it87 hwmon_vid

[PVE-User] Kernel oops

2016-11-13 Thread Gerald Brandt

Hi,

I'm getting a lot of crashes on my Proxmox box. I am running Proxmox on a 
Debian base install, but I have another box set up the same way, and it 
is fine.



Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442402] [ cut 
here ]
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442408] WARNING: CPU: 2 
PID: 0 at kernel/rcu/tree.c:2733 rcu_process_callbacks+0x5bb/0x5e0()
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442409] Modules linked in: 
nfsv3 rpcsec_gss_krb5 nfsv4 ip_set ip6table_filter ip6_tables 
iptable_filter ip_tables softdog x_tables nfsd auth_rpcgss nfs_acl nfs 
lockd grace fscache sunrpc ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad 
ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
nfnetlink_log nfnetlink xfs snd_hda_codec_hdmi nouveau eeepc_wmi 
asus_wmi kvm_amd kvm sparse_keymap irqbypass mxm_wmi crct10dif_pclmul 
snd_hda_codec_realtek crc32_pclmul video snd_hda_codec_generic ttm 
snd_hda_intel drm_kms_helper drm snd_hda_codec aesni_intel aes_x86_64 
lrw gf128mul glue_helper snd_hda_core ablk_helper cryptd snd_hwdep 
i2c_algo_bit snd_pcm fb_sys_fops syscopyarea snd_timer sysfillrect snd 
sysimgblt input_leds pcspkr serio_raw soundcore edac_mce_amd k10temp 
fam15h_power edac_core shpchp i2c_piix4 8250_fintek mac_hid wmi 
vhost_net vhost macvtap macvlan it87 hwmon_vid autofs4 btrfs raid456 
async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq 
libcrc32c raid1 ses enclosure uas usb_storage firewire_ohci r8169 mii 
firewire_core crc_itu_t sata_sil24 ahci libahci fjes
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442454] CPU: 2 PID: 0 Comm: 
swapper/2 Not tainted 4.4.21-1-pve #1
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442455] Hardware name: To 
be filled by O.E.M. To be filled by O.E.M./SABERTOOTH 990FX, BIOS 0901 
11/24/2011
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442457] 0086 
63ad933f85fa0f2b 88083fc83e70 813f3f83
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442459]  
81ccfadb 88083fc83ea8 81081806
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442460] 81e576c0 
88083fc97f38 0246 

Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442462] Call Trace:
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442463]   
[] dump_stack+0x63/0x90
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442469] 
[] warn_slowpath_common+0x86/0xc0
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442471] 
[] warn_slowpath_null+0x1a/0x20
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442473] 
[] rcu_process_callbacks+0x5bb/0x5e0
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442475] 
[] __do_softirq+0x10e/0x2a0
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442476] 
[] irq_exit+0x8e/0x90
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442480] 
[] smp_apic_timer_interrupt+0x42/0x50
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442481] 
[] apic_timer_interrupt+0x82/0x90
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442482]   
[] ? cpuidle_enter_state+0x10a/0x260
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442487] 
[] ? cpuidle_enter_state+0xe6/0x260
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442488] 
[] cpuidle_enter+0x17/0x20
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442491] 
[] call_cpuidle+0x3b/0x70
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442492] 
[] ? cpuidle_select+0x13/0x20
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442494] 
[] cpu_startup_entry+0x2bf/0x380
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442496] 
[] start_secondary+0x154/0x190
Nov 13 06:15:54 gbr-proxmox-1 kernel: [61228.442497] ---[ end trace 
8a742910926b0ed4 ]---
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.617812] BUG: unable to 
handle kernel paging request at bb00
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.618057] IP: 
[] kmem_cache_alloc+0x77/0x200
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.618662] PGD 5cb1c5067 PUD 
5cb0f2067 PMD 0

Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.619431] Oops:  [#1] SMP
Nov 13 06:17:06 gbr-proxmox-1 kernel: [61300.620253] Modules linked in: 
nfsv3 rpcsec_gss_krb5 nfsv4 ip_set ip6table_filter ip6_tables 
iptable_filter ip_tables softdog x_tables nfsd auth_rpcgss nfs_acl nfs 
lockd grace fscache sunrpc ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad 
ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
nfnetlink_log nfnetlink xfs snd_hda_codec_hdmi nouveau eeepc_wmi 
asus_wmi kvm_amd kvm sparse_keymap irqbypass mxm_wmi crct10dif_pclmul 
snd_hda_codec_realtek crc32_pclmul video snd_hda_codec_generic ttm 
snd_hda_intel drm_kms_helper drm snd_hda_codec aesni_intel aes_x86_64 
lrw gf128mul glue_helper snd_hda_core ablk_helper cryptd snd_hwdep 
i2c_algo_bit snd_pcm fb_sys_fops syscopyarea snd_timer sysfillrect snd 
sysimgblt input_leds pcspkr serio_raw soundcore edac_mce_amd k10temp 
fam15h_power edac_core shpchp i2c_piix4 8250_fintek mac_hid wmi 
vhost_net vhost macvtap macvlan it87 hwmon_vid autofs4 btrfs raid456 
async_raid6_recov 

Re: [PVE-User] P2V Windows2003 Server (AD)

2016-10-08 Thread Gerald Brandt



On 2016-10-08 11:24 AM, Alain Péan wrote:



Le 08/10/2016 à 18:22, Gerald Brandt a écrit :
Can't do that unless you're running 2003R2. Straight 2003 can't be 
joined by much. 


I hope it is at least 2003 R2

Alain

Mine isn't, and it's a PITA. I hate IT people who don't upgrade when 
needed.


Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] P2V Windows2003 Server (AD)

2016-10-08 Thread Gerald Brandt



On 2016-10-08 11:19 AM, Alain Péan wrote:



Le 07/10/2016 à 11:43, HK 590 Dicky 梁家棋 資科 a écrit :
Does anyone here have experience migrating a physical Windows 2003 server 
to Proxmox?
I used a tool, "EaseUS Todo Backup Advanced Server", to convert the 
physical server to .vmdk,
then created a new VM with vmdk format and replaced its .vmdk file with 
the one just created,
but it can't start up successfully. I can see the Windows 2003 logo for 2 
seconds, then it keeps rebooting..


Any suggestions on how to migrate a physical Windows 2003 server to Proxmox? 


I wouldn't do that, as Windows 2003 Server has not been supported by 
Microsoft since July 2015.


I would create a new VM using Windows 2008 R2 or 2012/2012 R2, join it 
to the AD domain, then promote it as an AD controller, then demote the 
old server.


My two cents.

Alain



Can't do that unless you're running 2003R2. Straight 2003 can't be 
joined by much.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Live Migration Problem

2016-07-14 Thread Gerald Brandt



On 2016-07-14 06:56 AM, Kilian Ries wrote:

No, thats an external NFS Storage (Synology NAS).




I use Synology (replicated) for my storage. Migration between 3 servers 
(2 AMD, 1 Intel) works fine. I did change proxmox to use NFSv4 instead 
of NFSv3.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] about IO Delay using openvz and zimbra

2016-06-17 Thread Gerald Brandt

Hi,

Your fsyncs per second are brutally low, and Proxmox needs high fsync 
rates. I would start there.
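
(pveperf reports this as FSYNCS/SECOND in the output quoted below; on a 
healthy host it is usually in the hundreds or more.)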


Gerald


On 2016-06-16 09:38 PM, Orlando Martinez Bao wrote:

Hello friends

I am SysAdmin at the Agrarian University of Havana, Cuba.

  


I have installed Proxmox v3.4 here in a cluster of seven nodes, and for some
days I have been having problems with one node in the cluster, which only has
a container with Zimbra 8.

The problem is I'm having a lot of I/O Delay, and that server is so slow
that sometimes the service is down.

The server is a Dell PowerEdge T320 with an Intel Xeon E5-2420 (12 cores),
12 GB RAM, and two 7200 RPM HDDs configured as RAID1.

I virtualized Zimbra 8 using a 12.04 template; the container has 8 cores,
8 GB RAM, and a 500 GB HD, and its storage is local storage.

Below I put the pveperf output, including when the VM is not running. Look
at the BUFFERED READS that are marked; they are very bad. At those moments
I have seen IO Delay of up to 50%.

  


root@n07:~# pveperf
CPU BOGOMIPS:      45601.20
REGEX/SECOND:      1025079
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    1.51 MB/sec
AVERAGE SEEK TIME: 165.88 ms
FSYNCS/SECOND:     0.40
DNS EXT:           206.20 ms
DNS INT:           0.91 ms (unah.edu.cu)

root@n07:~# pveperf
CPU BOGOMIPS:      45601.20
REGEX/SECOND:      1048361
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    0.78 MB/sec
AVERAGE SEEK TIME: 283.84 ms
FSYNCS/SECOND:     0.50
DNS EXT:           206.13 ms
DNS INT:           0.89 ms (unah.edu.cu)

root@n07:~# pveperf   (this one was after I stopped the VM)
CPU BOGOMIPS:      45601.20
REGEX/SECOND:      1073712
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    113.04 MB/sec
AVERAGE SEEK TIME: 13.49 ms
FSYNCS/SECOND:     9.66
DNS EXT:           198.59 ms
DNS INT:           0.86 ms (unah.edu.cu)

root@n07:~# pveperf
CPU BOGOMIPS:      45601.20
REGEX/SECOND:      1024213
HD SIZE:           9.84 GB (/dev/mapper/pve-root)
BUFFERED READS:    164.30 MB/sec
AVERAGE SEEK TIME: 13.61 ms
FSYNCS/SECOND:     16.34
DNS EXT:           234.75 ms
DNS INT:           0.94 ms (unah.edu.cu)

  

  


Please help me.

Best Regards

Orlando

  


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Online migration problems with pve 4.2

2016-05-11 Thread Gerald Brandt

Hi,

Try from the command line:

qm migrate <vmid> <target node> --online
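
For example (the VMID and node name here are made up): qm migrate 101 pve2 --online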

Gerald


On 2016-05-11 12:20 PM, Albert Dengg wrote:

hi,

I just upgraded a PVE cluster to PVE 4.2 (enterprise repo), but I
have the problem that I cannot do any online migrations since the
upgrade.

PVE versions (this node has already rebooted after the upgrade):
[dengg@pve1:~]> pveversion -v
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-43-pve: 2.6.32-166
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-2.6.32-39-pve: 2.6.32-157
pve-kernel-3.10.0-11-pve: 3.10.0-36
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-3.10.0-13-pve: 3.10.0-38
pve-kernel-2.6.32-41-pve: 2.6.32-164
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
fence-agents-pve: 4.0.20-1
openvswitch-switch: 2.3.2-3

(yes, I should clean up old kernel versions...)

steps I took so far:
* migrated all VMs off one node
* upgraded this node
* rebooted it
* tried to migrate VMs back to this host to upgrade another one

since there was a thread on the mailing list about migration
problems that were fixed with newer qemu-server/pve-qemu-kvm
versions, I then upgraded the other two nodes (disabling HA first).

I still cannot online-migrate, even a VM that was booted on the
fully upgraded node.

Unfortunately, the error message I get is not really informative:
ERROR: online migration failure - aborting

Am I running into some known problem here, or is this a new issue?

thx

regards,
albert

ps: rebooting the remaining nodes without migrating the VMs off them
would be really inconvenient...


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NFSv4 support for NFS storage?

2016-05-06 Thread Gerald Brandt


On 2016-05-06 08:41 AM, Frank Thommen wrote:

Hi,

new fileservers in our department are configured to require NFSv4 and 
do not support NFSv3 any more.  That worked fine for PVE 3.x by 
patching the "values.options" assignment in 
/usr/share/pve-manager/ext?/pvemanagerlib.js.


When applying the same patch to PVE 4.x, the storage is mounted 
correctly and also shown in the summary, but the status is inactive 
and `pvesm list <storage>` returns "mount error: mount.nfs: 
/mnt/pve/<storage> is busy or already mounted". In the content tab, 
the error "mount error: mount.nfs: /mnt/pve/<storage> is busy or 
already mounted (500)" is shown.


Is there any other patch which has to be applied to make NFSv4 working 
with PVE 4.x and is there already a timeframe, when NFSv4 will be 
supported out of the box?


We are currently running "pve-manager/4.1-33/de386c1a (running kernel: 
4.2.6-1-pve)".


Cheers
Frank


I go into /etc/pve/storage.cfg and add or change vers=4 in the storage's options
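
For illustration, an NFS entry there ends up looking something like this 
(the storage name, server, and export are made-up examples):

nfs: mynas
        server 192.168.1.10
        export /volume1/proxmox
        path /mnt/pve/mynas
        content images,backup
        options vers=4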

Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM stuck in start

2016-04-24 Thread Gerald Brandt



On 2016-04-24 04:59 PM, Lindsay Mathieson wrote:

On 25/04/2016 6:38 AM, Gerald Brandt wrote:
I have a VM stuck in start. It's not actually starting, and there 
doesn't seem to be anything I can do to unstick it.


28128 ?Ds 0:00 task 
UPID:erl-proxmox-1:6DE0:0237B4E5:571D1C86:qmstart:111:root@pam:


How do I get this thing to start? I'm currently remote, so can't hard 
reset the Proxmox server... not that I really want to.



I presume the VMID is "111"?

try
systemctl stop 111.scope

then

qm start 111



I ended up hard resetting the server with

echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger
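
(For reference: the first echo enables the magic SysRq interface, and 'b' 
reboots the machine immediately, without syncing or unmounting disks, so 
it is strictly a last resort.)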

This happens almost weekly, so I'll try your suggestion next time and 
report back.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM stuck in start

2016-04-24 Thread Gerald Brandt



On 2016-04-24 03:38 PM, Gerald Brandt wrote:

Hi,

I have a VM stuck in start. It's not actually starting, and there 
doesn't seem to be anything I can do to unstick it.


28128 ?Ds 0:00 task 
UPID:erl-proxmox-1:6DE0:0237B4E5:571D1C86:qmstart:111:root@pam:


How do I get this thing to start? I'm currently remote, so can't hard 
reset the Proxmox server... not that I really want to.


Gerald


I just tried to manually migrate the machines off, and I got this:

# qm migrate 106 erl-proxmox-2 --online
Apr 24 16:08:16 ERROR: migration aborted (duration 00:00:03): VM 106 qmp 
command 'query-machines' failed - unable to connect to VM 106 qmp socket 
- timeout after 31 retries

migration aborted

# pveversion -v
proxmox-ve: 4.1-41 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-22 (running version: 4.1-22/aca130cf)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-41
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-36
qemu-server: 4.0-64
pve-firmware: 1.1-7
libpve-common-perl: 4.0-54
libpve-access-control: 4.0-13
libpve-storage-perl: 4.0-45
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-9
pve-container: 1.0-52
pve-firewall: 2.0-22
pve-ha-manager: 1.0-25
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VM stuck in start

2016-04-24 Thread Gerald Brandt

Hi,

I have a VM stuck in start. It's not actually starting, and there 
doesn't seem to be anything I can do to unstick it.


28128 ?Ds 0:00 task 
UPID:erl-proxmox-1:6DE0:0237B4E5:571D1C86:qmstart:111:root@pam:


How do I get this thing to start? I'm currently remote, so can't hard 
reset the Proxmox server... not that I really want to.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox not responsive

2016-04-20 Thread Gerald Brandt
Proxmox 4 has way more issues than 3.4 did. My stability has gotten 
significantly worse since I went to 4.


Gerald


On 2016-04-20 12:58 AM, Marc Cousin wrote:

And a week later… it hung. So NFSv4 doesn't solve the problem. It just
makes it less frequent it seems.


On 13/04/2016 09:00, Marc Cousin wrote:

A bit more than a week after… NFSv4 seems to have done the trick for us.
Not one problem since then…

On 04/04/2016 18:47, Marc Cousin wrote:

Hi,

For what it's worth, I just migrated my servers to proxmox 4, and also
having random (one every week) NFS hangs, while the NFS server is still
available. Only rebooting my server did the trick both times.

On 04/04/2016 18:35, Gilberto Nunes wrote:

Hi
I don't know why, but it seems to me NFS doesn't work well with Proxmox.
I had trouble in the past with simple storage and NFS.
Because of that I switched to GlusterFS.
GlusterFS can also work as an NFS server.
I suggest you give it a try...

Cheers

2016-04-04 12:14 GMT-03:00 Gerald Brandt <g...@majentis.com>:


On 2016-04-04 09:31 AM, Gerald Brandt wrote:


Hi,

I have a 3 node Proxmox Cluster. Two of the nodes are partially
non-responsive this morning.

Weekend snapshots to an NFS drive hung.
Trying to look at the NFS via Proxmox results in communication failure.
Looking via command line is fine.
Trying to migrate VM's off of box results in communication failure
(Timeout)
Looking at a single VM's summary shows no data and communication failure
All VM's show as online

I've seen similar, but with all VM's showing as offline and VM numbers
only, no names. So this is different.

Is restarting pve-manager recommended here?

Gerald


restarting pve-manager didn't help. Looks like a Proxmox reboot is needed, but
without being able to migrate the VMs off, it's downtime.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox not responsive

2016-04-04 Thread Gerald Brandt

Hi,

I've been running NFS storage on Proxmox 3.x for quite some time. 
Proxmox 4 a bit more recently. I do force NFSv4 for all of my NFS 
mounts, except the one that hung. I may have to change that to v4 as well.


Luckily, command line worked for me still, so I migrated my VM's (except 
the hung backup one) to another server and reboot. Just about to do the 
same with the second non-responsive server.


I tried GlusterFS back in my XenServer days, and had issues with its 
NFS server. Now I run Synology storage, and GlusterFS is not an option.


Gerald


On 2016-04-04 11:35 AM, Gilberto Nunes wrote:

Hi
I don't know why, but it seems to me NFS doesn't work well with Proxmox.
I had trouble in the past with simple storage and NFS.
Because of that I switched to GlusterFS.
GlusterFS can also work as an NFS server.
I suggest you give it a try...

Cheers

2016-04-04 12:14 GMT-03:00 Gerald Brandt <g...@majentis.com>:



On 2016-04-04 09:31 AM, Gerald Brandt wrote:


Hi,

I have a 3 node Proxmox Cluster. Two of the nodes are partially
non-responsive this morning.

Weekend snapshots to an NFS drive hung.
Trying to look at the NFS via Proxmox results in communication failure.
Looking via command line is fine.
Trying to migrate VM's off of box results in communication failure
(Timeout)
Looking at a single VM's summary shows no data and communication failure
All VM's show as online

I've seen similar, but with all VM's showing as offline and VM numbers
only, no names. So this is different.

Is restarting pve-manager recommended here?

Gerald



restarting pve-manager didn't help. Looks like a Proxmox reboot is needed, but
without being able to migrate the VMs off, it's downtime.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox not responsive

2016-04-04 Thread Gerald Brandt



On 2016-04-04 09:31 AM, Gerald Brandt wrote:

Hi,

I have a 3 node Proxmox Cluster. Two of the nodes are partially 
non-responsive this morning.


Weekend snapshots to an NFS drive hung.
Trying to look at the NFS via Proxmox results in communication 
failure. Looking via command line is fine.
Trying to migrate VM's off of box results in communication failure 
(Timeout)

Looking at a single VM's summary shows no data and communication failure
All VM's show as online

I've seen similar, but with all VM's showing as offline and VM numbers 
only, no names. So this is different.


Is restarting pve-manager recommended here?

Gerald



restarting pve-manager didn't help. Looks like a Proxmox reboot is needed, but 
without being able to migrate the VMs off, it's downtime.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox not responsive

2016-04-04 Thread Gerald Brandt

Hi,

I have a 3 node Proxmox Cluster. Two of the nodes are partially 
non-responsive this morning.


Weekend snapshots to an NFS drive hung.
Trying to look at the NFS via Proxmox results in communication failure. 
Looking via command line is fine.

Trying to migrate VM's off of box results in communication failure (Timeout)
Looking at a single VM's summary shows no data and communication failure
All VM's show as online

I've seen similar, but with all VM's showing as offline and VM numbers 
only, no names. So this is different.


Is restarting pve-manager recommended here?

Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Backup best practices recommendation...

2016-02-17 Thread Gerald Brandt
I use BackupPC to back up data from servers daily. vzdump backups I do 
weekly, and use for disaster recovery purposes only.


Gerald


On 2016-02-17 04:00 AM, Gilberto Nunes wrote:

Hello list

I have a VM KVM with Linux Ubuntu 14.04 running in PVE 4.1.
Everything is fine.
This VM is our Zimbra mail server, on which we have about 1,000 accounts.
The amount of disk space is about 1.4 TB.
As you can see, with a big partition like this, it's very difficult to 
make and maintain backups...

And, to make everything worse, the vzdump backup takes very long to complete...
What recommendations can you guys give me to mitigate this situation?
I will be grateful for any advice.




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 4.1 on Debian 8.3.0

2016-02-07 Thread Gerald Brandt

Great, thanks.

On 2016-02-07 01:47 PM, Martin Maurer wrote:

yes, no problem.

we will update the wiki soon, no changes on the howto.

On 07.02.2016 17:30, Gerald Brandt wrote:

Hi,

Can I install 4.1 on top of Debian 8.3.0? The Wiki only shows 8.2.0
instructions.

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox 4.1 on Debian 8.3.0

2016-02-07 Thread Gerald Brandt

Hi,

Can I install 4.1 on top of Debian 8.3.0? The Wiki only shows 8.2.0 
instructions.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] gzip rsyncable

2016-01-19 Thread Gerald Brandt

Hi,

Does Proxmox 3.4 pass --rsyncable to gzip? If not, is there a way I can 
add it?
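
One workaround I can think of, if it doesn't (an untested sketch; it 
assumes vzdump invokes plain 'gzip' via $PATH), would be a wrapper that 
shadows the real binary:

#!/bin/sh
# Hypothetical /usr/local/bin/gzip -- a directory that sorts before /bin
# in root's PATH. Forces --rsyncable on every gzip invocation.
exec /bin/gzip --rsyncable "$@"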


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Won't start

2016-01-13 Thread Gerald Brandt

Are all your other VM's using Nehalem? Did you try kvm64 for CPU type?

Gerald


On 2016-01-13 10:44 AM, Gilberto Nunes wrote:

I already set to none in CD configuration. No luck!

2016-01-13 14:48 GMT-02:00 Angel Docampo:


On 13/01/16 17:24, Gilberto Nunes wrote:


But, when I point the virtual CD/DVD to an ISO image, I get that
message...
There is no special requirement to this VM... ISO is Centos 7 x64.
VM config:




Can it start without CD?
-- 



Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 114
Mob. 670.299.381


___
pve-user mailing list
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Backup won't stop and bad process

2016-01-04 Thread Gerald Brandt

Hi,

I have a 4.1 install, and I have a backup that won't stop.  It's been 
stuck at 79% on the same VM for days.


I also have a process taking 100% of a CPU:

sh -c echo 3 > /proc/sys/vm/drop_caches

I'll be bringing up a new server today to see if I can migrate VMs off 
this box and reboot it, but has anyone seen this before?


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Move from 3.4 to 4.1 oops?

2015-12-28 Thread Gerald Brandt



On 2015-12-28 03:18 PM, Gerald Brandt wrote:

Hi,

I had a 3 node (non-ha) cluster of 3.4.  I took one offline, rebuilt 
it with 4.1, and moved some machines over.  It worked, so I did the 
same to a second machine, and added it to the new 4.1 cluster.


Now I want to move some test VM's over from 3.4 to 4.1.  My plan was 
to backup the VM, and do a restore to the new cluster.


Unfortunately, I can't do anything.  The old cluster no longer has 
quorum, and I can't backup a VM.


Any ideas?

Gerald



Figures, as soon as I post...

I ran 'pvecm expected 1' on the 3.4 server, and all is well.

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Move from 3.4 to 4.1 oops?

2015-12-28 Thread Gerald Brandt

Hi,

I had a 3 node (non-ha) cluster of 3.4.  I took one offline, rebuilt it 
with 4.1, and moved some machines over.  It worked, so I did the same to 
a second machine, and added it to the new 4.1 cluster.


Now I want to move some test VM's over from 3.4 to 4.1.  My plan was to 
backup the VM, and do a restore to the new cluster.


Unfortunately, I can't do anything.  The old cluster no longer has 
quorum, and I can't backup a VM.


Any ideas?

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Move from 3.4 to 4.1 oops?

2015-12-28 Thread Gerald Brandt



On 2015-12-28 04:38 PM, Lindsay Mathieson wrote:



On 29/12/2015 7:18 AM, Gerald Brandt wrote:
Unfortunately, I can't do anything.  The old cluster no longer has 
quorum, and I can't backup a VM.



Is the storage shared? If so, you could manually move the VM .conf files

It is shared. Moving the conf files should be significantly faster than 
backup/restore.
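
For anyone trying this later, a minimal sketch (assuming VMID 100 moving 
from node 'pve1' to 'pve2'; /etc/pve is the clustered config filesystem, 
so the cluster needs quorum, or 'pvecm expected 1', first):

mv /etc/pve/nodes/pve1/qemu-server/100.conf /etc/pve/nodes/pve2/qemu-server/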

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gerald Brandt
I use NFSv4, and I've never had any problems.  You have to hand modify 
the storage config file.  I've used Ubuntu NFS servers with DRBD, and 
more recently, Synology boxes (again with DRBD).  I also bond my network 
links for 2GB bandwidth.


I'm not sure why Proxmox doesn't support NFSv4 out of the box.

Gerald


On 2015-10-29 04:46 AM, Gilberto Nunes wrote:

Hi guys

Last night I got an IO error again!
Perhaps I must give up NFS and consider GlusterFS or iSCSI???

Thanks for any help

Best regards

2015-10-27 13:25 GMT-02:00 Gilberto Nunes:


What do you say if I drop soft and leave only proto=udp???

2015-10-27 12:36 GMT-02:00 Michael Rasmussen:

Remember, soft mounts are dangerous and can cause data loss.


On October 27, 2015 2:58:25 PM CET, Gilberto Nunes wrote:

Now the VM seems to be doing well...

I put some limits on the virtual HD (thanks Dmitry),
mounted NFS with soft and proto=udp, and right now I am
stressing the VM with a lot of imapsync jobs, in order to
migrate huge mailboxes from the old server to the VM...
I will perform other tests yet, but I think there is peace
here again...
Thanks for the help, and sorry for blaming Proxmox, guys!
My apologies!


2015-10-26 17:55 GMT-02:00 Hector Suarez Planas:

...

Answering my own question: yes! There is a bug
related to NFS on kernel 3.13, which is the default on
Ubuntu 14.04...
I will try with kernel 3.19...
Sorry for sending this to the list...


Rectify is wise. Do not blame Proxmox for every bad
thing that happens to you with it. You must have
patience with things that come from the world of Open
Source.

:-)

-- 
=

Lic. Hector Suarez Planas
Administrador Nodo CODESA
Santiago de Cuba
-
Blog: http://nihilanthlnxc.cubava.cu/
ICQ ID: 681729738
Conferendo ID: hspcuba
=

___
pve-user mailing list
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
 This mail was virus scanned and spam checked before
delivery. This mail is also DKIM signed. See header
dkim-signature.




-- 


Gilberto Ferreira
+55 (47) 9676-7530 
Skype: gilberto.nunes36




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] I try hard but...

2015-10-29 Thread Gerald Brandt
I didn't use OCFS, either XFS or EXT4.  No need for OCFS unless you're 
running dual primary.


Maybe OCFS is doing something weird for you?

Gerald


On 2015-10-29 06:50 AM, Gilberto Nunes wrote:

That's precisely my scenario...
I have DRBD + OCFS, and a gigabit ethernet cable links the two servers.
The server with DRBD + OCFS serves NFS on Ubuntu 14.


2015-10-29 9:43 GMT-02:00 Gerald Brandt <g...@majentis.com>:


I use NFSv4, and I've never had any problems.  You have to hand
modify the storage config file.  I've used Ubuntu NFS servers with
DRBD, and more recently, Synology boxes (again with DRBD).  I also
bond my network links for 2GB bandwidth.

I'm not sure why Proxmox doesn't support NFSv4 out of the box.

Gerald



On 2015-10-29 04:46 AM, Gilberto Nunes wrote:

Hi guys

Last night I got an IO error again!
Perhaps I must give up NFS and consider GlusterFS or iSCSI???

Thanks for any help

Best regards

2015-10-27 13:25 GMT-02:00 Gilberto Nunes <gilberto.nune...@gmail.com>:

What do you say if I drop soft and leave only proto=udp???

2015-10-27 12:36 GMT-02:00 Michael Rasmussen <m...@miras.org>:

Remember, soft mounts are dangerous and can cause data loss.


On October 27, 2015 2:58:25 PM CET, Gilberto Nunes
<gilberto.nune...@gmail.com> wrote:

Now the VM seems to be doing well...

I put some limits on the virtual HD (thanks Dmitry),
mounted NFS with soft and proto=udp, and right now I am
stressing the VM with a lot of imapsync jobs, in order
to migrate huge mailboxes from the old server to the VM...
I will perform other tests yet, but I think there is peace
here again...
Thanks for the help, and sorry for blaming Proxmox, guys!
My apologies!


2015-10-26 17:55 GMT-02:00 Hector Suarez Planas
<hector.sua...@codesa.co.cu>:

...

Answering my own question: yes! There is a bug
related to NFS on kernel 3.13, which is the
default on Ubuntu 14.04...
I will try with kernel 3.19...
Sorry for sending this to the list...


Rectify is wise. Do not blame Proxmox for every
bad thing that happens to you with it. You must
have patience with things that come from the
world of Open Source.

:-)

-- 
=

Lic. Hector Suarez Planas
Administrador Nodo CODESA
Santiago de Cuba
-
Blog: http://nihilanthlnxc.cubava.cu/
ICQ ID: 681729738
Conferendo ID: hspcuba
=

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
 This mail was virus scanned and spam checked before
delivery. This mail is also DKIM signed. See header
dkim-signature.




-- 


Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36




-- 


Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Windows VM's consistently hang on snapshot backup

2015-10-22 Thread Gerald Brandt

Hi,

I'm running the latest 3.4, and my Windows VMs hang on snapshot 
backups.  Not every time, but 9 out of 10.


I'm running qemu disks on an NFS server.

Has anyone seen this before?  I already tried changing the CPU from kvm64 
to qemu64, but no change.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] NAS4Free and ZFS

2015-09-29 Thread Gerald Brandt
If I had thought things through a bit better, I would just have 
built a couple of NAS4Free boxes instead of using Synology.


Gerald


On 2015-09-29 07:08 AM, Gerald Brandt wrote:

Hi,

I know it's not recommended, but is anyone running NAS4Free in a VM 
with ZFS?


I use it to create user directories with fixed quotas and share them 
over CIFS.  Each user has their own dataset and the dataset is shared.


The Proxmox VM is using direct sync cache mode on the NAS4Free disk.  
I do backup and snapshot daily, so data loss will be minimized if 
anything bad happens.  I also turned off any scrubs on the pool.  I 
used to do them nightly, but read somewhere it wasn't recommended in a 
virtual environment.


The NAS4Free disk is on an NFS file server with its own RAID6 and DRBD 
setup (Synology).


It's been really stable, and I'm considering putting some more 
critical data on it.


Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NAS4Free and ZFS

2015-09-29 Thread Gerald Brandt

Hi,

I know it's not recommended, but is anyone running NAS4Free in a VM with 
ZFS?


I use it to create user directories with fixed quotas and share them 
over CIFS.  Each user has their own dataset and the dataset is shared.


The Proxmox VM is using direct sync cache mode on the NAS4Free disk.  I 
do backup and snapshot daily, so data loss will be minimized if anything 
bad happens.  I also turned off any scrubs on the pool.  I used to do 
them nightly, but read somewhere it wasn't recommended in a virtual 
environment.


The NAS4Free disk is on an NFS file server with its own RAID6 and DRBD 
setup (Synology).


It's been really stable, and I'm considering putting some more critical 
data on it.


Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM locked icon to GUI

2015-07-20 Thread Gerald Brandt
I moved to Proxmox from Citrix XenServer, and they did it... 
accurately.  I don't know what they did though.


Gerald

On 2015-07-20 03:33 AM, Emmanuel Kasper wrote:

On 07/17/2015 06:40 PM, Gerald Brandt wrote:

On 2015-07-17 01:34 AM, Sten Aus wrote:

Hi

Is it possible to have little lock sign next to VM icon in the GUI
when VM is locked (for backup etc)?


Hi Gerald
A problem here might be that different storages have different
interpretations of what is considered to be online.

IIRC for example with NFS, when the nfs server disappears in the
network, the mount is still considered to be valid by the kernel, and
the next file system calls will hang, waiting indefinitely for the
server to be online again.

So at least for NFS, I don't see an easy way to detect a storage as
being offline.

Emmanuel






___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM locked icon to GUI

2015-07-17 Thread Gerald Brandt

On 2015-07-17 01:34 AM, Sten Aus wrote:

Hi

Is it possible to have little lock sign next to VM icon in the GUI 
when VM is locked (for backup etc)?




___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Since we're on the topic, I'd love it if the storage icons changed when 
storage went offline.


Gerald


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE 4.0 Backup Feature?

2015-07-07 Thread Gerald Brandt

Hi,

I know this has been talked about for quite some time, but I haven't 
heard anything recently.


Is there any possibility for backups to contain the VM name?

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Questions about backup strategy

2015-05-03 Thread Gerald Brandt

On 2015-05-03 3:36 AM, Julien Groselle wrote:

Hi,

I have never backuped any VM.
I consider every VM as a server, so I backup the data inside (rsync 
based backup) and not the envelop.


If someone can tell me what is the advantage to backup the VM, maybe I 
will change my mind :-)



I backup the data in the VM (using BackupPC) daily.  I vzdump the VM's 
weekly for disaster recovery purposes.  The snapshots and Friday backups 
go offsite.  If the building burns down, I can build a new Proxmox 
system, and restore from vzdump, losing, at most, a week of work.  No 
fiddling to rebuild servers and restore from backups, just a restore of 
the image.


I verify the offsite on a semi-monthly basis.

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Erratic ping times

2015-04-04 Thread Gerald Brandt

Hi,

I just virtualized an old QNX4 (4.25) box.  Everything is working fine, 
except that I'm getting erratic ping times.


The QNX box only had drivers for the RTL8139.

565 ms
.435 ms
1314 ms
315 ms
1054 ms
64.9 ms
863 ms
.470 ms

Has anybody seen something similar in the past?

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Erratic ping times

2015-04-04 Thread Gerald Brandt

A small error on my part, I'm running QNX 4.23A

Gerald

On 2015-04-04 11:38 AM, Gerald Brandt wrote:

Hi,

I just virtualized an old QNX4 (4.25) box.  Everything is working 
fine, except that I'm getting erratic ping times.


The QNX box only had drivers for the RTL8139.

565 ms
.435 ms
1314 ms
315 ms
1054 ms
64.9 ms
863 ms
.470 ms

Has anybody seen something similar in the past?

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



--
Gerald Brandt
Majentis Technologies
204-229-6595
g...@majentis.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Internet facing Proxmox

2014-09-14 Thread Gerald Brandt

Hi,

I've been asked to set up a Proxmox server on the Internet.  Has anybody 
done so, and how secure is the web interface on port 8006?


I was considering running a VPN on Proxmox, and not allowing port 8006 
access unless you were connected to the VPN.  That creates issues if the 
VPN server goes down.


Also, with the new built-in firewall, how easy is it to run all VMs on 
a private address space and port forward as needed?


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Stale NFS Mount - System Instability

2014-07-24 Thread Gerald Brandt

On 2014-07-23, 2:17 PM, Tim Nelson wrote:

Greetings-

I recently experienced an odd issue where, due to failure of an NFS 
host handling backups for a Proxmox cluster, several nodes stopped 
responding properly. They appear as offline in the web interface, and 
I/O is very slow, even to non-NFS destinations.


Is this a known issue where a stale or timed out NFS mount will cause 
the system to misbehave quite poorly? Would an alternate network 
storage protocol such as iSCSI perform better in that scenario?


--Tim


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
If you restart pvestatd, the nodes on the web page will come back. I 
haven't noticed any slowdowns when I lose an NFS link though.
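
(On a PVE 3.x node that should just be: service pvestatd restart)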


Gerald

--
Gerald Brandt
Majentis Technologies
204-229-6595
g...@majentis.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Use GlusterFS or Ceph for proxmox vm's...

2014-07-23 Thread Gerald Brandt

On 2014-07-22, 3:27 PM, Gilberto Nunes wrote:
Forgive me by the off topic, but I wonder if someone here can point me 
the performance difference between GlusterFS and Ceph...

Which one is faster? Why? Some docs or websites??
Thank you...

--
Gilberto Ferreira

I tried Gluster, both native and its NFS implementation.  The NFS 
worked great, until it didn't.  It began to fail under load.  The 
Gluster native worked fine, but halved my IO.  I was doing replicate 
gluster, which sends data to all (in my case 2) bricks over the same 
link, essentially reducing your speed.  2 bricks in replicate, half the 
speed.  3 bricks, 1/3 of the speed.


Gluster NFS replicates its data between servers, not between client and 
server, and can be off-loaded to another network link.


I settled on standard kernel NFS with DRBD to replicate the data.  I 
changed the Proxmox config files to use NFSv4, and I am super happy.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Who using Proxmox....

2014-06-27 Thread Gerald Brandt

On 2014-06-24, 8:43 AM, Gilberto Nunes wrote:

Hello friends


I am just make some research to justify the adoption of Proxmox for 
some customers so, I need to know who is using Proxmox...


I meant, what company, size of scenario... Such thinks like that...

I will appreciate your help...

Thanks...

--
Gilberto Ferreira


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
I run a very small consulting company.  Two of my clients run PVE, one a 
3 server cluster with 23 VM's and the other a 2 server cluster with 5 
VM's.  I'm trying to convert another client from the Citrix XenServer I 
initially installed for him to PVE.


I also run it in my shop.

Gerald


--
Gerald Brandt
Majentis Technologies
204-229-6595
g...@majentis.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Does VZDump still pass --rsyncable when using gzip?

2014-05-03 Thread Gerald Brandt

Hi,

The title of the email asks it all: Does vzdump still pass --rsyncable 
when using gzip?


Gerald

--
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Online migration leaves firewall vm without service

2014-03-10 Thread Gerald Brandt

Hi,

I don't see this.  I'm running a standard Ubuntu server with PPTP,
and an almost standard (Zentyal) Ubuntu server running openvpn.

Gerald

On 2014-03-10, 11:04 AM, Angel Docampo wrote:

  I would like to ask here if you suffer from the same issue as me.
  When upgrading Proxmox, I must move the virtual machines from
  node to node in order to reboot each node.

  Everything works fine except for the firewalls I have virtualized.
  All of them, without exception, stop working. My laptop cannot
  ping the Internet, and everything is re-established when I reboot the
  VM.

  Do any of you have the same issue? Is there some workaround?

  Thanks,
  --
  Angel Docampo
  Datalab Tecnologia, s.a.
  Castillejos, 352 - 08025 Barcelona
  Tel. 93 476 69 14 - Ext: 706
  Mob. 670.299.381
  ___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



-- 
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

  

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Newbie question

2014-03-06 Thread Gerald Brandt


On 2014-03-06, 12:29 PM, Gilberto Nunes wrote:

Hi falks...

I am using PVE here and host has two NIC, one for LAN and one for WAN, 
like that:


eth0 - 172.172.10.5

eth1 - 200.201.299.299   THAT'S THE WAN CONNECTION


Ok...

Now I install a VM under PVE that is a Firewall...

And this Firewall has two nic too...

Like that:

eth0 - 172.172.10.254

eth1 - 200.201.299.299 -- THAT'S THE WAN CONNECTION

As you can see, I set the IP for eth1 twice: one for Proxmox Host and 
one for VM host...


I don't know if this is a good practice...

What the adviced for that??

Thanks

Don't do that.  Each machine, real or VM, needs a unique address if they 
are on the same network.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Create Bond on proxmox 3.1

2014-03-04 Thread Gerald Brandt

On 2014-03-04, 2:08 AM, henry wrote:

  Hi guys,...

  I still get the error, and can not ping yet. Is this okay??? See
  the picture below [screenshot not preserved in the archive]:

  Why can I still not ping??? I can not ping 192.168.10.2 from the
  client or the router. I'm using Proxmox version 3.1-21. Thanks for
  your help.

Hi Henry,

Could you show me the output of 'ifconfig' and 'route -n'? Can the
proxmox box itself ping 192.168.10.2? I know bonding works, I just
set up balance-rr for my storage a few weeks ago, and am getting
throughput of up to 208MB/s on two 1GigE links.
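
For reference, a minimal balance-rr setup in /etc/network/interfaces looks
roughly like this (the interface names and address are placeholders, not
your config):

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode balance-rr

auto vmbr1
iface vmbr1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0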

Gerald

-- 
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

  

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Create Bond on proxmox 3.1

2014-03-04 Thread Gerald Brandt


On 2014-03-04, 9:41 PM, henry wrote:

Hi Gerald,... thanks for your help...

this is the ifconfig output of my machine:

bond0 Link encap:Ethernet  HWaddr c8:1f:66:d4:6a:0c
  UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500 Metric:1
  RX packets:3035043 errors:0 dropped:0 overruns:0 frame:0
  TX packets:27987 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:433969645 (413.8 MiB)  TX bytes:1881264 (1.7 MiB)

eth0  Link encap:Ethernet  HWaddr c8:1f:66:d4:6a:0b
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1319944 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2160694 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:177917566 (169.6 MiB)  TX bytes:2720123222 (2.5 GiB)
  Interrupt:35

eth1  Link encap:Ethernet  HWaddr c8:1f:66:d4:6a:0c
  UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500 Metric:1
  RX packets:1913720 errors:0 dropped:0 overruns:0 frame:0
  TX packets:27987 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:311070872 (296.6 MiB)  TX bytes:1881264 (1.7 MiB)
  Interrupt:38

eth2  Link encap:Ethernet  HWaddr c8:1f:66:d4:6a:0c
  UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500 Metric:1
  RX packets:1121323 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:122898773 (117.2 MiB)  TX bytes:0 (0.0 B)
  Interrupt:34

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:408802 errors:0 dropped:0 overruns:0 frame:0
  TX packets:408802 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:359570282 (342.9 MiB)  TX bytes:359570282 (342.9 MiB)

venet0Link encap:UNSPEC  HWaddr 
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00

  inet6 addr: fe80::1/128 Scope:Link
  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500 Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0 Link encap:Ethernet  HWaddr c8:1f:66:d4:6a:0b
  inet addr:192.168.2.4  Bcast:192.168.2.255 Mask:255.255.255.0
  inet6 addr: fe80::ca1f:66ff:fed4:6a0b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1061548 errors:0 dropped:0 overruns:0 frame:0
  TX packets:724773 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:118879990 (113.3 MiB)  TX bytes:2633974067 (2.4 GiB)

vmbr1 Link encap:Ethernet  HWaddr c8:1f:66:d4:6a:0c
  inet addr:192.168.10.2  Bcast:192.168.10.255 Mask:255.255.255.0
  inet6 addr: fe80::ca1f:66ff:fed4:6a0c/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1655838 errors:0 dropped:0 overruns:0 frame:0
  TX packets:26782 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:241379533 (230.1 MiB)  TX bytes:1669950 (1.5 MiB)

root@pve-dell1:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 vmbr
0.0.0.0         192.168.10.1    0.0.0.0         UG    0      0        0 vmbr1



thanks again.
Your route looks wrong.  The 'Use Iface' for 192.168.10.0 should be 
vmbr1.  I'm assuming it's a typo on your part?


I see data has been transmitted and received on vmbr1, so it looks like 
it's working.


Gerald

--
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Off-site backup for VMs and Containers

2014-02-18 Thread Gerald Brandt


On 2014-02-18, 12:30 AM, Lindsay Mathieson wrote:

On 18 February 2014 09:36, Bruce B bruceb...@gmail.com wrote:

Hi everyone,

I am looking for a quick and reliable off-site backup service provider and
solution that can backup my Proxmox data based on below conditions:

- Be mindful of bandwidth usage and backup only the changed files / data and
not all data every time

I believe that the backup format (vzdump) is not friendly to incremental backups

That might be the fault of gzip. gzip on Ubuntu at least has the 
--rsyncable option, which helps a lot.


Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Several issues with gluster as storage.

2014-01-27 Thread Gerald Brandt
Hi,

I solved the XFS issue for GlusterFS and NFS here:
http://forum.proxmox.com/threads/17462-SOLVED-Linux-NFS-or-GlusterFS-Server-not-creating-sparse-files?highlight=nfs+glusterfs

Gerald

From: "Angel Docampo" adoca...@dltec.net
To: "Leslie-Alexandre DENIS" infos...@gmail.com, pve-user@pve.proxmox.com
Sent: Monday, 27 January, 2014 8:52:11 AM
Subject: Re: [PVE-User] Several issues with gluster as storage.

On 27/01/14 14:38, Leslie-Alexandre DENIS wrote:

Hello Angel,

Did you figure out why you had the timeout problem?

Yes, I did. The main problem was the underlying filesystem. I do not know
why, but XFS does not thin-provision the space of the hard disk when
creating the virtual machine. Then Proxmox times out at 30 seconds or so,
when it should be a matter of one or two seconds. Changing it to ext4
solved the main problem.

Another issue, even with ext4, was the timing when one node of the gluster
cluster (2 nodes) was down. There is a bug in gfapi that times out the
Proxmox cluster when trying to start a VM on that gluster storage. While
the gluster developers fix that bug, I lowered the value of
/proc/sys/net/ipv4/tcp_syn_retries from 5 to 3 to reduce the time needed
to start the VM, and now everything works.

Regards

On 11/12/2013 17:28, Angel Docampo wrote:

Hello,

There is no special topology between the nodes; it's a 10Gb dedicated
segment only for that purpose.

To mount glusterfs into PVE you can follow this tutorial:
http://www.jamescoyle.net/how-to/533-glusterfs-storage-mount-in-proxmox

That is the proxmox way, and it's supposed not to use FUSE, but the
gluster translator (correct me if I'm wrong), and therefore no mount
options are needed.

Regards,

Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93.476.69.10 - 6711

--
Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 706
Mob. 670.299.381
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Add server to non HA cluster

2014-01-23 Thread Gerald Brandt
Hi,

I'm adding a new node to my cluster.  Just to make sure I have this right, I 
execute 'pvecm add IP' on the new node, and the IP is the address of 
a node already in the existing cluster.

Correct?  I don't want to get it wrong by mistake, and take out my cluster.

Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] openiscsi with Proxmox

2014-01-02 Thread Gerald Brandt
Hi,

I’ve used Ubuntu + DRBD + IET iSCSI in production for the last 4 years for a 
client of mine.  They are running XenServer instead of Proxmox.  I’ve just 
converted another client to Proxmox from XenServer, and am testing an Ubuntu + 
DRBD + NFS system now.  This client has used XenServer's snapshots on iSCSI for 
a long time, and wants the same functionality with Proxmox, which is only 
available with qcow2.

Gerald


On Jan 2, 2014, at 2:10 AM, Muhammad Yousuf Khan sir...@gmail.com wrote:

 Is there anyone who has used open-iscsi with Proxmox in production? 
 For example, Openfiler or manual open-iscsi?
 What is your recommendation for the backend SAN/NAS box, other than FreeBSD or 
 FreeNAS, as both lack DRBD and I want to set up HA with DRBD. 
 Any suggestion would be highly appreciated.
 
 Thanks
 MYK
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] openiscsi with Proxmox

2014-01-02 Thread Gerald Brandt

On Jan 2, 2014, at 8:39 AM, Muhammad Yousuf Khan sir...@gmail.com wrote:

 Thanks Fabio for your kind input. 
 I will keep on learning this application; from a learning perspective, napp-it 
 is easy. But I am worried that if something fails in production, what will I 
 do after that, since the problem can only be diagnosed over SSH. 
 
 
 
 @Gerald Brandt
 
 Would you please share some details about your Ubuntu + DRBD + IET box?
 
 Can you please share the hardware specs in detail (RAID card, number of 
 drives, capacity, RAM, etc.)?
 
 As you know, writes are slow on DRBD; how do you manage that?
 
 What was the Ethernet throughput, 1Gb or less?
 
 How did it perform under heavy load?
 
 Thanks,

Hi,

I use Intel Server motherboards, software RAID-6.  RAM is 4 GB.  Reads and 
writes easily saturate the 1Gb link, peaking out at 118 MB/s.  The DRBD sync 
link is a direct connect cable.  I can get the DRBD and IET config files if you 
like, but they are pretty generic.
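
Roughly, the pair of configs looks like this (the hostnames, devices, and 
IPs below are placeholders, not my actual setup):

# /etc/drbd.d/r0.res
resource r0 {
    protocol C;
    on storage1 {
        device    /dev/drbd0;
        disk      /dev/md2;       # the software RAID array
        address   10.0.0.1:7788;  # direct-connect sync link
        meta-disk internal;
    }
    on storage2 {
        device    /dev/drbd0;
        disk      /dev/md2;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}

# /etc/iet/ietd.conf -- export the DRBD device on the primary
Target iqn.2014-01.com.example:storage.lun0
    Lun 0 Path=/dev/drbd0,Type=blockio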

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] openiscsi with Proxmox

2014-01-02 Thread Gerald Brandt

On Jan 2, 2014, at 12:09 PM, Muhammad Yousuf Khan sir...@gmail.com wrote:

 Thanks Gerald,  
 Are you using SATA1, SATA2, SATA3, or SAS?
 
 I will be more than happy to read these configs. I only used heartbeat once; if 
 you share the configs it will help me a lot in recalling the old learning, and I 
 have never used IET, so it will be good for me to learn something new. 
 
 I am also interested to see the connectivity diagram (if Possible for you) 
 like how you have connected both nodes and fine tune the the setup by using 
 Jumbo frames, reducing latency etc. because i also need constant 1GB Ethernet 
 output.
 
 
 Thanks,
 
 

Nothing special on my end at all.  I use uCARP for failover of the IP at one 
client and heartbeat at the other.  No jumbo frames either, since when I set up 
XenServer it wasn’t supported.  I kept my disk traffic on a separate 
network/switch from the user traffic.

I’m testing bonded (balance-rr) links now with my NFS test setup.

Gerald
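
A minimal /etc/network/interfaces sketch of the balance-rr bond under test (interface names and addressing are assumptions, and option spellings vary with the ifenslave version):

# two GigE ports bonded round-robin for the storage network
auto bond0
iface bond0 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode balance-rr
    bond-miimon 100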


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Hows your uptimes?

2013-12-25 Thread Gerald Brandt

On Dec 25, 2013, at 3:43 AM, Lindsay Mathieson lindsay.mathie...@gmail.com 
wrote:

 Mine are pitiful - new install and can't resist fiddling with it :) Currently 
 at 5 days.
 
 Anyone over a year?
 -- 
 Lindsay
___


I’m at 57 days, which is when I built my cluster.

Gerald

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Question about backup...

2013-11-04 Thread Gerald Brandt
When you create the disk, there's a 'no backup' option.  You can double-click 
on the disk and set it there as well.

Gerald
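
In the VM config that option shows up as a flag on the disk line; applied to the disks quoted below, it would look roughly like this (backup=0 is the relevant bit):

# /etc/pve/qemu-server/100.conf -- vzdump will skip virtio1
virtio0: local:100/vm-100-disk-3.qcow2,format=qcow2,size=80G
virtio1: local:100/vm-100-disk-1.qcow2,format=qcow2,size=110G,backup=0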


- Original Message -
 From: Gilberto Nunes gilberto.nune...@gmail.com
 To: pve-user@pve.proxmox.com
 Sent: Monday, November 4, 2013 10:39:25 AM
 Subject: [PVE-User] Question about backup...
 
 Hi folks...
 
 I created a VM with two disks, just like that:
 
 virtio0: local:100/vm-100-disk-3.qcow2,format=qcow2,size=80G
 virtio1: local:100/vm-100-disk-1.qcow2,format=qcow2,size=110G
 
 My question is: when I set up a backup policy, will vzdump back up all
 hard disks, or just the first?
 
 Is there a way to back up just one disk?
 
 Thanks
 
 
 --
 Gilberto Ferreira
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] disk move from local to LVM over iSCSI Failed

2013-10-25 Thread Gerald Brandt
Hi,

Is this a good place to post questions, or is the forum better?

I just created an LVM over iSCSI share for testing.  I then picked a running 
test VM and moved its disk from local:100/vm-disk-1.qcow2,format=qcow2,size=8GB 
to the iSCSI.

The copy went through, completing to 100% (I've attached the log), but then 
failed on 'remove ioctl'.  Now I have an active working drive on local and a 
duplicate drive on the iSCSI, and I can't remove the iSCSI drive.

Any help appreciated.

Gerald
transferred: 671023104 bytes remaining: 7918911488 bytes total: 8589934592 
bytes progression: 7.81 %
transferred: 702480384 bytes remaining: 7887454208 bytes total: 8589934592 
bytes progression: 8.18 %
transferred: 744423424 bytes remaining: 7845511168 bytes total: 8589934592 
bytes progression: 8.67 %
transferred: 775880704 bytes remaining: 7814053888 bytes total: 8589934592 
bytes progression: 9.03 %
transferred: 817823744 bytes remaining: 7772110848 bytes total: 8589934592 
bytes progression: 9.52 %
transferred: 838795264 bytes remaining: 7751139328 bytes total: 8589934592 
bytes progression: 9.76 %
transferred: 880738304 bytes remaining: 7709196288 bytes total: 8589934592 
bytes progression: 10.25 %
transferred: 922681344 bytes remaining: 7667253248 bytes total: 8589934592 
bytes progression: 10.74 %
transferred: 964624384 bytes remaining: 7625310208 bytes total: 8589934592 
bytes progression: 11.23 %
transferred: 1006567424 bytes remaining: 7583367168 bytes total: 8589934592 
bytes progression: 11.72 %
transferred: 1048510464 bytes remaining: 7541424128 bytes total: 8589934592 
bytes progression: 12.21 %
transferred: 1090453504 bytes remaining: 7499481088 bytes total: 8589934592 
bytes progression: 12.69 %
transferred: 1132396544 bytes remaining: 7457538048 bytes total: 8589934592 
bytes progression: 13.18 %
transferred: 1153368064 bytes remaining: 7436566528 bytes total: 8589934592 
bytes progression: 13.43 %
transferred: 1195311104 bytes remaining: 7394623488 bytes total: 8589934592 
bytes progression: 13.92 %
transferred: 1226768384 bytes remaining: 7363166208 bytes total: 8589934592 
bytes progression: 14.28 %
transferred: 1279197184 bytes remaining: 7310737408 bytes total: 8589934592 
bytes progression: 14.89 %
transferred: 1321140224 bytes remaining: 7268794368 bytes total: 8589934592 
bytes progression: 15.38 %
transferred: 1363083264 bytes remaining: 7226851328 bytes total: 8589934592 
bytes progression: 15.87 %
transferred: 1394540544 bytes remaining: 7195394048 bytes total: 8589934592 
bytes progression: 16.23 %
transferred: 1425997824 bytes remaining: 7163936768 bytes total: 8589934592 
bytes progression: 16.60 %
transferred: 1467940864 bytes remaining: 7121993728 bytes total: 8589934592 
bytes progression: 17.09 %
transferred: 1499398144 bytes remaining: 7090536448 bytes total: 8589934592 
bytes progression: 17.46 %
transferred: 1551826944 bytes remaining: 7038107648 bytes total: 8589934592 
bytes progression: 18.07 %
transferred: 1572798464 bytes remaining: 7017136128 bytes total: 8589934592 
bytes progression: 18.31 %
transferred: 1625227264 bytes remaining: 6964707328 bytes total: 8589934592 
bytes progression: 18.92 %
transferred: 1656684544 bytes remaining: 6933250048 bytes total: 8589934592 
bytes progression: 19.29 %
transferred: 1688141824 bytes remaining: 6901792768 bytes total: 8589934592 
bytes progression: 19.65 %
transferred: 1719599104 bytes remaining: 6870335488 bytes total: 8589934592 
bytes progression: 20.02 %
transferred: 1772027904 bytes remaining: 6817906688 bytes total: 8589934592 
bytes progression: 20.63 %
transferred: 1803485184 bytes remaining: 6786449408 bytes total: 8589934592 
bytes progression: 21.00 %
transferred: 1845428224 bytes remaining: 6744506368 bytes total: 8589934592 
bytes progression: 21.48 %
transferred: 1887371264 bytes remaining: 6702563328 bytes total: 8589934592 
bytes progression: 21.97 %
transferred: 1908342784 bytes remaining: 6681591808 bytes total: 8589934592 
bytes progression: 22.22 %
transferred: 1960771584 bytes remaining: 6629163008 bytes total: 8589934592 
bytes progression: 22.83 %
transferred: 1992228864 bytes remaining: 6597705728 bytes total: 8589934592 
bytes progression: 23.19 %
transferred: 2013200384 bytes remaining: 6576734208 bytes total: 8589934592 
bytes progression: 23.44 %
transferred: 2044657664 bytes remaining: 6545276928 bytes total: 8589934592 
bytes progression: 23.80 %
transferred: 2076114944 bytes remaining: 6513819648 bytes total: 8589934592 
bytes progression: 24.17 %
transferred: 2118057984 bytes remaining: 6471876608 bytes total: 8589934592 
bytes progression: 24.66 %
transferred: 2159935488 bytes remaining: 6429999104 bytes total: 8589934592 
bytes progression: 25.14 %
transferred: 2180907008 bytes remaining: 6409027584 bytes total: 8589934592 
bytes progression: 25.39 %
transferred: 2212364288 bytes remaining: 6377570304 bytes total: 8589934592 
bytes progression: 25.76 %
transferred: 2264793088 bytes remaining: 
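
The move itself corresponds roughly to this CLI sketch (the VMID, disk, and storage names are assumptions, and qm move_disk should be verified against your PVE version); the cleanup assumes the orphan is a plain LVM logical volume:

# move the disk of VM 100 from local storage to the LVM-over-iSCSI store
qm move_disk 100 virtio0 lvm-iscsi
# if the move fails and leaves an orphaned volume, find and remove it by hand
lvs
lvremove /dev/<vg-name>/vm-100-disk-1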

[PVE-User] Moving from XenServer to Proxmox... slowly

2013-10-24 Thread Gerald Brandt
Hi,

We've pretty much made the decision to move from Citrix XenServer to KVM 
(Proxmox) for our Virtual solution.  Unfortunately, I don't have all the 
computers available to me at the start, since migration will use the existing 
XenServer computers.

To start, I'll have two servers, using a single iSCSI, in an HA cluster.  Once 
they are running and some of the virtual machines are migrated, I can add a 
third server.  Finally, I can add the fourth server.

Is there anything I need to look out for when creating the 2 server HA cluster 
and then converting it to a 3 and 4 node cluster?
Has anyone done this, and created a HOWTO I can follow?

Thanks,
Gerald

ps: Been using Citrix XenServer since Jan 2009.  We've had enough issues with 
VM disk corruption to haunt me for years (using LVM over iSCSI).  It's a 
bloody house of cards.
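
The growth path itself is just repeated joins; a hedged sketch with illustrative names and addresses:

# on the first server: create the cluster
pvecm create mycluster
# on each later server, as it is freed up from XenServer:
pvecm add 10.0.0.1       # IP of a node already in the cluster
# after each join, check membership and quorum
pvecm status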
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Moving from XenServer to Proxmox... slowly

2013-10-24 Thread Gerald Brandt
Hi,

I'd considered that, but I'm having trouble finding a system that supports 
VT-d/VT-x.

Any hiccups converting Xen to KVM?  I imagine the Windows boxes will move 
easily enough, just using Clonezilla or similar.  The rest of my servers are 
Ubuntu-based PV guests, so I'll need to add a kernel; I'm looking at 
http://www.blog.turmair.de/2010/12/xen-to-kvm-or-physical-vmware-migration/

Any performance issues?

Gerald
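
The kernel step for a PV Ubuntu guest is roughly what that article covers; a sketch of the in-guest commands, assuming a chroot from a rescue CD and a virtio disk (package and device names are assumptions):

# inside the migrated Ubuntu guest, or chrooted into it from a rescue CD
apt-get install linux-image-generic grub-pc
grub-install /dev/vda      # the disk as KVM presents it
update-grub
# rebuild the initramfs so the virtio drivers are available at boot
update-initramfs -u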


- Original Message -
 From: Kurt Bauer kurt.ba...@univie.ac.at
 To: Gerald Brandt g...@majentis.com
 Cc: pve-user@pve.proxmox.com
 Sent: Thursday, October 24, 2013 8:42:54 AM
 Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly
 
 Hi,
 
  we did pretty much the same thing over the past weeks, i.e. changing
  from Xen to KVM (Proxmox), reusing the hardware already in place and
  in production. But we didn't go down the 2-node cluster road; what we
  did instead was use an old server machine as a third node, just
  for the sake of quorum, and to avoid dealing with all the problems
  that may occur on a 2-node cluster. No guests on that machine, and as
  soon as the 3rd production machine was free, we added it to the
  cluster and removed the old one.
  I guess performance is not an issue for that 3rd machine, as long as
  it has no guests.
 
 Hope that helps a little,
 best regards,
 Kurt
 
 Gerald Brandt wrote:
  Hi,
 
  We've pretty much made the decision to move from Citrix XenServer
  to KVM (Proxmox) for our Virtual solution.  Unfortunately, I don't
  have all the computers available to me at the start, since
  migration will use the existing XenServer computers.
 
  To start, I'll have two servers, using a single iSCSI, in an HA
  cluster.  Once they are running and some of the virtual machines
  are migrated, I can add a third server.  Finally, I can add the
  fourth server.
 
  Is there anything I need to look out for when creating the 2 server
  HA cluster and then converting it to a 3 and 4 node cluster?
  Has anyone done this, and created a HOWTO I can follow?
 
  Thanks,
  Gerald
 
  ps: Been using Citrix XenServer since Jan 2009.  We've had enough
  issues with VM disk corruption to haunt me for years (using
  LVM over iSCSI).  It's a bloody house of cards.
  ___
  pve-user mailing list
  pve-user@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Moving from XenServer to Proxmox... slowly

2013-10-24 Thread Gerald Brandt
Hi,

Is it possible to create a cluster of three Proxmox servers, without HA (first 1 server, then 2, then 3, over time), and add HA once there are enough Proxmox servers to do HA?

Gerald

From: "Angel Docampo" adoca...@dltec.net
To: pve-user@pve.proxmox.com
Sent: Thursday, October 24, 2013 9:08:43 AM
Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly
  

  
Hi,

Here we use a VMware machine as the third/quorum node. It worked
perfectly, as long as it does not host any VM itself.

To migrate Linux machines, we created a new VM on Proxmox, booted
it up with a SystemRescueCD, and then netcat'ed the whole system from the
Xen VM to the new PVE machine.

Hope it helps,
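
A sketch of that netcat transfer, assuming the new disk is mounted at /mnt/newroot on the receiving side and 10.0.0.2 is its rescue-CD address (netcat flags vary by variant):

# on the new PVE guest, booted from the rescue CD (receiving side)
nc -l -p 9000 | tar -xpf - -C /mnt/newroot
# on the Xen VM (sending side): stream the root filesystem across
tar -cpf - --one-file-system / | nc 10.0.0.2 9000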
  
  On 24/10/13 15:49, Gerald Brandt wrote:


  Hi,

I'd considered that, but I'm having trouble finding a system that supports VT-d/VT-x.

Any hiccups converting Xen to KVM?  I imagine the Windows boxes will move easily enough, just using Clonezilla or similar.  The rest of my servers are Ubuntu-based PV guests, so I'll need to add a kernel; I'm looking at http://www.blog.turmair.de/2010/12/xen-to-kvm-or-physical-vmware-migration/

Any performance issues?

Gerald


- Original Message -

  
From: "Kurt Bauer" kurt.ba...@univie.ac.at
To: "Gerald Brandt" g...@majentis.com
Cc: pve-user@pve.proxmox.com
Sent: Thursday, October 24, 2013 8:42:54 AM
Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly

Hi,

we did pretty much the same thing over the past weeks, i.e. changing
from Xen to KVM (Proxmox), reusing the hardware already in place and
in production. But we didn't go down the 2-node cluster road; what we
did instead was use an old server machine as a third node, just
for the sake of quorum, and to avoid dealing with all the problems
that may occur on a 2-node cluster. No guests on that machine, and as
soon as the 3rd production machine was free, we added it to the
cluster and removed the old one.
I guess performance is not an issue for that 3rd machine, as long as
it has no guests.

Hope that helps a little,
best regards,
Kurt

Gerald Brandt wrote:


  Hi,

We've pretty much made the decision to move from Citrix XenServer
to KVM (Proxmox) for our Virtual solution.  Unfortunately, I don't
have all the computers available to me at the start, since
migration will use the existing XenServer computers.

To start, I'll have two servers, using a single iSCSI, in an HA
cluster.  Once they are running and some of the virtual machines
are migrated, I can add a third server.  Finally, I can add the
fourth server.

Is there anything I need to look out for when creating the 2 server
HA cluster and then converting it to a 3 and 4 node cluster?
Has anyone done this, and created a HOWTO I can follow?

Thanks,
Gerald

ps: Been using Citrix XenServer since Jan 2009.  We've had enough
issues with VM disk corruption to haunt me for years (using
LVM over iSCSI).  It's a bloody house of cards.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



  
  ___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




-- 
Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 706
Mob. 670.299.381
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Moving from XenServer to Proxmox... slowly

2013-10-24 Thread Gerald Brandt
Hi Angel,

Were you transferring HVM or PV servers to KVM?

Gerald

From: "Angel Docampo" adoca...@dltec.net
To: pve-user@pve.proxmox.com
Sent: Thursday, October 24, 2013 9:08:43 AM
Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly
  

  
Hi,

Here we use a VMware machine as the third/quorum node. It worked
perfectly, as long as it does not host any VM itself.

To migrate Linux machines, we created a new VM on Proxmox, booted
it up with a SystemRescueCD, and then netcat'ed the whole system from the
Xen VM to the new PVE machine.

Hope it helps,
  
  On 24/10/13 15:49, Gerald Brandt wrote:


  Hi,

I'd considered that, but I'm having trouble finding a system that supports VT-d/VT-x.

Any hiccups converting Xen to KVM?  I imagine the Windows boxes will move easily enough, just using Clonezilla or similar.  The rest of my servers are Ubuntu-based PV guests, so I'll need to add a kernel; I'm looking at http://www.blog.turmair.de/2010/12/xen-to-kvm-or-physical-vmware-migration/

Any performance issues?

Gerald


- Original Message -

  
From: "Kurt Bauer" kurt.ba...@univie.ac.at
To: "Gerald Brandt" g...@majentis.com
Cc: pve-user@pve.proxmox.com
Sent: Thursday, October 24, 2013 8:42:54 AM
Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly

Hi,

we did pretty much the same thing over the past weeks, i.e. changing
from Xen to KVM (Proxmox), reusing the hardware already in place and
in production. But we didn't go down the 2-node cluster road; what we
did instead was use an old server machine as a third node, just
for the sake of quorum, and to avoid dealing with all the problems
that may occur on a 2-node cluster. No guests on that machine, and as
soon as the 3rd production machine was free, we added it to the
cluster and removed the old one.
I guess performance is not an issue for that 3rd machine, as long as
it has no guests.

Hope that helps a little,
best regards,
Kurt

Gerald Brandt wrote:


  Hi,

We've pretty much made the decision to move from Citrix XenServer
to KVM (Proxmox) for our Virtual solution.  Unfortunately, I don't
have all the computers available to me at the start, since
migration will use the existing XenServer computers.

To start, I'll have two servers, using a single iSCSI, in an HA
cluster.  Once they are running and some of the virtual machines
are migrated, I can add a third server.  Finally, I can add the
fourth server.

Is there anything I need to look out for when creating the 2 server
HA cluster and then converting it to a 3 and 4 node cluster?
Has anyone done this, and created a HOWTO I can follow?

Thanks,
Gerald

ps: Been using Citrix XenServer since Jan 2009.  We've had enough
issues with VM disk corruption to haunt me for years (using
LVM over iSCSI).  It's a bloody house of cards.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



  
  ___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




-- 
Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 706
Mob. 670.299.381
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Moving from XenServer to Proxmox... slowly

2013-10-24 Thread Gerald Brandt


- Original Message -
 From: Kurt Bauer kurt.ba...@univie.ac.at
 To: Gerald Brandt g...@majentis.com
 Cc: pve-user@pve.proxmox.com
 Sent: Thursday, October 24, 2013 9:18:58 AM
 Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly
 
 
 
 Gerald Brandt wrote:

 
  Any performance issues?
 In which regard?

Noticeable slowdowns or speed-ups in disk I/O or memory, etc.?


 
 
  Gerald
 
 
  - Original Message -
  From: Kurt Bauer kurt.ba...@univie.ac.at
  To: Gerald Brandt g...@majentis.com
  Cc: pve-user@pve.proxmox.com
  Sent: Thursday, October 24, 2013 8:42:54 AM
  Subject: Re: [PVE-User] Moving from XenServer to Proxmox... slowly
 
  Hi,
 
  we did pretty much the same thing over the past weeks, i.e. changing
  from Xen to KVM (Proxmox), reusing the hardware already in place and
  in production. But we didn't go down the 2-node cluster road; what we
  did instead was use an old server machine as a third node, just
  for the sake of quorum, and to avoid dealing with all the problems
  that may occur on a 2-node cluster. No guests on that machine, and as
  soon as the 3rd production machine was free, we added it to the
  cluster and removed the old one.
  I guess performance is not an issue for that 3rd machine, as long as
  it has no guests.
 
  Hope that helps a little,
  best regards,
  Kurt
 
  Gerald Brandt wrote:
  Hi,
 
  We've pretty much made the decision to move from Citrix XenServer
  to KVM (Proxmox) for our Virtual solution.  Unfortunately, I
  don't
  have all the computers available to me at the start, since
  migration will use the existing XenServer computers.
 
  To start, I'll have two servers, using a single iSCSI, in an HA
  cluster.  Once they are running and some of the virtual machines
  are migrated, I can add a third server.  Finally, I can add the
  fourth server.
 
  Is there anything I need to look out for when creating the 2
  server
  HA cluster and then converting it to a 3 and 4 node cluster?
  Has anyone done this, and created a HOWTO I can follow?
 
  Thanks,
  Gerald
 
  ps: Been using Citrix XenServer since Jan 2009.  We've had enough
  issues with VM disk corruption to haunt me for years (using
  LVM over iSCSI).  It's a bloody house of cards.
  ___
  pve-user mailing list
  pve-user@pve.proxmox.com
  http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] cluster with no HA

2013-10-24 Thread Gerald Brandt
Hi Lex,

The cluster would only be two nodes for a day, two days at the most.  The third 
node would come in really quickly (just install and config time), and then there 
would be three nodes.

Gerald
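
For that short window the main risk is losing quorum if one of the two nodes dies; the usual escape hatch is to lower the expected vote count by hand, as a temporary measure:

# on the surviving node of a 2-node cluster that has lost quorum
pvecm expected 1
pvecm status    # should report the cluster as quorate again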


- Original Message -
From: Lex Rivera m...@lex.io
To: pve-user@pve.proxmox.com
Sent: Thursday, 24 October, 2013 2:35:06 PM
Subject: Re: [PVE-User] cluster with no HA

Hi, no problem with that, except possible problems with quorum. 
http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster

On Thu, Oct 24, 2013, at 12:33 PM, Gerald Brandt wrote:
 Hi,
 
 Can I create a Proxmox cluster, starting with 2 nodes, then increasing to
 4 nodes over a few days, with no HA ability, and then add HA/fencing
 after the fact?
 
 Gerald
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Named backups

2013-09-10 Thread Gerald Brandt
Hi,

Proxmox currently names backups based on the VM ID number.  Would it be 
possible, and are there any plans, to use the VM's name instead?

Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Named backups

2013-09-10 Thread Gerald Brandt
Thanks Marco, I don't know why I didn't find that.

Gerald


- Original Message -
 From: Marco Gabriel - inett GmbH mgabr...@inett.de
 To: Gerald Brandt g...@majentis.com, pve-user@pve.proxmox.com
 Sent: Tuesday, September 10, 2013 10:37:33 AM
 Subject: AW: [PVE-User] Named backups
 
 Hi,
 
 https://bugzilla.proxmox.com/show_bug.cgi?id=438
 
 best regards,
 Marco
 
 -----Original Message-----
 From: pve-user-boun...@pve.proxmox.com
 [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of Gerald
 Brandt
 Sent: Tuesday, 10 September 2013 15:34
 To: pve-user@pve.proxmox.com
 Subject: [PVE-User] Named backups
 
 Hi,
 
 Proxmox currently names backups based on the VM ID number.  Would it
 be possible, and are there any plans, to use the VM's name
 instead?
 
 Gerald
 ___
 pve-user mailing list
 pve-user@pve.proxmox.com
 http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
 
 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NFSv4

2013-06-28 Thread Gerald Brandt
Hi,

Are there any options in 3.0 to mount NFSv4?

Gerald
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Backups suddenly slow

2012-10-27 Thread Gerald Brandt
Hi,

I made two changes to my server since the last backup:

1. switched from 2.1 to 2.2
2. converted vmdk storage to qcow2 (for snapshots)

Here's a sample from last weeks backup:

Oct 20 01:01:49 INFO: Starting Backup of VM 101 (qemu)
Oct 20 01:01:49 INFO: status = running
Oct 20 01:01:49 INFO: mode failure - unable to detect lvm volume group
Oct 20 01:01:49 INFO: trying 'suspend' mode instead
Oct 20 01:01:49 INFO: backup mode: suspend
Oct 20 01:01:49 INFO: ionice priority: 7
Oct 20 01:01:49 INFO: suspend vm
Oct 20 01:01:49 INFO: creating archive 
'/mnt/pve/Backups/dump/vzdump-qemu-101-2012_10_20-01_01_49.tar.lzo'
Oct 20 01:01:49 INFO: adding 
'/mnt/pve/Backups/dump/vzdump-qemu-101-2012_10_20-01_01_49.tmp/qemu-server.conf'
 to archive ('qemu-server.conf')
Oct 20 01:01:49 INFO: adding '/mnt/pve/Images/images/101/vm-101-disk-2.vmdk' to 
archive ('vm-disk-virtio1.vmdk')
Oct 20 01:04:24 INFO: adding '/mnt/pve/Images/images/101/vm-101-disk-1.vmdk' to 
archive ('vm-disk-virtio0.vmdk')
Oct 20 01:10:11 INFO: Total bytes written: 9381612544 (17.82 MiB/s)
Oct 20 01:10:35 INFO: archive file size: 5.02GB
Oct 20 01:10:36 INFO: resume vm
Oct 20 01:10:37 INFO: vm is online again after 528 seconds
Oct 20 01:10:37 INFO: Finished Backup of VM 101 (00:08:48)

And here is this weeks:

Oct 27 01:24:09 INFO: Starting Backup of VM 101 (qemu)
Oct 27 01:24:09 INFO: status = running
Oct 27 01:24:09 INFO: mode failure - unable to detect lvm volume group
Oct 27 01:24:09 INFO: trying 'suspend' mode instead
Oct 27 01:24:09 INFO: backup mode: suspend
Oct 27 01:24:09 INFO: ionice priority: 7
Oct 27 01:24:09 INFO: suspend vm
Oct 27 01:24:09 INFO: creating archive 
'/mnt/pve/Backups/dump/vzdump-qemu-101-2012_10_27-01_24_09.tar.lzo'
Oct 27 01:24:09 INFO: adding 
'/mnt/pve/Backups/dump/vzdump-qemu-101-2012_10_27-01_24_09.tmp/qemu-server.conf'
 to archive ('qemu-server.conf')
Oct 27 01:24:09 INFO: adding '/mnt/pve/Images/images/101/vm-101-disk-2.qcow2' 
to archive ('vm-disk-virtio1.qcow2')
Oct 27 01:59:01 INFO: adding '/mnt/pve/Images/images/101/vm-101-disk-1.qcow2' 
to archive ('vm-disk-virtio0.qcow2')
Oct 27 02:29:40 INFO: Total bytes written: 10161819648 (2.47 MiB/s)
Oct 27 02:29:40 INFO: archive file size: 5.20GB
Oct 27 02:29:41 INFO: resume vm
Oct 27 02:29:41 INFO: vm is online again after 3932 seconds
Oct 27 02:29:41 INFO: Finished Backup of VM 101 (01:05:32)


Is anyone else noticing a change?  Will going back to vmdk help me?

Gerald

ps: are there plans to create full backups using the qcow2 snapshots?
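
For anyone repeating the conversion in step 2, it is typically done with qemu-img; a sketch using the disk names from the log above (run with the VM stopped):

# vmdk -> qcow2; the output file stays sparse by default
qemu-img convert -O qcow2 vm-101-disk-1.vmdk vm-101-disk-1.qcow2
# and back again, if reverting to vmdk for backup speed
qemu-img convert -O vmdk vm-101-disk-1.qcow2 vm-101-disk-1.vmdk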
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox 2.1 'internal' network

2012-09-11 Thread Gerald Brandt
Hi,

I'm moving from Citrix XenServer to Proxmox, so my questions may be simple ones.

In XenServer, I can create an internal network.  The XenServer itself is on a 
physical network card, while all the VMs are on the internal network.  All the 
VMs go through a firewall between the network card (routable IP) and the 
internal network (non-routable IP, 192.168.x.x).

Is there a way to set up and maintain a similar network layout in Proxmox 2.1?

Gerald
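
One way to get that layout in Proxmox is a second bridge with no physical port; a minimal /etc/network/interfaces sketch (all addresses and interface names are assumptions):

# vmbr0: routable IP on the physical NIC, for the host and the firewall VM
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# vmbr1: no physical port, a host-internal switch for the guests
auto vmbr1
iface vmbr1 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0

A firewall VM then gets one NIC on vmbr0 and one on vmbr1, and the other guests attach only to vmbr1.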
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user