Re: [PVE-User] Firewalling caused me to freak out :)

2017-03-14 Thread Mark Schouten
Hi, > On 15 Mar 2017, at 00:22, Mark Schouten wrote: > So, I finally stumbled upon > https://forum.proxmox.com/threads/pve-firewall-drop-traffic.32290/, and > tried to do a sysctl. And all of a sudden,
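The message is truncated before it names the sysctl, so that detail stays unknown; as a generic illustration only, inspecting and changing a kernel parameter looks like this (the key below is a stand-in, not the one from the thread):

```shell
# Read a kernel parameter (no root needed); the key here is illustrative:
sysctl -n net.ipv4.ip_forward
# Changing it at runtime would be (root required, not persistent):
#   sysctl -w net.ipv4.ip_forward=1
# Persisting across reboots:
#   echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-local.conf
```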

Re: [PVE-User] Broken cluster

2017-03-14 Thread Jeff Palmer
Check multicast. https://pve.proxmox.com/wiki/Multicast_notes On Mar 14, 2017 3:01 PM, "Kevin Lemonnier" wrote: > Hi, > > I've just been given the task of maintaining an existing "cluster" > of Proxmox 4. I'm putting quotes around the word because currently, > it doesn't
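The Multicast notes wiki linked above checks multicast connectivity with omping; the same command is run on every node at roughly the same time. The hostnames below are placeholders for the actual cluster nodes:

```shell
# Build the omping invocation from the Proxmox Multicast notes wiki.
# It must be started on all listed nodes concurrently to exchange probes.
NODES="node1 node2 node3"   # placeholder hostnames
OMPING_CMD="omping -c 10000 -i 0.001 -F -q $NODES"
echo "$OMPING_CMD"   # → omping -c 10000 -i 0.001 -F -q node1 node2 node3
```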

Re: [PVE-User] Broken cluster

2017-03-14 Thread Uwe Sauter
Sorry, this was sent too quickly. Didn't read about the missing corosync.conf. How many cluster nodes exist? How many VMs? It might be easier to reinstall the whole cluster, especially if you don't use Ceph (because then the VM images are harder to back up). Am 14.03.2017 um 20:59

Re: [PVE-User] Broken cluster

2017-03-14 Thread Uwe Sauter
Check that there are no firewalls blocking communication. I had a problem like this a couple of weeks ago and all I needed was to properly configure the settings for pveproxy. (There are other firewall settings, too.) Am 14.03.2017 um 20:15 schrieb Kevin Lemonnier: Looks like they can't find

Re: [PVE-User] Broken cluster

2017-03-14 Thread Kevin Lemonnier
> Looks like they can't find each other. Make sure each server has > /etc/hosts entries for all the other servers. Some of them were missing that when I got access, yes; that's the first thing I checked and corrected. They can all talk to each other, as that's tested through the "Summary"
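For reference, the /etc/hosts entries suggested above would look something like this on every node (names and addresses below are placeholders, not the poster's actual hosts):

```
192.0.2.11  pve1.example.com  pve1
192.0.2.12  pve2.example.com  pve2
192.0.2.13  pve3.example.com  pve3
```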

Re: [PVE-User] Broken cluster

2017-03-14 Thread Gerald Brandt
On 2017-03-14 02:00 PM, Kevin Lemonnier wrote: Hi, I've just been given the task of maintaining an existing "cluster" of Proxmox 4. I'm putting quotes around the word because currently, it doesn't work. Each node seems to have been somehow added to a cluster and can see the other nodes, and

[PVE-User] Broken cluster

2017-03-14 Thread Kevin Lemonnier
Hi, I've just been given the task of maintaining an existing "cluster" of Proxmox 4. I'm putting quotes around the word because currently, it doesn't work. Each node seems to have been somehow added to a cluster and can see the other nodes, and the summary tab does work for each node, but they all

Re: [PVE-User] blocked requests

2017-03-14 Thread Marco Gaiarin
Mandi! Holger Hampel | RA Consulting In chel di` si favelave... > we don't use a scheduler on OSDs, so priority has no effect. We use this to > reduce load during working hours: I've also found very useful: osd scrub during recovery = false osd recovery op priority = 1 probably setting 'osd
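Collected into ceph.conf form, the two options Marco quotes would look like this (a sketch with the values as stated in the thread):

```
[osd]
osd scrub during recovery = false
osd recovery op priority = 1
```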

Re: [PVE-User] HEALTH_WARN clock skew detected on mon.1

2017-03-14 Thread lists
On 14-3-2017 10:04, Marco Gaiarin wrote: It seems it's also possible to relax the ceph constraint in ceph.conf, e.g.: http://docs.ceph.com/docs/jewel/rados/configuration/mon-config-ref/ option 'mon clock drift allowed'. But I've not tackled that... For now: I have configured both ntpd

Re: [PVE-User] VM Backups

2017-03-14 Thread Holger Hampel | RA Consulting
Hello, I don't like to interfere with default behavior. Less intrusively, I added this to a copy of the hook example (/usr/share/doc/pve-manager/examples/vzdump-hook-script.pl): } elsif ($phase eq 'backup-start') { my $mode = shift; # stop/suspend/snapshot my $vmid = shift; my $vmtype
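A copy of the hook script like the one described is registered in /etc/vzdump.conf via the `script` option; a minimal sketch (the target path is hypothetical):

```
# /etc/vzdump.conf
script: /usr/local/bin/vzdump-hook-script.pl
```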

Re: [PVE-User] blocked requests

2017-03-14 Thread Holger Hampel | RA Consulting
Hello, we don't use a scheduler on OSDs, so priority has no effect. We use this to reduce load during working hours: osd scrub begin hour = 18 osd scrub end hour = 7 osd scrub load threshold = 3 osd max scrubs = 1 osd max backfill = 3 osd recovery
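Assembled as a ceph.conf fragment, the settings quoted above would look like this (the final "osd recovery" option is truncated in the archive and therefore left out; note the mail writes "osd max backfill" but the Ceph option name is "osd max backfills"):

```
[osd]
osd scrub begin hour = 18
osd scrub end hour = 7
osd scrub load threshold = 3
osd max scrubs = 1
osd max backfills = 3
```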

Re: [PVE-User] HEALTH_WARN clock skew detected on mon.1

2017-03-14 Thread Marco Gaiarin
Mandi! lists In chel di` si favelave... > Tips, tricks? All my reading/Google voodoo suggests me to install NTP, but we > are already. It seems it's also possible to relax the ceph constraint in ceph.conf, e.g.: http://docs.ceph.com/docs/jewel/rados/configuration/mon-config-ref/ option 'mon
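The option Marco points at lives in the monitor section of ceph.conf; a sketch with an illustrative (not recommended) value, since relaxing it only hides, not fixes, clock skew:

```
[mon]
# Default is 0.05 s; 0.5 here is purely illustrative
mon clock drift allowed = 0.5
```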