This one does not apply - please check, there is something wrong with
this patch.
On 10/08/2015 10:24 AM, Philipp Marek wrote:
---
PVE/Storage/DRBDPlugin.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage/DRBDPlugin.pm b/PVE/Storage/DRBDPlugin.pm
Please can you resend this one when the required drbdmanage changes are
online?
I cannot commit this one because it fails with current drbdmanage code.
On 10/08/2015 10:24 AM, Philipp Marek wrote:
Needs a recent DRBDmanage and DRBD 9 version - 9.0.0 has a known bug.
---
PVE/Storage/DRBDPlugin
> About my previous patch (for resume nocheck config file), I think we can still
> apply it,
> as we can't be sure that we don't have small replication latencies.
I am not sure about that, because it breaks the basic assumption that only
the owner node is allowed to do such actions.
Another way t
>>I mean, not during the live migration, but when the migration starts, Proxmox
>>could check if the destination has at least equal memory available or more...
If the target doesn't have enough memory, the target vm process shouldn't start.
(kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory)
>>I think the LRM can easily test if the VM Config file was moved or not, so we
>>can do the check inside the LRM.
ok, no problem.
About my previous patch (for resume nocheck config file), I think we can still
apply it,
as we can't be sure that we don't have small replication latencies.
> >>I'll try to make the HA "immune" against such errors, but the real bug
> >>isn't in the HA stack :)
>
> Yes, I think we could send a different error code for errors in phase3 of live
> migration.
> Maybe a warning instead of an error for this phase.
I think the LRM can easily test if the VM Config file was moved or not, so we
can do the check inside the LRM.
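For illustration, a minimal sketch of what such a check inside the LRM could look like (hypothetical code, not the actual pve-ha-lrm implementation; the VM id and node name are examples taken from the logs below):

#!/usr/bin/perl
# Hypothetical sketch only - not the actual LRM code.
use strict;
use warnings;

my $vmid = 125;          # example VM id (from the logs below)
my $node = 'kvmtest2';   # the local node name
my $conf = "/etc/pve/nodes/$node/qemu-server/$vmid.conf";

# If the config file is not (or no longer) owned by this node, the service
# was moved away and the LRM should not act on it here.
die "service 'vm:$vmid' not on this node\n" if !-f $conf;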
> Oct 14 19:05:33 kvmtest2 pve-ha-lrm[28562]: service 'vm:125' not on this node
> at /usr/share/perl5/PVE/HA/Env/PVE2.pm line 389.
> Oct 14 19:05:43 kvmtest2 pve-ha-lrm[28583]: service 'vm:125' not on this node
> at /usr/share/perl5/PVE/HA/Env/PVE2.pm line 389.
Oct 14 19:06:03 kvmtest2 pve-ha-lrm[28626]: service 'vm:125' not on this node
at /usr/share/perl5/PVE/HA/Env/PVE2.pm line 389.
- Original Mail -
From: "aderumier"
To: "dietmar"
Cc: "pve-devel"
Sent: Wednesday, 14 October 2015 16:17:24
Subject: Re: [pve-devel]
Hi
I mean, not during the live migration, but when the migration starts, Proxmox
could check if the destination has at least equal memory available or more...
I hope I have made myself clearer
Regards
2015-10-14 13:11 GMT-03:00 Alexandre DERUMIER :
> Do you mean reduce the vm memory when migrating? If yes, it's impossible.
Do you mean reduce the vm memory when migrating? If yes, it's impossible.
The only way could be memory unplug, but from my tests it's not 100% perfect
because of current linux limitations.
- Original Mail -
From: "Gilberto Nunes"
To: "pve-devel"
Sent: Wednesday, 14 October 2015 16:44:32
Hi...
Perhaps my thought is not right, but even so, I will try to expose it here...
Is there a way (in future releases, perhaps) to make Proxmox more flexible
when migrating a VM from a server with more memory to a server with less
memory??
I mean, perhaps ask how much memory to use on the other
>>To be sure, I would also test with my direct_io patch for fuse...
yes, I'm currently using it.
I have made a simple perl script which monitors create/delete of the vm conf file,
and the times are indeed correct vs. notify
node1
-----
exist 20151014 16:14:06.183
notexist 20151014 16:14:38.989
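For reference, a rough sketch of such a monitor (illustrative only; the config path is an example and the output format is modeled on the trace above):

#!/usr/bin/perl
# Illustrative sketch only - polls for the vm conf file and logs when it
# appears/disappears with a millisecond timestamp.
use strict;
use warnings;
use Time::HiRes qw(time sleep);
use POSIX qw(strftime);

my $conf = '/etc/pve/qemu-server/125.conf';   # example path
my $last = -e $conf ? 1 : 0;

while (1) {
    my $now = -e $conf ? 1 : 0;
    if ($now != $last) {
        my $t = time();
        my $stamp = strftime("%Y%m%d %H:%M:%S", localtime($t))
                  . sprintf(".%03d", int(($t - int($t)) * 1000));
        print $now ? "exist    $stamp\n" : "notexist $stamp\n";
        $last = $now;
    }
    sleep(0.01);    # 10 ms polling interval
}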
> > Why do we want to overwrite an existing file with $default_inittab ?
>
> $creation=1 means the container is being created, and we always replaced it
> before,
yes, but we can now change the existing file, so why should we replace it?
I would simply always modify the existing file.
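As an aside, a minimal sketch of the "modify instead of replace" idea (illustrative only, not the actual Setup code; the default content and the example modification are placeholders):

# Illustrative sketch only.
use strict;
use warnings;

my $default_inittab = "id:2:initdefault:\n";   # placeholder default content

sub setup_inittab {
    my ($path) = @_;
    my $data;
    if (-f $path) {
        # modify the existing file instead of replacing it
        open(my $fh, '<', $path) or die "unable to read $path: $!\n";
        local $/; $data = <$fh>;
        close($fh);
    } else {
        # only fall back to the default when no file exists yet
        $data = $default_inittab;
    }
    # example modification: comment out gettys on tty3-tty6
    $data =~ s/^(\d+:\d+:respawn:.*getty.*tty[3-6])/#$1/mg;
    open(my $out, '>', $path) or die "unable to write $path: $!\n";
    print $out $data;
    close($out);
}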
> On October 14, 2015 at 3:55 PM Dietmar Maurer wrote:
> > On October 14, 2015 at 2:48 PM Wolfgang Bumiller wrote:
> > ---
> > src/PVE/LXC/Setup/Base.pm | 6 ++---
> > src/PVE/LXC/Setup/Debian.pm | 61 ++---
> > 2 files changed, 49 insertions(+), 18 deletions(-)
> > Here is an inotify trace on /etc/pve, when the problem occurred.
>
> But inotify does not work at all on a distributed file system, so
> I am quite unsure if those numbers are correct.
>
FYI, I also reported a fuse bug today here:
http://sourceforge.net/p/fuse/mailman/fuse-devel/thread/1
> http://search.cpan.org/~andya/File-Monitor-1.00/lib/File/Monitor.pm
>
> which uses stat() to detect changes
To be sure, I would also test with my direct_io patch for fuse...
> On October 14, 2015 at 2:48 PM Wolfgang Bumiller wrote:
>
> ---
> src/PVE/LXC/Setup/Base.pm | 6 ++---
> src/PVE/LXC/Setup/Debian.pm | 61 ++---
> 2 files changed, 49 insertions(+), 18 deletions(-)
>
> diff --git a/src/PVE/LXC/Setup/Base.pm
>>But inotify does not work at all on a distributed file system, so
>>I am quite unsure if those numbers are correct.
ok, I'll make a small perl script with
http://search.cpan.org/~andya/File-Monitor-1.00/lib/File/Monitor.pm
which uses stat() to detect changes
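For reference, a minimal sketch of how File::Monitor could be used for that (API as in the module's synopsis; the watched path is just an example):

#!/usr/bin/perl
# Illustrative sketch only.
use strict;
use warnings;
use File::Monitor;
use Time::HiRes qw(time sleep);

my $monitor = File::Monitor->new();

# report any change to the vm conf file with a high-resolution timestamp
$monitor->watch('/etc/pve/qemu-server/125.conf', sub {
    my ($name, $event, $change) = @_;
    printf "%.3f %s %s\n", time(), $event, $name;
});

$monitor->scan;          # first scan only records the baseline, reports nothing
while (1) {
    $monitor->scan;      # later scans stat() the file and fire the callback
    sleep(0.1);
}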
- Original Mail -
From: "dietmar"
> Here is an inotify trace on /etc/pve, when the problem occurred.
But inotify does not work at all on a distributed file system, so
I am quite unsure if those numbers are correct.
---
src/PVE/LXC/Setup/Base.pm | 6 ++---
src/PVE/LXC/Setup/Debian.pm | 61 ++---
2 files changed, 49 insertions(+), 18 deletions(-)
diff --git a/src/PVE/LXC/Setup/Base.pm b/src/PVE/LXC/Setup/Base.pm
index ba7453d..9a39468 100644
--- a/src/PVE/LXC/Setup/B
changed:
* correct formatting in sub do_fsck()
* the filesystem specific command will be called automatically by fsck
* the -a flag ensures that the filesystem can be fixed without any questions
* the -f flag forces a filesystem check even if the fs seems clean
(flags similar to what the fsck systemd unit uses)
---
src/PVE/CLI/pct.pm | 83
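As a side note, a rough sketch of how the resulting fsck call could look (assumed volume path; PVE::Tools::run_command is used here just as the usual helper for external commands, the actual patch may differ):

# Illustrative sketch only - not the code from the patch.
use strict;
use warnings;
use PVE::Tools;

my $path = '/dev/pve/vm-125-disk-1';        # example block device path
my @cmd = ('fsck', '-a', '-f', $path);      # flags as described above
PVE::Tools::run_command(\@cmd);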
About the time difference between source and target, I'm not sure how pmxcfs works,
because I always see around a 5s difference between the move on the source and on the target,
even if resume is working correctly ...
- Original Mail -
From: "aderumier"
To: "pve-devel"
Sent: Wednesday, 14 October 2015 14:
I'm able to reproduce it without HA
kvmtest1 -> kvmtest2
Oct 14 14:11:20 ERROR: unable to find configuration file for VM 125 - no such
machine
Oct 14 14:11:20 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@10.3.94.47
qm resume 125 --skiplock' failed: exit code 2
Oct 14
* the filesystem specific command will be called automatically by fsck
* the -a flag ensures that the filesystem can be fixed without any questions
* the -f flag forces a filesystem check even if the fs seems clean
(flags similar to what the fsck systemd unit uses)
---
src/PVE/CLI/pct.pm | 85
Here is an inotify trace on /etc/pve, when the problem occurred.
source : 2015-10-14 13:25:34 125.conf MOVED_FROM
target : 2015-10-14 13:25:39 125.conf.tmp.15438 MOVED_FROM
(5s difference, ouch ...)
Not sure it's related, but there are also lrm_status.tmp file moves, with HA.
Don't know if it can
Hi Dietmar,
> Does not work for me - did you already upload the required versions to the public drbd
> git repository?
Yes, the DRBD 9 one is upstream:
http://git.drbd.org/drbd-9.0.git/commit/03431bc8a61ca022e4149c546ade1e1e86c2deea
The DRBDmanage one is not there yet - this needs a cluster-wide operation (first
Doesn't help :(
I'll try to launch inotifywatch on /etc/pve on source and target,
and check the date of the file move, and maybe whether there are other file writes at the
same time.
- Original Mail -
From: "aderumier"
To: "dietmar"
Cc: "pve-devel"
Sent: Wednesday, 14 October 2015 12:10:31
Subject:
Does not work for me - did you already upload the required versions to the public
drbd git repository?
On 10/08/2015 10:24 AM, Philipp Marek wrote:
Needs a recent DRBDmanage and DRBD 9 version - 9.0.0 has a known bug.
---
PVE/Storage/DRBDPlugin.pm | 15 ++-
1 file changed, 6 insertions(+),
>>I would really like to understand what happens.
Yes, me too !
>>I wonder if it may help
>>if we use the 'direct_io' flag for fuse. Would you mind testing?
Sure, I'll try this afternoon
- Original Mail -
From: "dietmar"
To: "aderumier", "pve-devel"
Sent: Wednesday, 14 October 2015 11:30:3
some minor things to finalize this:
On Wed, Oct 14, 2015 at 10:09:17AM +0200, Emmanuel Kasper wrote:
> * the filesystem specific command will be called automatically by fsck
> * the -a flag ensures that the filesystem can be fixed without any questions
> * the -f flag forces a filesystem check eve
> About systemd, do we still need /etc/init.d/pve-* init scripts ?
> I haven't installed a fresh proxmox4 yet, only upgrades from proxmox3,
> and the old init scripts are still there.
Yes, we still install those scripts. But they all include this line:
. /lib/lsb/init-functions
And this magically c
> Users have reported a resume bug when HA is used.
>
> They seem to hit a little race (bench shows >0s, <1s) between the vm conf file
> move on the source node and its replication,
> and the resume on the target node.
>
> I don't know why this happens only with HA, maybe it occurs with standard
> migration too.
Users have reported a resume bug when HA is used.
They seem to hit a little race (bench shows >0s, <1s) between the vm conf file
move on the source node and its replication,
and the resume on the target node.
I don't know why this happens only with HA, maybe it occurs with standard
migration too.
Anyway, we don
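To make the race concrete, a hypothetical workaround sketch (not the proposed fix): the target side could wait briefly for the moved config file to be replicated by pmxcfs before resuming:

# Hypothetical sketch only.
use strict;
use warnings;
use Time::HiRes qw(usleep);

my $vmid = 125;             # example VM id
my $target = 'kvmtest2';    # example target node
my $conf = "/etc/pve/nodes/$target/qemu-server/$vmid.conf";

# wait up to ~5 seconds for the config file to show up on the target
for (my $i = 0; $i < 50 && !-f $conf; $i++) {
    usleep(100_000);        # 100 ms
}
die "config file for VM $vmid not yet replicated\n" if !-f $conf;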
>>Restart the pve-ha-lrm service, this should use the new code as it
>>executes the resource managers, i.e. makes the API call.
>>
>>> systemctl restart pve-ha-lrm.service
Perfect! Thanks!
Offtopic:
About systemd, do we still need /etc/init.d/pve-* init scripts ?
I haven't installed a fresh proxmox4 yet, only upgrades from proxmox3,
and the old init scripts are still there.
changed:
* include direct block devices in devices to check
* but skip zfs datasets
* lock the container earlier
* the filesystem specific command will be called automatically by fsck
* the -a flag ensures that the filesystem can be fixed without any questions
* the -f flag forces a filesystem check even if the fs seems clean
(flags similar to what the fsck systemd unit uses)
---
src/PVE/CLI/pct.pm | 85