We have to set the correct permissions,
because Ceph releases after Infernalis run the daemons as the 'ceph' user.
---
PVE/API2/Ceph.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index f6b9370..96ae9e2 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@
This series makes Ceph Jewel available in pveceph.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
The pve-ceph-disk.service ensures that Ceph starts after pve-cluster and
mounts the OSD disks.
This is essential because ceph (osd, mon, mds, disks) needs the ceph.conf, which
is located on the pmxcfs.
Also there is a race condition in the ceph.service scripts, which ends in a
stopped OSD.
To
We do not use the ceph.service, which normally starts ceph-mon,
so we have to ensure ceph-mon is enabled.
---
PVE/API2/Ceph.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 96ae9e2..a63199b 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@
This is important to control the exact start-up of Ceph at boot.
We have the ceph.conf on the pmxcfs, so pve-cluster.service must start up first.
The ceph.service is a link to the old SysV script.
For more information see http://tracker.ceph.com/issues/18305 .
---
PVE/CLI/pveceph.pm | 10 ++
---
PVE/CLI/pveceph.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index b198fc1..c6af98d 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -74,7 +74,7 @@ __PACKAGE__->register_method ({
version => {
The pve-ceph-disk.service ensures that Ceph starts after pve-cluster and
mounts the OSD disks.
This is essential because ceph (osd, mon, mds, disks) needs the ceph.conf, which
is located on the pmxcfs.
To use this service you have to mask the ceph.service and enable the
pve-ceph-disk.service.
This function returns the block device of a given partition path.
---
PVE/Diskmanage.pm | 16
1 file changed, 16 insertions(+)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 1171031..3c3518b 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -548,4
If we remove the journal first, the data partition gets mounted automatically
and we can't destroy the partition.
This is triggered by the udev ceph rule.
---
PVE/API2/Ceph.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index
This patch series fixes the removal of OSDs and their subsequent erasure.
With this function you get the partition number of a device.
---
PVE/Diskmanage.pm | 32
1 file changed, 32 insertions(+)
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 5d498ce..1171031 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -5,6 +5,8 @@ use
Get the partition number and block device from sysfs.
This ensures different block device types will work.
---
PVE/API2/Ceph.pm | 28 ++--
1 file changed, 10 insertions(+), 18 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index b0c9ddc..c218fd2 100644
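The sysfs approach described above can be sketched in Python (the function name and its fake-sysfs parameter are illustrative, not the actual PVE::Diskmanage API, which is Perl):

```python
import os

def get_blockdev_and_partnum(part_path, sys_block="/sys/class/block"):
    """Resolve a partition path like /dev/sda1 or /dev/nvme0n1p1 to its
    parent block device and partition number via sysfs."""
    part = os.path.basename(part_path)
    entry = os.path.join(sys_block, part)
    # sysfs exposes the partition number directly in the 'partition' file ...
    with open(os.path.join(entry, "partition")) as f:
        partnum = int(f.read().strip())
    # ... and the parent device is the directory that contains the partition
    # entry once the class symlink is resolved, which works regardless of
    # the device naming scheme (sda1 vs nvme0n1p1).
    parent = os.path.basename(os.path.dirname(os.path.realpath(entry)))
    return "/dev/" + parent, partnum
```

This avoids guessing the parent name by stripping digits, which breaks for NVMe-style names.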
---
This patch series fixes the removal of OSDs and their subsequent erasure.
PATCH V2
Add function get_blockdev.
V2: Fix typo and space.
Add new function part_num.
Remove module use File::stat.
Fix typo.
Rewrite if condition.
The other 2 pve-manager patches have no changes.
[PATCH pve-manager 1/2]
With this function you get the partition number of a device.
---
PVE/Diskmanage.pm | 28
1 file changed, 28 insertions(+)
Remove module use File::stat.
Fix typo.
Rewrite if condition.
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 5d498ce..e4821d4 100644
---
This function returns the block device of a given partition path.
---
PVE/Diskmanage.pm | 16
1 file changed, 16 insertions(+)
V2 Fix typo and space.
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index e4821d4..5fd7c6a 100644
--- a/PVE/Diskmanage.pm
+++
It is a false positive if the cache mode is set to none.
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index bf71894..68a03d4 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -770,7 +770,7 @@ sub parse_disks {
while ($text && $text =~
It is a false positive if the cache mode is set to none.
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index bf71894..5fa5292 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -770,7 +770,7 @@ sub parse_disks {
while ($text && $text =~
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index 4993bed..38357a5 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -775,7 +775,7 @@ sub parse_disks {
next if $line !~ m/^(?:((?:virtio|ide|scsi|sata|mp)\d+)|rootfs): /;
#QEMU
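The filter in parse_disks can be sketched in Python with the same regex (the helper name is illustrative; the real code is Perl inside pve-zsync):

```python
import re

# Same pattern as in parse_disks: match qemu/lxc disk keys or rootfs.
DISK_KEY = re.compile(r'^(?:((?:virtio|ide|scsi|sata|mp)\d+)|rootfs): ')

def disk_lines(config_text):
    """Keep only guest config lines that describe disks or mountpoints."""
    return [l for l in config_text.splitlines() if DISK_KEY.match(l)]
```

Anything that is not a disk key (e.g. `memory:`) is silently skipped instead of aborting the parse.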
---
pve-zsync | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/pve-zsync b/pve-zsync
index bc06ae1..8d083ba 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -508,7 +508,7 @@ sub init {
die "Config already exists\n" if $cfg->{$job->{source}}->{$job->{name}};
Wolfgang Link (3):
fix wrong quoting in qemu disk check.
fix #1301 skip if mp has no backup flag.
Improve error handling in parse_disk.
pve-zsync | 20 +++-
1 file changed, 11 insertions(+), 9 deletions(-)
--
2.1.4
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index 38357a5..bc06ae1 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -778,7 +778,7 @@ sub parse_disks {
next if $vm_type eq 'qemu' && ($line =~ m/backup=(?i:0|no|off|false)/);
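The backup-flag check above translates directly; here is a Python sketch mirroring the Perl regex (the `skip_disk` helper is illustrative):

```python
import re

# Same test as the Perl side: a disk line whose backup flag is disabled.
BACKUP_OFF = re.compile(r'backup=(?i:0|no|off|false)')

def skip_disk(line, vm_type):
    """For qemu guests, skip disk lines whose backup flag is turned off."""
    return vm_type == "qemu" and BACKUP_OFF.search(line) is not None
```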
If a file path is used as an mp in LXC, you get an error message that it is not
included in the sync.
---
pve-zsync | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/pve-zsync b/pve-zsync
index 4993bed..3ea4b47 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -769,20 +769,20 @@
This is the timer for pverepm run.
---
Makefile| 5 +
conffiles | 2 ++
pve-replica | 3 +++
3 files changed, 10 insertions(+)
create mode 100644 conffiles
create mode 100644 pve-replica
diff --git a/Makefile b/Makefile
index 54c774b..7850128 100644
--- a/Makefile
+++ b/Makefile
@@
This is the implementation for asynchronous replica in LXC.
Wolfgang Link (3):
Insert new options in the LXC config for the PVE Replica.
Integrate replica in the lxc migration.
Destroy all remote and local replication datasets when a CT is
destroyed.
src/PVE/API2/LXC.pm| 4 +++
src
Now it is possible to migrate a VM offline when replica is enabled.
It will reduce replication to a minimal amount.
---
PVE/QemuMigrate.pm | 34 +-
1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index
When replica is enabled and the target host is the reptarget,
most of the VM data is already on the new target.
---
PVE/Storage.pm | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index eb7000f..964102c 100755
--- a/PVE/Storage.pm
+++
---
src/PVE/API2/LXC.pm | 4
1 file changed, 4 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 47dcb08..d4480dc 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -19,6 +19,7 @@ use PVE::LXC::Migrate;
use PVE::API2::LXC::Config;
use
---
src/PVE/LXC/Config.pm | 74 +++
1 file changed, 74 insertions(+)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 05cd970..a4a8a4c 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -8,6 +8,7 @@ use PVE::Cluster
This patch will include all necessary config for the replication.
It will also enable and disable a replication job
when the corresponding flags are set or deleted.
---
PVE/API2/Qemu.pm | 41 +
PVE/QemuServer.pm | 31 +++
2 files
Now it is possible to migrate a CT when replica is enabled.
It will reduce replication to a minimal amount.
---
src/PVE/LXC/Migrate.pm | 34 +-
1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index
It is possible to sync a volume to another node in a defined interval.
So if a node fails there will be a copy of the volumes of a VM
on another node.
With this copy it is possible to start the VM on this node.
---
Makefile | 12 +-
PVE/API2/Makefile
This is the implementation for asynchronous replica in the storage lib, where the
logic is.
Wolfgang Link (3):
This patch will include asynchronous storage replica.
Include incremental zfs send in storage_migrate.
Include pve-replica cronjob.
Makefile | 17 +-
PVE
This is the implementation for asynchronous replica in Qemu.
Wolfgang Link (3):
Insert new options in the Qemu config for the PVE Replica.
Integrate replica in the qemu migration.
Destroy all remote and local replication datasets when a VM is
destroyed.
PVE/API2/Qemu.pm | 45
---
PVE/API2/Qemu.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 20491c9..dbcb323 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1326,11 +1326,15 @@ __PACKAGE__->register_method({
syslog('info', "destroy VM $vmid:
These patches are the basis for the pve-storage replica series.
Wolfgang Link (3):
Insert new properties in the Qemu config for the PVE Replica.
Integrate replica in the qemu migration.
Destroy all remote and local replication datasets when a VM is
destroyed.
PVE/API2/Qemu.pm
This patch will include all necessary properties for the replication.
It will also enable and disable a replication job
when the corresponding flags are set or deleted.
---
PVE/API2/Qemu.pm | 36
PVE/QemuServer.pm | 31 +++
2 files
It is possible to synchronise a volume to another node in a defined interval.
So if a node fails there will be a copy of the volumes of a VM
on another node.
With this copy it is possible to start the VM on this node.
---
Makefile | 12 +-
PVE/API2/Makefile
This is the timer for pvesr run.
---
Makefile| 5 +
conffiles | 2 ++
pve-replica | 3 +++
3 files changed, 10 insertions(+)
create mode 100644 conffiles
create mode 100644 pve-replica
diff --git a/Makefile b/Makefile
index 2aab912..1530db2 100644
--- a/Makefile
+++ b/Makefile
@@
We need this function to delete remote snapshots.
---
PVE/Storage/ZFSPoolPlugin.pm | 37 ++---
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 8cc3f05..5286ba1 100644
---
This feature shows that the storage can send and receive images.
---
PVE/Storage/ZFSPoolPlugin.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index b94497c..5ff2356 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++
We need this function for replica to handle snapshots on remote nodes.
---
PVE/Storage.pm| 4 ++--
PVE/Storage/Plugin.pm | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 6f296e5..778ec4d 100755
--- a/PVE/Storage.pm
+++
This patch series will include asynchronous storage replica.
Wolfgang Link (7):
Add ip parameter in zfs_request to execute on remote host.
Make volume_snapshot_delete on remote nodes.
Make volume_snapshot_delete on remote nodes.
Add replicate as new storage feature.
This patch will include
These patches are the basis for the pve-storage replica series.
Wolfgang Link (3):
Insert new properties in the LXC config for the PVE Storage Replica.
Integrate storage replica in lxc migration.
Destroy all remote and local replication datasets when a CT is
destroyed.
src/PVE
---
PVE/API2/Qemu.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index daa07e1..314b51f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1321,11 +1321,15 @@ __PACKAGE__->register_method({
syslog('info', "destroy VM $vmid:
We need this function for replica to handle snapshots on remote nodes.
Conflicts:
PVE/Storage/ZFSPoolPlugin.pm
---
PVE/Storage/ZFSPoolPlugin.pm | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index
Now it is possible to migrate a VM offline when replica is enabled.
It will reduce replication to a minimal amount.
---
PVE/QemuMigrate.pm | 34 +-
1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index
When replica is enabled and the target host is the reptarget,
most of the VM data is already on the new target.
---
PVE/Storage.pm | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 778ec4d..6afb356 100755
--- a/PVE/Storage.pm
+++
---
src/PVE/LXC/Config.pm | 75 ++-
1 file changed, 74 insertions(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 05cd970..0921709 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -8,6 +8,7 @@ use
We have to add it in the ceph-*.target,
because in the shutdown case systemd resolves it only on the target.
It is not possible to add this dependency on the ceph-*@.service fragments;
it is ignored in the shutdown case.
Wolfgang Link (1):
Fix shutdown order of ceph daemons.
bin/init.d
It is important that Ceph stops after pveproxy.
If Ceph stops too early and the shutdown is cluster-wide, the VM loses its disks
and cannot shut down.
This can end in fencing the node.
---
bin/init.d/Makefile | 6 ++
bin/init.d/ceph-shutdown-after-pveproxy.conf | 3 +++
2
> thinking of using this for disaster recovery, with ceph + rbd snapshots
> export/import,
>
> on 2 ceph clusters on remote datacenter.
>
>
> Could be great to be able to define 2 storeid in /etc/pve/storage.cfg.
>
>
> - Original Message -
> De: "Wolfgang L
We need this function for replica to handle snapshots on remote nodes.
---
PVE/Storage.pm | 15 +++
PVE/Storage/Plugin.pm| 7 +++
PVE/Storage/ZFSPoolPlugin.pm | 13 -
3 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage.pm
If the storage backend supports import and export,
we can send the content to a remote host.
---
PVE/Storage.pm | 18
PVE/Storage/Plugin.pm| 8
PVE/Storage/ZFSPlugin.pm | 7 +++
PVE/Storage/ZFSPoolPlugin.pm | 49
This feature shows that the storage can send and receive images.
---
PVE/Storage/ZFSPoolPlugin.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 7579472..c98dc86 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++
Returns a list of snapshots (youngest snap first) from a given volid.
It is possible to use a prefix to filter the list.
---
PVE/Storage.pm | 17 +
PVE/Storage/Plugin.pm| 9 +
PVE/Storage/ZFSPlugin.pm | 6 ++
PVE/Storage/ZFSPoolPlugin.pm |
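The listing logic can be sketched in Python on `zfs list -t snapshot -o name,creation -Hp` style output (a sketch only; the real helper is the Perl volume_snapshot_list in the ZFSPoolPlugin):

```python
def snapshot_list(zfs_output, prefix=None):
    """Return snapshot names for a dataset, youngest first, optionally
    filtered by a name prefix such as the replica snapshot prefix."""
    snaps = []
    for line in zfs_output.splitlines():
        if not line.strip():
            continue
        name, creation = line.split("\t")
        snap = name.split("@", 1)[1]       # keep only the snapshot part
        if prefix and not snap.startswith(prefix):
            continue
        snaps.append((int(creation), snap))
    # youngest (largest creation timestamp) first
    return [s for _, s in sorted(snaps, reverse=True)]
```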
This is the timer for pvesr run.
---
Makefile| 5 +
conffiles | 2 ++
pve-replica | 3 +++
3 files changed, 10 insertions(+)
create mode 100644 conffiles
create mode 100644 pve-replica
diff --git a/Makefile b/Makefile
index 0d80ce5..ef911cc 100644
--- a/Makefile
+++ b/Makefile
@@
---
src/PVE/LXC/Config.pm | 81 ++-
1 file changed, 80 insertions(+), 1 deletion(-)
Include update_conf to apply config changes.
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 05cd970..ee45cf0 100644
--- a/src/PVE/LXC/Config.pm
These patches are the basis for the pve-storage replica series.
Wolfgang Link (3):
Insert new properties in the LXC config for the PVE Storage Replica.
Integrate storage replica in lxc migration.
Destroy all remote and local replication datasets when a CT is
destroyed.
src/PVE
Changes as suggested by Wolfgang and Thomas.
Add update_conf to apply config changes.
We need this function to delete remote snapshots.
---
PVE/Storage/ZFSPoolPlugin.pm | 37 ++---
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 8cc3f05..2cf1bc7 100644
---
It is possible to synchronise a volume to another node in a defined interval.
So if a node fails there will be a copy of the volumes of a VM
on another node.
With this copy it is possible to start the VM on this node.
---
Makefile | 12 +-
PVE/API2/Makefile
---
PVE/API2/Qemu.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ca92b65..3abe795 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1327,6 +1327,9 @@ __PACKAGE__->register_method({
syslog('info', "destroy VM $vmid:
This patch will include all necessary properties for the replication.
It will also enable and disable a replication job
when the corresponding flags are set or deleted.
---
PVE/API2/Qemu.pm | 42 ++
PVE/QemuServer.pm | 31 +++
2 files
Now it is possible to migrate a CT when replica is enabled.
It will reduce replication to a minimal amount.
---
src/PVE/LXC/Migrate.pm | 34 +-
1 file changed, 29 insertions(+), 5 deletions(-)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index
Now it is possible to migrate a VM offline when replica is enabled.
It will reduce replication to a minimal amount.
---
PVE/QemuMigrate.pm | 35 ++-
1 file changed, 30 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index
This patch series will include asynchronous storage replica.
[RFC V2]:
Make changes as Wolfgang and Thomas replied to the V1 patch series.
Note: cron will later be changed to a systemd .timer.
Add update_conf to update the replica file.
Wolfgang Link (8):
Include new storage function
These patches are the basis for the pve-storage replica series.
Wolfgang Link (3):
Insert new properties in the Qemu config for the PVE Replica.
Integrate replica in the qemu migration.
Destroy all remote and local replication datasets when a VM is
destroyed.
PVE/API2/Qemu.pm
When replica is enabled and the target host is the reptarget,
most of the VM data is already on the new target.
---
PVE/Storage.pm | 20 +---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 68c39ca..6b8d108 100755
--- a/PVE/Storage.pm
+++
---
src/PVE/API2/LXC.pm | 4
1 file changed, 4 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 47dcb08..a913982 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -19,6 +19,7 @@ use PVE::LXC::Migrate;
use PVE::API2::LXC::Config;
use
The --filestore flag is now required; see the Ceph documentation.
If the --bluestore argument is given, a bluestore objectstore will be
created. If --filestore is provided, a legacy FileStore objectstore
will be created. If neither is specified, we default to BlueStore.
---
PVE/API2/Ceph.pm | 2 +-
1 file
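The selection logic quoted from the Ceph docs can be sketched as follows (function and argument handling are illustrative, not the PVE code):

```python
def objectstore_args(bluestore=False, filestore=False):
    """--filestore creates a legacy FileStore objectstore; otherwise
    BlueStore is the default, whether --bluestore was given or not."""
    if filestore:
        return ["--filestore"]
    # --bluestore given explicitly, or neither flag: BlueStore is the default
    return ["--bluestore"]
```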
Hammer, Infernalis, Jewel and Kraken are identical.
On 07/04/2017 06:15 AM, Dietmar Maurer wrote:
applied.
I guess we do not need hardy?
---
PVE/CLI/pveceph.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 4bec9899..3129dedc 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -87,7 +87,7 @@ __PACKAGE__->register_method ({
code => sub {
my
Ceph changed the 'ceph --version' output.
Full output of 'ceph --version':
Luminous: 'ceph version 12.1.0 (262617c9f16c55e863693258061c5b25dea5b086)
luminous (dev)'
Jewel: 'ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)'
---
PVE/Storage/RBDPlugin.pm | 2 +-
1 file changed, 1
---
pve-network.adoc | 72 +---
pvecm.adoc | 12 +-
qm.adoc | 2 +-
3 files changed, 60 insertions(+), 26 deletions(-)
diff --git a/pve-network.adoc b/pve-network.adoc
index 45f6424..102bb8e 100644
--- a/pve-network.adoc
Add square brackets around the IP.
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index 5fa5292..b3241ff 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -948,7 +948,7 @@ sub send_image {
run_cmd(['scp', '--', $source_target,
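The bracket fix matters because scp parses a bare colon as the host/path separator; a Python sketch of the quoting (helper name is illustrative):

```python
def scp_target(ip, path):
    """Build an scp destination; IPv6 addresses must be wrapped in square
    brackets so their colons are not taken as the host/path separator."""
    host = "[" + ip + "]" if ":" in ip else ip
    return "root@" + host + ":" + path
```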
We need this at migration time.
---
PVE/ReplicationConfig.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/ReplicationConfig.pm b/PVE/ReplicationConfig.pm
index 51cfe81..670113d 100644
--- a/PVE/ReplicationConfig.pm
+++ b/PVE/ReplicationConfig.pm
@@ -213,9 +213,13 @@ sub
It is important that storages stop after pve-ha-lrm.
If the storages stop too early, the VM loses disks
and cannot shut down.
This can end in fencing the node.
---
debian/pve-ha-lrm.service | 1 +
1 file changed, 1 insertion(+)
diff --git a/debian/pve-ha-lrm.service
This will ensure all storages are up before pveproxy is running.
---
bin/init.d/Makefile | 3 ++-
bin/init.d/pve-storage.target | 10 ++
bin/init.d/pveproxy.service | 7 ++-
3 files changed, 14 insertions(+), 6 deletions(-)
create mode 100644
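The intended ordering can be sketched as systemd units (a hypothetical reconstruction; the actual unit contents are not shown in this excerpt, only pveproxy.service and pve-storage.target are named by the patch):

```ini
# bin/init.d/pve-storage.target (sketch): a synchronization point that is
# reached once all storage-related services are up.
[Unit]
Description=PVE Storage Target

# Addition to pveproxy.service (sketch): order pveproxy after the target,
# so all storages are up before the proxy starts; on shutdown, systemd
# reverses this ordering, so storages stop after pveproxy.
[Unit]
After=pve-storage.target
Wants=pve-storage.target
```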
This will ensure all storages are up before pveproxy is running.
---
bin/init.d/Makefile | 3 ++-
bin/init.d/pve-storage.target | 11 +++
bin/init.d/pveproxy.service | 7 ++-
3 files changed, 15 insertions(+), 6 deletions(-)
create mode 100644
It is important that Ceph stops after pveproxy.
If Ceph stops too early and the shutdown is cluster-wide, the VM loses its disks
and cannot shut down.
This can end in fencing the node.
[PATCH V2]
PATCH: Make a new systemd target.
:: refactor: ensure that all possible storages used in PVE will be up.
It is important that all storages stop after pve-ha-lrm.
If the storages stop too early, the VM loses disks and cannot shut down.
This can end in fencing the node.
---
debian/pve-ha-lrm.service | 2 ++
1 file changed, 2 insertions(+)
diff --git a/debian/pve-ha-lrm.service
You're correct, it is not implemented yet.
But this can be included in the rollback function in the plugin,
like this in the ZFS case:
if you roll back, destroy replica_snap, roll back, and then replicate again.
On 04/25/2017 08:00 AM, Alexandre DERUMIER wrote:
> is enable ?
If the storage backend supports import and export,
we can send the content to a remote host.
---
PVE/Storage.pm | 18 +
PVE/Storage/Plugin.pm| 8
PVE/Storage/ZFSPlugin.pm | 7 +++
PVE/Storage/ZFSPoolPlugin.pm | 48
When replica is enabled and the target host is the reptarget,
most of the VM data is already on the new target.
---
PVE/Storage.pm | 20 +---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 68c39ca..6b8d108 100755
--- a/PVE/Storage.pm
+++
This feature shows that the storage can send and receive images.
---
PVE/Storage/ZFSPoolPlugin.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index b023ce7..62452a6 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++
We need this function to delete remote snapshots.
---
PVE/Storage/ZFSPoolPlugin.pm | 37 ++---
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index a212191..82a62d3 100644
---
:: remove snapshot config handling and include it in pve-guest-common.
Wolfgang Link (3):
Insert new properties in the LXC config for the PVE Storage Replica.
Integrate storage replica in lxc migration.
Destroy all remote and local replication datasets when a CT will
destroyed.
src/PVE/API2
/jobs/job/ in function $get_replica_list
:: Rework function $get_replica_list to permit that guests which have the same
synctime will overwrite the queue list.
Wolfgang Link (8):
Include new storage function volume_send.
Include new storage function volume_snapshot_list.
Add ip parameter
We need this function for replica to handle snapshots on remote nodes.
---
PVE/Storage.pm | 15 +++
PVE/Storage/Plugin.pm| 7 +++
PVE/Storage/ZFSPoolPlugin.pm | 13 -
3 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage.pm
Changes
[RFC V2]
:: Changes as suggested by Wolfgang and Thomas.
:: Add update_conf to apply config changes.
[RFC V3]
qemu-server
Patch: Integrate storage replica in lxc migration.
:: Make the condition more precise when removing local storage during migration in phase3.
Patch:
It is possible to synchronise a volume to another node in a defined interval.
So if a node fails there will be a copy of the volumes of a VM
on another node.
With this copy it is possible to start the VM on this node.
---
Makefile | 12 +-
PVE/API2/Makefile
Now it is possible to migrate a VM offline when replica is enabled.
It will reduce replication to a minimal amount.
---
PVE/QemuMigrate.pm | 35 ++-
1 file changed, 30 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index
The replica settings are parameters that should not be changed by a rollback.
---
PVE/AbstractConfig.pm | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 482f0e2..2cc4f4d 100644
--- a/PVE/AbstractConfig.pm
+++
Returns a list of snapshots (youngest snap first) from a given volid.
It is possible to use a prefix to filter the list.
---
PVE/Storage.pm | 17 +
PVE/Storage/Plugin.pm| 9 +
PVE/Storage/ZFSPlugin.pm | 6 ++
PVE/Storage/ZFSPoolPlugin.pm |
---
src/PVE/LXC/Config.pm | 73 ++-
1 file changed, 72 insertions(+), 1 deletion(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 05cd970..74c5b03 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -8,6 +8,7 @@ use
---
PVE/API2/Qemu.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 4737500..3fb1ab2 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1319,6 +1319,9 @@ __PACKAGE__->register_method({
syslog('info', "destroy VM $vmid:
---
src/PVE/API2/LXC.pm | 4
1 file changed, 4 insertions(+)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 47dcb08..a913982 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -19,6 +19,7 @@ use PVE::LXC::Migrate;
use PVE::API2::LXC::Config;
use
This patch will include all necessary properties for the replication.
It will also enable and disable a replication job
when the corresponding flags are set or deleted.
---
PVE/API2/Qemu.pm | 34 ++
PVE/QemuServer.pm | 31 +++
2 files changed,