Sorry for the long delay ...

Here is the log:

2017-07-17 16:26:00 100-0: start replication job
2017-07-17 16:26:00 100-0: guest => CT 100, running => 0
2017-07-17 16:26:01 100-0: volumes => local-zfs:subvol-100-disk-1
2017-07-17 16:26:01 100-0: create snapshot '__replicate_100-0_1500301560__' on local-zfs:subvol-100-disk-1
2017-07-17 16:26:01 100-0: full sync 'local-zfs:subvol-100-disk-1' (__replicate_100-0_1500301560__)
2017-07-17 16:26:02 100-0: delete previous replication snapshot '__replicate_100-0_1500301560__' on local-zfs:subvol-100-disk-1
2017-07-17 16:26:02 100-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:subvol-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1500301560__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=vbox-proxmox1' root@192.168.43.220 -- pvesm import local-zfs:subvol-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
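For what it's worth, `ssh` itself exits with 255 on connection or authentication problems, and otherwise forwards the remote command's exit status, so the 255 above could come from either side. Because the job runs the pipeline under `set -o pipefail`, a failure in the left-hand `pvesm export` would also surface instead of being masked by the pipe. A minimal sketch of that pipefail behavior (the failing `bash -c 'exit 255'` just stands in for whichever half of the real pipeline fails):

```shell
#!/usr/bin/env bash
# Without pipefail a pipeline reports only the LAST command's status;
# with pipefail the rightmost *failing* command's status wins.
# Here the left side fails (like a broken 'pvesm export' would),
# while the right side (standing in for the ssh/import half) succeeds.

bash -c 'exit 255' | cat
echo "without pipefail: $?"   # prints 0 - the failure is masked

set -o pipefail
bash -c 'exit 255' | cat
echo "with pipefail: $?"      # prints 255 - the failure shows
```

So the logged 255 is the real status of whichever command failed, not an artifact of the pipe; running the ssh half by hand with `-v` would show whether it is ssh or the remote `pvesm import` that fails.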


kind regards

Wolfgang 

On Monday, 10 July 2017 at 20:25:41 CEST, Dietmar Maurer wrote:
> > the new storage replication feature is really great! But there is an issue:
> > Unfortunately, replication breaks completely if somebody rolls a container
> > back to a snapshot older than the last sync and then destroys that snapshot
> > before the next sync.
> 
> AFAIK it simply syncs from the rolled-back snapshot instead. Can you please
> post the replication log with the error?
> 
> 


_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
