Re: [PVE-User] pve-zsync - no zvol replication ?

2020-03-03 Thread Roland @web.de

could be.

ok - but if they are supported, how do I use pve-zsync with zvols then?

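One guess, since recursive sync is not supported: each zvol would need its own job, or the job would be created by VM ID so that pve-zsync resolves the zvols from the VM config itself. A rough, untested sketch, reusing the names from the listing below (the job name is made up):

pve-zsync create --source 100 --dest pve-node2:hddpool/pve-node1-zsync --verbose --maxsnap 7 --name vm100
pve-zsync sync --source hddpool/vms/vm-100-disk-0 --dest pve-node2:hddpool/pve-node1-zsync --verbose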

On 03.03.20 at 16:35, Gianni Milo wrote:

Could the following be the reason...? zvols are supported as far as
I can tell...
Limitations

- not possible to sync recursive


G.


On Tue, 3 Mar 2020 at 12:30, Roland @web.de  wrote:


hello,

apparently pve-zsync does not seem to replicate ZFS zvols (but only
regular ZFS datasets)!?

since zvols are the default for VMs, I'm curious whether this is a bug or a
(missing) feature!?


I tried synchronizing the following way:


pve-zsync sync -dest pve-node2:hddpool/pve-node1-zsync/hddpool -source
hddpool/vms -v

source:
root@pve-node1# zfs list -r hddpool/vms
NAME                        USED  AVAIL  REFER  MOUNTPOINT
hddpool/vms                34.2G  3.42T  30.6K  /hddpool/vms
hddpool/vms/vm-100-disk-0  34.2G  3.46T  1.18G  -

dest:
root@pve-node2# zfs list -r hddpool/pve-node1-zsync
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
hddpool/pve-node1-zsync               156K  3.44T  32.0K  /hddpool/pve-node1-zsync
hddpool/pve-node1-zsync/hddpool      61.3K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool
hddpool/pve-node1-zsync/hddpool/vms  30.6K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool/vms

the documentation at https://pve.proxmox.com/wiki/PVE-zsync is unclear
about that.

I'm asking since I want to use built-in utils instead of 3rd-party tools
for that job.

regards
roland


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync - no zvol replication ?

2020-03-03 Thread Gianni Milo
Could the following be the reason...? zvols are supported as far as
I can tell...
Limitations

   - not possible to sync recursive


G.


On Tue, 3 Mar 2020 at 12:30, Roland @web.de  wrote:

> hello,
>
> apparently pve-zsync does not seem to replicate ZFS zvols (but only
> regular ZFS datasets)!?
>
> since zvols are the default for VMs, I'm curious whether this is a bug or a
> (missing) feature!?
>
>
> I tried synchronizing the following way:
>
>
> pve-zsync sync -dest pve-node2:hddpool/pve-node1-zsync/hddpool -source
> hddpool/vms -v
>
> source:
> root@pve-node1# zfs list -r hddpool/vms
> NAME                        USED  AVAIL  REFER  MOUNTPOINT
> hddpool/vms                34.2G  3.42T  30.6K  /hddpool/vms
> hddpool/vms/vm-100-disk-0  34.2G  3.46T  1.18G  -
>
> dest:
> root@pve-node2# zfs list -r hddpool/pve-node1-zsync
> NAME                                  USED  AVAIL  REFER  MOUNTPOINT
> hddpool/pve-node1-zsync               156K  3.44T  32.0K  /hddpool/pve-node1-zsync
> hddpool/pve-node1-zsync/hddpool      61.3K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool
> hddpool/pve-node1-zsync/hddpool/vms  30.6K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool/vms
>
> the documentation at https://pve.proxmox.com/wiki/PVE-zsync is unclear
> about that.
>
> I'm asking since I want to use built-in utils instead of 3rd-party tools
> for that job.
>
> regards
> roland
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync - no zvol replication ?

2020-03-03 Thread Roland @web.de

hello,

apparently pve-zsync does not seem to replicate ZFS zvols (but only
regular ZFS datasets)!?

since zvols are the default for VMs, I'm curious whether this is a bug or a
(missing) feature!?



I tried synchronizing the following way:


pve-zsync sync -dest pve-node2:hddpool/pve-node1-zsync/hddpool -source 
hddpool/vms -v


source:
root@pve-node1# zfs list -r hddpool/vms
NAME                        USED  AVAIL  REFER  MOUNTPOINT
hddpool/vms                34.2G  3.42T  30.6K  /hddpool/vms
hddpool/vms/vm-100-disk-0  34.2G  3.46T  1.18G  -

dest:
root@pve-node2# zfs list -r hddpool/pve-node1-zsync
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
hddpool/pve-node1-zsync               156K  3.44T  32.0K  /hddpool/pve-node1-zsync
hddpool/pve-node1-zsync/hddpool      61.3K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool
hddpool/pve-node1-zsync/hddpool/vms  30.6K  3.44T  30.6K  /hddpool/pve-node1-zsync/hddpool/vms


the documentation at https://pve.proxmox.com/wiki/PVE-zsync is unclear
about that.

I'm asking since I want to use built-in utils instead of 3rd-party tools
for that job.


regards
roland


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync broken?

2019-07-09 Thread Andreas Heinlein
Hello,

I have been trying to set up pve-zsync, with Host 2 (backup) pulling from
Host 1 (origin). This worked for the first VM, but failed for the second
because I misunderstood the maxsnap setting, which led to an out-of-space
condition on Host 1 because of too many snapshots.

I tried to clean up everything and could delete the snapshots from Host
1, but not from Host 2. I simply can't find the snapshots.

I created a subvol "rpool/backup" on Host 2, which I can see with 'zfs
list -t all'. But 'zfs list -t snapshot' only gives "No datasets found".
'zfs list -t all' also shows no snapshots. Shouldn't I at least be able
to see the snapshots from the first - still enabled and working - job?

Thanks,

Andreas
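
For reference, snapshots under a given dataset can also be listed recursively, which would rule out a simple display issue; a sketch, assuming the backup dataset is rpool/backup as described above:

zfs list -t snapshot -r rpool/backup
zfs list -t snapshot -o name,creation -r rpool

If both come back empty, the snapshots really are not there, and the still-working job may be writing to a different dataset than expected.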

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync + borgbackup "changed size" discrepancy

2019-03-02 Thread devzero
to answer this:

https://mail.python.org/pipermail/borgbackup/2019q1/001280.html


> Sent: Friday, 01 March 2019 at 12:50
> From: devz...@web.de
> To: pve-user@pve.proxmox.com
> Subject: [PVE-User] pve-zsync + borgbackup "changed size" discrepancy
>
> hi,
> I'm syncing a ZFS dataset with pve-zsync to another Proxmox host, and I also
> back up the same dataset with borgbackup via a temporary backup snapshot.
>
> I have shut down all VMs on the dataset and left only a single Win10 VM up
> and running.
>
> I see a huge discrepancy between what pve-zsync reports as "changed data"
> to be synced to the other node and what borgbackup reports regarding backup
> growth (see below). ("Deduplicated size" means: this is the size by which
> the archive grew after the data has been checksummed, deduplicated and
> compressed.)
>
> Does anybody have a clue why this differs by roughly a factor of 10?
>
> Is the pve-zsync estimation wrong?
> Is borgbackup's "overhead" so big when backing up large single qcow2 files?
> 
> regards
> roland
> 
> pve-zsync estimated size:
> total estimated size is 1.79M
> total estimated size is 22.9M
> total estimated size is 13.5M
> total estimated size is 24.4M
> total estimated size is 22.9M
> total estimated size is 24.1M
> total estimated size is 10.2M
> total estimated size is 20.9M
> total estimated size is 81.0M
> total estimated size is 53.3M
> total estimated size is 10.8M
> total estimated size is 14.1M
> total estimated size is 10.9M
> total estimated size is 189M
> total estimated size is 29.0M
> total estimated size is 21.2M
> total estimated size is 14.1M
> total estimated size is 13.1M
> total estimated size is 11.0M
> total estimated size is 14.1M
> total estimated size is 11.3M
> total estimated size is 14.9M
> total estimated size is 10.5M
> total estimated size is 14.2M
> total estimated size is 20.4M
> 
> BorgBackup growth
>                  Original size  Compressed size  Deduplicated size
> This archive:        111.69 GB         19.86 GB           95.31 MB
> This archive:        111.69 GB         19.86 GB          197.58 MB
> This archive:        111.69 GB         19.86 GB          119.40 MB
> This archive:        111.69 GB         19.86 GB          138.81 MB
> This archive:        111.69 GB         19.86 GB          180.30 MB
> This archive:        111.69 GB         19.86 GB          205.75 MB
> This archive:        111.69 GB         19.86 GB           76.22 MB
> This archive:        111.69 GB         19.86 GB          159.60 MB
> This archive:        111.69 GB         19.86 GB          636.18 MB
> This archive:        111.69 GB         19.86 GB          413.14 MB
> This archive:        111.69 GB         19.86 GB           83.86 MB
> This archive:        111.69 GB         19.86 GB          122.86 MB
> This archive:        111.69 GB         19.86 GB          108.17 MB
> This archive:        111.69 GB         19.86 GB          153.89 MB
> This archive:        111.69 GB         19.89 GB          946.72 MB
> This archive:        111.69 GB         19.89 GB          126.48 MB
> This archive:        111.69 GB         19.89 GB          139.23 MB
> This archive:        111.69 GB         19.89 GB          113.23 MB
> This archive:        111.69 GB         19.89 GB           67.53 MB
> This archive:        111.69 GB         19.89 GB          129.25 MB
> This archive:        111.69 GB         19.89 GB           74.28 MB
> This archive:        111.69 GB         19.89 GB          110.28 MB
> This archive:        111.69 GB         19.89 GB           80.15 MB
> This archive:        111.69 GB         19.89 GB           92.86 MB
> This archive:        111.69 GB         19.89 GB          160.57 MB
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync + borgbackup "changed size" discrepancy

2019-03-01 Thread devzero
hi,
I'm syncing a ZFS dataset with pve-zsync to another Proxmox host, and I also
back up the same dataset with borgbackup via a temporary backup snapshot.

I have shut down all VMs on the dataset and left only a single Win10 VM up
and running.

I see a huge discrepancy between what pve-zsync reports as "changed data"
to be synced to the other node and what borgbackup reports regarding backup
growth (see below). ("Deduplicated size" means: this is the size by which the
archive grew after the data has been checksummed, deduplicated and compressed.)

Does anybody have a clue why this differs by roughly a factor of 10?

Is the pve-zsync estimation wrong?
Is borgbackup's "overhead" so big when backing up large single qcow2 files?

regards
roland

pve-zsync estimated size:
total estimated size is 1.79M
total estimated size is 22.9M
total estimated size is 13.5M
total estimated size is 24.4M
total estimated size is 22.9M
total estimated size is 24.1M
total estimated size is 10.2M
total estimated size is 20.9M
total estimated size is 81.0M
total estimated size is 53.3M
total estimated size is 10.8M
total estimated size is 14.1M
total estimated size is 10.9M
total estimated size is 189M
total estimated size is 29.0M
total estimated size is 21.2M
total estimated size is 14.1M
total estimated size is 13.1M
total estimated size is 11.0M
total estimated size is 14.1M
total estimated size is 11.3M
total estimated size is 14.9M
total estimated size is 10.5M
total estimated size is 14.2M
total estimated size is 20.4M

BorgBackup growth
                 Original size  Compressed size  Deduplicated size
This archive:        111.69 GB         19.86 GB           95.31 MB
This archive:        111.69 GB         19.86 GB          197.58 MB
This archive:        111.69 GB         19.86 GB          119.40 MB
This archive:        111.69 GB         19.86 GB          138.81 MB
This archive:        111.69 GB         19.86 GB          180.30 MB
This archive:        111.69 GB         19.86 GB          205.75 MB
This archive:        111.69 GB         19.86 GB           76.22 MB
This archive:        111.69 GB         19.86 GB          159.60 MB
This archive:        111.69 GB         19.86 GB          636.18 MB
This archive:        111.69 GB         19.86 GB          413.14 MB
This archive:        111.69 GB         19.86 GB           83.86 MB
This archive:        111.69 GB         19.86 GB          122.86 MB
This archive:        111.69 GB         19.86 GB          108.17 MB
This archive:        111.69 GB         19.86 GB          153.89 MB
This archive:        111.69 GB         19.89 GB          946.72 MB
This archive:        111.69 GB         19.89 GB          126.48 MB
This archive:        111.69 GB         19.89 GB          139.23 MB
This archive:        111.69 GB         19.89 GB          113.23 MB
This archive:        111.69 GB         19.89 GB           67.53 MB
This archive:        111.69 GB         19.89 GB          129.25 MB
This archive:        111.69 GB         19.89 GB           74.28 MB
This archive:        111.69 GB         19.89 GB          110.28 MB
This archive:        111.69 GB         19.89 GB           80.15 MB
This archive:        111.69 GB         19.89 GB           92.86 MB
This archive:        111.69 GB         19.89 GB          160.57 MB
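
One way to cross-check the pve-zsync side would be to compare the estimate of an incremental send against the actual stream size: zfs send -nv only prints the estimate, while piping a real send through wc -c counts the bytes that actually leave the host. A sketch with made-up dataset and snapshot names:

zfs send -nv -i tank/backup@rep_prev tank/backup@rep_new
zfs send -i tank/backup@rep_prev tank/backup@rep_new | wc -c

Borg chunks the qcow2 file contents with its own algorithm, so its "deduplicated size" measures something different from a ZFS block-level delta and is not expected to match it exactly.
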
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync issues

2019-01-22 Thread Miguel González
Hi,

  I have two servers running Proxmox 5.3-6. Both run several VMs, and I
am using pve-zsync to sync two machines from server1 to server2 for
disaster recovery and offline backups.

  This has been working without issue with two proxmox servers running
5.1-46. I have just replaced them with two new servers.

  I have two jobs: one is reporting that it has to send the full batch,
and the other one is reporting a failure. Snapshots on the backup server
show 0B.

  root@server1:~# pve-zsync status
SOURCE   NAME STATUS   
100  plesk1   error
102  cpanel1  ok

root@server2:~# zfs list -t snapshot
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
rpool/data/vm-100-disk-0@rep_plesk1_2019-01-21_22:30:03     0B      -  20.4G  -
rpool/data/vm-100-disk-1@rep_plesk1_2019-01-21_22:30:03     0B      -  67.3G  -
rpool/data/vm-100-disk-2@rep_plesk1_2019-01-21_22:30:03     0B      -  92.9G  -
rpool/data/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01    0B      -  20.0G  -
rpool/data/vm-102-disk-1@rep_cpanel1_2019-01-22_01:00:01    0B      -  60.4G  -

root@server1:~# zfs list -t snapshot
NAME                                                  USED  AVAIL  REFER  MOUNTPOINT
rpool/vm-100-disk-0@rep_plesk1_2019-01-19_22:47:37    597M  -  20.0G  -
rpool/vm-100-disk-0@rep_plesk1_2019-01-20_11:22:21    482M  -  20.1G  -
rpool/vm-100-disk-0@rep_plesk1_2019-01-21_22:05:08    121M  -  20.4G  -
rpool/vm-100-disk-0@rep_plesk1_2019-01-21_22:30:03    117M  -  20.4G  -
rpool/vm-100-disk-1@rep_plesk1_2019-01-19_22:47:37   9.68G  -  67.1G  -
rpool/vm-100-disk-1@rep_plesk1_2019-01-20_11:22:21   9.49G  -  67.2G  -
rpool/vm-100-disk-1@rep_plesk1_2019-01-21_22:30:03   4.84G  -  67.3G  -
rpool/vm-100-disk-2@rep_plesk1_2019-01-19_22:47:37    519M  -  92.9G  -
rpool/vm-100-disk-2@rep_plesk1_2019-01-20_11:22:21    335M  -  92.9G  -
rpool/vm-100-disk-2@rep_plesk1_2019-01-21_22:30:03    517M  -  92.9G  -
rpool/vm-102-disk-0@rep_cpanel1_2019-01-20_01:00:01  1.87G  -  20.1G  -
rpool/vm-102-disk-0@rep_cpanel1_2019-01-21_01:00:04  1.21G  -  20.1G  -
rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01  1.25G  -  20.0G  -
rpool/vm-102-disk-1@rep_cpanel1_2019-01-20_01:00:01  4.94G  -  60.5G  -
rpool/vm-102-disk-1@rep_cpanel1_2019-01-21_01:00:04  3.97G  -  60.5G  -
rpool/vm-102-disk-1@rep_cpanel1_2019-01-22_01:00:01  3.31G  -  60.4G  -

The nightly jobs report different things:

cpanel1 VM:

WARN: COMMAND:
ssh root@server2 -- zfs list -rt snapshot -Ho name rpool/data/vm-102-disk-0@rep_cpanel1_2019-01-20_01:00:01
GET ERROR:
cannot open 'rpool/data/vm-102-disk-0@rep_cpanel1_2019-01-20_01:00:01': dataset does not exist
full send of rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01 estimated size is 29.7G
total estimated size is 29.7G
TIME       SENT    SNAPSHOT
01:00:03   23.8M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
01:00:04   54.3M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
01:00:05   84.7M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
01:00:06   115M    rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01

and it had to send both disks in full, which I don't understand why.

plesk1 VM:

WARN: COMMAND:
ssh root@server2 -- zfs list -rt snapshot -Ho name rpool/data/vm-100-disk-0@rep_plesk1_2019-01-19_22:47:37
GET ERROR:
cannot open 'rpool/data/vm-100-disk-0@rep_plesk1_2019-01-19_22:47:37': dataset does not exist
full send of rpool/vm-100-disk-0@rep_plesk1_2019-01-22_01:58:55 estimated size is 28.4G
total estimated size is 28.4G
TIME       SENT    SNAPSHOT
COMMAND:
zfs send -v -- rpool/vm-100-disk-0@rep_plesk1_2019-01-22_01:58:55 | ssh -o 'BatchMode=yes' root@37.187.154.74 -- zfs recv -F -- rpool/data/vm-100-disk-0
GET ERROR:
cannot receive new filesystem stream: destination has snapshots (eg. rpool/data/vm-100-disk-0)
must destroy them to overwrite it

Job --source 100 --name plesk1 got an ERROR!!!
ERROR Message:




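If the copies on server2 are expendable, one possible way out of this state is to remove the conflicting destination snapshots (or the whole destination dataset) so that the next run can start with a clean full send; a destructive sketch, not a recommendation:

root@server2:~# zfs list -t snapshot -r rpool/data/vm-100-disk-0
root@server2:~# zfs destroy -r rpool/data/vm-100-disk-0

Whether that is acceptable depends on whether those copies are still needed as backups.
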
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync error in path

2018-06-23 Thread Christian Meiring
Hello Tonči,

You use -dest instead of --dest, or is this a typo in your mail only?
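
For comparison, the create call with both options spelled in long form would be as follows (only the option prefix changed, everything else as in the original command; whether the single-dash form really triggers "ERROR: in path" is exactly the open question here):

pve-zsync create --source 70020 --dest 10.20.28.4:bckpool/backup --verbose --maxsnap 7 --name bck1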

Christian

On Saturday, 23 June 2018 at 22:57:59 CEST, Tonči Stipičević wrote:
> Hello to all,
> unfortunately I'm experiencing this "ERROR: in path" issue and cannot
> figure out why. My scenario is:
> My hosts form a 3-node cluster. I want to use pve-zsync in order to do
> periodic, fast incremental backups, which is not possible to achieve
> with classic vzdump.
> 
> 1. this is the source vm conf:
> bootdisk: virtio0
> cores: 2
> ide2: rn314:iso/ubuntu-16.04.3-server-amd64.iso,media=cdrom
> memory: 1024
> name: u1604server-zfs
> net0: virtio=1A:5D:FC:7C:FC:98,bridge=vmbr1
> numa: 0
> ostype: l26
> scsihw: virtio-scsi-pci
> smbios1: uuid=ba277e4f-9bc4-455b-bf9c-a8c96bb32391
> sockets: 1
> vga: qxl
> virtio0: zfs1:vm-70020-disk-1,size=7G
> /etc/pve/qemu-server/70020.conf (END)
> 
> 2. this is the destination zpool (on server 10.20.28.4):
> root@pvesuma03:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> bckpool 444K 449G 96K /bckpool
> bckpool/backup 96K 449G 96K /bckpool/backup
> root@pvesuma03:~#
> 
> 
> 
> 3. this is the command I run on the "source" node:
> root@pvesuma01:~# pve-zsync create -source 70020 -dest
> 10.20.28.4:bckpool/backup --verbose --maxsnap 7 --name bck1
> ERROR: in path
> root@pvesuma01:~#
> 
> latest pve is running
> 
> proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve) pve-manager: 5.2-2
> (running version: 5.2-2/b1d1c7f4) pve-kernel-4.15: 5.2-3
> pve-kernel-4.13: 5.1-45 pve-kernel-4.15.17-3-pve: 4.15.17-13
> pve-kernel-4.15.17-2-pve: 4.15.17-10 pve-kernel-4.15.17-1-pve: 4.15.17-9
> pve-kernel-4.15.15-1-pve: 4.15.15-6 pve-kernel-4.13.16-3-pve: 4.13.16-49
> pve-kernel-4.13.16-2-pve: 4.13.16-48 pve-kernel-4.13.16-1-pve:
> 4.13.16-46 pve-kernel-4.13.13-6-pve: 4.13.13-42
> pve-kernel-4.13.13-5-pve: 4.13.13-38 pve-kernel-4.13.13-4-pve:
> 4.13.13-35 pve-kernel-4.13.13-2-pve: 4.13.13-33
> pve-kernel-4.13.13-1-pve: 4.13.13-31 pve-kernel-4.13.8-3-pve: 4.13.8-30
> pve-kernel-4.13.8-2-pve: 4.13.8-28 pve-kernel-4.13.4-1-pve: 4.13.4-26
> pve-kernel-4.10.17-4-pve: 4.10.17-24 pve-kernel-4.10.17-3-pve:
> 4.10.17-23 pve-kernel-4.10.17-2-pve: 4.10.17-20
> pve-kernel-4.10.17-1-pve: 4.10.17-18 pve-kernel-4.10.15-1-pve:
> 4.10.15-15 pve-kernel-4.4.67-1-pve: 4.4.67-92 pve-kernel-4.4.62-1-pve:
> 4.4.62-88 pve-kernel-4.4.59-1-pve: 4.4.59-87 pve-kernel-4.4.49-1-pve:
> 4.4.49-86 pve-kernel-4.4.44-1-pve: 4.4.44-84 pve-kernel-4.4.40-1-pve:
> 4.4.40-82 pve-kernel-4.4.35-2-pve: 4.4.35-79 pve-kernel-4.4.35-1-pve:
> 4.4.35-77 pve-kernel-4.4.24-1-pve: 4.4.24-72 pve-kernel-4.4.21-1-pve:
> 4.4.21-71 pve-kernel-4.4.19-1-pve: 4.4.19-66 pve-kernel-4.4.16-1-pve:
> 4.4.16-64 pve-kernel-4.4.15-1-pve: 4.4.15-60 pve-kernel-4.4.13-2-pve:
> 4.4.13-58 pve-kernel-4.4.13-1-pve: 4.4.13-56 pve-kernel-4.4.10-1-pve:
> 4.4.10-54 pve-kernel-4.2.6-1-pve: 4.2.6-36 corosync: 2.4.2-pve5 criu:
> 2.11.1-1~bpo90 glusterfs-client: 3.8.8-1 ksm-control-daemon: 1.2-2
> libjs-extjs: 6.0.1-2 libpve-access-control: 5.0-8 libpve-apiclient-perl:
> 2.0-4 libpve-common-perl: 5.0-33 libpve-guest-common-perl: 2.0-16
> libpve-http-server-perl: 2.0-9 libpve-storage-perl: 5.0-23 libqb0:
> 1.0.1-1 lvm2: 2.02.168-pve6 lxc-pve: 3.0.0-3 lxcfs: 3.0.0-1 novnc-pve:
> 1.0.0-1 openvswitch-switch: 2.7.0-2 proxmox-widget-toolkit: 1.0-19
> pve-cluster: 5.0-27 pve-container: 2.0-23 pve-docs: 5.2-4 pve-firewall:
> 3.0-12 pve-firmware: 2.0-4 pve-ha-manager: 2.0-5 pve-i18n: 1.0-6
> pve-libspice-server1: 0.12.8-3 pve-qemu-kvm: 2.11.1-5 pve-xtermjs: 1.0-5
> pve-zsync: 1.6-16 qemu-server: 5.0-28 smartmontools: 6.5+svn4324-1
> spiceterm: 3.0-5 vncterm: 1.5-3 zfsutils-linux: 0.7.9-pve1~bpo9
> 
> 
> I have no idea so far what could cause such response/error
> 
> Thank you very much in advance for your help
> 
> BR
> Tonci
> 
> 
> 
> 
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync error in path

2018-06-23 Thread Tonči Stipičević

Hello to all,
unfortunately I'm experiencing this "ERROR: in path" issue and cannot
figure out why. My scenario is:
My hosts form a 3-node cluster. I want to use pve-zsync in order to do
periodic, fast incremental backups, which is not possible to achieve
with classic vzdump.


1. this is the source vm conf:
bootdisk: virtio0
cores: 2
ide2: rn314:iso/ubuntu-16.04.3-server-amd64.iso,media=cdrom
memory: 1024
name: u1604server-zfs
net0: virtio=1A:5D:FC:7C:FC:98,bridge=vmbr1
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=ba277e4f-9bc4-455b-bf9c-a8c96bb32391
sockets: 1
vga: qxl
virtio0: zfs1:vm-70020-disk-1,size=7G
/etc/pve/qemu-server/70020.conf (END)

2. this is the destination zpool (on server 10.20.28.4):
root@pvesuma03:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
bckpool 444K 449G 96K /bckpool
bckpool/backup 96K 449G 96K /bckpool/backup
root@pvesuma03:~#



3. this is the command I run on the "source" node:
root@pvesuma01:~# pve-zsync create -source 70020 -dest 
10.20.28.4:bckpool/backup --verbose --maxsnap 7 --name bck1

ERROR: in path
root@pvesuma01:~#

latest pve is running

proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve) pve-manager: 5.2-2 
(running version: 5.2-2/b1d1c7f4) pve-kernel-4.15: 5.2-3 
pve-kernel-4.13: 5.1-45 pve-kernel-4.15.17-3-pve: 4.15.17-13 
pve-kernel-4.15.17-2-pve: 4.15.17-10 pve-kernel-4.15.17-1-pve: 4.15.17-9 
pve-kernel-4.15.15-1-pve: 4.15.15-6 pve-kernel-4.13.16-3-pve: 4.13.16-49 
pve-kernel-4.13.16-2-pve: 4.13.16-48 pve-kernel-4.13.16-1-pve: 
4.13.16-46 pve-kernel-4.13.13-6-pve: 4.13.13-42 
pve-kernel-4.13.13-5-pve: 4.13.13-38 pve-kernel-4.13.13-4-pve: 
4.13.13-35 pve-kernel-4.13.13-2-pve: 4.13.13-33 
pve-kernel-4.13.13-1-pve: 4.13.13-31 pve-kernel-4.13.8-3-pve: 4.13.8-30 
pve-kernel-4.13.8-2-pve: 4.13.8-28 pve-kernel-4.13.4-1-pve: 4.13.4-26 
pve-kernel-4.10.17-4-pve: 4.10.17-24 pve-kernel-4.10.17-3-pve: 
4.10.17-23 pve-kernel-4.10.17-2-pve: 4.10.17-20 
pve-kernel-4.10.17-1-pve: 4.10.17-18 pve-kernel-4.10.15-1-pve: 
4.10.15-15 pve-kernel-4.4.67-1-pve: 4.4.67-92 pve-kernel-4.4.62-1-pve: 
4.4.62-88 pve-kernel-4.4.59-1-pve: 4.4.59-87 pve-kernel-4.4.49-1-pve: 
4.4.49-86 pve-kernel-4.4.44-1-pve: 4.4.44-84 pve-kernel-4.4.40-1-pve: 
4.4.40-82 pve-kernel-4.4.35-2-pve: 4.4.35-79 pve-kernel-4.4.35-1-pve: 
4.4.35-77 pve-kernel-4.4.24-1-pve: 4.4.24-72 pve-kernel-4.4.21-1-pve: 
4.4.21-71 pve-kernel-4.4.19-1-pve: 4.4.19-66 pve-kernel-4.4.16-1-pve: 
4.4.16-64 pve-kernel-4.4.15-1-pve: 4.4.15-60 pve-kernel-4.4.13-2-pve: 
4.4.13-58 pve-kernel-4.4.13-1-pve: 4.4.13-56 pve-kernel-4.4.10-1-pve: 
4.4.10-54 pve-kernel-4.2.6-1-pve: 4.2.6-36 corosync: 2.4.2-pve5 criu: 
2.11.1-1~bpo90 glusterfs-client: 3.8.8-1 ksm-control-daemon: 1.2-2 
libjs-extjs: 6.0.1-2 libpve-access-control: 5.0-8 libpve-apiclient-perl: 
2.0-4 libpve-common-perl: 5.0-33 libpve-guest-common-perl: 2.0-16 
libpve-http-server-perl: 2.0-9 libpve-storage-perl: 5.0-23 libqb0: 
1.0.1-1 lvm2: 2.02.168-pve6 lxc-pve: 3.0.0-3 lxcfs: 3.0.0-1 novnc-pve: 
1.0.0-1 openvswitch-switch: 2.7.0-2 proxmox-widget-toolkit: 1.0-19 
pve-cluster: 5.0-27 pve-container: 2.0-23 pve-docs: 5.2-4 pve-firewall: 
3.0-12 pve-firmware: 2.0-4 pve-ha-manager: 2.0-5 pve-i18n: 1.0-6 
pve-libspice-server1: 0.12.8-3 pve-qemu-kvm: 2.11.1-5 pve-xtermjs: 1.0-5 
pve-zsync: 1.6-16 qemu-server: 5.0-28 smartmontools: 6.5+svn4324-1 
spiceterm: 3.0-5 vncterm: 1.5-3 zfsutils-linux: 0.7.9-pve1~bpo9



I have no idea so far what could cause such a response/error

Thank you very much in advance for your help

BR
Tonci






___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync processes

2018-03-23 Thread Thomas Lamprecht
On 3/21/18 1:51 PM, Wolfgang Link wrote:
> 
>> So does this mean that all those processes are sitting in a "queue" waiting
>> to execute? wouldn't it be more sensible for the script to terminate if a
>> process is already running for the same job?
>>
> No, because as I wrote, 15 minutes is the default, but we have many users
> with longer intervals, like 1 day.
> If you quit the process, you would skip one day.
> 

But that could simply be solved by allowing only one process to be queued,
i.e., if one is active and the next one starts, it waits. If the next(-next)
one starts while the former is still waiting, it just exits immediately.
With this we'd never miss a sync cycle and have a maximal queue length of 1,
not 500 or more...
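
The "at most one waiter" idea could be sketched with two flock(1) locks around the cron command (a hypothetical wrapper, not something pve-zsync does today; job name and paths are placeholders):

# only one process may wait; any further ones exit immediately
exec 9>/run/lock/pve-zsync-job1.queue
flock -n 9 || exit 0
# block here until the currently running sync (if any) has finished
exec 8>/run/lock/pve-zsync-job1.run
flock 8
# free the waiter slot before doing the actual work
flock -u 9
pve-zsync sync --source 100 --dest backuphost:rpool/backup --name job1 --maxsnap 7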

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync processes

2018-03-21 Thread Wolfgang Link

> So does this mean that all those processes are sitting in a "queue" waiting
> to execute? wouldn't it be more sensible for the script to terminate if a
> process is already running for the same job?
> 
No, because as I wrote, 15 minutes is the default, but we have many users
with longer intervals, like 1 day.
If you quit the process, you would skip one day.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync processes

2018-03-21 Thread Mark Adams
Hi Wolfgang,

So does this mean that all those processes are sitting in a "queue" waiting
to execute? wouldn't it be more sensible for the script to terminate if a
process is already running for the same job?

Regards,
Mark

On 21 March 2018 at 12:40, Wolfgang Link  wrote:

> Hi,
>
> this indicates that the sync interval is too low.
> cron forks a pve-zsync process every 15 minutes (the default).
> If the former pve-zsync process is not finished, the new one will wait until
> the former process is done.
>
> You should raise your sync interval; this can be done in
> /etc/cron.d/pve-zsync.
>
> Best Regards,
>
> Wolfgang Link
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync processes

2018-03-21 Thread Wolfgang Link
Hi,

this indicates that the sync interval is too low.
cron forks a pve-zsync process every 15 minutes (the default).
If the former pve-zsync process is not finished, the new one will wait until
the former process is done.

You should raise your sync interval; this can be done in
/etc/cron.d/pve-zsync.
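
For example, the leading schedule field of a job line could be changed from every 15 minutes to hourly, roughly like this (the job options shown are placeholders, not the exact line pve-zsync writes):

# /etc/cron.d/pve-zsync - before: every 15 minutes
*/15 * * * * root pve-zsync sync --source 100 --dest backuphost:rpool/backup --name job1 --maxsnap 7
# after: once per hour
0 * * * * root pve-zsync sync --source 100 --dest backuphost:rpool/backup --name job1 --maxsnap 7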

Best Regards,

Wolfgang Link

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync processes

2018-03-21 Thread Mark Adams
Hi All,

I've been using pve-zsync for a few months - it seems to work pretty well.

However, I have just noticed it doesn't seem to be terminating itself
correctly. at present I have around 800 pve-zsync processes (sleeping)
which all seems to be duplicates. (I would expect 1 per VMID?)

Has anyone noticed this behaviour? any idea why or how to stop it?

Best Regards,
Mark
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync log

2018-03-08 Thread Wolfgang Link
Hello,

> Does anyone know a decent way of logging pve-zsync status? For failure or how 
> long it took to run the sync?
All jobs are executed by cron. The Proxmox VE host default setting is: if a
cron job generates output, this output is sent to the root email address.

But you can configure cron as you need it: send an email or write to syslog.

Extra logging can be done if you edit the cron jobs in /etc/cron.d/pve-zsync and
give all pve-zsync jobs a -verbose parameter.
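
A sketch of such an entry, with the verbose output additionally piped to syslog instead of landing in the root mailbox (job name and destination are placeholders):

*/15 * * * * root pve-zsync sync --source 100 --dest backuphost:rpool/backup --name job1 --maxsnap 7 --verbose 2>&1 | logger -t pve-zsync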

I see this is an undocumented feature so I will send a patch.

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync log

2018-03-08 Thread Daniel Bayerdorffer
I would be interested in this too.

--
Daniel B

- Original Message -
From: "Johns, Daniel (GPK)" 
To: pve-user@pve.proxmox.com
Sent: Thursday, March 8, 2018 9:55:21 AM
Subject: [PVE-User] pve-zsync log

Hello,
Does anyone know a decent way of logging pve-zsync status? For failure or how 
long it took to run the sync?

Thanks

-Daniel J
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync log

2018-03-08 Thread Johns, Daniel (GPK)
Hello,
Does anyone know a decent way of logging pve-zsync status? For failure or how 
long it took to run the sync?

Thanks

-Daniel J
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Pve-zsync available size

2018-02-15 Thread Wolfgang Link
> Wolfgang Link wrote on 15 February 2018 at 14:49:
> 
> 
> > Yes subvolume like rpool/data/subvol-100-disk1
> We do not replicate the file system properties,
> because you have to restore them manually anyway, and so you can set the
> refquota at that step.
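
So after restoring a container subvolume on the target side, the quota would simply be set again by hand, e.g. (dataset name taken from the example above, the size is arbitrary):

zfs set refquota=8G rpool/data/subvol-100-disk1
zfs get refquota rpool/data/subvol-100-disk1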

Best Regards,

Wolfgang Link

w.l...@proxmox.com
http://www.proxmox.com


Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Pve-zsync available size

2018-02-15 Thread Wolfgang Link
Hi,
> Jérémy Carnus wrote on 15 February 2018 at 12:02:
> 
> 
> Hi
> I just noticed that using the pve-zsync tool to replicate the ZFS pool to
> another server doesn't keep the available size on the pool. Is this intended?
> How does Proxmox 5 manage ZFS size? With quotas?
pve-zsync does not sync the whole pool.
Do you mean subvolumes for LXC?
> 
> Thanks
> 
> Jérémy Carnus
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Best Regards,

Wolfgang Link

w.l...@proxmox.com
http://www.proxmox.com


Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Pve-zsync available size

2018-02-15 Thread Jérémy Carnus
Hi
I just noticed that using the pve-zsync tool to replicate the ZFS pool to
another server doesn't keep the available size on the pool. Is this intended?
How does Proxmox 5 manage ZFS size? With quotas?

Thanks

Jérémy Carnus

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync

2017-03-10 Thread Luis G. Coralle
It still says:

root@pve4:~# zfs destroy -rf rpool/data/vm-107-disk-1
cannot destroy 'rpool/data/vm-107-disk-1': dataset already exists

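One known cause of exactly this message, reported elsewhere, is a hidden temporary clone left behind by an interrupted zfs receive; if that is what happened here, something along these lines might clear it (untested in this thread, use with care):

zfs destroy rpool/data/vm-107-disk-1/%recv
zfs destroy -r rpool/data/vm-107-disk-1
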
2017-03-10 3:53 GMT-03:00 Wolfgang Link :

> Hi,
>
> You can't destroy datasets where snapshots exist.
>
> zfs list -t all
> will show you all datasets
>
> and zfs destroy -R will erase all datasets that are referenced by this
> given set.
>
> On 03/09/2017 08:35 PM, Luis G. Coralle wrote:
> > Hi.
> > I was trying to sync two VMs with pve-zsync between two nodes with PVE 4.2
> > After completing the tests, I could not remove rpool/data/vm-107-disk-1
> and
> > rpool/data/vm-17206-disk-1.
> > How can I remove them?
> >
> > root@pve4:~# zfs list
> > NAME                         USED  AVAIL  REFER  MOUNTPOINT
> > rpool                        332G  2.23T   140K  /rpool
> > rpool/ROOT                   194G  2.23T   140K  /rpool/ROOT
> > rpool/ROOT/pve-1             194G  2.23T   194G  /
> > rpool/STORAGE               49.1G  2.23T  49.1G  /rpool/STORAGE
> > rpool/data                  79.5G  2.23T   140K  /rpool/data
> > rpool/data/vm-101-disk-1    3.04G  2.23T  3.04G  -
> > rpool/data/vm-103-disk-1    68.6G  2.23T  68.6G  -
> > rpool/data/vm-107-disk-1    5.23G  2.23T  2.62G  -
> > rpool/data/vm-17206-disk-1  2.62G  2.23T    93K  -
> > rpool/swap                  8.50G  2.24T  99.2M  -
> >
> > root@pve4:~# zfs destroy rpool/data/vm-17206-disk-1
> > cannot destroy 'rpool/data/vm-17206-disk-1': dataset already exists
> >
> > root@pve4:~# zfs destroy rpool/data/vm-107-disk-1
> > cannot destroy 'rpool/data/vm-107-disk-1': dataset already exists
> >
> >
> >
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 
Luis G. Coralle
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync

2017-03-09 Thread Wolfgang Link
Hi,

You can't destroy datasets where snapshots exist.

zfs list -t all
will show you all datasets

and zfs destroy -R will erase all datasets that are referenced by this
given set.
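
Spelled out for the datasets in question, that would be something like:

zfs list -t all -r rpool/data/vm-107-disk-1 rpool/data/vm-17206-disk-1
zfs destroy -R rpool/data/vm-107-disk-1

Note that -R also removes clones depending on those snapshots, so the list output is worth checking first.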

On 03/09/2017 08:35 PM, Luis G. Coralle wrote:
> Hi.
> I was trying to sync two VMs with pve-zsync between two nodes with PVE 4.2
> After completing the tests, I could not remove rpool/data/vm-107-disk-1 and
> rpool/data/vm-17206-disk-1.
> How can I remove them?
> 
> root@pve4:~# zfs list
> NAME                         USED  AVAIL  REFER  MOUNTPOINT
> rpool                        332G  2.23T   140K  /rpool
> rpool/ROOT                   194G  2.23T   140K  /rpool/ROOT
> rpool/ROOT/pve-1             194G  2.23T   194G  /
> rpool/STORAGE               49.1G  2.23T  49.1G  /rpool/STORAGE
> rpool/data                  79.5G  2.23T   140K  /rpool/data
> rpool/data/vm-101-disk-1    3.04G  2.23T  3.04G  -
> rpool/data/vm-103-disk-1    68.6G  2.23T  68.6G  -
> rpool/data/vm-107-disk-1    5.23G  2.23T  2.62G  -
> rpool/data/vm-17206-disk-1  2.62G  2.23T    93K  -
> rpool/swap                  8.50G  2.24T  99.2M  -
> 
> root@pve4:~# zfs destroy rpool/data/vm-17206-disk-1
> cannot destroy 'rpool/data/vm-17206-disk-1': dataset already exists
> 
> root@pve4:~# zfs destroy rpool/data/vm-107-disk-1
> cannot destroy 'rpool/data/vm-107-disk-1': dataset already exists
> 
> 
> 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync

2017-03-09 Thread Luis G. Coralle
Hi.
I was trying to sync two VMs with pve-zsync between two nodes with PVE 4.2
After completing the tests, I could not remove rpool/data/vm-107-disk-1 and
rpool/data/vm-17206-disk-1.
How can I remove them?

root@pve4:~# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                        332G  2.23T   140K  /rpool
rpool/ROOT                   194G  2.23T   140K  /rpool/ROOT
rpool/ROOT/pve-1             194G  2.23T   194G  /
rpool/STORAGE               49.1G  2.23T  49.1G  /rpool/STORAGE
rpool/data                  79.5G  2.23T   140K  /rpool/data
rpool/data/vm-101-disk-1    3.04G  2.23T  3.04G  -
rpool/data/vm-103-disk-1    68.6G  2.23T  68.6G  -
rpool/data/vm-107-disk-1    5.23G  2.23T  2.62G  -
rpool/data/vm-17206-disk-1  2.62G  2.23T    93K  -
rpool/swap                  8.50G  2.24T  99.2M  -

root@pve4:~# zfs destroy rpool/data/vm-17206-disk-1
cannot destroy 'rpool/data/vm-17206-disk-1': dataset already exists

root@pve4:~# zfs destroy rpool/data/vm-107-disk-1
cannot destroy 'rpool/data/vm-107-disk-1': dataset already exists



-- 
Luis G. Coralle
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-zsync general question

2017-02-16 Thread Jérémy Carnus
Also, another question: the vzdump tool does a sync to be sure there is
nothing in memory before creating the snapshot. Does pve-zsync have the
same behavior?

Jeremy 

On 2017-02-16 13:26, Jérémy Carnus wrote:

> Hi 
> 
> I just discovered the pve-zsync tool, which seems pretty good for performing
> remote backups.
>
> I just wanted to know if the tool has a restore feature? And does it
> work with LXC currently?
>
> Also, is the underlying zfs send/receive done incrementally? I
> have pretty large VMs ;) and want to avoid transferring all the data each night ;)
> 
> Thanks

-- 
--
Jérémy Carnus 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync general question

2017-02-16 Thread Jérémy Carnus
Hi 

I just discovered the pve-zsync tool, which seems pretty good for performing
remote backups.

I just wanted to know if the tool has a restore feature? And does it
work with LXC currently?

Also, is the underlying zfs send/receive done incrementally? I have pretty
large VMs ;) and want to avoid transferring all the data each night ;)

Thanks

-- 
--
Jérémy Carnus 
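
Regarding the incremental question: snapshot-based ZFS replication of this kind conceptually boils down to an incremental zfs send/receive between the last common snapshot and the new one, along the lines of the following generic sketch (placeholder names, not the exact invocation pve-zsync uses):

zfs send -i rpool/data/vm-100-disk-0@prev rpool/data/vm-100-disk-0@new | ssh root@backuphost zfs recv rpool/backup/vm-100-disk-0

so only the blocks changed since the previous snapshot travel over the wire.
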
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-zsync -> source VM does not exist

2015-09-25 Thread Tonci Stipicevic

Hello to all

I'm trying to set up ZFS VM sync but got a strange system response.

1. I have a 2-node cluster
- both nodes have a local ZFS tank, and vm-8001 has its image on the local
ZFS tank.


root@pmx01:~# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
mytank                 15.5G   434G    96K  none
mytank/vm-8001-disk-1  15.5G   434G  15.2G  -


root@pmx01:~# qm list
  VMID NAME       STATUS   MEM(MB)  BOOTDISK(GB)  PID
   100 w2k8       stopped     4096         64.00    0
...
  2114 wxp1       stopped     3072         20.01    0
  2118 zmaj00     stopped     1048         32.00    0

  8001 vm1-zfs    stopped      512         15.01    0  **

  9001 w2k12-std  stopped     4096         32.00    0




but when I try to start the first sync, I get this error message:

root@pmx01:~# pve-zsync sync --source 8001 --dest 10.20.28.3:mytank 
--verbose --maxsnap 2

VM 8001 doesn't exist

root@pmx01:~#

So I haven't got too far ... :-(

VM 8001 is working properly

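As a possible workaround (untested), the disk dataset could also be given directly as the source instead of the VM ID, since its name is known from the zfs list output above:

root@pmx01:~# pve-zsync sync --source mytank/vm-8001-disk-1 --dest 10.20.28.3:mytank --verbose --maxsnap 2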

Please help !

Thank you very much in advance and

BR

Tonci
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user