Re: [PVE-User] disk parameter "backup=no" is now "backup=0"?

2016-09-13 Thread Nicola Ferrari (#554252)
On 12/09/2016 09:00, Fabian Grünbichler wrote: you don't need to update your configs, but PVE will set the boolean options to 1 (or 0, which are the internal representations) when writing the config file. Thanks Fabian for your feedback! N -- Linux User #554252
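For reference, a minimal sketch of how such a disk line can look once PVE has normalized the boolean (VM ID, storage name and disk path are made up for illustration):
    # /etc/pve/qemu-server/100.conf (illustrative excerpt)
    virtio0: local:100/vm-100-disk-1.qcow2,backup=0,size=32G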

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Yannis Milios
If your backup target is an NFS server, try to mount it with vers=3 instead of 4. On Monday, 12 September 2016, Miguel González wrote: > Hi, > > I have a software RAID of 2 TB HDs. I have upgraded from 3.4 to 4.2 > and migrated the VMs that I had. > > I have realized backups are taking three
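A minimal sketch of forcing NFSv3 for a backup storage defined in /etc/pve/storage.cfg (storage name, server address and export path are assumptions, not taken from the thread):
    nfs: nfs-backup
        server 192.168.1.10
        export /export/backups
        path /mnt/pve/nfs-backup
        content backup
        options vers=3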

Re: [PVE-User] Restoring VM on ZFS-thin storage

2016-09-13 Thread Yannis Milios
Can't answer your question directly since I'm not aware of PVE backup internals, however as a workaround you could try the following. If your ZFS volumes (where the VMs reside) have compression enabled, you could reclaim unused space by: - running sdelete on Windows VMs - creating a zero filled file via dd on lin
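A rough sketch of that zero-fill approach (file names are arbitrary; running the disk full with zeros is intentional and the file is removed afterwards):
    # inside a Linux guest: fill free space with zeros, then delete the file
    dd if=/dev/zero of=/zerofill bs=1M ; sync ; rm -f /zerofill ; sync
    # inside a Windows guest (Sysinternals sdelete): zero the free space on C:
    sdelete -z c: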

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Dietmar Maurer
> I have realized backups are taking three times longer than before. I used to > get 30 Mb/s on average for a backup and now I get around 10 Mb/s. The > performance seems to drop after start. Also, what kind of storage/fs do you use for VM images and backup storage? Maybe you simply use other mount opt

Re: [PVE-User] Migrating a vmdk from NAS to LVM

2016-09-13 Thread Dhaussy Alexandre
On 13/09/2016 at 05:30, Alexandre DERUMIER wrote: > ok, this could be a fast workaround to implement > # qemu-img info -f vmdk /nas/proxmox/testmox2/testmox2.vmdk > Yes, I have made a quick workaround that seems to work. Not sure if it's the way to go.. --- /tmp/Plugin.pm  2016-09-13 10:24:11.
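For context, a sketch of inspecting the vmdk explicitly as vmdk and then importing it into an LVM volume by hand (the target LV path and VM ID are assumptions, not taken from the thread):
    qemu-img info -f vmdk /nas/proxmox/testmox2/testmox2.vmdk
    # assuming a logical volume of sufficient size already exists on the target VG:
    qemu-img convert -f vmdk -O raw /nas/proxmox/testmox2/testmox2.vmdk /dev/pve/vm-101-disk-1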

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread miguel gonzalez
Sorry, I forgot. I use local storage with ext4 for the VMs and backups. Before, in 3.4, I had ext3. Many thanks Dietmar Maurer wrote: >> I have realized backups are taking three times longer than before. I used to >> get 30 Mb/s on average for a backup and now I get around 10 Mb/s. The >> performance s

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Yannis Milios
Do you use a single disk or a RAID array as backup storage? What's the output of: 'cat /proc/mounts | grep ext4' ? On Tue, Sep 13, 2016 at 9:35 AM, miguel gonzalez wrote: > Sorry, I forgot. I use local storage with ext4 for the VMs and backups. > > Before, in 3.4, I had ext3. > > Many thanks

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Dietmar Maurer
> Before, in 3.4, I had ext3. Old ext3 code used 'barrier=0' by default; ext4 uses 'barrier=1' by default.
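To illustrate what that option looks like, a hypothetical /etc/fstab line mounting ext4 with write barriers disabled (device and mount point are examples only; note the warning about software RAID in the replies below):
    /dev/mapper/pve-data  /var/lib/vz  ext4  defaults,barrier=0  0  2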

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread miguel gonzalez
Yes, I have read that. Is it safe to use with software RAID? I have read in the Proxmox forums that some people don't recommend it. Another question about this: I have qcow2 disks attached as IDE virtual hard drives. I had no cache and tried writeback, but dd test results are more or less the same (around 40
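A sketch of switching the cache mode on an existing IDE disk and running a dd test that bypasses the guest page cache (VM ID, volume name and test file are illustrative assumptions):
    qm set 101 --ide0 local:101/vm-101-disk-1.qcow2,cache=writeback
    # inside the guest:
    dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct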

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Dietmar Maurer
> Yes, I have read that. Is it safe to use with software RAID? No, this is not safe.

Re: [PVE-User] performance of 4.2 versus 3.4

2016-09-13 Thread Miguel González
Hi, It's a software RAID of two 2 TB SATA disks. The output of /proc/mounts:
/dev/md2 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/mapper/pve-data /var/lib/vz ext4 rw,relatime,data=ordered 0 0
On 09/13/16 10:55 AM, Yannis Milios wrote: > Do you use a single disk or a raid ar

[PVE-User] Ceph and journal on a file...

2016-09-13 Thread Marco Gaiarin
I'm doing some test with ceph, and i'm trying to put journal on a file (TMPFS); probably is a bad idea, but i'm only doing some test... Also, i'm putting OSD on partitions, not disks, so i'm forced to use commandline. No trouble at all. So i've done: root@capitanamerica:~# ceph-disk -v prepare