[PVE-User] Problem with virtio ethernet drivers PVE 6.2-4 and CentOS 5.6

2020-05-20 Thread Fabrizio Cuseo
Hello.
I updated a PVE cluster from 6.1.x to the latest version last night. On a VM with
CentOS 5.6 and virtio ethernet drivers, the network stops responding after a few
minutes. After restarting the VM the network works again, but it stops again
after a few minutes.
Switching from virtio to e1000 seems to work fine...
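
For reference, switching the NIC model can be done from the CLI with qm set (a
sketch; the VM ID, MAC address and bridge are illustrative, take the real values
from the VM's current net0 line):

# keep the existing MAC so the guest keeps its network configuration
qm set 100 --net0 e1000=DE:AD:BE:EF:00:01,bridge=vmbr0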

Is anyone else seeing this problem? 

Thanks, Fabrizio 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

2020-01-30 Thread Fabrizio Cuseo
Thank you all.
I will migrate using an iSCSI storage configured on both clusters, so the VM
downtime will be short.
Fabrizio


- Il 30-gen-20, alle 16:27, Eneko Lacunza elacu...@binovo.es ha scritto:

> I think firefly is too old.
> 
> Either you create backups and restore in the new cluster, or you'll have
> to upgrade the old clusters at least to Proxmox 5 and Ceph Mimic.
> 
> Cheers
> 
> El 30/1/20 a las 12:59, Fabrizio Cuseo escribió:
>> I can't afford the long downtime. With my method, the downtime is only to
>> stop the VM on the old cluster and start it on the new one; the disk image
>> copy is done online.
>>
>> But my last migration was from 3.4 to 4.4
>>
>>
>> - Il 30-gen-20, alle 12:51, Uwe Sauter uwe.sauter...@gmail.com ha 
>> scritto:
>>
>>> If you can afford the downtime of the VMS you might be able to migrate the 
>>> disk
>>> images using "rbd export | ncat" and "ncat | rbd
>>> import".
>>>
>>> I haven't tried this with such a great difference of versions but from 
>>> Proxmox
>>> 5.4 to 6.1 this worked without a problem.
>>>
>>> Regards,
>>>
>>> Uwe
>>>
>>>
>>> Am 30.01.20 um 12:46 schrieb Fabrizio Cuseo:
>>>> I have installed a new cluster with the last release, with a local ceph 
>>>> storage.
>>>> I also have 2 old and smaller clusters, and I need to migrate all the VMs 
>>>> to the
>>>> new cluster.
>>>> The best method i have used in past is to add on the NEW cluster the RBD 
>>>> storage
>>>> of the old cluster, so I can stop the VM, move the .cfg file, start the vm 
>>>> (all
>>>> those operations are really quick), and move the disk (online) from the old
>>>> storage to the new storage.
>>>>
>>>> But now, if I add the RBD storage, copying the keyring file of the old 
>>>> cluster
>>>> to the new cluster, naming as the storage ID, and using the old cluster
>>>> monitors IP, i can see the storage summary (space total and used), but 
>>>> when I
>>>> go to "content", i have this error: "rbd error: rbd: listing images failed:
>>>> (95) Operation not supported (500)".
>>>>
>>>> If, from the new cluster CLI, i use the command:
>>>>
>>>> rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2
>>>>
>>>> I can see the list of disk images, but also the error: "librbd::api::Trash:
>>>> list: error listing rbd trash entries: (95) Operation not supported"
>>>>
>>>>
>>>> The new cluster ceph release is Nautilus, and the old one is firefly.
>>>>
>>>> Some idea ?
>>>>
>>>> Thanks in advance, Fabrizio
>>>>
>>>> ___
>>>> pve-user mailing list
>>>> pve-user@pve.proxmox.com
>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> 
> --
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943569206
> Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

2020-01-30 Thread Fabrizio Cuseo
I can't afford the long downtime. With my method, the downtime is only to stop
the VM on the old cluster and start it on the new one; the disk image copy is
done online.

But my last migration was from 3.4 to 4.4
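
For reference, a minimal sketch of the export/import pipe Uwe suggests in the
quoted message below (pool name, image name, port and host are illustrative, and
ncat must be available on both sides):

# on a node of the receiving (new) cluster
ncat -l 7777 | rbd import - rbd/vm-100-disk-0
# on a node of the sending (old) cluster
rbd export rbd/vm-100-disk-0 - | ncat new-cluster-node 7777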


- Il 30-gen-20, alle 12:51, Uwe Sauter uwe.sauter...@gmail.com ha scritto:

> If you can afford the downtime of the VMS you might be able to migrate the 
> disk
> images using "rbd export | ncat" and "ncat | rbd
> import".
> 
> I haven't tried this with such a great difference of versions but from Proxmox
> 5.4 to 6.1 this worked without a problem.
> 
> Regards,
> 
>   Uwe
> 
> 
> Am 30.01.20 um 12:46 schrieb Fabrizio Cuseo:
>> 
>> I have installed a new cluster with the last release, with a local ceph 
>> storage.
>> I also have 2 old and smaller clusters, and I need to migrate all the VMs to 
>> the
>> new cluster.
>> The best method i have used in past is to add on the NEW cluster the RBD 
>> storage
>> of the old cluster, so I can stop the VM, move the .cfg file, start the vm 
>> (all
>> those operations are really quick), and move the disk (online) from the old
>> storage to the new storage.
>> 
>> But now, if I add the RBD storage, copying the keyring file of the old 
>> cluster
>> to the new cluster, naming as the storage ID, and using the old cluster
>> monitors IP, i can see the storage summary (space total and used), but when I
>> go to "content", i have this error: "rbd error: rbd: listing images failed:
>> (95) Operation not supported (500)".
>> 
>> If, from the new cluster CLI, i use the command:
>> 
>> rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2
>> 
>> I can see the list of disk images, but also the error: "librbd::api::Trash:
>> list: error listing rbd trash entries: (95) Operation not supported"
>> 
>> 
>> The new cluster ceph release is Nautilus, and the old one is firefly.
>> 
>> Some idea ?
>> 
>> Thanks in advance, Fabrizio
>> 
>> _______
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

2020-01-30 Thread Fabrizio Cuseo


I have installed a new cluster with the latest release, with local Ceph storage.
I also have 2 older and smaller clusters, and I need to migrate all the VMs to
the new cluster.
The best method I have used in the past is to add the old cluster's RBD storage
to the NEW cluster, so I can stop the VM, move the .cfg file, start the VM (all
of these operations are really quick), and then move the disk online from the
old storage to the new one.

But now, if I add the RBD storage (copying the keyring file of the old cluster
to the new cluster, naming it after the storage ID, and using the old cluster's
monitor IPs), I can see the storage summary (total and used space), but when I
go to "Content" I get this error: "rbd error: rbd: listing images failed:
(95) Operation not supported (500)".
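
For context, the external RBD storage entry on the new cluster looks roughly
like this in /etc/pve/storage.cfg (a sketch; storage ID, pool and monitor IP are
illustrative, and the keyring goes in /etc/pve/priv/ceph/<storage-id>.keyring):

rbd: CephOLD
    content images
    krbd 0
    monhost 172.16.20.31
    pool rbd2
    username admin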

If, from the new cluster's CLI, I use the command: 

rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2

I can see the list of disk images, but also the error: "librbd::api::Trash: 
list: error listing rbd trash entries: (95) Operation not supported"


The new cluster ceph release is Nautilus, and the old one is firefly.

Any ideas? 

Thanks in advance, Fabrizio

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] SQL server 2014 poor performances

2019-10-28 Thread Fabrizio Cuseo
Sorry, I meant PVEPERF, not VZPERF.
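
For reference, a quick baseline can be taken directly on the host (a sketch;
/rpool is the pool mountpoint, adjust the path):

# reports CPU, buffered reads and FSYNCS/SECOND for the given path
pveperf /rpool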



- Il 28-ott-19, alle 17:15, Fabrizio Cuseo f.cu...@panservice.it ha scritto:

> - Il 28-ott-19, alle 16:56, Mark Adams m...@openvs.co.uk ha scritto:
> 
>> There is a WD Blue SSD - but it is a desktop drive, you probably shouldn't
>> use it in a server.
> 
> Hello mark.
> I know that is a desktop drive, but we are only testing the different
> performances between this and a desktop pc with a similar (or cheaper) desktop
> ssd.
> 
> 
>> Are you using the virtio-scsi blockdev and the newest virtio drivers? also,
> 
> I am using the virtio-scsi, and virtio drivers (not more than 1 year old
> version)
> 
>> have you tried with writeback enabled?
> 
> Not yet.
> 
> 
>> Have you tested the performance of your ssd zpool from the command line on
>> the host?
> 
> 
> Do you mean vzperf ?
> 
> PS: i don't know if the bottleneck is I/O or some problem like the SQL 
> "content
> switch" setting.
> 
> PPS: SQL server is 2017, not 2014.
> 
> 
> 
>> On Mon, 28 Oct 2019 at 15:46, Michael Rasmussen via pve-user <
>> pve-user@pve.proxmox.com> wrote:
>> 
>>>
>>>
>>>
>>> -- Forwarded message --
>>> From: Michael Rasmussen 
>>> To: pve-user@pve.proxmox.com
>>> Cc:
>>> Bcc:
>>> Date: Mon, 28 Oct 2019 16:46:23 +0100
>>> Subject: Re: [PVE-User] SQL server 2014 poor performances
>>> On Mon, 28 Oct 2019 15:47:18 +0100 (CET)
>>> Fabrizio Cuseo  wrote:
>>>
>>> > Hello.
>>> > I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS
>>> > configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same
>>> > problem with 4 x SAS 15k rpm drives.
>>> >
>>> Are you sure it is SSD? I don't recollect that WD has produced WD blue
>>> as SSD.
>>>
>>> --
>>> Hilsen/Regards
>>> Michael Rasmussen
>>>
>>> Get my public GnuPG keys:
>>> michael  rasmussen  cc
>>> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
>>> mir  datanom  net
>>> https://pgp.key-server.io/pks/lookup?search=0xE501F51C
>>> mir  miras  org
>>> https://pgp.key-server.io/pks/lookup?search=0xE3E80917
>>> --
>>> /usr/games/fortune -es says:
>>> Follow each decision as closely as possible with its associated action.
>>> - The Elements of Programming Style (Kernighan & Plaugher)
>>>
>>>
>>>
>>> -- Forwarded message --
>>> From: Michael Rasmussen via pve-user 
>>> To: pve-user@pve.proxmox.com
>>> Cc: Michael Rasmussen 
>>> Bcc:
>>> Date: Mon, 28 Oct 2019 16:46:23 +0100
>>> Subject: Re: [PVE-User] SQL server 2014 poor performances
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> --
> ---
> Fabrizio Cuseo - mailto:f.cu...@panservice.it
> Direzione Generale - Panservice InterNetWorking
> Servizi Professionali per Internet ed il Networking
> Panservice e' associata AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it  mailto:i...@panservice.it
> Numero verde nazionale: 800 901492
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] SQL server 2014 poor performances

2019-10-28 Thread Fabrizio Cuseo



- Il 28-ott-19, alle 16:56, Mark Adams m...@openvs.co.uk ha scritto:

> There is a WD Blue SSD - but it is a desktop drive, you probably shouldn't
> use it in a server.

Hello Mark.
I know it is a desktop drive, but we are only comparing performance between
this server and a desktop PC with a similar (or cheaper) desktop SSD.


> Are you using the virtio-scsi blockdev and the newest virtio drivers? also,

I am using virtio-scsi and the virtio drivers (not more than one year old).

> have you tried with writeback enabled?

Not yet.
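
If it helps, enabling writeback on an existing disk is a one-liner (a sketch;
the VM ID and volume name are illustrative, copy the current scsi0 value from
the VM config and append the cache option):

qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback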


> Have you tested the performance of your ssd zpool from the command line on
> the host?


Do you mean vzperf ? 

PS: I don't know whether the bottleneck is I/O or some problem like the SQL
"context switch" setting.

PPS: the SQL Server version is 2017, not 2014.


 
> On Mon, 28 Oct 2019 at 15:46, Michael Rasmussen via pve-user <
> pve-user@pve.proxmox.com> wrote:
> 
>>
>>
>>
>> -- Forwarded message --
>> From: Michael Rasmussen 
>> To: pve-user@pve.proxmox.com
>> Cc:
>> Bcc:
>> Date: Mon, 28 Oct 2019 16:46:23 +0100
>> Subject: Re: [PVE-User] SQL server 2014 poor performances
>> On Mon, 28 Oct 2019 15:47:18 +0100 (CET)
>> Fabrizio Cuseo  wrote:
>>
>> > Hello.
>> > I have a customer with proxmox 5.X, 4 x SSD (WD blue) in raid-10 ZFS
>> > configuration, Poweredge R710 dual xeon and 144Gbyte RAM. Same
>> > problem with 4 x SAS 15k rpm drives.
>> >
>> Are you sure it is SSD? I don't recollect that WD has produced WD blue
>> as SSD.
>>
>> --
>> Hilsen/Regards
>> Michael Rasmussen
>>
>> Get my public GnuPG keys:
>> michael  rasmussen  cc
>> https://pgp.key-server.io/pks/lookup?search=0xD3C9A00E
>> mir  datanom  net
>> https://pgp.key-server.io/pks/lookup?search=0xE501F51C
>> mir  miras  org
>> https://pgp.key-server.io/pks/lookup?search=0xE3E80917
>> --
>> /usr/games/fortune -es says:
>> Follow each decision as closely as possible with its associated action.
>> - The Elements of Programming Style (Kernighan & Plaugher)
>>
>>
>>
>> -- Forwarded message --
>> From: Michael Rasmussen via pve-user 
>> To: pve-user@pve.proxmox.com
>> Cc: Michael Rasmussen 
>> Bcc:
>> Date: Mon, 28 Oct 2019 16:46:23 +0100
>> Subject: Re: [PVE-User] SQL server 2014 poor performances
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] SQL server 2014 poor performances

2019-10-28 Thread Fabrizio Cuseo
Thank you for your answer:

CPU(s) 24 x Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (2 Sockets)

The VM has 2 sockets x 4 cores; I have tried both the kvm64 and host CPU types, with NUMA enabled.
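
For reference, that corresponds roughly to these lines in the VM config (a
sketch of /etc/pve/qemu-server/<vmid>.conf):

sockets: 2
cores: 4
cpu: host
numa: 1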



- Il 28-ott-19, alle 15:59, José Manuel Giner j...@ginernet.com ha scritto:

> What is the exact CPU model?
> 
> 
> 
> On 28/10/2019 15:47, Fabrizio Cuseo wrote:
>> Poweredge R710 dual xeon
> 
> --
> José Manuel Giner
> https://ginernet.com
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] SQL server 2014 poor performances

2019-10-28 Thread Fabrizio Cuseo
Hello.
I have a customer with Proxmox 5.x, 4 x SSD (WD Blue) in a RAID-10 ZFS
configuration, on a PowerEdge R710 with dual Xeon CPUs and 144 GB RAM. The same
problem occurs with 4 x 15k rpm SAS drives.

The VM runs Windows Server 2016 and SQL Server 2014.
SQL performance is very poor compared with a standard desktop PC with an i5 and
a single consumer-grade SSD.

Is there some tweak, like trace flag T8038, that should be applied to SQL 2014 and/or 2017? 

Thanks in advance, Fabrizio Cuseo


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel Memory Leak on PVE6?

2019-09-20 Thread Fabrizio Cuseo
Are you sure that the memory is used by the ZFS cache? 
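
A quick way to check how much the ARC actually holds (a sketch; arc_summary is
part of the ZFS tools shipped with PVE):

arc_summary | head -n 20
# or read the raw ARC size counter (in bytes)
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats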

Regards, Fabrizio

- Il 20-set-19, alle 14:31, Chris Hofstaedtler | Deduktiva 
chris.hofstaedt...@deduktiva.com ha scritto:

> Hi,
> 
> I'm seeing a very interesting problem on PVE6: one of our machines
> appears to leak kernel memory over time, up to the point where only
> a reboot helps. Shutting down all KVM VMs does not release this
> memory.
> 
> I'll attach some information below, because I just couldn't figure
> out what this memory is used for. Once before shutting down the VMs,
> and once after. I had to reboot the PVE host now, but I guess
> in a few days it will be at least noticable again.
> 
> This machine has the same (except CPU) hardware as the box next to
> it; however this one was freshly installed with PVE6, the other one
> is an upgrade from PVE5 and doesn't exhibit this problem. It's quite
> puzzling because I haven't seen this symptom at all at all the
> customer installations.
> 
> Here are some graphs showing the memory consumption over time:
>  http://zeha.at/~ch/T/20190920-pve6_meminfo_0.png
>  http://zeha.at/~ch/T/20190920-pve6_meminfo_1.png
> 
> Looking forward to any debug help, suggestions, ...
> 
> Chris
-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ris: Ceph MON quorum problem

2019-09-16 Thread Fabrizio Cuseo
Answer following:

- Il 16-set-19, alle 14:49, Ronny Aasen ronny+pve-u...@aasen.cx ha scritto:

> with 2 rooms there is no way to avoid a split brain situation unless you
> have a tie breaker outside one of those 2 rooms.
> 
> Run a Mon on a neutral third location is the quick, correct, and simple
> solution.
> 
> Or
> 
> you need to have a master-slave situation where one room is the master
> (3 mons) and the other room is the slave (2 mons) and the slave can not
> operate without the master, but the master can operate alone.


Yes, I need a master-slave situation, but I also need the slave to keep running
in case the master fails.
So, if I have a total of 3 mons (2 on the master side, 1 on the slave side) and
I lose the master, I have only 1 mon available and would need to create another
mon (but I can't create it, because I have no quorum).

I know that for now, the only solution is a third room.
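
For reference, the arithmetic behind this (Ceph MONs need a strict majority to
form a quorum):

quorum(n) = floor(n/2) + 1
3 mons split 2+1: quorum = 2, so losing the 2-mon room leaves 1 mon  -> no quorum
5 mons split 3+2: quorum = 3, so losing the 3-mon room leaves 2 mons -> no quorum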

Thanks, Fabrizio 




> 
> On 14.09.2019 16:11, f.cu...@panservice.it wrote:
>> This is My last choice :)
>> Inviato dal mio dispositivo Huawei
>>  Messaggio originale 
>> Oggetto: Re: [PVE-User] Ceph MON quorum problem
>> Da: "Brian :"
>> A: Fabrizio Cuseo ,PVE User List
>> CC:
>> 
>> 
>> Have a mon that runs somewhere that isn't either of those rooms.
>> 
>> On Friday, September 13, 2019, Fabrizio Cuseo  wrote:
>>> Hello.
>>> I am planning a 6 hosts cluster.
>>>
>>> 3 hosts are located in the CedA room
>>> 3 hosts are located in the CedB room
>>>
>>> the two rooms are connected with a 2 x 10Gbit fiber (200mt) and in each
>> room i have 2 x 10Gbit stacked switch and each host have a 2 x 10Gbit (one
>> for each switch) for Ceph storage.
>>>
>>> My need is to have a full redundancy cluster that can survive to CedA (or
>> CedB) disaster.
>>>
>>> I have modified the crush map, so I have a RBD Pool that writes 2 copies
>> in CedA hosts, and 2 copies in CedB hosts, so a very good redundancy (disk
>> space is not a problem).
>>>
>>> But if I loose one of the rooms, i can't establish the needed quorum.
>>>
>>> Some suggestion to have a quick and not too complicated way to satisfy my
>> need ?
>>>
>>> Regards, Fabrizio
>>>
>>>
>>> ___
>>> pve-user mailing list
>>> pve-user@pve.proxmox.com
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph MON quorum problem

2019-09-16 Thread Fabrizio Cuseo
Thank you Humberto, but my problem is not related to the Proxmox quorum; it is
about the Ceph MON quorum. 

Regards, Fabrizio 

- Il 16-set-19, alle 12:58, Humberto Jose De Sousa  
ha scritto: 

> Hi.

> You could try the qdevice:
> https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support

> Humberto

> De: "Fabrizio Cuseo" 
> Para: "pve-user" 
> Enviadas: Sexta-feira, 13 de setembro de 2019 16:42:06
> Assunto: [PVE-User] Ceph MON quorum problem

> Hello.
> I am planning a 6 hosts cluster.

> 3 hosts are located in the CedA room
> 3 hosts are located in the CedB room

> the two rooms are connected with a 2 x 10Gbit fiber (200mt) and in each room i
> have 2 x 10Gbit stacked switch and each host have a 2 x 10Gbit (one for each
> switch) for Ceph storage.

> My need is to have a full redundancy cluster that can survive to CedA (or 
> CedB)
> disaster.

> I have modified the crush map, so I have a RBD Pool that writes 2 copies in 
> CedA
> hosts, and 2 copies in CedB hosts, so a very good redundancy (disk space is 
> not
> a problem).

> But if I loose one of the rooms, i can't establish the needed quorum.

> Some suggestion to have a quick and not too complicated way to satisfy my 
> need ?

> Regards, Fabrizio

> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph MON quorum problem

2019-09-13 Thread Fabrizio Cuseo
Hello.
I am planning a 6-host cluster.

3 hosts are located in the CedA room
3 hosts are located in the CedB room 

The two rooms are connected with 2 x 10 Gbit of fiber (200 m); in each room I
have two stacked 10 Gbit switches, and each host has 2 x 10 Gbit links (one per
switch) for Ceph storage.

I need a fully redundant cluster that can survive the loss of CedA (or CedB).

I have modified the crush map so that an RBD pool writes 2 copies on CedA hosts
and 2 copies on CedB hosts, which gives very good redundancy (disk space is not
a problem).

But if I lose one of the rooms, I can't establish the needed quorum.

Are there any suggestions for a quick and not too complicated way to meet this
need? 

Regards, Fabrizio


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph Crush Map

2019-09-10 Thread Fabrizio Cuseo
Hello.

I want to suggest a new feature for PVE release 6.1 :)

Scenario:  
- 3 hosts in a rack in building A
- 3 hosts in a rack in building B
- a dedicated 2 x 10 Gbit connection (200 m of fiber)

A single PVE cluster with 6 hosts.
A single Ceph cluster with 6 hosts (each with several OSD)

I would like to manipulate the crush map so that each of my pools keeps at least
1 copy in each building (so if I have a pool with 3 copies, I need 2 copies on
different hosts in building A and 1 copy on one of the hosts in building B).

With this configuration I can obtain full redundancy of my VMs and data (am I
missing something?).
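
For what it's worth, a sketch of what such a rule could look like in the
decompiled crush map, assuming the hosts are grouped under two 'room' buckets
(rule name, id and bucket type are illustrative; syntax as in Nautilus-era maps):

rule replicated_two_buildings {
    id 10
    type replicated
    min_size 2
    max_size 4
    step take default
    step choose firstn 2 type room
    step chooseleaf firstn 2 type host
    step emit
}

With a pool size of 3, the first three placements come out as two hosts in one
room and one host in the other, which matches the layout described above.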

I can change the crush map, pools and so on manually, but this would be much
simpler through the GUI (also when I need to add other servers to the cluster).

Regards, Fabrizio 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox - BIG PROBLEM

2019-07-29 Thread Fabrizio Cuseo
A system without /bin and /lib is not usable; you need to recover it completely. 
I would personally install a new system and migrate the VM files (which you have 
on local-zfs). But forget about using the GUI. 

- Il 29-lug-19, alle 11:08, lord_Niedzwiedz  ha scritto: 

> VM at local-zfs.
> But local-zfs not available with gui !!
> VM still work.

> And I see in:
> cd /mnt/pve/
> directory:
> nvme0n1 / nvme1n1 / sda /

> Here is one virtual.
> The rest on local-zfs (and they work, and I can not see the space).

> Proxmox it's still working.

> I lost mybe only :
> /bin
> / lib
> / lib64
> / sbin
> How is it possible that command:
> rm / *
> removed them ?? !!

> Without the -r option.

> And the rest of the catalogs did not delete ?? !!
> Maybe these were symbolic links?

> Gregor

>> Where are located the VM's disks ? LVM ? ZFS ?
>> Is possibile that you still have your disks (if LVM, for example), but i 
>> think
>> that is better that you install a fresh Proxmox server, and move the disks 
>> from
>> the old hard drive to the new one.
>> You need some knowledge about linux, lvm, and you can save all your data.

>> - Il 29-lug-19, alle 10:55, lord_Niedzwiedz [ mailto:sir_misi...@o2.pl |
>> sir_misi...@o2.pl ] ha scritto:

>>> I ran a command on the server by mistake:

>>> rm /*
>>> rm: cannot remove '/Backup': Is a directory
>>> rm: cannot remove '/boot': Is a directory
>>> rm: cannot remove '/dev': Is a directory
>>> rm: cannot remove '/etc': Is a directory
>>> rm: cannot remove '/home': Is a directory
>>> rm: cannot remove '/media': Is a directory
>>> rm: cannot remove '/mnt': Is a directory
>>> rm: cannot remove '/opt': Is a directory
>>> rm: cannot remove '/proc': Is a directory
>>> rm: cannot remove '/Roboczy': Is a directory
>>> rm: cannot remove '/root': Is a directory
>>> rm: cannot remove '/rpool': Is a directory
>>> rm: cannot remove '/run': Is a directory
>>> rm: cannot remove '/srv': Is a directory
>>> rm: cannot remove '/sys': Is a directory
>>> rm: cannot remove '/tmp': Is a directory
>>> rm: cannot remove '/usr': Is a directory
>>> rm: cannot remove '/var': Is a directory

>>> Strange machines work.
>>> I'm logged in gui.
>>> But I can not get to the machine VM.
>>> Do not execute any commands.
>>> What to do ??!!
>>> From what I see, I deleted my catalogs:
>>> / bin
>>> / lib
>>> / lib64
>>> / sbin
>>> WITH /.
>>> How is this possible ??!!
>>> I'm still logged in on one console after the shell, but I can not do any
>>> commandos.
>>> Even:
>>> qm
>>> -bash: /usr/sbin/qm: /usr/bin/perl: bad interpreter: No such file or
>>> directory
>>> root@tomas:/usr/bin# ls
>>> -bash: /usr/bin/ls: No such file or directory
>>> root@tomas:/usr/bin# echo $PATH
>>> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

>>> Any Idea ??
>>> Please Help Me.

>>> Gregor

>>> ___
>>> pve-user mailing list [ mailto:pve-user@pve.proxmox.com |
>>> pve-user@pve.proxmox.com ] [
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user |
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user ]

-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox - BIG PROBLEM

2019-07-29 Thread Fabrizio Cuseo
Where are the VM disks located? LVM? ZFS? 
It is possible that you still have your disks (if LVM, for example), but I think 
it is better to install a fresh Proxmox server and move the disks from the old 
hard drive to the new one. 
You need some knowledge of Linux and LVM, and you can save all your data.
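
For example, on the fresh install with the old disk attached (a sketch; the pool
and volume group names are the PVE defaults and may differ):

# ZFS: list importable pools and import the old root pool under a new name
zpool import
zpool import -f rpool oldrpool
# LVM: scan for the old volume group and activate it
vgscan
vgchange -ay pve
lvs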



- Il 29-lug-19, alle 10:55, lord_Niedzwiedz sir_misi...@o2.pl ha scritto:

> I ran a command on the server by mistake:
> 
> rm /*
> rm: cannot remove '/Backup': Is a directory
> rm: cannot remove '/boot': Is a directory
> rm: cannot remove '/dev': Is a directory
> rm: cannot remove '/etc': Is a directory
> rm: cannot remove '/home': Is a directory
> rm: cannot remove '/media': Is a directory
> rm: cannot remove '/mnt': Is a directory
> rm: cannot remove '/opt': Is a directory
> rm: cannot remove '/proc': Is a directory
> rm: cannot remove '/Roboczy': Is a directory
> rm: cannot remove '/root': Is a directory
> rm: cannot remove '/rpool': Is a directory
> rm: cannot remove '/run': Is a directory
> rm: cannot remove '/srv': Is a directory
> rm: cannot remove '/sys': Is a directory
> rm: cannot remove '/tmp': Is a directory
> rm: cannot remove '/usr': Is a directory
> rm: cannot remove '/var': Is a directory
> 
> Strange machines work.
> I'm logged in gui.
> But I can not get to the machine VM.
> Do not execute any commands.
> What to do ??!!
> From what I see, I deleted my catalogs:
> / bin
> / lib
> / lib64
> / sbin
> WITH /.
> How is this possible ??!!
> I'm still logged in on one console after the shell, but I can not do any
> commandos.
> Even:
> qm
> -bash: /usr/sbin/qm: /usr/bin/perl: bad interpreter: No such file or
> directory
> root@tomas:/usr/bin# ls
> -bash: /usr/bin/ls: No such file or directory
> root@tomas:/usr/bin# echo $PATH
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
> 
> Any Idea ??
> Please Help Me.
> 
> Gregor
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-csync version of pve-zsync?

2019-06-25 Thread Fabrizio Cuseo
Hi.
Is there any news regarding two-cluster replication? I would like to use it in
a DR scenario, where the big cluster has some VMs replicated to a smaller one
(a different cluster, possibly in a different datacenter).

Something included in the Proxmox GUI would be really great and would add a lot
of value to Proxmox itself.
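
In the meantime, a minimal sketch of the manual export-diff/import-diff approach
discussed below (pool, image and snapshot names and the remote host are
illustrative):

# take a snapshot and do the initial full copy from it
rbd snap create rbd/vm-100-disk-0@sync1
rbd export rbd/vm-100-disk-0@sync1 - | ssh root@dr-node rbd import - rbd/vm-100-disk-0
ssh root@dr-node rbd snap create rbd/vm-100-disk-0@sync1
# later runs only send the delta since the last common snapshot
rbd snap create rbd/vm-100-disk-0@sync2
rbd export-diff --from-snap sync1 rbd/vm-100-disk-0@sync2 - | ssh root@dr-node rbd import-diff - rbd/vm-100-disk-0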

Regards, Fabrizio Cuseo


- Il 13-mar-18, alle 19:32, Alexandre DERUMIER aderum...@odiso.com ha 
scritto:

> Hi,
> 
> I have plans to implement storage replication for rbd in proxmox,
> like for zfs export|import.  (with rbd export-diff |rbd import-diff )
> 
> I'll try to work on it next month.
> 
> I'm not sure that currently a plugin infrastructe in done in code,
> and that it's able to manage storages with differents name.
> 
> Can't tell if it'll be hard to implement, but the workflow is almost the same.
> 
> I'll try to look also at rbd mirror, but it's only work with librbd in qemu, 
> not
> with krbd,
> so it can't be implemented for container.
> 
> 
> - Mail original -
> De: "Mark Adams" 
> À: "proxmoxve" 
> Envoyé: Mardi 13 Mars 2018 18:52:21
> Objet: Re: [PVE-User] pve-csync version of pve-zsync?
> 
> Hi Alwin,
> 
> I might have to take another look at it, but have you actually done this
> with 2 proxmox clusters? I can't remember the exact part I got stuck on as
> it was quite a while ago, but it wasn't as straight forward as you suggest.
> I think you couldn't use the same cluster name, which in turn created
> issues trying to use the "remote" (backup/dr/whatever you wanna call it)
> cluster with proxmox because it needed to be called ceph.
> 
> The docs I was referring to were the ceph ones yes. Some of the options
> listed in that doc do not work in the current proxmox version (I think the
> doc hasn't been updated for newer versions...)
> 
> Regards,
> Mark
> 
> On 13 March 2018 at 17:19, Alwin Antreich  wrote:
> 
>> On Mon, Mar 12, 2018 at 04:51:32PM +, Mark Adams wrote:
>> > Hi Alwin,
>> > 
>> > The last I looked at it, rbd mirror only worked if you had different
>> > cluster names. Tried to get it working with proxmox but to no avail,
>> > without really messing with how proxmox uses ceph I'm not sure it's
>> > feasible, as proxmox assumes the default cluster name for everything...
>> That isn't mentioned anywhere in the ceph docs, they use for ease of
>> explaining two different cluster names.
>> 
>> If you have a config file named after the cluster, then you can specifiy
>> it on the command line.
>> http://docs.ceph.com/docs/master/rados/configuration/
>> ceph-conf/#running-multiple-clusters
>> 
>> > 
>> > Also the documentation was a bit poor for it IMO.
>> Which documentation do you mean?
>> ? -> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
>> 
>> > 
>> > Would also be nice to choose specifically which VM's you want to be
>> > mirroring, rather than the whole cluster.
>> It is done either per pool or image separately. See the link above.
>> 
>> > 
>> > I've manually done rbd export-diff and rbd import-diff between 2 separate
>> > proxmox clusters over ssh, and it seems to work really well... It would
>> > just be nice to have a tool like pve-zsync so I don't have to write some
>> > script myself. Seems to me like something that would be desirable as part
>> > of proxmox as well?
>> That would basically implement the ceph rbd mirror feature.
>> 
>> > 
>> > Cheers,
>> > Mark
>> > 
>> > On 12 March 2018 at 16:37, Alwin Antreich 
>> wrote:
>> > 
>> > > Hi Mark,
>> > > 
>> > > On Mon, Mar 12, 2018 at 03:49:42PM +, Mark Adams wrote:
>> > > > Hi All,
>> > > > 
>> > > > Has anyone looked at or thought of making a version of pve-zsync for
>> > > ceph?
>> > > > 
>> > > > This would be great for DR scenarios...
>> > > > 
>> > > > How easy do you think this would be to do? I imagine it wouId it be
>> quite
>> > > > similar to pve-zsync, but using rbd export-diff and rbd import-diff
>> > > instead
>> > > > of zfs send and zfs receive? so could the existing script be
>> relatively
>> > > > easily modified? (I know nothing about perl)
>> > > > 
>> > > > Cheers,
>> > > > Mark
>> > > > 

Re: [PVE-User] Proxmox Ceph to Hyper-V or VMWare

2019-06-03 Thread Fabrizio Cuseo
Hello Gilberto.
Do you need to export the PVE Ceph storage, or can you use a dedicated Ceph
cluster for the VMware iSCSI storage? 
If a separate Ceph cluster is an option, you can use PetaSAN (www.petasan.org),
which is a scalable Ceph + iSCSI storage solution.

I have not tested it heavily, because I only use Proxmox for both storage and
hypervisor, but I think it is quite stable.

Regards, Fabrizio 

- Il 3-giu-19, alle 22:05, Gilberto Nunes gilberto.nune...@gmail.com ha 
scritto:

> Hi
> 
> Actually my need will be supplied by this info
> 
> "configure ceph in proxmox as nfs, smb or iscsi  gateway for hyper-v or
> vmware to consume storage from : yes possible, but will require some cli
> usage."
> 
> I need Hyper-V or VMWare consume storage from PVE CEPH cluster storage...
> 
> I will appreciated if you could point me some clues about it...
> 
> Thanks a lot
> --
> Gilberto Nunes Ferreira
> 
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
> 
> Skype: gilberto.nunes36
> 
> 
> 
> 
> 
> Em seg, 3 de jun de 2019 às 16:48, Ronny Aasen 
> escreveu:
> 
>> On 03.06.2019 20:42, Gilberto Nunes wrote:
>> > Hi there
>> >
>> > Simple question: Is there any way to connect a Proxmox Ceph cluster with
>> a
>> > Hyper-V or VMWare server??
>>
>>
>> Define "connect"..
>>
>> manage vmware/hyper-v/proxmox hosts in the proxmox web interface : no
>> not possible.
>>
>> have them connected to subnets where the hosts are pingable, and copy vm
>> image files over the network:  yes possible.
>>
>> configure ceph in proxmox as nfs, smb or iscsi  gateway for hyper-v or
>> vmware to consume storage from : yes possible, but will require some cli
>> usage.
>>
>> migrare vm's between proxmox and vmware and hyper-v: no, not in a
>> interface. you can do a manual migration, where you move a disk image
>> over and set up a new vm on the other platform using the old disk ofcourse.
>>
>>
>> what did you have in mind?
>>
>>
>> kind regards
>> Ronny Aasen
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS Replication on different storage

2019-03-18 Thread Fabrizio Cuseo
I don't know exactly what Proxmox does during replication; I think it uses the
pool name and not the mountpoint. I will check.
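
As far as I can tell, the replication target is resolved through the storage ID,
and the zfspool definition in /etc/pve/storage.cfg maps that ID to a fixed
dataset that has to exist on every node, e.g. (illustrative):

zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    sparse 1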
Regards, Fabrizio 


- Il 15-mar-19, alle 16:18,  devz...@web.de ha scritto:

> have a look at zfs get|set mountpoint
> 
> root@pve1:/etc/apt# zfs get mountpoint
> NAME  PROPERTYVALUE   SOURCE
> rpool mountpoint  /rpool  default
> rpool/ROOTmountpoint  /rpool/ROOT default
> rpool/ROOT/pve-1  mountpoint  /   local
> rpool/datamountpoint  /rpool/data default
> rpool/data/vm-100-disk-0  mountpoint  -   -
> rpool/testdataset mountpoint  /rpool/testdataset  default
> 
> 
> 
> 
>> Gesendet: Freitag, 15. März 2019 um 15:10 Uhr
>> Von: "Fabrizio Cuseo" 
>> An: pve-user 
>> Betreff: Re: [PVE-User] ZFS Replication on different storage
>>
>> Hello Mark.
>> I can't rename the zpool because is the primary pool (rpool).
>> 
>> So i have:
>> 
>> Node A:
>> rpool/data (source)
>> 
>> Node B:
>> rpool/data (small partition)
>> pool2 (my desired destination)
>> 
>> I can rename rpool/data as rpool/data_ori, but i can't rename pool2 as
>> rpool/data because is a different pool
>> 
>> - Il 15-mar-19, alle 10:26, Mark Adams  ha scritto:
>> 
>> > Why don't you just rename the zpool so they match?
>> 
>> > On Fri, 15 Mar 2019, 09:10 Fabrizio Cuseo, < [ 
>> > mailto:f.cu...@panservice.it |
>> > f.cu...@panservice.it ] > wrote:
>> 
>> >> Hello Gianni.
>> >> I wrote in my email that pve-zsync is not suitable for my need 
>> >> (redundancy with
>> >> VM migration from one host to another).
>> 
>> >> Fabrizio
>> 
>> >> - Il 15-mar-19, alle 10:08, Gianni M. < [ 
>> >> mailto:tribali...@hotmail.com |
>> >> tribali...@hotmail.com ] > ha scritto:
>> 
>> >> > I think it's hardcoded to use rpool as the default target pool, but I 
>> >> > might be
>> >> > wrong.
>> 
>> >> > You might want to have a look at pve-zsync instead ?
>> 
>> >>> [ https://pve.proxmox.com/wiki/PVE-zsync |
>> >> > https://pve.proxmox.com/wiki/PVE-zsync ]
>> 
>> >> > NiCK
>> 
>> >>> From: pve-user < [ mailto:pve-user-boun...@pve.proxmox.com |
>> >> > pve-user-boun...@pve.proxmox.com ] > on behalf of Fabrizio Cuseo
>> >> > < [ mailto:f.cu...@panservice.it | f.cu...@panservice.it ] >
>> >> > Sent: Friday, March 15, 2019 8:48:58 AM
>> >> > To: pve-user
>> >> > Subject: Re: [PVE-User] ZFS Replication on different storage
>> >> > Thank you, I have already seen this page; i know and use zfs with 
>> >> > freenas and
>> >> > other (tipically i use ceph, but for this cluster I only need the 
>> >> > replication
>> >> > feature), but pve-zsync is used more for offsite-backup than 
>> >> > redundancy, and
>> >> > the VM can't be migrated from one host to another.
>> >> > So, is not usable :(
>> 
>> >> > Thanks, Fabrizio
>> 
>> >>> - Il 15-mar-19, alle 9:36, [ mailto:b...@todoo.biz | b...@todoo.biz 
>> >>> ] ha
>> >> > scritto:
>> 
>> >> > > Please check this page :
>> 
>> >>>> [ [ https://pve.proxmox.com/wiki/PVE-zsync |
>> >> >> https://pve.proxmox.com/wiki/PVE-zsync ] |
>> >>> > [ https://pve.proxmox.com/wiki/PVE-zsync |
>> >> > > https://pve.proxmox.com/wiki/PVE-zsync ] ]
>> 
>> >> > > pve-zsync is a very nice tool. You need to know and understand what 
>> >> > > you are
>> >> > > doing (generally speaking, it is a good advise).
>> 
>> >> > > ZFS is a complex file system with major features, but It has a 
>> >> > > certain learning
>> >> > > curve.
>> >> > > If you have no notion of ZFS, use another backup strategy or learn 
>> >> > > some basics
>> >> > > about ZFS.
>> 
>> >> > > you should first install pve-zsync and issue a command looking 
>> >> > > similar to this
>> >> > > one :
>> 
>> >> > > pve-zsync create -dest 192.16

Re: [PVE-User] ZFS Replication on different storage

2019-03-15 Thread Fabrizio Cuseo
Hello Mark. 
I can't rename the zpool because it is the primary pool (rpool). 

So i have: 

Node A: 
rpool/data (source) 

Node B: 
rpool/data (small partition) 
pool2 (my desired destination) 

I can rename rpool/data to rpool/data_ori, but I can't rename pool2 to
rpool/data because it is a different pool. 

- Il 15-mar-19, alle 10:26, Mark Adams  ha scritto: 

> Why don't you just rename the zpool so they match?

> On Fri, 15 Mar 2019, 09:10 Fabrizio Cuseo, < [ mailto:f.cu...@panservice.it |
> f.cu...@panservice.it ] > wrote:

>> Hello Gianni.
>> I wrote in my email that pve-zsync is not suitable for my need (redundancy 
>> with
>> VM migration from one host to another).

>> Fabrizio

>> - Il 15-mar-19, alle 10:08, Gianni M. < [ mailto:tribali...@hotmail.com |
>> tribali...@hotmail.com ] > ha scritto:

>> > I think it's hardcoded to use rpool as the default target pool, but I 
>> > might be
>> > wrong.

>> > You might want to have a look at pve-zsync instead ?

>>> [ https://pve.proxmox.com/wiki/PVE-zsync |
>> > https://pve.proxmox.com/wiki/PVE-zsync ]

>> > NiCK

>>> From: pve-user < [ mailto:pve-user-boun...@pve.proxmox.com |
>> > pve-user-boun...@pve.proxmox.com ] > on behalf of Fabrizio Cuseo
>> > < [ mailto:f.cu...@panservice.it | f.cu...@panservice.it ] >
>> > Sent: Friday, March 15, 2019 8:48:58 AM
>> > To: pve-user
>> > Subject: Re: [PVE-User] ZFS Replication on different storage
>> > Thank you, I have already seen this page; i know and use zfs with freenas 
>> > and
>> > other (tipically i use ceph, but for this cluster I only need the 
>> > replication
>> > feature), but pve-zsync is used more for offsite-backup than redundancy, 
>> > and
>> > the VM can't be migrated from one host to another.
>> > So, is not usable :(

>> > Thanks, Fabrizio

>>> - Il 15-mar-19, alle 9:36, [ mailto:b...@todoo.biz | b...@todoo.biz ] ha
>> > scritto:

>> > > Please check this page :

>>>> [ [ https://pve.proxmox.com/wiki/PVE-zsync |
>> >> https://pve.proxmox.com/wiki/PVE-zsync ] |
>>> > [ https://pve.proxmox.com/wiki/PVE-zsync |
>> > > https://pve.proxmox.com/wiki/PVE-zsync ] ]

>> > > pve-zsync is a very nice tool. You need to know and understand what you 
>> > > are
>> > > doing (generally speaking, it is a good advise).

>> > > ZFS is a complex file system with major features, but It has a certain 
>> > > learning
>> > > curve.
>> > > If you have no notion of ZFS, use another backup strategy or learn some 
>> > > basics
>> > > about ZFS.

>> > > you should first install pve-zsync and issue a command looking similar 
>> > > to this
>> > > one :

>> > > pve-zsync create -dest 192.168.210.28:tank/proxmox1 -limit 12600 
>> > > -maxsnap 7
>> > > -name kvm1.srv -source 133 -verbose

>> > > Where 192.168.210.28 is the IP of your second host / backup host… and
>> > > tank/proxmox1 is the dataset where you'll backup.

>> > > First run will actually create the backup and sync It.
>> > > It will also create a cron job available in /etc/cron.d/pve-zsync

>> > > You can edit this file in order to tune the various parameters (most 
>> > > probably
>> > > the frequency).

>> > > Do read the doc.

>>> >> Le 15 mars 2019 à 09:18, Fabrizio Cuseo < [ mailto:f.cu...@panservice.it 
>>> >> |
>> > >> f.cu...@panservice.it ] > a écrit :

>> > >> Hello Yannis.
>> > >> I can't see an option to specify remote storage (or pool) name.
>> > >> If you read my email, i need to replicate:

>> > >> HostA/poolZ > HostB/poolY

>> > >> But without specifying the remote pool name (there is no pool field in 
>> > >> the gui),
>> > >> it replicates from HostA/PoolZ ---> HostB/PoolZ (where i have no enough 
>> > >> space)
>> > >> Regards, Fabrizio

>>> >> - Il 14-mar-19, alle 19:48, Yannis Milios < [ 
>>> >> mailto:yannis.mil...@gmail.com
>> > >> | yannis.mil...@gmail.com ] > ha
>> > >> scritto:

>> > >>> Yes, it is possible...

>>>>>> [ [ [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
>> >>>> https://pve.proxmox.com/pve-docs/chapter-pvesr.html ] |
>>&

Re: [PVE-User] ZFS Replication on different storage

2019-03-15 Thread Fabrizio Cuseo
Yes, I can. But I want to try another way first. 

- Il 15-mar-19, alle 10:26, Mark Adams  ha scritto: 

> Why don't you just rename the zpool so they match?

> On Fri, 15 Mar 2019, 09:10 Fabrizio Cuseo, < [ mailto:f.cu...@panservice.it |
> f.cu...@panservice.it ] > wrote:

>> Hello Gianni.
>> I wrote in my email that pve-zsync is not suitable for my need (redundancy 
>> with
>> VM migration from one host to another).

>> Fabrizio

>> - Il 15-mar-19, alle 10:08, Gianni M. < [ mailto:tribali...@hotmail.com |
>> tribali...@hotmail.com ] > ha scritto:

>> > I think it's hardcoded to use rpool as the default target pool, but I 
>> > might be
>> > wrong.

>> > You might want to have a look at pve-zsync instead ?

>>> [ https://pve.proxmox.com/wiki/PVE-zsync |
>> > https://pve.proxmox.com/wiki/PVE-zsync ]

>> > NiCK

>>> From: pve-user < [ mailto:pve-user-boun...@pve.proxmox.com |
>> > pve-user-boun...@pve.proxmox.com ] > on behalf of Fabrizio Cuseo
>> > < [ mailto:f.cu...@panservice.it | f.cu...@panservice.it ] >
>> > Sent: Friday, March 15, 2019 8:48:58 AM
>> > To: pve-user
>> > Subject: Re: [PVE-User] ZFS Replication on different storage
>> > Thank you, I have already seen this page; i know and use zfs with freenas 
>> > and
>> > other (tipically i use ceph, but for this cluster I only need the 
>> > replication
>> > feature), but pve-zsync is used more for offsite-backup than redundancy, 
>> > and
>> > the VM can't be migrated from one host to another.
>> > So, is not usable :(

>> > Thanks, Fabrizio

>>> - Il 15-mar-19, alle 9:36, [ mailto:b...@todoo.biz | b...@todoo.biz ] ha
>> > scritto:

>> > > Please check this page :

>>>> [ [ https://pve.proxmox.com/wiki/PVE-zsync |
>> >> https://pve.proxmox.com/wiki/PVE-zsync ] |
>>> > [ https://pve.proxmox.com/wiki/PVE-zsync |
>> > > https://pve.proxmox.com/wiki/PVE-zsync ] ]

>> > > pve-zsync is a very nice tool. You need to know and understand what you 
>> > > are
>> > > doing (generally speaking, it is a good advise).

>> > > ZFS is a complex file system with major features, but It has a certain 
>> > > learning
>> > > curve.
>> > > If you have no notion of ZFS, use another backup strategy or learn some 
>> > > basics
>> > > about ZFS.

>> > > you should first install pve-zsync and issue a command looking similar 
>> > > to this
>> > > one :

>> > > pve-zsync create -dest 192.168.210.28:tank/proxmox1 -limit 12600 
>> > > -maxsnap 7
>> > > -name kvm1.srv -source 133 -verbose

>> > > Where 192.168.210.28 is the IP of your second host / backup host… and
>> > > tank/proxmox1 is the dataset where you'll backup.

>> > > First run will actually create the backup and sync It.
>> > > It will also create a cron job available in /etc/cron.d/pve-zsync

>> > > You can edit this file in order to tune the various parameters (most 
>> > > probably
>> > > the frequency).

>> > > Do read the doc.

>>> >> Le 15 mars 2019 à 09:18, Fabrizio Cuseo < [ mailto:f.cu...@panservice.it 
>>> >> |
>> > >> f.cu...@panservice.it ] > a écrit :

>> > >> Hello Yannis.
>> > >> I can't see an option to specify remote storage (or pool) name.
>> > >> If you read my email, i need to replicate:

>> > >> HostA/poolZ > HostB/poolY

>> > >> But without specifying the remote pool name (there is no pool field in 
>> > >> the gui),
>> > >> it replicates from HostA/PoolZ ---> HostB/PoolZ (where i have no enough 
>> > >> space)
>> > >> Regards, Fabrizio

>>> >> - Il 14-mar-19, alle 19:48, Yannis Milios < [ 
>>> >> mailto:yannis.mil...@gmail.com
>> > >> | yannis.mil...@gmail.com ] > ha
>> > >> scritto:

>> > >>> Yes, it is possible...

>>>>>> [ [ [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
>> >>>> https://pve.proxmox.com/pve-docs/chapter-pvesr.html ] |
>>> >>> [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
>> > >>> https://pve.proxmox.com/pve-docs/chapter-pvesr.html ] ] |
>>>>>> [ [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
>> >

Re: [PVE-User] ZFS Replication on different storage

2019-03-15 Thread Fabrizio Cuseo
Hello Gianni. 
I wrote in my email that pve-zsync is not suitable for my needs (redundancy with 
VM migration from one host to another). 

Fabrizio 

- Il 15-mar-19, alle 10:08, Gianni M.  ha scritto: 

> I think it's hardcoded to use rpool as the default target pool, but I might be
> wrong.

> You might want to have a look at pve-zsync instead ?

> https://pve.proxmox.com/wiki/PVE-zsync

> NiCK

> From: pve-user  on behalf of Fabrizio Cuseo
> 
> Sent: Friday, March 15, 2019 8:48:58 AM
> To: pve-user
> Subject: Re: [PVE-User] ZFS Replication on different storage
> Thank you, I have already seen this page; i know and use zfs with freenas and
> other (tipically i use ceph, but for this cluster I only need the replication
> feature), but pve-zsync is used more for offsite-backup than redundancy, and
> the VM can't be migrated from one host to another.
> So, is not usable :(

> Thanks, Fabrizio

> - Il 15-mar-19, alle 9:36, b...@todoo.biz ha scritto:

> > Please check this page :

>> [ https://pve.proxmox.com/wiki/PVE-zsync |
> > https://pve.proxmox.com/wiki/PVE-zsync ]


> > pve-zsync is a very nice tool. You need to know and understand what you are
> > doing (generally speaking, it is a good advise).

> > ZFS is a complex file system with major features, but It has a certain 
> > learning
> > curve.
> > If you have no notion of ZFS, use another backup strategy or learn some 
> > basics
> > about ZFS.

> > you should first install pve-zsync and issue a command looking similar to 
> > this
> > one :

> > pve-zsync create -dest 192.168.210.28:tank/proxmox1 -limit 12600 -maxsnap 7
> > -name kvm1.srv -source 133 -verbose


> > Where 192.168.210.28 is the IP of your second host / backup host… and
> > tank/proxmox1 is the dataset where you'll backup.

> > First run will actually create the backup and sync It.
> > It will also create a cron job available in /etc/cron.d/pve-zsync

> > You can edit this file in order to tune the various parameters (most 
> > probably
> > the frequency).


> > Do read the doc.

> >> Le 15 mars 2019 à 09:18, Fabrizio Cuseo  a écrit :

> >> Hello Yannis.
> >> I can't see an option to specify remote storage (or pool) name.
> >> If you read my email, i need to replicate:

> >> HostA/poolZ > HostB/poolY

> >> But without specifying the remote pool name (there is no pool field in the 
> >> gui),
> >> it replicates from HostA/PoolZ ---> HostB/PoolZ (where i have no enough 
> >> space)
> >> Regards, Fabrizio

> >> - Il 14-mar-19, alle 19:48, Yannis Milios  ha
> >> scritto:

> >>> Yes, it is possible...

>>>> [ [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
> >>> https://pve.proxmox.com/pve-docs/chapter-pvesr.html ] |
>>>> [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
> >>> https://pve.proxmox.com/pve-docs/chapter-pvesr.html ] ]

>>>> On Thu, 14 Mar 2019 at 11:19, Fabrizio Cuseo < [ [ 
>>>> mailto:f.cu...@panservice.it
> >>> |
> mailto:f.cu...@panservice.it ] |
> >>> f.cu...@panservice.it ] > wrote:

> >>>> Hello.
> >>>> I have a customer with a small cluster, 2 servers (different models).

> >>>> I would like to replicate VMs from host A to host B, but from local-zfs 
> >>>> (host A)
> >>>> to "zfs-data-2" (host B).

> >>>> On the GUI this is not possibile, what about some workaround ?

> >>>> Regards, Fabrizio

> >>>> --
> >>>> ---
>>>>> Fabrizio Cuseo - mailto: [ [ mailto:f.cu...@panservice.it |
> >>>> mailto:f.cu...@panservice.it ] | f.cu...@panservice.it
> >>>> ]
> >>>> Direzione Generale - Panservice InterNetWorking
> >>>> Servizi Professionali per Internet ed il Networking
> >>>> Panservice e' associata AIIP - RIPE Local Registry
> >>>> Phone: +39 0773 410020 - Fax: +39 0773 470219
>>>>> [ [ http://www.panservice.it/ | http://www.panservice.it/ ] | [
> >>>> http://www.panservice.it/ |
> http://www.panservice.it ] ] mailto: [
> >>>> [ mailto:i...@panservice.it | mailto:i...@panservice.it ] | 
> >>>> i...@panservice.it ]
> >>>> Numero verde nazionale: 800 901492
> >>>> ___
> >>>> pve-user mailing list
>>>>> [ [ mailto:pve-user@pve.proxmox.com | mailto:pve-user@pve.proxmox.com ] |
> >>>

Re: [PVE-User] ZFS Replication on different storage

2019-03-15 Thread Fabrizio Cuseo
Thank you, I have already seen this page; I know and use ZFS with FreeNAS and
others (typically I use Ceph, but for this cluster I only need the replication
feature). However, pve-zsync is meant more for off-site backup than for
redundancy, and the VM can't be migrated from one host to another.
So it is not usable here :( 

Thanks, Fabrizio 


- Il 15-mar-19, alle 9:36,  b...@todoo.biz ha scritto:

> Please check this page :
> 
> https://pve.proxmox.com/wiki/PVE-zsync
> 
> 
> pve-zsync is a very nice tool. You need to know and understand what you are
> doing (generally speaking, it is a good advise).
> 
> ZFS is a complex file system with major features, but It has a certain 
> learning
> curve.
> If you have no notion of ZFS, use another backup strategy or learn some basics
> about ZFS.
> 
> you should first install pve-zsync and issue a command looking similar to this
> one :
> 
> pve-zsync create -dest 192.168.210.28:tank/proxmox1 -limit 12600 -maxsnap 7
> -name kvm1.srv -source 133 -verbose
> 
> 
> Where 192.168.210.28 is the IP of your second host / backup host… and
> tank/proxmox1 is the dataset where you'll backup.
> 
> First run will actually create the backup and sync It.
> It will also create a cron job available in /etc/cron.d/pve-zsync
> 
> You can edit this file in order to tune the various parameters (most probably
> the frequency).
> 
> 
> Do read the doc.
> 
>> Le 15 mars 2019 à 09:18, Fabrizio Cuseo  a écrit :
>> 
>> Hello Yannis.
>> I can't see an option to specify remote storage (or pool) name.
>> If you read my email, i need to replicate:
>> 
>> HostA/poolZ > HostB/poolY
>> 
>> But without specifying the remote pool name (there is no pool field in the 
>> gui),
>> it replicates from HostA/PoolZ ---> HostB/PoolZ (where i have no enough 
>> space)
>> Regards, Fabrizio
>> 
>> - Il 14-mar-19, alle 19:48, Yannis Milios  ha
>> scritto:
>> 
>>> Yes, it is possible...
>> 
>>> [ https://pve.proxmox.com/pve-docs/chapter-pvesr.html |
>>> https://pve.proxmox.com/pve-docs/chapter-pvesr.html ]
>> 
>>> On Thu, 14 Mar 2019 at 11:19, Fabrizio Cuseo < [ 
>>> mailto:f.cu...@panservice.it |
>>> f.cu...@panservice.it ] > wrote:
>> 
>>>> Hello.
>>>> I have a customer with a small cluster, 2 servers (different models).
>> 
>>>> I would like to replicate VMs from host A to host B, but from local-zfs 
>>>> (host A)
>>>> to "zfs-data-2" (host B).
>> 
>>>> On the GUI this is not possibile, what about some workaround ?
>> 
>>>> Regards, Fabrizio
>> 
>>>> --
>>>> ---
>>>> Fabrizio Cuseo - mailto: [ mailto:f.cu...@panservice.it | 
>>>> f.cu...@panservice.it
>>>> ]
>>>> Direzione Generale - Panservice InterNetWorking
>>>> Servizi Professionali per Internet ed il Networking
>>>> Panservice e' associata AIIP - RIPE Local Registry
>>>> Phone: +39 0773 410020 - Fax: +39 0773 470219
>>>> http://www.panservice.it  mailto:i...@panservice.it
>>>> Numero verde nazionale: 800 901492
>>>> ___
>>>> pve-user mailing list
>>>> pve-user@pve.proxmox.com
>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>> 
>>> --
>>> Sent from Gmail Mobile
>> 
>> --
>> ---
>> Fabrizio Cuseo - mailto:f.cu...@panservice.it
>> Direzione Generale - Panservice InterNetWorking
>> Servizi Professionali per Internet ed il Networking
>> Panservice e' associata AIIP - RIPE Local Registry
>> Phone: +39 0773 410020 - Fax: +39 0773 470219
>> http://www.panservice.it mailto:i...@panservice.it
>> Numero verde nazionale: 800 901492
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ZFS Replication on different storage

2019-03-15 Thread Fabrizio Cuseo
Hello Yannis. 
I can't see an option to specify the remote storage (or pool) name. 
As I wrote in my email, I need to replicate: 

HostA/poolZ > HostB/poolY 

But without a way to specify the remote pool name (there is no pool field in the 
GUI), it replicates from HostA/PoolZ ---> HostB/PoolZ, where I don't have enough 
space. 
Regards, Fabrizio 

- Il 14-mar-19, alle 19:48, Yannis Milios  ha 
scritto: 

> Yes, it is possible...

> https://pve.proxmox.com/pve-docs/chapter-pvesr.html

> On Thu, 14 Mar 2019 at 11:19, Fabrizio Cuseo <f.cu...@panservice.it> wrote:

>> Hello.
>> I have a customer with a small cluster, 2 servers (different models).

>> I would like to replicate VMs from host A to host B, but from local-zfs 
>> (host A)
>> to "zfs-data-2" (host B).

>> On the GUI this is not possibile, what about some workaround ?

>> Regards, Fabrizio

>> --
>> ---
>> Fabrizio Cuseo - mailto:f.cu...@panservice.it
>> Direzione Generale - Panservice InterNetWorking
>> Servizi Professionali per Internet ed il Networking
>> Panservice e' associata AIIP - RIPE Local Registry
>> Phone: +39 0773 410020 - Fax: +39 0773 470219
>> http://www.panservice.it  mailto:i...@panservice.it
>> Numero verde nazionale: 800 901492
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

> --
> Sent from Gmail Mobile

-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] ZFS Replication on different storage

2019-03-14 Thread Fabrizio Cuseo
Hello.
I have a customer with a small cluster, 2 servers (different models).

I would like to replicate VMs from host A to host B, but from local-zfs (host 
A) to "zfs-data-2" (host B).

On the GUI this is not possible; is there any workaround? 

Regards, Fabrizio


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Bacula Proxmox backup

2018-05-24 Thread Fabrizio Cuseo
Hello.
Has anyone tested Bacula Enterprise backup for Proxmox? And does anyone know 
the pricing level? 

https://www.baculasystems.com/corporate-data-backup-software-solutions/bacula-enterprise-data-backup-tools/backup-and-recovery-for-proxmox


Regards, Fabrizio Cuseo


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Disable SWAP on hosts with SSD

2018-01-09 Thread Fabrizio Cuseo
Hello.
Can I safely disable swap on hosts with PVE installed on an SSD drive? I think 
that 8 GB of swap space on 144 GB hosts is not very useful and can accelerate 
the wear-out of the SSD drives.
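If it is safe, this is roughly what I would run on each node (untested; it assumes 
the standard installation with a swap entry in /etc/fstab, e.g. the pve/swap 
logical volume):

swapoff -a
# keep a copy of fstab and comment out the swap line so it stays off after reboot
cp /etc/fstab /etc/fstab.bak
sed -i '/\sswap\s/ s/^/#/' /etc/fstab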

Regards, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 5.1 Clone problem

2017-11-14 Thread Fabrizio Cuseo
Oops :D Sorry :) 
I don't remember the same message on 4.X. 


- Il 14-nov-17, alle 13:23, Alexandre DERUMIER aderum...@odiso.com ha 
scritto:

> yes, this is normal.
> 
> the block job mirror is cancelled at the end, to avoid that source vm switch 
> on
> the new disk.
> 
> 
> - Mail original -
> De: "Fabrizio Cuseo" 
> À: "proxmoxve" 
> Envoyé: Mardi 31 Octobre 2017 18:14:28
> Objet: [PVE-User] PVE 5.1 Clone problem
> 
> Hello.
> I have just installed a test cluster with PVE 5.1 and Ceph (bluestore).
> 
> 3 nodes, 4 HD per node, 3 OSD, single gigabit ethernet, no ceph dedicated
> network (is only a test cluster).
> 
> Cloning a VM (both Powered ON and OFF), returns "trying to acquire
> lock...drive-scsi0: Cancelling block job", but the clone seems ok.
> 
> See the screenshot:
> http://cloud.fabry.it/index.php/s/9vrEvMQE5zToCrE
> 
> Regards, Fabrizio Cuseo
> 
> 
> --
> ---
> Fabrizio Cuseo - mailto:f.cu...@panservice.it
> Direzione Generale - Panservice InterNetWorking
> Servizi Professionali per Internet ed il Networking
> Panservice e' associata AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it mailto:i...@panservice.it
> Numero verde nazionale: 800 901492
> _______
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph server osd monitoring

2017-11-01 Thread Fabrizio Cuseo
Hi all.

Could you introduce in 5.1 an email alarm for when the Ceph server has a problem 
(for example, a mon or OSD down)?

Regards, Fabrizio
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE 5.1 Clone problem

2017-10-31 Thread Fabrizio Cuseo
Hello.
I have just installed a test cluster with PVE 5.1 and Ceph (bluestore).

3 nodes, 4 HDs per node, 3 OSDs, single gigabit ethernet, no dedicated Ceph 
network (it is only a test cluster).

Cloning a VM (both powered on and off) returns "trying to acquire 
lock...drive-scsi0: Cancelling block job", but the clone seems OK.

See the screenshot:
http://cloud.fabry.it/index.php/s/9vrEvMQE5zToCrE

Regards, Fabrizio Cuseo


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


pve-user@pve.proxmox.com

2017-10-07 Thread Fabrizio Cuseo
Hello.
I have a 4-node cluster with PVE 4.4: 2 x 2Gb for the management and cluster 
network, 2 x 2Gb for the VM network, and 2 x InfiniBand (10Gbit) in an 
active/passive bond for the Ceph network (with IP over InfiniBand).
With iperf I get about 7-8 Gbit. For Ceph I have 6 x SATA WD Gold datacenter 
disks, with the journal on a disk partition (no SSD), and performance is 
not bad.

Regards, Fabrizio


- Il 7-ott-17, alle 17:12, Phil Schwarz infol...@schwarz-fr.net ha scritto:

> Hi,
> able to rebuild a brand new cluster, i wonder about using as backend
> storage a bunch of 4 Mellanox ConnectX DDR with a Flextronics or
> Voltaire IB Switch.
> 1. Does this setup be supported ? I found a drbd doc related to IB, but
> not a ceph's one.
> 2. Should i use IPoIB or RDMA instead ? I'm not afraid of performances
> drawbacks with IPoIB (every node has a max of 5 disk).It'a a test
> lab/home use cluster.
> 
> So, appart of being really fun, and really cheap (whole subsystem should
> be under 250€), i wonder of the potential use of such a huge network
> bandwidth.
> 
> Does anyone use, with success, IB for Ceph & Proxmox ?
> 
> Thanks
> Best regards
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Safe remove node from cluster...

2016-11-08 Thread Fabrizio Cuseo
If you have exactly the same naming scheme for ethernet and storage, and the 
same storage shared by both hosts, you can:

- power off the VM running on Host A
- copy the .conf file from host A to host B
- power on the VM on host B
- rename (for safety) the .conf file on host A, to be sure that you don't 
accidentally power on the VM that is now running on host B.

So, you can have a really short downtime.

This is what I did when migrating VMs from a running 3.4 cluster to a 4.0 cluster; 
a rough command sketch is below.
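For example (hypothetical VMID 100 and host names; the shared storage must be 
defined with the same name on both clusters):

# on host A (old cluster): stop the VM and copy its config to the new cluster
qm shutdown 100
scp /etc/pve/qemu-server/100.conf root@hostB:/etc/pve/qemu-server/
# on host B (new cluster): start the VM
qm start 100
# back on host A: rename the config so the VM cannot be started here by mistake
mv /etc/pve/qemu-server/100.conf /root/100.conf.migrated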

Regards, Fabrizio 


- Il 8-nov-16, alle 17:43, Gilberto Nunes gilberto.nune...@gmail.com ha 
scritto:

> Hum... I see. So the safe procedure is to move the disk and take some downtime,
> then make a new VM in PVE 4.2 using the disk previously moved, right??
> 
> 2016-11-08 14:41 GMT-02:00 Michael Rasmussen :
> 
>> On Tue, 8 Nov 2016 14:18:58 -0200
>> Gilberto Nunes  wrote:
>>
>> > I need migrate one single VM to Promox 4.2.
>> > Proxmox 4.2 already has a GlusterFS storage.
>> > I already set this storage to Proxmox 4.3 and I am able to move disk from
>> > PVE 4.3 to PVE4.2, but I need migrate the running VM from PVE 4.3 to PVE
>> > 4.2.
>> >
>> AFAIK this is not possible since pve 4.2 uses an older version of qemu.
>>
>> --
>> Hilsen/Regards
>> Michael Rasmussen
>>
>> Get my public GnuPG keys:
>> michael  rasmussen  cc
>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
>> mir  datanom  net
>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
>> mir  miras  org
>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
>> --
>> /usr/games/fortune -es says:
>> I can resist anything but temptation.
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
> 
> 
> --
> 
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to move VM from dead node?

2016-11-07 Thread Fabrizio Cuseo
You need to move the conf files from /etc/pve/nodes/dead-node/qemu-server to 
/etc/pve/qemu-server, or am I wrong? 


- Il 7-nov-16, alle 11:50, Daniel dan...@linux-nerd.de ha scritto:

> You dont need to move anythink because /etc/pve is shared and all 
> Cluster-Nodes
> knows all Configs from the whole Cluster.
> The Magic Word here is Shared-Storage.
> 
> 
>> Am 07.11.2016 um 11:37 schrieb Guy :
>> 
>> Yes with shared storage it's simple. Move the Conf file to a new node and 
>> your
>> good to go. In fact that's how the migration works anyway.
>> 
>> Conf file storage /etc/pve is synced between the nodes.
>> 
>> No need to restart anything.
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to move VM from dead node?

2016-11-07 Thread Fabrizio Cuseo
From any of the running nodes, you can simply move the *.conf files in 
/etc/pve/nodes/name-of-dead-host/qemu-server to /etc/pve/qemu-server.
Your VMs will then appear on that node; see the example below.
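A minimal sketch, using "pve11" as the dead node from your example (run it on the 
healthy node that should take over the VMs):

mv /etc/pve/nodes/pve11/qemu-server/*.conf /etc/pve/qemu-server/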

PS: if you can repair your dead node without reinstalling it, delete the files 
in /etc/pve/qemu-server and /etc/pve/nodes/name-of-dead-host/qemu-server on that 
node before connecting it back to the cluster; I have never done it, but I think 
that is the right way.

Regards, Fabrizio 



- Il 7-nov-16, alle 11:16, Szabolcs F. subc...@gmail.com ha scritto:

> Hello All,
> 
> I've got a Proxmox VE 4.3 cluster (no subscription) of 12 Dell C6220 nodes.
> 
> My question is: how do I move a VM from a dead node? Let's say my pve11
> dies (hardware issue), but the other 11 nodes are still up&running. In this
> case I can't migrate VMs off of pve11, because I get the 'no route to host'
> issue. I can only see the VM ID of the VMs that should be running on pve11.
> But I want to move the VMs to the working nodes until I can fix the
> hardware issue.
> 
> All my VMs are stored on NAS servers, so a failing Proxmox node is not an
> issue from this point of view, I can still access the VM files. All my 12
> PVE nodes access the storage with NFS.
> 
> Thanks in advance!
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ris: [pve-devel] Severe problems with latest updates (no-subscription repo)

2016-10-23 Thread Fabrizio Cuseo
Glusterfs too
Sent from my Huawei device
 Messaggio originale 
Oggetto: Re: [PVE-User] [pve-devel] Severe problems with latest updates 
(no-subscription repo)
Da: Lindsay Mathieson
A: Dietmar Maurer ,PVE User List ,PVE development discussion
CC:


On 23/10/2016 6:08 PM, Dietmar Maurer wrote:
> I am unable to reproduce that. Please can you post the VM config? What kind
> of storage do you use?

Gluster 3.8.4, gfapi


agent: 1
boot: c
bootdisk: scsi0
cores: 2
ide0: none,media=cdrom
machine: pc-i440fx-1.4
memory: 2048
name: Lindsay-Test
net0: virtio=A0:7C:D5:1C:7B:3D,bridge=vmbr0
numa: 0
ostype: win7
scsi0: gluster4:301/vm-301-disk-1.qcow2,cache=writeback,size=64G
scsihw: virtio-scsi-pci
sockets: 1
usb1: spice
usb2: spice
usb3: spice


-- 
Lindsay Mathieson

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Severe problems with latest updates (no-subscription repo)

2016-10-22 Thread Fabrizio Cuseo
I can confirm the same issue.

Message sent from a mobile device

 Messaggio originale 
Da:Lindsay Mathieson 
Inviato:Sun, 23 Oct 2016 04:15:08 +0200
A:PVE User List ,PVE development discussion 

Oggetto:[PVE-User] Severe problems with latest updates (no-subscription repo)



Online migration appears to be broken:

Oct 23 12:10:27 starting migration of VM 301 to node 'vna' (192.168.5.243)
Oct 23 12:10:27 copying disk images
Oct 23 12:10:27 starting VM 301 on remote node 'vna'


And it just hangs on the last line. Happens with all VM's


Also nearly all VM starts invoked via the gui are failing with a timeout 
error, though the VM actually starts.


-- 
Lindsay Mathieson

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph and Containers...

2016-10-14 Thread Fabrizio Cuseo
Try to set both "images" and "rootdir" as content for the LXC pool; something like the snippet below.
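For example, in /etc/pve/storage.cfg (just adapting your "LXC" entry quoted below):

rbd: LXC
	monhost 10.27.251.7; 10.27.251.8
	username admin
	pool LXC
	krbd
	content rootdir,images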


- Il 14-ott-16, alle 12:41, Marco Gaiarin g...@sv.lnf.it ha scritto:

> Mandi! Alwin Antreich
>  In chel di` si favelave...
> 
>> As far as I know, you need to separate ceph pools for this.
> 
> OK, now i have 'VM' storage on on 'VM' pool, and 'LXC' storage on 'LXC'
> pool.
> 
> Nothin changed. Still the 'LXC' storage is not available. ;-(
> 
> 
> My /etc/pve/storage.cfg contain:
> 
> rbd: VM
>monhost 10.27.251.7; 10.27.251.8
>username admin
>pool VM
>content images
> 
> rbd: LXC
>monhost 10.27.251.7; 10.27.251.8
>username admin
>pool LXC
>krbd
>content rootdir
> 
> 
> --
> dott. Marco Gaiarin   GNUPG Key ID: 240A3D66
>  Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
>  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
>  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797
> 
>   Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
>http://www.lanostrafamiglia.it/25/index.php/component/k2/item/123
>   (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] 4.2 Ceph server Snapshot Remove

2016-07-19 Thread Fabrizio Cuseo
I have two clusters, both with Ceph server.
One is PVE 3.4.11 (6 nodes).
The second (not working) one is PVE 4.2.14 (3 nodes).

My VMs have their disks on the Ceph cluster; I am able to take a snapshot while 
the VM is running, but if I try to delete it, a message window says that it is not 
supported. If I shut down the VM and remove the snapshot, all works as expected.

With pve 3.4.11, all works fine.


- Il 19-lug-16, alle 8:53, Fabian Grünbichler f.gruenbich...@proxmox.com ha 
scritto:

> On Fri, Jul 15, 2016 at 11:35:37AM +0200, Fabrizio Cuseo wrote:
>> Hello.
>> 
>> With PVE 4.2, using ceph server, remove a snapshot on a powered-on VM is not
>> supported; with PVE 3.4 i am able to remove it.
>> 
>> Is this feature removed or planned in a future release ?
>> 
>> Regards, Fabrizio
>> 
>> 
> 
> works as expected here - please provide more details about your setup..
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] 4.2 Ceph server Snapshot Remove

2016-07-15 Thread Fabrizio Cuseo
Hello.

With PVE 4.2, using Ceph server, removing a snapshot on a powered-on VM is not 
supported; with PVE 3.4 I am able to remove it.

Has this feature been removed, or is it planned for a future release? 

Regards, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] about IO Delay using openvz and zimbra

2016-06-17 Thread Fabrizio Cuseo
Right (I missed them).
You need better disks/controller.
This is one "old" node, a Dell 2950, with 2 x SATA (RAID-1 with a PERC5i, 256 MB 
cache and BBU).
Using controllers with cache and BBU, you can gain a lot in fsyncs/second.

root@proxmox-1:~# pveperf
CPU BOGOMIPS:  37241.04
REGEX/SECOND:  923558
HD SIZE:   57.09 GB (/dev/mapper/pve-root)
BUFFERED READS:50.14 MB/sec
AVERAGE SEEK TIME: 13.42 ms
FSYNCS/SECOND: 699.46
DNS EXT:   12.05 ms
DNS INT:   21.70 ms ()




- Il 17-giu-16, alle 15:10, Gerald Brandt g...@majentis.com ha scritto:

> Hi,
> 
> Your fsyncs per second are brutal, and Proxmox needs high fsyncs. I
> would start there.
> 
> Gerald
> 
> 
> On 2016-06-16 09:38 PM, Orlando Martinez Bao wrote:
>> Hello friends
>>
>> I am SysAdmin at the Agrarian University of Havana, Cuba.
>>
>>   
>>
>> I have installed Proxmox v3.4 here a cluster of seven nodes and for some
>> days I am having problems with a node in the cluster which only has a
>> Container with Zimbra 8.
>>
>> The problem is I'm having a lot of I / O Delay and that server is very slow
>> to the point that sometimes the service is down.
>>
>> The server is PowerEdge T320 Dell with Intel Xeon E5-2420 12gram with 12
>> cores and 1GB of disk 7200RPM 2xHDD are configured as RAID1.
>>
>> I have virtualizing the zimbra 8 using a template of 12.04, has 8 cores, 8G
>> RAM, 500GHD the storage of container is local storage.
>>
>> Then I put the output to see the IO when you are not running the VM. Look at
>> the BUFFERED READS that are marked are very bad. And those moments have seen
>> IO Delay up to 50%.
>>
>>   
>>
>> root@n07:~# pveperf
>>
>> CPU BOGOMIPS:  45601.20
>>
>> REGEX/SECOND:  1025079
>>
>> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
>>
>> BUFFERED READS:1.51 MB/sec
>>
>> AVERAGE SEEK TIME: 165.88 ms
>>
>> FSYNCS/SECOND: 0.40
>>
>> DNS EXT:   206.20 ms
>>
>> DNS INT:   0.91 ms (unah.edu.cu)
>>
>> root@n07:~# pveperf
>>
>> CPU BOGOMIPS:  45601.20
>>
>> REGEX/SECOND:  1048361
>>
>> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
>>
>> BUFFERED READS:0.78 MB/sec
>>
>> AVERAGE SEEK TIME: 283.84 ms
>>
>> FSYNCS/SECOND: 0.50
>>
>> DNS EXT:   206.13 ms
>>
>> DNS INT:   0.89 ms (unah.edu.cu)
>>
>> root@n07:~# pveperf (This was when I stopped the VM)
>>
>> CPU BOGOMIPS:  45601.20
>>
>> REGEX/SECOND:  1073712
>>
>> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
>>
>> BUFFERED READS:113.04 MB/sec
>>
>> AVERAGE SEEK TIME: 13.49 ms
>>
>> FSYNCS/SECOND: 9.66
>>
>> DNS EXT:   198.59 ms
>>
>> DNS INT:   0.86 ms (unah.edu.cu)
>>
>> root@n07:~# pveperf
>>
>> CPU BOGOMIPS:  45601.20
>>
>> REGEX/SECOND:  1024213
>>
>> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
>>
>> BUFFERED READS:164.30 MB/sec
>>
>> AVERAGE SEEK TIME: 13.61 ms
>>
>> FSYNCS/SECOND: 16.34
>>
>> DNS EXT:   234.75 ms
>>
>> DNS INT:   0.94 ms (unah.edu.cu)
>>
>>   
>>
>>   
>>
>> Please help me.
>>
>> Best Regards
>>
>> Orlando
>>
>>   
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] about IO Delay using openvz and zimbra

2016-06-17 Thread Fabrizio Cuseo

Yes, shared storage is a must; you can (live) migrate VMs and scale your cluster 
better. 
For your problem, try to debug (with dstat and sar, for example as below) how much 
your disk is used, on both the container and the host; disk performance depends 
heavily on the disks, controllers and cache, and cheap SATA disks with "dumb" 
controllers are not the best choice for a Zimbra server.
I have several Zimbra installations, with SATA disks too, but using 6 SATA 
disks and memory-cached RAID-10 controllers.
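Something like this (assuming the dstat and sysstat packages are installed on the host):

# overall CPU and disk pressure, refreshed every 5 seconds
dstat -cd --disk-util 5
# per-device I/O statistics, 5-second samples, 12 times
sar -d 5 12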

PS: how many users do you have in your zimbra server ? 



- Il 17-giu-16, alle 14:25, Orlando Martinez Bao om...@unah.edu.cu ha 
scritto:

> Hello, Havana is great.
> 
> My cluster configuration is as follow.
> 
> I have 7 node all use local storage the Servers descriptions is as follow:
> 1. Dell PowerEdge 1xCPU  8GRAM 3HDD 10K 600GB not in RAID.
> 2. Dell PowerEdge T320 1xCPU, 12GRAM, 2HDD 1Tera RAID-1 disk Hardware.
> (Server where I have a container with Zimbra)
> 3. Inspur 2xCPU (24 core), 64GRAM, 5HDD 1 Tera RAID-5 Hardware.
> All other server are PC I5 8GRAM and 1 HDD 1TB.
> 
> I have 2 PC with 4xHDD 1TB, 1 with FreeNAS and 1 with Openfiler Both I use
> with NFS to backup and storage data.
> 
> Al VM in the cluster use local storage, Why? Because never could create a VM
> using NFS shared storage. I did two week ago so only 1 VM is shared storage.
> 
> I want to create a new cluster with Proxmox 4.2 using Ceph for storage, but
> for now I need to solve this problem.
> 
> Thanks
> Best regards
> Orlando
> 
> 
> 
> -Mensaje original-
> De: pve-user [mailto:pve-user-boun...@pve.proxmox.com] En nombre de Fabrizio
> Cuseo
> Enviado el: viernes, 17 de junio de 2016 4:36
> Para: pve-user
> Asunto: Re: [PVE-User] about IO Delay using openvz and zimbra
> 
> Hello.
> First, please say "hello" to Havana... i miss Cuba (my only time in cuba, 2
> years ago, habana, trinidad, vinales).
> 
> Regarding your problem: can you describe better your cluster configuration ?
> Which kind of storage (both local and shared) are you using ? RAID-1 disks
> are software or hardware raid ? Do you have a controller with cache memory
> and backup battery ?
> 
> Why have you choosen to have only a container with local storage (and not a
> VM with shared storage) for zimbra installation ?
> If you have a very busy zimbra server, you will have a lot of disk I/O
> activity, and hardware requirements are growing for last zimbra releases
> (8.6). You can try to monitor your container activity with both "sar" and
> "dstat".
> 
> Regards, Fabrizio
> 
> 
> - Il 17-giu-16, alle 4:38, Orlando Martinez Bao om...@unah.edu.cu ha
> scritto:
> 
>> Hello friends
>> 
>> I am SysAdmin at the Agrarian University of Havana, Cuba.
>> 
>> 
>> 
>> I have installed Proxmox v3.4 here a cluster of seven nodes and for
>> some days I am having problems with a node in the cluster which only
>> has a Container with Zimbra 8.
>> 
>> The problem is I'm having a lot of I / O Delay and that server is very
>> slow to the point that sometimes the service is down.
>> 
>> The server is PowerEdge T320 Dell with Intel Xeon E5-2420 12gram with
>> 12 cores and 1GB of disk 7200RPM 2xHDD are configured as RAID1.
>> 
>> I have virtualizing the zimbra 8 using a template of 12.04, has 8
>> cores, 8G RAM, 500GHD the storage of container is local storage.
>> 
>> Then I put the output to see the IO when you are not running the VM.
>> Look at the BUFFERED READS that are marked are very bad. And those
>> moments have seen IO Delay up to 50%.
>> 
>> 
>> 
>> root@n07:~# pveperf
>> 
>> CPU BOGOMIPS:  45601.20
>> 
>> REGEX/SECOND:  1025079
>> 
>> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
>> 
>> BUFFERED READS:1.51 MB/sec
>> 
>> AVERAGE SEEK TIME: 165.88 ms
>> 
>> FSYNCS/SECOND: 0.40
>> 
>> DNS EXT:   206.20 ms
>> 
>> DNS INT:   0.91 ms (unah.edu.cu)
>> 
>> root@n07:~# pveperf
>> 
>> CPU BOGOMIPS:  45601.20
>> 
>> REGEX/SECOND:  1048361
>> 
>> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
>> 
>> BUFFERED READS:0.78 MB/sec
>> 
>> AVERAGE SEEK TIME: 283.84 ms
>> 
>> FSYNCS/SECOND: 0.50
>> 
>> DNS EXT:   206.13 ms
>> 
>> DNS INT:   0.89 ms (unah.edu.cu)
>> 
>> root@n07:~# pveperf (This was when I stopped the VM)
>> 
>> CPU BOGOMIPS:  45601.20
>> 
>> REGEX/SECOND:  1073712
>> 
>> HD SIZE:   9.84 

Re: [PVE-User] about IO Delay using openvz and zimbra

2016-06-17 Thread Fabrizio Cuseo
Hello.
First, please say "hello" to Havana... I miss Cuba (my only time in Cuba was 2 
years ago: Habana, Trinidad, Vinales).

Regarding your problem: can you describe your cluster configuration in more detail? 
Which kind of storage (both local and shared) are you using? Are the RAID-1 disks 
software or hardware RAID? Do you have a controller with cache memory and a 
backup battery? 

Why have you chosen to run Zimbra in a container with local storage (and not in a 
VM with shared storage)? 
If you have a very busy Zimbra server, you will have a lot of disk I/O activity, 
and the hardware requirements have grown with the latest Zimbra releases (8.6). 
You can try to monitor your container's activity with both "sar" and "dstat".

Regards, Fabrizio 


- Il 17-giu-16, alle 4:38, Orlando Martinez Bao om...@unah.edu.cu ha 
scritto:

> Hello friends
> 
> I am SysAdmin at the Agrarian University of Havana, Cuba.
> 
> 
> 
> I have installed Proxmox v3.4 here a cluster of seven nodes and for some
> days I am having problems with a node in the cluster which only has a
> Container with Zimbra 8.
> 
> The problem is I'm having a lot of I / O Delay and that server is very slow
> to the point that sometimes the service is down.
> 
> The server is PowerEdge T320 Dell with Intel Xeon E5-2420 12gram with 12
> cores and 1GB of disk 7200RPM 2xHDD are configured as RAID1.
> 
> I have virtualizing the zimbra 8 using a template of 12.04, has 8 cores, 8G
> RAM, 500GHD the storage of container is local storage.
> 
> Then I put the output to see the IO when you are not running the VM. Look at
> the BUFFERED READS that are marked are very bad. And those moments have seen
> IO Delay up to 50%.
> 
> 
> 
> root@n07:~# pveperf
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1025079
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:1.51 MB/sec
> 
> AVERAGE SEEK TIME: 165.88 ms
> 
> FSYNCS/SECOND: 0.40
> 
> DNS EXT:   206.20 ms
> 
> DNS INT:   0.91 ms (unah.edu.cu)
> 
> root@n07:~# pveperf
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1048361
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:0.78 MB/sec
> 
> AVERAGE SEEK TIME: 283.84 ms
> 
> FSYNCS/SECOND: 0.50
> 
> DNS EXT:   206.13 ms
> 
> DNS INT:   0.89 ms (unah.edu.cu)
> 
> root@n07:~# pveperf (This was when I stopped the VM)
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1073712
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:113.04 MB/sec
> 
> AVERAGE SEEK TIME: 13.49 ms
> 
> FSYNCS/SECOND: 9.66
> 
> DNS EXT:   198.59 ms
> 
> DNS INT:   0.86 ms (unah.edu.cu)
> 
> root@n07:~# pveperf
> 
> CPU BOGOMIPS:  45601.20
> 
> REGEX/SECOND:  1024213
> 
> HD SIZE:   9.84 GB (/dev/mapper/pve-root)
> 
> BUFFERED READS:164.30 MB/sec
> 
> AVERAGE SEEK TIME: 13.61 ms
> 
> FSYNCS/SECOND: 16.34
> 
> DNS EXT:   234.75 ms
> 
> DNS INT:   0.94 ms (unah.edu.cu)
> 
> 
> 
> 
> 
> Please help me.
> 
> Best Regards
> 
> Orlando
> 
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM Backup best practices recommendation...

2016-02-17 Thread Fabrizio Cuseo
I have the same problem; backup strategy and software are a priority for a lot 
of vendors. VMware has both backup (with dedup and a flexible retention policy) 
and replication; Veeam is a great suite and is a must-have for a bigger, more 
complex cluster. 
I know that Proxmox is based on community software, and backup (I mean 
incremental backup) is not simple, but I can't believe that all users are 
satisfied with vzdump. 
I personally miss: 
- a retention policy different from "the last N backups"; I know that I can write 
scripts to move and rotate some backups, but simplicity is the winner; if I could 
select in the backup options "keep the last X daily/weekly/monthly/yearly 
backups", everything would be simpler and error free 
- incremental backup; this is really the big miss. My backup windows are too 
long, so I can't back up every VM every day. And I can't use BackupPC/Bacula or 
some other guest-based backup software when I use Proxmox to host VMs for 
customers 
- backup verification (it could be useful) 

Regarding incremental backups based on dirty blocks, I don't know if this 
feature is usable: http://wiki.qemu.org/Features/IncrementalBackup 
But, even if only with some kinds of storage backend, we could make incremental 
backups. Using Ceph (which is fully supported by Proxmox), it is possible to make 
incremental (or differential) backups based on snapshots. I have tested it manually 
with "rbd export-diff" on Ceph snapshots, and it works like a charm (a sketch is 
below). So I think that, even if only for Ceph storage, it could be developed (a 
ZFS solution could be similar). 
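This is roughly what I tested by hand (hypothetical image name rbd/vm-100-disk-1; 
the diff file can later be replayed with import-diff on an image that already 
contains the starting snapshot):

rbd snap create rbd/vm-100-disk-1@snap1
# ... some time later, after the image has changed ...
rbd snap create rbd/vm-100-disk-1@snap2
# export only the blocks changed between the two snapshots
rbd export-diff --from-snap snap1 rbd/vm-100-disk-1@snap2 /backup/vm-100-disk-1.snap1-snap2.diff
# on the restore/target side, replay the diff on top of @snap1
rbd import-diff /backup/vm-100-disk-1.snap1-snap2.diff rbd/vm-100-disk-1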

Does no one else need better backup support? 

Regards, Fabrizio 

- Il 17-feb-16, alle 14:19, Gilberto Nunes  ha 
scritto: 

> you know... more than one backup never is too much

> 2016-02-17 10:48 GMT-02:00 Alwin Antreich < sysadmin-...@cognitec.com > :

>> As an idea, if you have a separate disk for your data and you already create
>> backups inside your VM, why not skip the
>> backup of this VM disk? In case of an emergency you may need to restore your
>> backups inside the VM anyway. So it might
>> be faster in the event of a disaster to recreate the disk and restore your
>> backups inside the VM. As you said an email
>> earlier, the backup of the disk takes a long time, so might also the restore.

>> Regards,
>> Alwin

>> On 02/17/2016 01:31 PM, Gilberto Nunes wrote:
>> > Yes Alwin...

>> > I consider to make backup from account as well, in separate method...
>>> I am concern about the VM backup when send this e-mail, 'cause inside the 
>>> VM I
>> > already have accounts backup...

>> > Thanks a lot for your advice

>>> 2016-02-17 10:25 GMT-02:00 Alwin Antreich < sysadmin-...@cognitec.com 
>>> > > sysadmin-...@cognitec.com >>:

>> > Hi all,

>> > @Gilberto
>>> please keep in mind, to get consistent backups you need to backup your 
>>> zimbra
>> > database by different means then by only
>> > taking a snapshot.

>> > Regards,
>> > Alwin

>> > On 02/17/2016 01:05 PM, Pongrácz István wrote:

>> > > Consider to use zfs for proxmox and use its snapshot/send/receive 
>> > > methods.

>>> > Taking a snapshot usually less than a second and you can send the 
>>> > incremental
>> > > changes over the network.

>>> > You should check your existing systems and decide, is it worth to change 
>>> > your
>> > > system or not.

>> > > Bye,

>> > > István



>> > > eredeti üzenet-
>>> > Feladó: "Gilberto Nunes" < gilberto.nune...@gmail.com > > > gilberto.nune...@gmail.com >>
>>> > Címzett: "PVE User List" < pve-user@pve.proxmox.com > > > pve-user@pve.proxmox.com >>
>> > > Dátum: Wed, 17 Feb 2016 09:49:08 -0200
>> > > --


>> > >> Thank you guys... I will study all suggestions and choose the best 
>> > >> one...
>> > >> Thanks a lot





>> > > ___
>> > > pve-user mailing list
>> > > pve-user@pve.proxmox.com 
>> > > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

>> > ___
>> > pve-user mailing list
>> > pve-user@pve.proxmox.com 
>> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




>> > --

>> > Gilberto Ferreira
>> > +55 (47) 9676-7530
>> > Skype: gilberto.nunes36



>> > _

[PVE-User] Proxmox and ceph - incremental backup proposal

2016-01-22 Thread Fabrizio Cuseo
Hi all.

Can you evaluate this proposal for incremental backup in the case of Ceph with 
Proxmox? 

Suppose I would like to have this backup plan:

1st day of month:  full backup
every day:  incremental
15th day of month:  differential (from the full backup)
every day after that: incremental from the differential

We could:

1st day of month: delete all previous snapshots
create a snapshot, execute the full backup
every day:   create a snapshot
 use rbd export-diff from the previous snapshot to the last snapshot
 execute the backup of this exported diff (that is the incremental)
 delete the previous snapshot (not the 1st)
15th day:    create a snapshot
 use rbd export-diff from the 1st snapshot to the last snapshot
 execute the backup of this exported diff (that is the DIFFERENTIAL from the 1st)
 delete the previous snapshot (not the 1st)


Is this possible? A rough command sketch of one cycle is below. 
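For a single image, one cycle could look like this (hypothetical pool/image name, 
untested, just to illustrate the idea):

# 1st day of the month: clean old snapshots, take a base snapshot, full export
rbd snap purge rbd/vm-100-disk-1
rbd snap create rbd/vm-100-disk-1@base
rbd export rbd/vm-100-disk-1@base /backup/vm-100-disk-1.full

# every day: new snapshot, export only the blocks changed since the previous
# daily snapshot, then drop that previous daily snapshot (but keep @base)
rbd snap create rbd/vm-100-disk-1@d0122
rbd export-diff --from-snap d0121 rbd/vm-100-disk-1@d0122 /backup/vm-100-disk-1.incr-0122
rbd snap rm rbd/vm-100-disk-1@d0121

# 15th day: same, but diff against @base to obtain the differential
rbd export-diff --from-snap base rbd/vm-100-disk-1@d0115 /backup/vm-100-disk-1.diff-0115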

Regards, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Multi Proxmox Cluster with common Ceph cluster

2015-12-28 Thread Fabrizio Cuseo
Hello. 
Which kind of async mirror do you use between clusters ? 

- Il 26-dic-15, alle 18:20, Franck Parisy  ha scritto: 

> 2 DCs, 1 Proxmox and ceph cluster by DC and async mirror between them... What
> else !

> I love Proxmox and Ceph !

> Merry Christmas everybody

> De: "Alexandre DERUMIER" 
> À: "proxmoxve" 
> Envoyé: Samedi 26 Décembre 2015 16:24:55
> Objet: Re: [PVE-User] Multi Proxmox Cluster with common Ceph cluster

> >>will wait jewel, it leaves me time to negociate a third DC ;-)

> Note, that with jewell, you don't need a third DC.

> you can build 2 ceph cluster, 1 on each dc, then do async mirror between them.
> (each ceph cluster is the backup of the other one)

> >>Will it be possible to manage a multi dc's ceph cluster with Proxmox ?

> Yes it's possible, if you have multicast (laybe2) working between them.

> Simply do 1 ceph cluster/ 1 proxmox cluster.
> But you need 3 DC to be sure to always have quorum (for proxmox and ceph
> monitor)

> - Mail original -
> De: "Franck Parisy" 
> À: "proxmoxve" 
> Envoyé: Vendredi 25 Décembre 2015 19:11:43
> Objet: Re: [PVE-User] Multi Proxmox Cluster with common Ceph cluster

> Thank you very much Alexandre.

> I will wait jewel, it leaves me time to negociate a third DC ;-)

> Will it be possible to manage a multi dc's ceph cluster with Proxmox ?

> De: "Alexandre DERUMIER" 
> À: "proxmoxve" 
> Envoyé: Vendredi 25 Décembre 2015 15:39:13
> Objet: Re: [PVE-User] Multi Proxmox Cluster with common Ceph cluster

> Hi,

> you can have 1 ceph cluster, with 2 pools, 1 pool for each proxmox cluster.

> But with only 2 DC, It's not possible to always have quorum if you loose 1 DC.

> (you need quorum for ceph monitors).

> So, for real multi datacenter ceph cluster, you need 3dc, with 1 mon on each 
> DC
> .

> Also note that ceph replication is syncronous, so you need to have good
> latencies between your DC.

> Next ceph release (jewel), will have rbd mirroring (asyncronous mirroring of
> pool or rbd volume), to remote ceph
> cluster, for disaster recovery.

> - Mail original -
> De: "Franck Parisy" 
> À: "proxmoxve" 
> Envoyé: Vendredi 25 Décembre 2015 12:26:32
> Objet: [PVE-User] Multi Proxmox Cluster with common Ceph cluster

> Hello,

> Is it posssible to create a multi Proxmox cluster with 1 Ceph cluster ?

> Explanation :

> 2 Datacenter

> D1 : 192.168.0.0/24
> D2 : 192.168.1.0/24

> 1 D1 Proxmox cluster
> 1 D2 Proxmox cluster

> 1 common Ceph cluster ( with a little crushmap to replicate D1 to D2 and D2 to
> D1) in the 2 Proxmox cluster

> Thanks

> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> _______
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox 4.0 - problem with noVNC Console

2015-10-21 Thread Fabrizio Cuseo
Hello.
With PVE 4.0, using the noVNC console on a QEMU VM, the power button can't be clicked.

Using the console on an LXC container, it sometimes remains in "Loading" status; from 
that moment on, I can't access the console until I delete and recreate the container.

Does someone have the same problem?

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE 4.0 - problem restoring vm

2015-10-21 Thread Fabrizio Cuseo
Hello.
Sometimes, when restoring a VM, at the end I get this error and no VM is restored.



progress 96% (read 32985382912 bytes, duration 516 sec)
progress 97% (read 33328988160 bytes, duration 516 sec)
progress 98% (read 33672593408 bytes, duration 516 sec)
progress 99% (read 34016198656 bytes, duration 516 sec)
progress 100% (read 34359738368 bytes, duration 516 sec)
total bytes read 34359738368, sparse bytes 19649179648 (57.2%)
space reduction due to 4K zero blocks 0.943%
libust[13667/13667]: Warning: HOME environment variable not set. Disabling 
LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
Code should not be reached 'Unknown argument' at 
../src/udev/udevadm-settle.c:87, function adm_settle(). Aborting.
rbd: run_cmd(udevadm): terminated by signal
rbd: sysfs write failed
can't unmount rbd volume vm-9008-disk-1: rbd: sysfs write failed
TASK ERROR: volume deativation failed: CephCluster2Copie:vm-9008-disk-1 at 
/usr/share/perl5/PVE/Storage.pm line 917.

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ceph/rbd to qcow2 - sparse file

2015-10-20 Thread Fabrizio Cuseo
So:

- rbd --> qcow2 (with Proxmox 3.4 and a POWERED-OFF VM) loses the sparse mode
- qemu-img convert qcow2 --> qcow2 gives me back a sparse image
- move disk (from the GUI) qcow2 --> rbd with Proxmox 4.0 (and the VM powered off) 
loses the sparse mode
- qemu-img convert qcow2 --> rbd (from the CLI) keeps the sparse mode (but I need 
to try the conversion again to keep format 2 and the layering feature; see the 
sketch below)
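
The CLI conversion I'm using is roughly this (hypothetical paths and names; if the 
cluster's default RBD image format is not 2, the target image can be pre-created 
first with "rbd create --image-format 2 --size <MB> rbd/vm-100-disk-1"):

qemu-img convert -p -O raw /mnt/moosefs/images/100/vm-100-disk-1.qcow2 \
    rbd:rbd/vm-100-disk-1:conf=/etc/ceph/ceph.conf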



- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "proxmoxve" 
Inviato: Martedì, 20 ottobre 2015 12:49:32
Oggetto: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

looking at rbd block driver, it seem that bdrv_co_write_zeroes is not 
implemented.

Does  rbd -> qcow2 (with proxmox 4.0), give you sparse qcow2 ?

is it only qcow2->rbd which is non sparse ?


- Mail original -
De: "aderumier" 
À: "proxmoxve" 
Envoyé: Mardi 20 Octobre 2015 12:18:43
Objet: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

and also this one: 

"mirror: Do zero write on target if sectors not allocated" 
http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/mirror.c;h=cea9521fd5fcbc300c054fc8936bdac4f47e;hp=4be06a508233e69040c74fce00d3baac107dbfd8;hb=dcfb3beb5130694b76b57de109619fcbf9c7e5b5;hpb=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b
 


- Mail original - 
De: "aderumier"  
À: "Fabrizio Cuseo" , "proxmoxve" 
 
Envoyé: Mardi 20 Octobre 2015 12:13:30 
Objet: Re: [PVE-User] ceph/rbd to qcow2 - sparse file 

mmm. this is strange because with last qemu version include in proxmox 4.0, 

the drive-mirror feature (move disk in proxmox), should skip zeros blocks. 

I don't have tested it. 

http://git.qemu.org/?p=qemu.git;a=commit;h=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b
 

"+# @unmap: #optional Whether to try to unmap target sectors where source has 
+# only zero. If true, and target unallocated sectors will read as zero, 
+# target image sectors will be unmapped; otherwise, zeroes will be 
+# written. Both will result in identical contents. 
+# Default is true. (Since 2.4) 
#" 




As workaround : 

- do the move disk with the vm shutdown will do a sparse file 

- if you use virtio-scsi + discard, you can use fstrim command (linux guest) in 
your guest after the migration. 




- Mail original - 
De: "Fabrizio Cuseo"  
À: "proxmoxve"  
Envoyé: Lundi 19 Octobre 2015 22:30:02 
Objet: [PVE-User] ceph/rbd to qcow2 - sparse file 

Hello. 
I have a test cluster (3 hosts) with 20/30 test vm's, and ceph storage. 
Last week i planned to upgrade from 3.4 to 4.0; so i moved all the vm disks on 
a moosefs storage (qcow2). 

Moving from rbd to qcow2 caused all the disks to loose the sparse mode. 

I have reinstalled the whole cluster from scratch and now I am moving back the 
disks from qcow2 to rbd, but now i need to convert (with the vm off) every 
single disk from qcow2 to qcow2, so i can have the disk image sparsed. 

There is the possibility to move the disk from proxmox gui without loosing the 
sparse mode ? At least with pve 4.0 

Regards, Fabrizio 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
_______
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] ceph/rbd to qcow2 - sparse file

2015-10-19 Thread Fabrizio Cuseo
Hello.
I have a test cluster (3 hosts) with 20-30 test VMs, and Ceph storage.
Last week I planned to upgrade from 3.4 to 4.0, so I moved all the VM disks to 
a MooseFS storage (qcow2). 

Moving from rbd to qcow2 caused all the disks to lose the sparse mode.

I have reinstalled the whole cluster from scratch and now I am moving the 
disks back from qcow2 to rbd, but now I need to convert (with the VM off) every 
single disk from qcow2 to qcow2, so that I can get a sparse disk image.

Is there a way to move the disk from the Proxmox GUI without losing the 
sparse mode? At least with PVE 4.0.

Regards, Fabrizio
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Qemu 2.4 Incremental Backup

2015-10-11 Thread Fabrizio Cuseo

Hello.

The old Live Backup code was not approved by the QEMU developers; instead, the 
Incremental Backup feature (http://wiki.qemu.org/Features/IncrementalBackup) seems 
to have been merged into the official QEMU 2.4 code.

Is it also in the QEMU 2.4 included in Proxmox 4? Can we hope to see the 
incremental backup feature in Proxmox VE soon? 

Regards, Fabrizio 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph Performance

2015-09-28 Thread Fabrizio Cuseo
Hello Tobias. 
Check whether your SSD is suitable as a Ceph journal device: 

http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
 

If you can, add 2 OSD Sata disks per host. 
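The quick check from that article is something like this (destructive for the data 
on the target device; /dev/sdX stands for the journal SSD):

dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync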

Regards, Fabrizio 



- Il 28-set-15, alle 18:10, Tobias Kropf - inett GmbH  ha 
scritto: 





Hi @ all 

I have a question about ceph: we plan to build our own ceph cluster in a 
datacenter. Can you tell me the performance statistics from a running ceph 
cluster with the same setup? 

We want to buy the following setup: 

3x Chassis with: 

CPUs: 2 x Intel E5-2620v3 
RAM: 64GB 
NIC: 2x10GBit/s CEPH, 4x1GBit/s 
HDD: 4x2TB SATA, 1x80GB SSD - OS, 1x240GB SSD - Ceph Cache 

-- 
Tobias Kropf 
Technik 

inett GmbH » Ihr IT Systemhaus in Saarbrücken 
Eschberger Weg 1 
66121 Saarbrücken 
Geschäftsführer: Marco Gabriel 
Handelsregister Saarbrücken 
HRB 16588 

Telefon: 0681 / 41 09 93 – 0 
Telefax: 0681 / 41 09 93 – 99 
E-Mail: i...@inett.de 
Web: www.inett.de 

Zarafa Gold Partner - Proxmox Authorized Reseller - Proxmox Training Center - 
SEP sesam Certified Partner - Endian Certified Partner - Kaspersky Silver 
Partner - Mitglied im iTeam Systemhausverbund für den Mittelstand 



___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 




-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph repository key change

2015-09-24 Thread Fabrizio Cuseo
Hello.
Due to the change of the Ceph repository key, I needed to update the key manually.

Install curl if not installed:
apt-get install curl 


apt-key del 17ED316D
curl https://git.ceph.com/release.asc | apt-key add -
apt-get update

And now it is working fine.

How can I avoid doing this on each Proxmox node? 
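For now, a way to push it to all nodes from one of them could be something like 
this (hypothetical node names, assuming root SSH access between the nodes):

for h in pve1 pve2 pve3; do
    ssh root@$h 'apt-get install -y curl && apt-key del 17ED316D && \
        curl -s https://git.ceph.com/release.asc | apt-key add - && apt-get update'
done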

Regards, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] GlusterFS slow resync

2015-07-28 Thread Fabrizio Cuseo
Hello.
I am testing a 2-node cluster with GlusterFS.

GlusterFS is configured on 4 x 1 TB disks in RAID-5, so there are 2 replicated 
bricks (1 per server).

I have a VM with a 2.5 TB qcow2 virtual disk, and the file is now 950 GB.

After rebooting one of the two nodes, GlusterFS needs more than 12 hours to 
resync (the nodes are connected with a point-to-point 10 Gbit InfiniBand card).

Does someone have similar problems? Is GlusterFS not really usable for big VMs? 

Regards, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Infiniband Mellanox cards and DDR

2015-07-07 Thread Fabrizio Cuseo
I have found that the 400Ex is an SDR card, while the 400Ex-D is a DDR card. I 
don't know if it is a firmware problem.


http://sup.xenya.si/sup/info/voltaire/HCAwebfinal.pdf

HCA 400Ex/Ex-F
• Dual port 4X (10 Gbps) InfiniBand PCI-Express low 
profile host channel adapter
HCA 400Ex-D
• Dual port 4X DDR (20 Gbps) InfiniBand PCI-Express 
low profile host channel adapter
HCA 400
• Dual port 4X (10 Gbps) InfiniBand PCI/PCI-X low 
profile host channel adapter




- Messaggio originale -
Da: "Michael Rasmussen" 
A: "Fabrizio Cuseo" , "pve-user" 

Inviato: Martedì, 7 luglio 2015 12:01:03
Oggetto: Re: [PVE-User] Infiniband Mellanox cards and DDR

AFAIK this card is a SDR card so DDR is not possible. 


On July 7, 2015 11:46:41 AM CEST, Fabrizio Cuseo  wrote: 

Hallo. 

I am testing a Mellanox card (IBM Voltaire HCA 400Ex-D) with a cisco 4x DDR 
infiniband switch. 

The problem I have is that the cards (that seems to be 4X DDR) have a 10Gbit 
(SDR) link. 

--- 
03:00.0 InfiniBand: Mellanox Technologies MT25208 [InfiniHost III Ex] (rev 20) 
--- 

root@pve1:/etc/apt/sources.list.d# ibstat 
CA 'mthca0' 
CA type: MT25208 
Number of ports: 2 
Firmware version: 5.3.0 
Hardware version: 20 
Node GUID: 0x0008f104039814d0 
System image GUID: 0x0008f104039814d3 
Port 1: 
State: Active 
Physical state: LinkUp 
Rate: 10 
Base lid: 6 
LMC: 0 
SM lid: 2 
Capability mask: 0x02510a68 
Port GUID: 0x0008f104039814d1 
Link layer: InfiniBand 

 

root@pve1:/etc/apt/sources.list.d# ibportstate 6 0 
CA PortInfo: 
# Port info: Lid 6 port 0 
LinkState:...Active 
PhysLinkState:...LinkUp 
Lid:.6 
SMLid:...2 
LMC:.0 
LinkWidthSupported:..1X or 4X 
LinkWidthEnabled:1X or 4X 
LinkWidthActive:.4X 
LinkSpeedSupported:..2.5 Gbps 
LinkSpeedEnabled:2.5 Gbps 
LinkSpeedActive:.2.5 Gbps 
Mkey: 
MkeyLeasePeriod:.15 
ProtectBits:.0 

- 


If I try to set the speed at DDR, I have this error: 

root@pve1:/etc/apt/sources.list.d# ibportstate 6 0 speed 2 
Initial CA PortInfo: 
# Port info: Lid 6 port 0 
LinkState:...Active 
PhysLinkState:...LinkUp 
Lid:.6 
SMLid:...2 
LMC:.0 
LinkWidthSupported:..1X or 4X 
LinkWidthEnabled:1X or 4X 
LinkWidthActive:.4X 
LinkSpeedSupported:..2.5 Gbps 
LinkSpeedEnabled:2.5 Gbps 
LinkSpeedActive:.2.5 Gbps 
Mkey: 
MkeyLeasePeriod:.15 
ProtectBits:.0 
ibportstate: iberror: failed: smp set portinfo failed 


 

The switch reported this card in infiniband topology: 

MT25218 InfiniHostEx Mellanox Technologies 


Someone have tested the cards ? 

Fabrizio Cuseo 








-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity. 
 This mail was virus scanned and spam checked before delivery. This mail is 
also DKIM signed. See header dkim-signature. 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Infiniband Mellanox cards and DDR

2015-07-07 Thread Fabrizio Cuseo
Hallo.

I am testing a Mellanox card (IBM Voltaire HCA 400Ex-D) with a Cisco 4X DDR 
InfiniBand switch.

The problem I have is that the cards (which seem to be 4X DDR) come up with a 
10Gbit (SDR) link.

---
03:00.0 InfiniBand: Mellanox Technologies MT25208 [InfiniHost III Ex] (rev 20)
---

root@pve1:/etc/apt/sources.list.d# ibstat
CA 'mthca0'
CA type: MT25208
Number of ports: 2
Firmware version: 5.3.0
Hardware version: 20
Node GUID: 0x0008f104039814d0
System image GUID: 0x0008f104039814d3
Port 1:
State: Active
Physical state: LinkUp
Rate: 10
Base lid: 6
LMC: 0
SM lid: 2
Capability mask: 0x02510a68
Port GUID: 0x0008f104039814d1
Link layer: InfiniBand



root@pve1:/etc/apt/sources.list.d# ibportstate 6 0
CA PortInfo:
# Port info: Lid 6 port 0
LinkState:...Active
PhysLinkState:...LinkUp
Lid:.6
SMLid:...2
LMC:.0
LinkWidthSupported:..1X or 4X
LinkWidthEnabled:1X or 4X
LinkWidthActive:.4X
LinkSpeedSupported:..2.5 Gbps
LinkSpeedEnabled:2.5 Gbps
LinkSpeedActive:.2.5 Gbps
Mkey:
MkeyLeasePeriod:.15
ProtectBits:.0

-


If I try to set the speed at DDR, I have this error:

root@pve1:/etc/apt/sources.list.d# ibportstate 6 0 speed 2
Initial CA PortInfo:
# Port info: Lid 6 port 0
LinkState:...Active
PhysLinkState:...LinkUp
Lid:.6
SMLid:...2
LMC:.0
LinkWidthSupported:..1X or 4X
LinkWidthEnabled:1X or 4X
LinkWidthActive:.4X
LinkSpeedSupported:..2.5 Gbps
LinkSpeedEnabled:2.5 Gbps
LinkSpeedActive:.2.5 Gbps
Mkey:
MkeyLeasePeriod:.15
ProtectBits:.0
ibportstate: iberror: failed: smp set portinfo failed




The switch reported this card in infiniband topology:

MT25218 InfiniHostEx Mellanox Technologies


Has someone tested these cards? 

Fabrizio Cuseo








-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE 4.0 and Ceph - install problem

2015-07-07 Thread Fabrizio Cuseo
Hello there.
I am trying a 3-host cluster with PVE 4.0beta with Ceph server, but when I try 
to install Ceph (pveceph install -version hammer, pveceph install -version 
firefly, or pveceph install), I get this error:

The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ceph : Depends: libboost-program-options1.49.0 (>= 1.49.0-1) but it is not 
installable
Depends: libboost-system1.49.0 (>= 1.49.0-1) but it is not installable
Depends: libboost-thread1.49.0 (>= 1.49.0-1) but it is not installable
 ceph-common : Depends: librbd1 (= 0.94.2-1~bpo70+1) but 0.80.7-2 is to be 
installed
   Depends: libboost-thread1.49.0 (>= 1.49.0-1) but it is not 
installable
   Depends: libudev0 (>= 146) but it is not installable
   Breaks: librbd1 (< 0.92-1238) but 0.80.7-2 is to be installed
E: Unable to correct problems, you have held broken packages.
command 'apt-get -q --assume-yes --no-install-recommends -o 
'Dpkg::Options::=--force-confnew' install -- ceph ceph-common gdisk' failed: 
exit code 100


Is Ceph server already supported on PVE 4.0beta? If not, is it planned in the 
short term? 

Regards, Fabrizio Cuseo




-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph cache tiering

2015-05-16 Thread Fabrizio Cuseo
Hallo there.
I am testing Ceph performance with an SSD cache journal.

My setup is:

- 3 hosts, each with 2 x quad-core Opterons, 64GB of RAM, 1 x Gigabit Ethernet 
(for Proxmox), 1 x 20Gbit InfiniBand (for Ceph), 1 x PERC 6/i with 7 x WD 1TB 
128MB-cache disks (enterprise edition), 1 for Proxmox and 6 for Ceph OSDs, and 
1 x Samsung SSD EVO 850 (240GB); they are configured as 8 different virtual 
disks on the RAID controller.

I tested the first setup: each Ceph OSD with its journal on the single SSD.

My first performance tests (with only 1 VM) give about 150 MB/s write.
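
For anyone reproducing this kind of figure, a raw cluster-level number can be taken
with rados bench; a minimal sketch (pool name and runtime are only examples, and note
that the 150 MB/s above was measured from inside a VM, which is a different I/O path):

# 60-second sequential write benchmark against the pool "rbd" (16 concurrent ops by default)
rados bench -p rbd 60 write

# crude sequential write test from inside a test VM, bypassing the guest page cache
dd if=/dev/zero of=testfile bs=1M count=4096 oflag=direct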


Now, I would like to test a different ceph setup:

6 x OSDs (1TB SATA disks each) with the journal on the same disk

1 x SSD (one per node), configured as an OSD under a different CRUSH rule, using 
the cache tiering configuration.

My two questions are:
- has someone tested this setup? Does it perform better than the first one? 
- is some option planned in the Proxmox GUI to create OSDs in a different pool, 
or is the only option (also in the future) to manually modify the CRUSH map and 
manually create the SSD OSDs? It would be nice to have it all managed by the 
Proxmox GUI, both for creation and monitoring.

Regards, Fabrizio


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Feature request - backup list

2015-03-13 Thread Fabrizio Cuseo
Hello.

Four backup features that would be appreciated:

1) In the web interface, if I open a VM and its Backup tab, it would be useful 
to see ONLY that VM's backups and not all VMs' backups

2) If I delete a VM and I have some backups of it, a prompt asking me whether 
to delete all the related backups

3) A maximum backup number per single VM, not only per backup storage

4) A list of all "orphaned" backups (backup files of deleted VMs)

Thanks, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] IP / MacAddress restriction for QEMU

2015-03-09 Thread Fabrizio Cuseo
Hello there.

I would like to know if there is already some module to create an IP/MAC address 
restriction.

For "low cost" VPS, creating a dedicated vlan, using a /30 network, configuring 
a network interface on the firewall, is too expensive.

So i would like to use the whole /24 network, and give one address to each vps; 
i also need to forbid any ip change.

The fastest way is to create ebtables rules, but it would be simpler if, in the 
VM details, I could tick a "restrict IP address" checkbox and write the IP 
address. It would then generate, on all the nodes, two ebtables rules:

ebtables -A FORWARD -i ${network_device} -s ! ${mac_address} -j DROP
ebtables -A FORWARD -s ${mac_address} -p IPv4 --ip-src ! ${ip_address} -j DROP

It would work (for now) only for IPv4 addresses, but that can be enough for now.
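
For example, a minimal sketch of the two generated rules with hypothetical values
filled in (the tap interface name, MAC and IP below are made up; the negation syntax
may differ slightly between ebtables versions):

NETDEV=tap102i0          # first NIC of VM 102 (example name)
MAC=52:54:00:aa:bb:cc    # MAC address assigned to the VM (example)
IP=203.0.113.102         # only IP address allowed for the VM (example)

# drop frames coming from the VM interface with a spoofed MAC
ebtables -A FORWARD -i $NETDEV -s ! $MAC -j DROP
# drop IPv4 packets from that MAC with a spoofed source IP
ebtables -A FORWARD -s $MAC -p IPv4 --ip-src ! $IP -j DROP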

Regards, Fabrizio 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Backup retention policy. Was: splitting vzdump files

2015-02-21 Thread Fabrizio Cuseo
I would prefer, integrated with the Proxmox backup automation, a retention policy 
for backups; now you can only keep the last n backups. It would be better to keep 
n backups (only for the last week), x weekly backups, y monthly backups, z 
biannual backups.
Also, a VM-number-name based filename (ex: 
vzdump-qemu-102-webserver01-2015_[...]) would be useful.

Regards, Fabrizio


- Il 21-feb-15, alle 13:59,  sebast...@debianfan.de ha scritto:

> Am 21.02.2015 um 12:45 schrieb Dietmar Maurer:
>>
>>> Am 21.02.2015 um 10:27 schrieb Dietmar Maurer:
>>>>> On February 21, 2015 at 8:36 AM "sebast...@debianfan.de"
>>>>>  wrote:
>>>>>
>>>>>
>>>>> Am 21.02.2015 um 08:05 schrieb Dietmar Maurer:
>>>>>>> I want to split the files into packets - every file should be 200
>>>>>>> Megabytes.
>>>>>>>
>>>>>>> I used  7z for splitting the files.
>>>>>>>
>>>>>>> Is there any possibility for splitting the dump-files?
>>>>>> # man split
>>>>>>
>>>>> I know split - iwas thinking more of an existing built-in functionality.
>>>>>
>>>>> Perhaps this would be a functionality for a future release?
>>>> what for?
>>>>
>>> There are situations in which a file size of 10 gigabytes is
>>> inconvenient - because it would be better to split the file into
>>> multiple files - in one step.
>>> For example - burning to DVD.
>> And what is wrong with split?
>>
> 
> no direct integration into the vzdump ;-)
> 
> i prefer this normally
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph cache pool

2015-02-19 Thread Fabrizio Cuseo
Hello there.

Could you plan to add to the Ceph web console some option to have a cache pool, 
or something else to manage the CRUSH map and pools?
It would then be simpler to have a system with different kinds of disks (using 
primary affinity on the OSDs, for example).
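
Until something like that exists in the GUI, the manual route is the ceph CLI; a
minimal sketch, assuming a Firefly-or-later cluster where primary affinity is allowed
(on some releases it has to be explicitly enabled on the monitors first; OSD id and
weight are examples):

# make osd.7 less likely to be chosen as primary (0 = never, 1 = default)
ceph osd primary-affinity osd.7 0.5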

Regards, Fabrizio


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.4 released!

2015-02-19 Thread Fabrizio Cuseo
You can also tune Ceph to get the performance you need; you can have a big pool 
with large, cheap and slow disks, and a pool with SSD disks; you can use 
InfiniBand to get more bandwidth between the servers, and your system will not 
suffer on I/O performance.

But if you prefer, you can buy two EMC2 SAS+SSD storage arrays, a replication 
software license (to have redundancy), Fibre Channel HBAs and switches; you will 
find a "small" difference in the final budget :) 


Regards, Fabrizio 

 

- Il 19-feb-15, alle 19:49, Philippe Schwarz p...@schwarz-fr.net ha scritto:

> 
> Le 19/02/2015 19:20, Alexandre DERUMIER a écrit :
>>>> I don't think it's a good idea to build Ceph on top of ZFS.
>>>> Ceph expects to give it full hard disks for storage and using
>>>> ZFS with multiple disks will degrade performance.
>> 
>> Yes, don't use zfs for ceph. It's not officially supported, and as
>> ceph use specific features of each filesystem (xfs, ext4 and
>> btrfs), I can't recommand to use zfs in production.
>> 
>> xfs is the current recommended filesystem for ceph osds.
>> 
>> 
> OK, ok, i surrender.
> Read too fast the docs i found. XFS instead seems to be better.
> 
> But, even if i understand the mandatory drawbacks of a clustered FS
> (ceph), i find the performance penalty to be huge :
> 
> According to http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/
> with the "Insane servers" (close to mine) and a dedicated 1G network
> (mine will be 10gbe), the write bandwidth is between 110 and 150 MB/s.
> It's not a shame, but with 64GB RAM, Raid of SSD and 32 Cores, you
> could expect twice the bandwidth at least.
> 
> OK, for the redundant FS, but what a huge price i'd pay for..
> 
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Infiniband Voltaire 400ex-d

2015-02-12 Thread Fabrizio Cuseo
Yes, I have seen HP or Cisco 24-port switches; I don't know if I can use two 
switches for the 2 ports of the card (only for high availability). I am a total 
newbie with InfiniBand.


- Messaggio originale -
Da: "Michael Rasmussen" 
A: pve-user@pve.proxmox.com
Inviato: Giovedì, 12 febbraio 2015 19:54:43
Oggetto: Re: [PVE-User] Infiniband Voltaire 400ex-d

On Thu, 12 Feb 2015 19:51:45 +0100
Michael Rasmussen  wrote:

> On Thu, 12 Feb 2015 19:37:56 +0100 (CET)
> Fabrizio Cuseo  wrote:
> 
> > Hello.
> > I am planning to setup a new cluster using ceph (8/10 disks each node, 3 to 
> > 6 nodes).
> > Someone have used Infiniband Voltaire 400ex-d cards with proxmox and ceph ? 
> > 
Remember that you will need a switch for the number of cards you are
planning to use. A used one should be available on ebay for ca. 200$

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
"Here's something to think about:  How come you never see a headline
like `Psychic Wins Lottery'?"
-- Jay Leno

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Infiniband Voltaire 400ex-d

2015-02-12 Thread Fabrizio Cuseo
Hello.
I am planning to set up a new cluster using Ceph (8-10 disks per node, 3 to 6 
nodes).
Has someone used InfiniBand Voltaire 400Ex-D cards with Proxmox and Ceph? 


Regards, Fabrizio 
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] High ceph OSD latency

2015-01-16 Thread Fabrizio Cuseo
Following up on my problem: is it correct that Proxmox uses "barrier=1" on the 
Ceph OSDs and "barrier=0" on /var/lib/vz? 

With barriers enabled, the fsyncs/second values are really different:

root@proxmox:~# pveperf /var/lib/vz
CPU BOGOMIPS:  4.24
REGEX/SECOND:  932650
HD SIZE:   325.08 GB (/dev/mapper/pve-data)
BUFFERED READS:97.43 MB/sec
AVERAGE SEEK TIME: 11.57 ms
FSYNCS/SECOND: 20.88
DNS EXT:   69.87 ms
DNS INT:   63.98 ms (test.panservice)


root@proxmox:~# mount -o remount -o barrier=0 /var/lib/vz

root@proxmox:~# pveperf /var/lib/vz
CPU BOGOMIPS:  4.24
REGEX/SECOND:  980519
HD SIZE:   325.08 GB (/dev/mapper/pve-data)
BUFFERED READS:82.29 MB/sec
AVERAGE SEEK TIME: 12.10 ms
FSYNCS/SECOND: 561.09
DNS EXT:   64.09 ms
DNS INT:   77.50 ms (test.panservice)
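
For reference, the mount options actually in effect (and therefore whether barriers
are on) can be checked like this; the paths match the pveperf runs above:

# options the /var/lib/vz filesystem is currently mounted with
grep /var/lib/vz /proc/mounts

# options the Ceph OSD filesystems are currently mounted with
grep /var/lib/ceph/osd /proc/mounts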

Regards, Fabrizio 


- Messaggio originale -
Da: "Lindsay Mathieson" 
A: pve-user@pve.proxmox.com, "Fabrizio Cuseo" 
Inviato: Giovedì, 15 gennaio 2015 13:17:07
Oggetto: Re: [PVE-User] High ceph OSD latency

On Thu, 15 Jan 2015 11:25:44 AM Fabrizio Cuseo wrote:
> What is strange is that on OSD tree I have high latency: tipically Apply
> latency is between 5 and 25, but commit lattency is between 150 and 300
> (and sometimes 5/600), with 5/10 op/s and some B/s rd/wr (i have only 3
> vms, and only 1 is working now, so the cluster is really unloaded).
> 
> I am using a pool with 3 copies, and I have increased pg_num to 256 (the
> default value of 64 is too low); but OSD latency is the same with a
> different pg_num value.
> 
> I have other clusters (similar configuration, using dell 2950, dual ethernet
> for ceph and proxmox, 4 x OSD with 1Tbyte drive, perc 5i controller), with
> several vlms, and the commit and apply latency is 1/2ms.
> 
> Another cluster (test cluster) with 3 x dell PE860, with only 1 OSD per
> node, have better latency (10/20 ms).
> 
> What can i check ? 


POOMA U, but if you have one drive or controller that is marginal or failing, 
it can slow down the whole cluster.

Might be worth while benching individual osd's

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] High ceph OSD latency

2015-01-15 Thread Fabrizio Cuseo
Hi Lindsay.

First node:

root@nodo01:~# pveperf /var/lib/ceph/osd/ceph-0
CPU BOGOMIPS:  4.76
REGEX/SECOND:  954062
HD SIZE:   1857.11 GB (/dev/sdb1)
BUFFERED READS:152.38 MB/sec
AVERAGE SEEK TIME: 15.07 ms
FSYNCS/SECOND: 48.11
DNS EXT:   51.79 ms
DNS INT:   62.76 ms (domain.test2)

root@nodo01:~# pveperf /var/lib/ceph/osd/ceph-1
CPU BOGOMIPS:  4.76
REGEX/SECOND:  972176
HD SIZE:   1857.11 GB (/dev/sdd1)
BUFFERED READS:141.72 MB/sec
AVERAGE SEEK TIME: 18.91 ms
FSYNCS/SECOND: 41.38
DNS EXT:   23.32 ms
DNS INT:   79.97 ms (domain.test2)

root@nodo01:~# pveperf /var/lib/ceph/osd/ceph-2
CPU BOGOMIPS:  4.76
REGEX/SECOND:  956704
HD SIZE:   1857.11 GB (/dev/sde1)
BUFFERED READS:157.24 MB/sec
AVERAGE SEEK TIME: 14.97 ms
FSYNCS/SECOND: 43.48
DNS EXT:   20.50 ms
DNS INT:   130.27 ms (domain.test2)


Second node:

root@nodo02:~# pveperf  /var/lib/ceph/osd/ceph-3
CPU BOGOMIPS:  3.04
REGEX/SECOND:  965952
HD SIZE:   1857.11 GB (/dev/sdb1)
BUFFERED READS:147.61 MB/sec
AVERAGE SEEK TIME: 22.60 ms
FSYNCS/SECOND: 42.29
DNS EXT:   45.84 ms
DNS INT:   54.82 ms (futek.it)

root@nodo02:~# pveperf  /var/lib/ceph/osd/ceph-4
CPU BOGOMIPS:  3.04
REGEX/SECOND:  956254
HD SIZE:   1857.11 GB (/dev/sdc1)
BUFFERED READS:143.70 MB/sec
AVERAGE SEEK TIME: 15.33 ms
FSYNCS/SECOND: 47.33
DNS EXT:   20.91 ms
DNS INT:   20.76 ms (futek.it)

root@nodo02:~# pveperf  /var/lib/ceph/osd/ceph-5
CPU BOGOMIPS:  3.04
REGEX/SECOND:  996038
HD SIZE:   1857.11 GB (/dev/sdd1)
BUFFERED READS:150.55 MB/sec
AVERAGE SEEK TIME: 15.83 ms
FSYNCS/SECOND: 52.12
DNS EXT:   20.69 ms
DNS INT:   21.33 ms (futek.it)


Third node:

root@nodo03:~# pveperf /var/lib/ceph/osd/ceph-6
CPU BOGOMIPS:  40001.56
REGEX/SECOND:  988544
HD SIZE:   1857.11 GB (/dev/sdb1)
BUFFERED READS:125.93 MB/sec
AVERAGE SEEK TIME: 18.15 ms
FSYNCS/SECOND: 43.85
DNS EXT:   40.32 ms
DNS INT:   22.03 ms (futek.it)

root@nodo03:~# pveperf /var/lib/ceph/osd/ceph-7
CPU BOGOMIPS:  40001.56
REGEX/SECOND:  963925
HD SIZE:   1857.11 GB (/dev/sdc1)
BUFFERED READS:111.99 MB/sec
AVERAGE SEEK TIME: 18.33 ms
FSYNCS/SECOND: 26.52
DNS EXT:   26.22 ms
DNS INT:   20.57 ms (futek.it)

root@nodo03:~# pveperf /var/lib/ceph/osd/ceph-8
CPU BOGOMIPS:  40001.56
REGEX/SECOND:  998566
HD SIZE:   1857.11 GB (/dev/sdd1)
BUFFERED READS:149.53 MB/sec
AVERAGE SEEK TIME: 14.75 ms
FSYNCS/SECOND: 43.25
DNS EXT:   15.37 ms
DNS INT:   55.12 ms (futek.it)


I can only see that OSD ceph-7 has fewer (half) fsyncs/second (also after 
testing it again).

Those servers, have this controller:
00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port 
SATA Controller [AHCI mode] (rev 02)


On the other production cluster (with Dell 2950 and PERC 5/i) I have really 
better fsyncs/sec; I think that is because of the controller's cache:

root@proxmox-1:~# pveperf /var/lib/ceph/osd/ceph-0
CPU BOGOMIPS:  37238.64
REGEX/SECOND:  901248
HD SIZE:   925.55 GB (/dev/sdb1)
BUFFERED READS:101.61 MB/sec
AVERAGE SEEK TIME: 17.39 ms
FSYNCS/SECOND: 1817.31
DNS EXT:   43.65 ms
DNS INT:   2.87 ms (panservice.it)

02:0e.0 RAID bus controller: Dell PowerEdge Expandable RAID controller 5



On the test cluster, with only one 500GB HD and this controller:

00:1f.2 IDE interface: Intel Corporation NM10/ICH7 Family SATA Controller [IDE 
mode] (rev 01)

I have this result:

root@nodo1:~# pveperf /var/lib/ceph/osd/ceph-1
CPU BOGOMIPS:  8532.80
REGEX/SECOND:  818860
HD SIZE:   460.54 GB (/dev/sdb1)
BUFFERED READS:71.33 MB/sec
AVERAGE SEEK TIME: 22.75 ms
FSYNCS/SECOND: 53.49
DNS EXT:   69.40 ms
DNS INT:   2.34 ms (panservice.it)

So, for the first and third clusters, I have similar fsyncs/second results, but 
a very different delay on the OSDs.

I'll try to investigate some controller-related issue.

Thanks, Fabrizio





- Messaggio originale -
Da: "Lindsay Mathieson" 
A: pve-user@pve.proxmox.com, "Fabrizio Cuseo" 
Inviato: Giovedì, 15 gennaio 2015 13:17:07
Oggetto: Re: [PVE-User] High ceph OSD latency

On Thu, 15 Jan 2015 11:25:44 AM Fabrizio Cuseo wrote:
> What is strange is that on OSD tree I have high latency: tipically Apply
> latency is between 5 and 25, but commit lattency is between 150 and 300
> (and sometimes 5/600), with 5/10 op/s and some B/s rd/wr (i have only 3
> vms, and only 1 is working now, so the cluster is really unloaded).
> 
> I am using a pool with 3 copies, and I have increased pg_num to 256 (the
> default value of 64 is too low); but OSD latency is the same with a
> different pg_num value.
> 
> I have 

Re: [PVE-User] High ceph OSD latency

2015-01-15 Thread Fabrizio Cuseo
I will check, but the latency in the OSD tree is per disk, and I have high 
latency on all OSDs; this is why I don't think that the problem is related to 
one host or disk.


Inviato da iPad

> Il giorno 15/gen/2015, alle ore 13:17, Lindsay Mathieson 
>  ha scritto:
> 
>> On Thu, 15 Jan 2015 11:25:44 AM Fabrizio Cuseo wrote:
>> What is strange is that on OSD tree I have high latency: tipically Apply
>> latency is between 5 and 25, but commit lattency is between 150 and 300
>> (and sometimes 5/600), with 5/10 op/s and some B/s rd/wr (i have only 3
>> vms, and only 1 is working now, so the cluster is really unloaded).
>> 
>> I am using a pool with 3 copies, and I have increased pg_num to 256 (the
>> default value of 64 is too low); but OSD latency is the same with a
>> different pg_num value.
>> 
>> I have other clusters (similar configuration, using dell 2950, dual ethernet
>> for ceph and proxmox, 4 x OSD with 1Tbyte drive, perc 5i controller), with
>> several vlms, and the commit and apply latency is 1/2ms.
>> 
>> Another cluster (test cluster) with 3 x dell PE860, with only 1 OSD per
>> node, have better latency (10/20 ms).
>> 
>> What can i check ?
> 
> 
> POOMA U, but if you have one drive or controller that is marginal or failing, 
> it can slow down the whole cluster.
> 
> Might be worth while benching individual osd's
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] High ceph OSD latency

2015-01-15 Thread Fabrizio Cuseo
Hello.

I have a small proxmox/ceph cluster:

- 3 x Dell CS24, each with:
- 2 x xeon CPU
- 24 Gbyte ram
- 1 x 500Gbyte SATA disk (used for proxmox)
- 3 x 2Tbyte WD2000F9YZ SATA Enterprise Edition (used for ceph OSDs)
- 1 x Gbit ethernet (used for ceph and proxmox)
- 1 x Gbit ethernet (used for vms ethernet)

What is strange is that in the OSD tree I have high latency: typically the apply 
latency is between 5 and 25, but the commit latency is between 150 and 300 (and 
sometimes 5/600), with 5/10 op/s and a few B/s rd/wr (I have only 3 VMs, and 
only 1 is working now, so the cluster is really unloaded).

I am using a pool with 3 copies, and I have increased pg_num to 256 (the 
default value of 64 is too low); but OSD latency is the same with a different 
pg_num value.
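
For completeness, a minimal sketch of that kind of change from the CLI (the pool name
"rbd" is an example; pgp_num has to follow pg_num for the new placement groups to
actually be used for data placement):

ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
ceph osd pool get rbd pg_num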

I have other clusters (similar configuration, using Dell 2950, dual Ethernet 
for Ceph and Proxmox, 4 x OSDs with 1TB drives, PERC 5/i controller), with 
several VMs, and the commit and apply latency is 1/2 ms.

Another cluster (test cluster) with 3 x Dell PE860, with only 1 OSD per node, 
has better latency (10/20 ms).

What can I check? 

Thanks in advance, Fabrizio 





-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE Ceph Server released (beta)

2014-01-24 Thread Fabrizio Cuseo
Great work ! 
I already use Ceph with a dedicated VM per node; I will try this new feature 
asap ! 

Regards, Fabrizio 

- Messaggio originale -
Da: "Martin Maurer" 
A: pve-de...@pve.proxmox.com, "proxmoxve (pve-user@pve.proxmox.com)" 

Inviato: Venerdì, 24 gennaio 2014 16:08:49
Oggetto: [PVE-User] Proxmox VE Ceph Server released (beta)

Hi all,

We already have a full featured Ceph Storage plugin in our Proxmox VE solution 
and now - BRAND NEW - it is now possible to install and manage the Ceph Server 
directly on Proxmox VE - integrated in our management stack (GUI and CLI via 
Proxmox VE API).

Documentation
http://pve.proxmox.com/wiki/Ceph_Server

Video Tutorial
http://youtu.be/ImyRUyMBrwo

Any comment and feedback is welcome!
__
Best regards,

Martin Maurer
Proxmox VE project leader


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] efficient backup of VM on external storage.

2013-10-05 Thread Fabrizio Cuseo
All my answers below:

> 1. would you please share your hardware specs of Ceph storage cluster
> and PVE Box and the connectivity of all boxes.

You can use legacy Linux servers for the Ceph storage; of course, you can use cheap 
servers (e.g. an HP MicroServer with 4 SATA disks) if you don't care about 
performance; if you want performance, you can use 3 Xeon-based servers, with not 
less than 8GB of RAM (16GB is better) and several SAS disks per server. 
For the network connections, use 2-4 Gigabit Ethernet ports in a single bond. 
You can also use any kind of legacy Linux server, or a virtual machine on 
each Proxmox node using 2 or more local disks (not used for other Proxmox 
local storage), like VMware does with its VSA (virtual storage appliance); 
the performance may or may not be enough depending on your needs.

> 2. do you relay upon only replication. dont you take the backup of
> VM? if yes then again would you please throw some light on your
> strategy in reference to my question in first message.

Replication is never enough; you always need a backup with a retention 
strategy; use another storage (a cheap SOHO NAS with 2-4 SATA disks in RAID 1/10).
 
> 3. Any recommended how to for ceph and PVE.

You can find a howto on the Proxmox VE wiki ( 
http://pve.proxmox.com/wiki/Storage:_Ceph ), and of course you can read the Ceph 
home page.

 
> 4. is your setup Ceph cluster is like DRBD active/passive with heart
> beat. if one down second will auto up. with a second or 2 downtime?

Replication on a clustered storage is something different; every "file" is 
split into several chunks (blocks), and every chunk is written to 2 or more 
servers (depending on the replication level you need). So if one of the nodes of 
the Ceph storage cluster dies (or if you need to reboot it, change hardware, move 
location), the cluster can work in degraded mode (like a RAID volume); but 
differently from RAID, if you have 3 or more servers and your replica goal is 
2, the cluster starts re-writing the chunks that are left with only 1 copy onto 
the other servers; when your dead server rejoins the cluster, it will be 
synchronized with every change... so it always works like a charm.
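
A minimal sketch of how to see this from the CLI (the pool name is an example):

# current replica goal of a pool
ceph osd pool get rbd size

# overall health, including degraded/recovering objects after a node failure
ceph -s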

 
> Sorry for involving DRBD all the time since i have only worked on
> DRBD clustering, only concept of mine in clustering starts and ends
> on DRBD+heartbeat :) and as you also know that DRBD work little
> differently and have some limitations too. so please dont mind.


It is really different, and (for me) better. A storage cluster is not only 
redundant: you can expand it linearly, with more servers and/or disks, getting 
more space (theoretically infinite space), more redundancy (if you change your 
replica goal from 2 to 3, for example), and more performance, because you have 
more CPUs, more disks, more IOPS.

Regards, Fabrizio 
 
> 
> Thanks,
> 
> 
> 
> 
> 
> 
> Regards, Fabrizio
> 
> 
> 
> - Messaggio originale -
> Da: "Muhammad Yousuf Khan" < sir...@gmail.com >
> A: pve-user@pve.proxmox.com
> Inviato: Sabato, 5 ottobre 2013 10:58:41
> Oggetto: [PVE-User] efficient backup of VM on external storage.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> i have never worked on external storage just worked on drbd on local
> storage/system drives.
> 
> 
> Scenario is:
> for example. i have two external storage and 1 Proxmox Machine.
> i am using 1G NICs to connect all nodes
> 
> the storage are connected to each other from another link but no
> replication done b/w storage boxes mean they are no failover to each
> other. and both the storage are connected to Proxmox. i am using
> primary storage to run 3 machines on NFS or iSCSI. note that machine
> are running on primary storage over a single 1G ethernet link/single
> point of failure.
> 
> 
> 
> now lets say. i want VM1 backedup to secondary storage. however i
> dont want my backup traffic to effect my primary link where 3
> machines are active.
> 
> 
> any suggestion to achieve that.
> 
> 
> 
> 
> 
> Thanks.
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 
> --
> ---
> Fabrizio Cuseo - mailto: f.cu...@panservice.it
> Direzione Generale - Panservice InterNetWorking
> Servizi Professionali per Internet ed il Networking
> Panservice e' associata AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it mailto: i...@panservice.it
> Numero verde nazionale: 800 901492
> 
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] efficient backup of VM on external storage.

2013-10-05 Thread Fabrizio Cuseo
Hello Muhammad.
I have been using Proxmox for one year with a Ceph storage cluster; depending on 
the performance you get from your external storage, Ceph could be a little 
slower, but you have "embedded" real-time replication and linear scalability, so 
you can consider using it.
Don't worry about the setup... it is really simple! 

Regards, Fabrizio 



- Messaggio originale -
Da: "Muhammad Yousuf Khan" 
A: pve-user@pve.proxmox.com
Inviato: Sabato, 5 ottobre 2013 10:58:41
Oggetto: [PVE-User] efficient backup of VM on external storage.









i have never worked on external storage just worked on drbd on local 
storage/system drives. 


Scenario is: 
for example. i have two external storage and 1 Proxmox Machine. 
i am using 1G NICs to connect all nodes 

the storage are connected to each other from another link but no replication 
done b/w storage boxes mean they are no failover to each other. and both the 
storage are connected to Proxmox. i am using primary storage to run 3 machines 
on NFS or iSCSI. note that machine are running on primary storage over a single 
1G ethernet link/single point of failure. 



now lets say. i want VM1 backedup to secondary storage. however i dont want my 
backup traffic to effect my primary link where 3 machines are active. 


any suggestion to achieve that. 





Thanks. 


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.1 released!

2013-08-21 Thread Fabrizio Cuseo
I resolved this using the following rule:

1st host:
- live migrate the VMs to another host (same version)
- upgrade

Other hosts:
- upgrade with NO REBOOT
- live migrate the VMs to an already upgraded host
- reboot

It works fine.
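
A minimal sketch of the per-node sequence from the CLI, in the order described above
(the VM id and node name are examples; the same steps can be done from the GUI):

# live migrate a running VM to another node without downtime
qm migrate 118 nodo02 --online

# upgrade the node, then reboot it only once its VMs have been moved away
apt-get update && apt-get dist-upgrade
reboot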



- Messaggio originale -
> On my pve-test cluster, I always follow this rule:
> - live migrate all the vm's of the host
> - upgrade the host
> - live migrate back the vm's
> 
> So I can upgrade all my cluster without downtime.
> 
> But here I can't... or I can safely upgrade a host with running vm's
> ?
> 
> 
> - Messaggio originale -
> > >Another problem: live migrating back a VM from a 3.0 to a 3.1
> > >host,
> > >I have
> > I thought live migration between different versions was never
> > (officially) supported.
> > 
> > On Wed, Aug 21, 2013, at 05:19 AM, Fabrizio Cuseo wrote:
> > > Another problem: live migrating back a VM from a 3.0 to a 3.1
> > > host,
> > > I
> > > have:
> > > 
> > > Aug 21 14:17:12 starting migration of VM 118 to node 'nodo03'
> > > (172.16.20.33)
> > > Aug 21 14:17:12 copying disk images
> > > Aug 21 14:17:12 starting VM 118 on remote node 'nodo03'
> > > Aug 21 14:17:14 ERROR: online migrate failure - unable to detect
> > > remote
> > > migration port
> > > Aug 21 14:17:14 aborting phase 2 - cleanup resources
> > > Aug 21 14:17:14 migrate_cancel
> > > Aug 21 14:17:15 ERROR: migration finished with problems (duration
> > > 00:00:04)
> > > TASK ERROR: migration problems
> > > 
> > > 
> > > 
> > > 
> > > - Messaggio originale -
> > > > Hi all!
> > > > 
> > > > We just released Proxmox VE 3.1, introducing great new features
> > > > and
> > > > services. We included SPICE (http://pve.proxmox.com/wiki/SPICE)
> > > > ,
> > > > GlusterFS storage plugin and the ability to apply updates via
> > > > GUI
> > > > (including change logs).
> > > > 
> > > > As an additional service for our commercial subscribers, we
> > > > introduce
> > > > the Proxmox VE Enterprise Repository. This is the default and
> > > > recommended repository for production servers.
> > > > 
> > > > To access the Enterprise Repository, each Proxmox VE Server
> > > > needs
> > > > a
> > > > valid Subscription Key - these subscriptions
> > > > (http://www.proxmox.com/proxmox-ve/pricing) start now at EUR
> > > > 4,16
> > > > per months (was EUR 9,90).
> > > > 
> > > > There is no change in licensing (AGPL v3), also packages for
> > > > non-subscribers are still available.
> > > > 
> > > > A big Thank-you to our active community for all feedback,
> > > > testing,
> > > > bug reporting and patch submissions.
> > > > 
> > > > Release notes
> > > > See http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.1
> > > > 
> > > > Download
> > > > http://www.proxmox.com/downloads/category/proxmox-virtual-environment
> > > > 
> > > > New Package Repositories
> > > > http://pve.proxmox.com/wiki/Package_repositories
> > > > 
> > > > Best Regards,
> > > > 
> > > > Martin Maurer
> > > > Proxmox VE project leader
> > > > 
> > > > mar...@proxmox.com
> > > > http://www.proxmox.com
> > > > 
> > > > ___
> > > > pve-user mailing list
> > > > pve-user@pve.proxmox.com
> > > > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > > > 
> > > 
> > > --
> > > ---
> > > Fabrizio Cuseo - mailto:f.cu...@panservice.it
> > > Direzione Generale - Panservice InterNetWorking
> > > Servizi Professionali per Internet ed il Networking
> > > Panservice e' associata AIIP - RIPE Local Registry
> > > Phone: +39 0773 410020 - Fax: +39 0773 470219
> > > http://www.panservice.it  mailto:i...@panservice.it
> > > Numero verde nazionale: 800 901492
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > 
> 
> --
> ---
> Fabrizio Cuseo - mailto:f.cu...@panservice.it
> Direzione Generale - Panservice InterNetWorking
> Servizi Professionali per Internet ed il Networking
> Panservice e' associata AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it  mailto:i...@panservice.it
> Numero verde nazionale: 800 901492
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.1 released!

2013-08-21 Thread Fabrizio Cuseo
Oops, sorry :) 
I read it on a small tablet, so I missed that there is a new sources list file.
Thanks, Fabrizio


- Messaggio originale -
> > But:
> > 
> > 
> > 
> > root@nodo03:~# apt-get update
> > Hit http://security.debian.org wheezy/updates Release.gpg Hit
> > http://ftp.debian.org wheezy Release.gpg Hit
> > http://download.proxmox.com wheezy Release.gpg Hit
> > http://security.debian.org wheezy/updates Release Hit
> > http://ftp.debian.org wheezy Release Hit
> > http://download.proxmox.com
> > wheezy Release Hit http://security.debian.org wheezy/updates/main
> > amd64
> > Packages Hit http://ftp.debian.org wheezy/main amd64 Packages Hit
> > http://security.debian.org wheezy/updates/contrib amd64 Packages
> > Hit
> > http://download.proxmox.com wheezy/pve-no-subscription amd64
> > Packages Hit http://ftp.debian.org wheezy/contrib amd64 Packages
> > Hit
> > http://security.debian.org wheezy/updates/contrib Translation-en
> > Hit
> > http://ftp.debian.org wheezy/contrib Translation-en Hit
> > http://security.debian.org wheezy/updates/main Translation-en Hit
> > http://ftp.debian.org wheezy/main Translation-en Ign
> > https://enterprise.proxmox.com wheezy Release.gpg Ign
> > https://enterprise.proxmox.com wheezy Release Ign
> > http://download.proxmox.com wheezy/pve-no-subscription Translation-
> > en_US Ign http://download.proxmox.com wheezy/pve-no-subscription
> > Translation-en Err https://enterprise.proxmox.com
> > wheezy/pve-enterprise
> > amd64 Packages
> >   The requested URL returned error: 401
> > Ign https://enterprise.proxmox.com wheezy/pve-enterprise
> > Translation-
> > en_US Ign https://enterprise.proxmox.com wheezy/pve-enterprise
> > Translation-en
> > W: Failed to fetch
> > https://enterprise.proxmox.com/debian/dists/wheezy/pve-
> > enterprise/binary-amd64/Packages  The requested URL returned error:
> > 401
> > 
> > E: Some index files failed to download. They have been ignored, or
> > old ones
> > used instead.
> 
> Hi,
> 
> read again  http://pve.proxmox.com/wiki/Package_repositories
> 
> Martin
> 
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.1 released!

2013-08-21 Thread Fabrizio Cuseo
On my pve-test cluster, I always follow this rule:
- live migrate all the VMs off the host
- upgrade the host
- live migrate the VMs back

So I can upgrade my whole cluster without downtime.

But here I can't... or can I safely upgrade a host with running VMs? 


- Messaggio originale -
> >Another problem: live migrating back a VM from a 3.0 to a 3.1 host,
> >I have
> I thought live migration between different versions was never
> (officially) supported.
> 
> On Wed, Aug 21, 2013, at 05:19 AM, Fabrizio Cuseo wrote:
> > Another problem: live migrating back a VM from a 3.0 to a 3.1 host,
> > I
> > have:
> > 
> > Aug 21 14:17:12 starting migration of VM 118 to node 'nodo03'
> > (172.16.20.33)
> > Aug 21 14:17:12 copying disk images
> > Aug 21 14:17:12 starting VM 118 on remote node 'nodo03'
> > Aug 21 14:17:14 ERROR: online migrate failure - unable to detect
> > remote
> > migration port
> > Aug 21 14:17:14 aborting phase 2 - cleanup resources
> > Aug 21 14:17:14 migrate_cancel
> > Aug 21 14:17:15 ERROR: migration finished with problems (duration
> > 00:00:04)
> > TASK ERROR: migration problems
> > 
> > 
> > 
> > 
> > - Messaggio originale -
> > > Hi all!
> > > 
> > > We just released Proxmox VE 3.1, introducing great new features
> > > and
> > > services. We included SPICE (http://pve.proxmox.com/wiki/SPICE) ,
> > > GlusterFS storage plugin and the ability to apply updates via GUI
> > > (including change logs).
> > > 
> > > As an additional service for our commercial subscribers, we
> > > introduce
> > > the Proxmox VE Enterprise Repository. This is the default and
> > > recommended repository for production servers.
> > > 
> > > To access the Enterprise Repository, each Proxmox VE Server needs
> > > a
> > > valid Subscription Key - these subscriptions
> > > (http://www.proxmox.com/proxmox-ve/pricing) start now at EUR 4,16
> > > per months (was EUR 9,90).
> > > 
> > > There is no change in licensing (AGPL v3), also packages for
> > > non-subscribers are still available.
> > > 
> > > A big Thank-you to our active community for all feedback,
> > > testing,
> > > bug reporting and patch submissions.
> > > 
> > > Release notes
> > > See http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.1
> > > 
> > > Download
> > > http://www.proxmox.com/downloads/category/proxmox-virtual-environment
> > > 
> > > New Package Repositories
> > > http://pve.proxmox.com/wiki/Package_repositories
> > > 
> > > Best Regards,
> > > 
> > > Martin Maurer
> > > Proxmox VE project leader
> > > 
> > > mar...@proxmox.com
> > > http://www.proxmox.com
> > > 
> > > ___
> > > pve-user mailing list
> > > pve-user@pve.proxmox.com
> > > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > > 
> > 
> > --
> > ---
> > Fabrizio Cuseo - mailto:f.cu...@panservice.it
> > Direzione Generale - Panservice InterNetWorking
> > Servizi Professionali per Internet ed il Networking
> > Panservice e' associata AIIP - RIPE Local Registry
> > Phone: +39 0773 410020 - Fax: +39 0773 470219
> > http://www.panservice.it  mailto:i...@panservice.it
> > Numero verde nazionale: 800 901492
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.1 released!

2013-08-21 Thread Fabrizio Cuseo
Another problem: live migrating a VM back from a 3.0 to a 3.1 host, I get:

Aug 21 14:17:12 starting migration of VM 118 to node 'nodo03' (172.16.20.33)
Aug 21 14:17:12 copying disk images
Aug 21 14:17:12 starting VM 118 on remote node 'nodo03'
Aug 21 14:17:14 ERROR: online migrate failure - unable to detect remote 
migration port
Aug 21 14:17:14 aborting phase 2 - cleanup resources
Aug 21 14:17:14 migrate_cancel
Aug 21 14:17:15 ERROR: migration finished with problems (duration 00:00:04)
TASK ERROR: migration problems




- Messaggio originale -
> Hi all!
> 
> We just released Proxmox VE 3.1, introducing great new features and
> services. We included SPICE (http://pve.proxmox.com/wiki/SPICE) ,
> GlusterFS storage plugin and the ability to apply updates via GUI
> (including change logs).
> 
> As an additional service for our commercial subscribers, we introduce
> the Proxmox VE Enterprise Repository. This is the default and
> recommended repository for production servers.
> 
> To access the Enterprise Repository, each Proxmox VE Server needs a
> valid Subscription Key - these subscriptions
> (http://www.proxmox.com/proxmox-ve/pricing) start now at EUR 4,16
> per months (was EUR 9,90).
> 
> There is no change in licensing (AGPL v3), also packages for
> non-subscribers are still available.
> 
> A big Thank-you to our active community for all feedback, testing,
> bug reporting and patch submissions.
> 
> Release notes
> See http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.1
> 
> Download
> http://www.proxmox.com/downloads/category/proxmox-virtual-environment
> 
> New Package Repositories
> http://pve.proxmox.com/wiki/Package_repositories
> 
> Best Regards,
> 
> Martin Maurer
> Proxmox VE project leader
> 
> mar...@proxmox.com
> http://www.proxmox.com
> 
> _______
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.1 released!

2013-08-21 Thread Fabrizio Cuseo
Hello Martin. After upgrading from the pve repository, I rebooted and changed 
sources.list:

root@nodo03:~# cat /etc/apt/sources.list
deb http://ftp.debian.org/debian wheezy main contrib

# PVE pve-no-subscription repository provided by proxmox.com, NOT recommended 
for production use
deb http://download.proxmox.com/debian wheezy pve-no-subscription

# security updates
deb http://security.debian.org/ wheezy/updates main contrib


But:



root@nodo03:~# apt-get update
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://ftp.debian.org wheezy Release.gpg
Hit http://download.proxmox.com wheezy Release.gpg
Hit http://security.debian.org wheezy/updates Release
Hit http://ftp.debian.org wheezy Release
Hit http://download.proxmox.com wheezy Release
Hit http://security.debian.org wheezy/updates/main amd64 Packages
Hit http://ftp.debian.org wheezy/main amd64 Packages
Hit http://security.debian.org wheezy/updates/contrib amd64 Packages
Hit http://download.proxmox.com wheezy/pve-no-subscription amd64 Packages
Hit http://ftp.debian.org wheezy/contrib amd64 Packages
Hit http://security.debian.org wheezy/updates/contrib Translation-en
Hit http://ftp.debian.org wheezy/contrib Translation-en
Hit http://security.debian.org wheezy/updates/main Translation-en
Hit http://ftp.debian.org wheezy/main Translation-en
Ign https://enterprise.proxmox.com wheezy Release.gpg
Ign https://enterprise.proxmox.com wheezy Release
Ign http://download.proxmox.com wheezy/pve-no-subscription Translation-en_US
Ign http://download.proxmox.com wheezy/pve-no-subscription Translation-en
Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages
  The requested URL returned error: 401
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en_US
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en
W: Failed to fetch 
https://enterprise.proxmox.com/debian/dists/wheezy/pve-enterprise/binary-amd64/Packages
  The requested URL returned error: 401

E: Some index files failed to download. They have been ignored, or old ones 
used instead.
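
For the record, the fix on a node without a subscription key is to disable the
separate enterprise list that 3.1 installs (see the follow-up above); a minimal
sketch, assuming the default file name:

# comment out the pve-enterprise repository on a node without a subscription
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update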


- Messaggio originale -
> Hi all!
> 
> We just released Proxmox VE 3.1, introducing great new features and
> services. We included SPICE (http://pve.proxmox.com/wiki/SPICE) ,
> GlusterFS storage plugin and the ability to apply updates via GUI
> (including change logs).
> 
> As an additional service for our commercial subscribers, we introduce
> the Proxmox VE Enterprise Repository. This is the default and
> recommended repository for production servers.
> 
> To access the Enterprise Repository, each Proxmox VE Server needs a
> valid Subscription Key - these subscriptions
> (http://www.proxmox.com/proxmox-ve/pricing) start now at EUR 4,16
> per months (was EUR 9,90).
> 
> There is no change in licensing (AGPL v3), also packages for
> non-subscribers are still available.
> 
> A big Thank-you to our active community for all feedback, testing,
> bug reporting and patch submissions.
> 
> Release notes
> See http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.1
> 
> Download
> http://www.proxmox.com/downloads/category/proxmox-virtual-environment
> 
> New Package Repositories
> http://pve.proxmox.com/wiki/Package_repositories
> 
> Best Regards,
> 
> Martin Maurer
> Proxmox VE project leader
> 
> mar...@proxmox.com
> http://www.proxmox.com
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Spice and supsend

2013-07-29 Thread Fabrizio Cuseo
Hello.
I am testing a Windows 7 VM with SPICE; if the VM goes into suspend mode after 30 
minutes (the default setting), connecting with SPICE doesn't resume it; with the 
old console connection, the VM wakes up.

Regards, Fabrizio 

PS: it would be nice to have a web portal used only for SPICE connections, so 
that "normal users" see a different menu with "Change password", "Logout", and 
the list of assigned VMs with a "Connect" button.





-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE Spice and aSpice for Android

2013-07-28 Thread Fabrizio Cuseo
Hello.

I am trying to use aSpice for Android with Proxmox and SPICE, without success.
The configuration of the client needs:
- Ip address
- Port
- TLS port
- Certificate Authority
- Cert Subject
- Spice Password

Trying to copy the data from the file downloaded via the "spice" button on the 
WebUI doesn't work; can you help me? 
If possible, could you check the browser and serve a configuration file suitable 
for aSpice (if the browser is on an Android device)? I will also ask the 
"quadprox" app for Android if they can develop something to open the SPICE 
console from their app.
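
For what it's worth, the file served by the "spice" button is a virt-viewer connection
file; a rough, hand-written sketch of the kind of fields it carries (the values below
are placeholders, the exact set of keys may differ, and the password is a short-lived
ticket that expires after a few seconds, so it cannot be copied over by hand at leisure):

[virt-viewer]
type=spice
proxy=http://<proxmox node>:3128
host=<connection host handed out by the proxy>
tls-port=<tls port>
password=<one-time ticket>
ca=<inline CA certificate>
host-subject=<certificate subject of the target node>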

Hoping that my English is good enough to explain :) 

Thanks, Fabrizio 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] SPICE for Proxmox VE (pvetest)

2013-07-27 Thread Fabrizio Cuseo
I am testing SPICE on a 3-node cluster.
If you are connected with the WebUI to the 1st node and try to open SPICE for 
a VM that is on another node, it doesn't work; you need to connect with the 
WebUI to that node to start it correctly.

Regards, Fabrizio


- Messaggio originale -
> I've read about spice , and am unclear on its possible uses.
> 
> Can it be used to allow a secure connection to a kvm desktop?   If so
> can that be made to be as secure as a vpn connection?
> 
> 
> 
> On Wed 24 Jul 2013 11:02:31 AM EDT, Matthew W. Ross wrote:
> > Wonderful. Thanks for the update guys.
> >
> >
> > --Matt Ross
> > Ephrata School District
> >
> >
> > - Original Message -
> > From: Alexandre DERUMIER
> >
> >
> >>>> Aka, does it emulate the base VGA well enough to get the driver
> >>>> installed
> >>
> >> Yes, it's exactly like this. if no spice driver installed, the
> >> card is
> >> seeing as a vga card.
> >>
> >>
> >
> >
> > From: Dietmar Maurer
> >
> >
> >>> One last question: If I wanted to install a Windows machine with
> >>> the SPICE
> >>> graphics card, will the installer work before the driver is
> >>> installed over
> >> the
> >>> SPICE Client?
> >>
> >> yes
> >>
> > _______
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Configuring ceph storage

2013-06-28 Thread Fabrizio Cuseo
from /etc/pve/storage.cfg

rbd: CephCluster
monhost 172.16.20.71:6789;172.16.20.72:6789;172.16.20.73:6789
pool rbd
content images
nodes proxmox-test-1,proxmox-test-3,proxmox-test-2
username admin

It works with my setup.
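
If you prefer the CLI, roughly the same entry can be added with pvesm (a sketch; I 
am assuming the rbd plugin accepts the same options as storage.cfg and that the 
keyring has been copied to /etc/pve/priv/ceph/CephCluster.keyring):

  pvesm add rbd CephCluster \
     --monhost "172.16.20.71:6789;172.16.20.72:6789;172.16.20.73:6789" \
     --pool rbd --username admin --content images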

Fabrizio


- Messaggio originale -
Da: "Eneko Lacunza" 
A: "Fabrizio Cuseo" 
Cc: pve-user@pve.proxmox.com
Inviato: Venerdì, 28 giugno 2013 17:11:26
Oggetto: Re: [PVE-User] Configuring ceph storage

Hi Fabrizio,

Our configuration works perfectly, that's not the problem.

I'm asking how to configure more than one monitor for the Proxmox
storage :)


El vie, 28-06-2013 a las 17:04 +0200, Fabrizio Cuseo escribió:
> I think that is not the correct configuration; as you can see on the wiki 
> page http://pve.proxmox.com/wiki/Storage:_Ceph , you can't use the proxmox 
> host as part of the ceph cluster.
> 
> To resolve the issue without the need of other servers for the ceph storage, 
> I have done this setup:
> 
> - Proxmox host A - 2 raid-1 disks with proxmox
>- 2 local disks configured as LVM local storage, not shared
>- Qemu VM with Ceph cluster server, using local proxmox storage for the 
> OS, and the 2 LVM disk storage for Ceph storage
> 
> 
> With 3 of these hosts, I have:
>  a fully working proxmox cluster with 3 nodes
>  a fully working ceph cluster with 3 nodes
> 
> All the VMs that I have on the cluster can use the ceph shared storage.
> 
> I know that it is not the best setup for performance, but it is really cheap.
> 
> Regards, Fabrizio 
>   
> 
> 
> 
> 
> 
> - Messaggio originale -
> Da: "Eneko Lacunza" 
> A: "Fabrizio Cuseo" 
> Cc: pve-user@pve.proxmox.com
> Inviato: Venerdì, 28 giugno 2013 16:51:09
> Oggetto: Re: [PVE-User] Configuring ceph storage
> 
> Hi Fabrizio,
> 
> El vie, 28-06-2013 a las 16:48 +0200, Fabrizio Cuseo escribió:
> > How is your ceph cluster configured? 
> > Is the physical proxmox node also the ceph cluster node, or do you have a 
> > VM running on every node, with local storage, that is part of the ceph 
> > cluster? 
> 
The physical proxmox nodes are also the ceph cluster nodes.
> 
> > 
> > Fabrizio
> > 
> > - Messaggio originale -
> > Da: "Eneko Lacunza" 
> > A: pve-user@pve.proxmox.com
> > Inviato: Venerdì, 28 giugno 2013 16:41:22
> > Oggetto: [PVE-User] Configuring ceph storage
> > 
> > Hi all,
> > 
> > We're performing some tests with ceph storage and Proxmox.
> > 
> > We noticed that there is a mandatory monitor server field. Is it
> > possible to configure more than one? We tried:
> > 
> > - server1,server2 -> seems to give error on VM startup
> > - server1 server2 -> VM startup ok
> > - server1;server2 -> VM startup ok
> > 
> > Our Proxmox nodes are also running the ceph cluster, so it could be that
> > qemu is using the system-wide ceph configuration.
> > 
> > Thanks a lot
> > Eneko
> > 
> > -- 
> > Zuzendari Teknikoa / Director Técnico
> > Binovo IT Human Project, S.L.
> > Telf. 943575997
> >   943493611
> > Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
> > www.binovo.es
> > 
> > ___
> > pve-user mailing list
> > pve-user@pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > 
> 
> -- 
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943575997
>   943493611
> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
> 
> 

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Configuring ceph storage

2013-06-28 Thread Fabrizio Cuseo
I think that is not the correct configuration; as you can see on the wiki page 
http://pve.proxmox.com/wiki/Storage:_Ceph , you can't use the proxmox host as 
part of the ceph cluster.

To resolve the issue without the need of other servers for the ceph storage, I 
have done this setup:

- Proxmox host A - 2 raid-1 disks with proxmox
   - 2 local disks configured as LVM local storage, not shared
   - Qemu VM with Ceph cluster server, using local proxmox storage for the OS, 
and the 2 LVM disk storage for Ceph storage


With 3 of these hosts, I have:
 a fully working proxmox cluster with 3 nodes
 a fully working ceph cluster with 3 nodes

All the VMs that I have on the cluster can use the ceph shared storage.

I know that it is not the best setup for performance, but it is really cheap.
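
For what it's worth, each of those ceph VMs can be created with something like this 
(a rough sketch; the VM ID, the disk sizes and the names of the two local LVM 
storages are only examples, adjust them to your setup):

  # OS disk on the local proxmox storage, two data disks on the two local LVM storages
  qm create 200 --name ceph-a --memory 2048 --net0 virtio,bridge=vmbr0 \
     --virtio0 local:20 --virtio1 lvm-local-1:200 --virtio2 lvm-local-2:200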

Regards, Fabrizio 
  





- Messaggio originale -
Da: "Eneko Lacunza" 
A: "Fabrizio Cuseo" 
Cc: pve-user@pve.proxmox.com
Inviato: Venerdì, 28 giugno 2013 16:51:09
Oggetto: Re: [PVE-User] Configuring ceph storage

Hi Fabrizio,

El vie, 28-06-2013 a las 16:48 +0200, Fabrizio Cuseo escribió:
> How is your ceph cluster configured? 
> Is the physical proxmox node also the ceph cluster node, or do you have a VM 
> running on every node, with local storage, that is part of the ceph cluster? 

The physical proxmox nodes are also the ceph cluster nodes.

> 
> Fabrizio
> 
> - Messaggio originale -
> Da: "Eneko Lacunza" 
> A: pve-user@pve.proxmox.com
> Inviato: Venerdì, 28 giugno 2013 16:41:22
> Oggetto: [PVE-User] Configuring ceph storage
> 
> Hi all,
> 
> We're performing some tests with ceph storage and Proxmox.
> 
> We noticed that there is a mandatory monitor server field. Is it
> possible to configure more than one? We tried:
> 
> - server1,server2 -> seems to give error on VM startup
> - server1 server2 -> VM startup ok
> - server1;server2 -> VM startup ok
> 
> Our Proxmox nodes are also running the ceph cluster, so it could be that
> qemu is using the system-wide ceph configuration.
> 
> Thanks a lot
> Eneko
> 
> -- 
> Zuzendari Teknikoa / Director Técnico
> Binovo IT Human Project, S.L.
> Telf. 943575997
>   943493611
> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Configuring ceph storage

2013-06-28 Thread Fabrizio Cuseo
Hi Eneko.

How is your ceph cluster configured? 

Is the physical proxmox node also the ceph cluster node, or do you have a VM 
running on every node, with local storage, that is part of the ceph cluster?

Fabrizio

- Messaggio originale -
Da: "Eneko Lacunza" 
A: pve-user@pve.proxmox.com
Inviato: Venerdì, 28 giugno 2013 16:41:22
Oggetto: [PVE-User] Configuring ceph storage

Hi all,

We're performing some tests with ceph storage and Proxmox.

We noticed that there is a mandatory monitor server field. Is it
possible to configure more than one? We tried:

- server1,server2 -> seems to give error on VM startup
- server1 server2 -> VM startup ok
- server1;server2 -> VM startup ok

Our Proxmox nodes are also running the ceph cluster, so it could be that
qemu is using the system-wide ceph configuration.

Thanks a lot
Eneko

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-11 Thread Fabrizio Cuseo
Hello Alexandre.
I have found the problem.

MooseFS needs a write-back cache to work; if I use the default NO CACHE 
setting, the VM doesn't start.

So, if I move a powered-on disk from ceph to moosefs (remember that moosefs is 
locally mounted on the proxmox host), the default "no cache" setting on the new 
disk causes the problem.

So, being able to set a cache mode on the destination disk would solve the issue.
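
As a quick workaround after the move, the cache mode can also be forced on the 
moved disk from the CLI; a sketch using the IDs from my test, so adjust the VM ID, 
the bus and the volume name:

  qm set 103 --virtio0 mfscluster2:103/vm-103-disk-1.raw,cache=writeback

and then stop/start the VM so the new drive options are picked up.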

Thanks in advance, Fabrizio Cuseo


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Inviato: Martedì, 11 giugno 2013 5:54:36
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration("qm move")

No problem reading/writing to the ceph storage?

(Also note that the last pvetest update touched librbd; I don't know if it's related.)

Have you tried to stop/start the VM and then try again?




- Mail original -

De: "Fabrizio Cuseo" 
À: "Alexandre DERUMIER" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Envoyé: Lundi 10 Juin 2013 19:16:57
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Yes; it hangs at the beginning.

From moosefs to moosefs (both locally mounted), qcow2 to qcow2, it works fine.

Regards, Fabrizio


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Inviato: Lunedì, 10 giugno 2013 19:11:13
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration ("qm move")

I just checked your logs; it seems that the mirroring job doesn't start at all.
It's hanging at the beginning, right?


- Mail original -

De: "Fabrizio Cuseo" 
À: "Alexandre DERUMIER" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Envoyé: Lundi 10 Juin 2013 18:49:32
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Yes, backup works fine.
I will try to set up two different ceph storages, or move between different kinds of 
storage, and I'll publish my results.

Thanks for now...
Fabrizio


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Inviato: Lunedì, 10 giugno 2013 18:39:29
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration ("qm move")

>>create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
>>Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
>>size=21474836480
>>TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
>>have die. Maybe do you have bad sectors? at 
>>/usr/share/perl5/PVE/QemuServer.pm line 4690.

It seems that the block mirror job has failed.
It's possibly a bad sector on the virtual disk on ceph.
Does a proxmox backup work fine for this ceph disk? (It should also die if it's 
really a bad sector.)




- Mail original -

De: "Fabrizio Cuseo" 
À: "Alexandre DERUMIER" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Envoyé: Lundi 10 Juin 2013 14:33:23
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello.
I am trying to move from a ceph disk to a local shared (moosefs) disk: this is 
the error I see.

Regards, Fabrizio


create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
size=21474836480
TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
have die. Maybe do you have bad sectors? at /usr/share/perl5/PVE/QemuServer.pm 
line 4690.


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com, "Martin Maurer" 
Inviato: Lunedì, 10 giugno 2013 14:21:13
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration ("qm move")

Wiki is not yet updated, but

From gui, you have a new "move disk" button, on your vm hardware tab. (works 
offline or online)

command line is : qm move_disk   
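
For example (a sketch; adjust the IDs and names to your setup):

  qm move_disk 103 virtio0 mfscluster2

would move disk virtio0 of VM 103 to the storage named mfscluster2; the release 
notes also mention a delete flag to drop the source volume afterwards.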


- Mail original -

De: "Fabrizio Cuseo" 
À: "Martin Maurer" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Envoyé: Lundi 10 Juin 2013 12:38:08
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello Martin.
I have upgraded my test cluster; where can I find any doc regarding

Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-10 Thread Fabrizio Cuseo
Yes; it hangs at the beginning.

From moosefs to moosefs (both locally mounted), qcow2 to qcow2, it works fine.

Regards, Fabrizio 


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Inviato: Lunedì, 10 giugno 2013 19:11:13
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration("qm move")

I just checked your logs; it seems that the mirroring job doesn't start at all.
It's hanging at the beginning, right?


- Mail original -

De: "Fabrizio Cuseo" 
À: "Alexandre DERUMIER" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Envoyé: Lundi 10 Juin 2013 18:49:32
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Yes, backup works fine.
I will try to set up two different ceph storages, or move between different kinds of 
storage, and I'll publish my results.

Thanks for now...
Fabrizio


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Inviato: Lunedì, 10 giugno 2013 18:39:29
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration ("qm move")

>>create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
>>Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
>>size=21474836480
>>TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
>>have die. Maybe do you have bad sectors? at 
>>/usr/share/perl5/PVE/QemuServer.pm line 4690.

It seems that the block mirror job has failed.
It's possibly a bad sector on the virtual disk on ceph.
Does a proxmox backup work fine for this ceph disk? (It should also die if it's 
really a bad sector.)




- Mail original -

De: "Fabrizio Cuseo" 
À: "Alexandre DERUMIER" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Envoyé: Lundi 10 Juin 2013 14:33:23
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello.
I am trying to move from a ceph disk to a local shared (moosefs) disk: this is 
the error I see.

Regards, Fabrizio


create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
size=21474836480
TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
have die. Maybe do you have bad sectors? at /usr/share/perl5/PVE/QemuServer.pm 
line 4690.


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com, "Martin Maurer" 
Inviato: Lunedì, 10 giugno 2013 14:21:13
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration ("qm move")

Wiki is not yet updated, but

From gui, you have a new "move disk" button, on your vm hardware tab. (works 
offline or online)

command line is : qm move_disk   


- Mail original -

De: "Fabrizio Cuseo" 
À: "Martin Maurer" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Envoyé: Lundi 10 Juin 2013 12:38:08
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello Martin.
I have upgraded my test cluster; where can I find any doc regarding storage 
migration ?

Thanks in advance, Fabrizio


- Messaggio originale -
Da: "Martin Maurer" 
A: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Inviato: Lunedì, 10 giugno 2013 10:33:15
Oggetto: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hi all,

We just uploaded a bunch of packages to our pvetest repository 
(http://pve.proxmox.com/wiki/Package_repositories) , including a lot of bug 
fixes, code cleanups, qemu 1.4.2 and also a quite cool new feature - 
storage migration ("qm move").

A big Thank-you to our active community for all feedback, testing, bug 
reporting and patch submissions.

Release Notes

- ceph (0.61.3-1~bpo70+1)
*update of ceph-common, librbd1 and librados2

- libpve-storage-perl (3.0-8)
* rbd: --format is deprecated, use --image-format instead
* be more verbose on rbd commands to get progress
* various fixes for nexenta plugin

- vncterm (1.1-4)
* Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users 
previously used apache option SSLCertificateChainFile for that)

- pve-qemu-kvm (1.4-13)
* update to qemu 1.4.2
* remove rbd-add-an-asynchronous-flush.patch (upstream now)

- qemu-server (3.0-20)
* new API to update VM config: t

Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-10 Thread Fabrizio Cuseo
Yes, backup works fine.
I will try to set up two different ceph storages, or move between different kinds of 
storage, and I'll publish my results.

Thanks for now...
Fabrizio


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Inviato: Lunedì, 10 giugno 2013 18:39:29
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration("qm move")

>>create full clone of drive virtio0 (CephCluster:vm-103-disk-1) 
>>Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
>>size=21474836480 
>>TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
>>have die. Maybe do you have bad sectors? at 
>>/usr/share/perl5/PVE/QemuServer.pm line 4690. 

It seems that the block mirror job has failed.
It's possibly a bad sector on the virtual disk on ceph.
Does a proxmox backup work fine for this ceph disk? (It should also die if it's 
really a bad sector.)




- Mail original -

De: "Fabrizio Cuseo" 
À: "Alexandre DERUMIER" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , "Martin 
Maurer" 
Envoyé: Lundi 10 Juin 2013 14:33:23
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello.
I am trying to move from a ceph disk to a local shared (moosefs) disk: this is 
the error I see.

Regards, Fabrizio


create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
size=21474836480
TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
have die. Maybe do you have bad sectors? at /usr/share/perl5/PVE/QemuServer.pm 
line 4690.


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com, "Martin Maurer" 
Inviato: Lunedì, 10 giugno 2013 14:21:13
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration ("qm move")

Wiki is not yet updated, but

From gui, you have a new "move disk" button, on your vm hardware tab. (works 
offline or online)

command line is : qm move_disk   


- Mail original -

De: "Fabrizio Cuseo" 
À: "Martin Maurer" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Envoyé: Lundi 10 Juin 2013 12:38:08
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello Martin.
I have upgraded my test cluster; where can I find any doc regarding storage 
migration ?

Thanks in advance, Fabrizio


- Messaggio originale -
Da: "Martin Maurer" 
A: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Inviato: Lunedì, 10 giugno 2013 10:33:15
Oggetto: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hi all,

We just uploaded a bunch of packages to our pvetest repository 
(http://pve.proxmox.com/wiki/Package_repositories) , including a lot of bug 
fixes, code cleanups, qemu 1.4.2 and also a quite cool new feature - 
storage migration ("qm move").

A big Thank-you to our active community for all feedback, testing, bug 
reporting and patch submissions.

Release Notes

- ceph (0.61.3-1~bpo70+1)
*update of ceph-common, librbd1 and librados2

- libpve-storage-perl (3.0-8)
* rbd: --format is deprecated, use --image-format instead
* be more verbose on rbd commands to get progress
* various fixes for nexenta plugin

- vncterm (1.1-4)
* Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users 
previously used apache option SSLCertificateChainFile for that)

- pve-qemu-kvm (1.4-13)
* update to qemu 1.4.2
* remove rbd-add-an-asynchronous-flush.patch (upstream now)

- qemu-server (3.0-20)
* new API to update VM config: this one is fully asynchronous.
* snapshot rollback: use pc-i440fx-1.4 as default
* config: implement new 'machine' configuration
* migrate: pass --machine parameter to remote 'qm start' command
* snapshot: store/use 'machine' configuration
* implement delete flag for move_disk
* API: rename move to move_disk
* implement storage migration ("qm move")
* fix bug 395: correctly handle unused disk with storage alias
* fix unused disk handling (do not hide unused disks when used with snapshot).

- pve-manager (3.0-23)
* fix bug #368: use vtype 'DnsName' to verify host names
* fix bug #401: disable connection timeout during API call processing
* add support for new qemu-server async configuration API
* support 'delete' flag for 'Move disk'
* add 'Move disk' button for storage migration

Best Regards,

Martin Maurer

Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-10 Thread Fabrizio Cuseo
Hello.
I am trying to move from a ceph disk to a local shared (moosefs) disk: this is 
the error I see.

Regards, Fabrizio 


create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
size=21474836480
TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
have die. Maybe do you have bad sectors? at /usr/share/perl5/PVE/QemuServer.pm 
line 4690.


- Messaggio originale -
Da: "Alexandre DERUMIER" 
A: "Fabrizio Cuseo" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com, "Martin Maurer" 
Inviato: Lunedì, 10 giugno 2013 14:21:13
Oggetto: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage 
migration("qm move")

Wiki is not yet updated, but

From gui, you have a new "move disk" button, on your vm hardware tab. (works 
offline or online)

command line is : qm move_disk   


- Mail original -

De: "Fabrizio Cuseo" 
À: "Martin Maurer" 
Cc: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Envoyé: Lundi 10 Juin 2013 12:38:08
Objet: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hello Martin.
I have upgraded my test cluster; where can I find any doc regarding storage 
migration ?

Thanks in advance, Fabrizio


- Messaggio originale -
Da: "Martin Maurer" 
A: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Inviato: Lunedì, 10 giugno 2013 10:33:15
Oggetto: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration 
("qm move")

Hi all,

We just uploaded a bunch of packages to our pvetest repository 
(http://pve.proxmox.com/wiki/Package_repositories) , including a lot of bug 
fixes, code cleanups, qemu 1.4.2 and also a quite cool new feature - 
storage migration ("qm move").

A big Thank-you to our active community for all feedback, testing, bug 
reporting and patch submissions.

Release Notes

- ceph (0.61.3-1~bpo70+1)
*update of ceph-common, librbd1 and librados2

- libpve-storage-perl (3.0-8)
* rbd: --format is deprecated, use --image-format instead
* be more verbose on rbd commands to get progress
* various fixes for nexenta plugin

- vncterm (1.1-4)
* Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users 
previously used apache option SSLCertificateChainFile for that)

- pve-qemu-kvm (1.4-13)
* update to qemu 1.4.2
* remove rbd-add-an-asynchronous-flush.patch (upstream now)

- qemu-server (3.0-20)
* new API to update VM config: this one is fully asynchronous.
* snapshot rollback: use pc-i440fx-1.4 as default
* config: implement new 'machine' configuration
* migrate: pass --machine parameter to remote 'qm start' command
* snapshot: store/use 'machine' configuration
* implement delete flag for move_disk
* API: rename move to move_disk
* implement storage migration ("qm move")
* fix bug 395: correctly handle unused disk with storage alias
* fix unused disk handling (do not hide unused disks when used with snapshot).

- pve-manager (3.0-23)
* fix bug #368: use vtype 'DnsName' to verify host names
* fix bug #401: disable connection timeout during API call processing
* add support for new qemu-server async configuration API
* support 'delete' flag for 'Move disk'
* add 'Move disk' button for storage migration

Best Regards,

Martin Maurer

mar...@proxmox.com
http://www.proxmox.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

--
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-10 Thread Fabrizio Cuseo
Hello Martin.
I have upgraded my test cluster; where can I find any doc regarding storage 
migration ? 

Thanks in advance, Fabrizio


- Messaggio originale -
Da: "Martin Maurer" 
A: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Inviato: Lunedì, 10 giugno 2013 10:33:15
Oggetto: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

Hi all,

We just uploaded a bunch of packages to our pvetest repository 
(http://pve.proxmox.com/wiki/Package_repositories) , including a lot of bug 
fixes, code cleanups, qemu 1.4.2 and also a quite cool new feature - 
storage migration ("qm move").

A big Thank-you to our active community for all feedback, testing, bug 
reporting and patch submissions.

Release Notes

- ceph (0.61.3-1~bpo70+1)
  *update of ceph-common, librbd1 and librados2

- libpve-storage-perl (3.0-8)
  * rbd: --format is deprecated, use --image-format instead
  * be more verbose on rbd commands to get progress
  * various fixes for nexenta plugin

- vncterm (1.1-4)
  * Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users 
previously used apache option SSLCertificateChainFile for that)

- pve-qemu-kvm (1.4-13)
  * update to qemu 1.4.2
  * remove rbd-add-an-asynchronous-flush.patch (upstream now)

- qemu-server (3.0-20)
  * new API to update VM config: this one is fully asynchronous.
  * snapshot rollback: use pc-i440fx-1.4 as default
  * config: implement new 'machine' configuration
  * migrate: pass --machine parameter to remote 'qm start' command
  * snapshot: store/use 'machine' configuration
  * implement delete flag for move_disk
  * API: rename move to move_disk
  * implement storage migration ("qm move")
  * fix bug 395: correctly handle unused disk with storage alias
  * fix unused disk handling (do not hide unused disks when used with snapshot).

- pve-manager (3.0-23)
  * fix bug #368: use vtype 'DnsName' to verify host names
  * fix bug #401: disable connection timeout during API call processing
  * add support for new qemu-server async configuration API
  * support 'delete' flag for 'Move disk'
  * add 'Move disk' button for storage migration

Best Regards,

Martin Maurer

mar...@proxmox.com
http://www.proxmox.com

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Backup retention policy

2013-05-22 Thread Fabrizio Cuseo
Hello Steve.
I know that it is possible with a custom (and simple) script, but as you 
write, it is something that is better to avoid.

Having everything in the web GUI makes things simpler for a customer, and I think 
it is a feature that would be appreciated by any user.

Thanks for your reply.

Regards, Fabrizio 


- Messaggio originale -
Da: "Steve Audia" 
A: ad...@extremeshok.com
Cc: "Fabrizio Cuseo" , "pve-user" 

Inviato: Mercoledì, 22 maggio 2013 11:15:38
Oggetto: Re: [PVE-User] Backup retention policy



If it's just the rotation part you need, you could handle that on the backup 
server side with a short script, copying the dailies into weekly folders, 
weeklies into monthly folders, etc. 


I know, rolling a custom script is something some want to avoid, but this 
particular script would be very straightforward. I've been using this guy's 
strategy for years... 

http://www.mikerubel.org/computers/rsync_snapshots/ 


...and now that is rolled into a utility called rsnapshot (www.rsnapshot.org). 
In this case, you would only use it to rotate, though, not actually back up. 
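
A minimal sketch of that kind of rotation (the layout is my own assumption: vzdump 
writes into a 'daily' directory on the backup server; counts and paths need 
adjusting, and a real script should rotate per VM rather than per directory):

  #!/bin/sh
  # run weekly from cron: promote the newest daily backup to the weekly set,
  # then keep only the 3 newest files in each set
  BASE=/backup/pve
  # hard-link instead of copying to avoid a second full copy (same filesystem assumed)
  cp -l "$BASE/daily/$(ls -t $BASE/daily | head -n 1)" "$BASE/weekly/"
  ls -t "$BASE/weekly" | tail -n +4 | xargs -r -I{} rm -- "$BASE/weekly/{}"
  ls -t "$BASE/daily"  | tail -n +4 | xargs -r -I{} rm -- "$BASE/daily/{}"

The same idea extended with a 'monthly' directory gives the 3+3+3 scheme from the 
original mail.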






On Fri, May 17, 2013 at 6:33 AM, admin extremeshok.com < ad...@extremeshok.com 
> wrote: 


Yes and delta backups. 

I.e., a full backup every week, then a delta backup every day. 

Thanks 


On 2013-05-16 01:58 PM, Fabrizio Cuseo wrote: 
> Hello people. 
> 
> I am seeing that Proxmox is missing a very important feature regarding backup 
> retention. 
> 
> An example could be: 
> 
> - Last week, N backup daily 
> - Last month, N backup weekly 
> - Last year, N backup monthly 
> 
> So, with a total of 9 backups, I could have: 
> 
> - The last 3 daily backups 
> - The last 3 weekly backups 
> - The last 3 monthly backups 
> 
> Another nice feature could be a "pool" selection of VMs to back up, so that adding 
> a VM to the pool inserts it into the backup schedule. 
> 
> Including the machine name in the backup file name would also be useful (I have 
> already read about it on the mailing list). 
> 
> Thanks, Fabrizio 
> 
> 
> 
> 



___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 



-- 
Steve Audia w +1(412)268-1438 
Director of Information Technology sau...@cmu.edu 
Carnegie Mellon www.etc.cmu.edu 
Entertainment Technology Center @cmuetc 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Backup retention policy

2013-05-16 Thread Fabrizio Cuseo
Hello Jeff.
This is one of the features of VMware Data Recovery, an appliance included in the 
Essentials Plus suite (and not only there).

I already use BackupPC for my own VMs, but for customers' VMs I can't use an OS 
solution.
And for a complete and quick disaster recovery, a full VM restore is easier. 

The perfect backup solution would be block-level incremental backup 
(http://wiki.qemu.org/Features/Livebackup), which is not yet included in QEMU, 
but I hope it will be soon. In VMware this is VCB (VMware Consolidated 
Backup). I know that in some cases (databases and similar) a backup taken without 
an agent may not be perfect, but it is a great starting point.

Regards, Fabrizio


- Messaggio originale -
> 
> is this really the hypervisor's job?
> 
> it sounds like you need a more robust solution, so you might
> entertain
> either BackupPC, Bacula, or AMANDA.
> 
> 
> Jeff
> 
> On Thu, 2013-05-16 at 13:58 +0200, Fabrizio Cuseo wrote:
> > Hello people.
> > 
> > I am seeing that Proxmox misses a very important features about
> > backup retention.
> > 
> > An example could be:
> > 
> > - Last week,  N backup daily
> > - Last month, N backup weekly
> > - Last year,  N backup monthly
> > 
> > So, with a total of 9 backups, I could have:
> > 
> > - The last 3 daily backups
> > - The last 3 weekly backups
> > - The last 3 montly backups
> > 
> > Another nice feature could be a "pool" selection of VM's to backup,
> > so adding a VM in the pool will insert the VM in the backup
> > schedule.
> > 
> > Also the machine name in the backup file will be useful (i have
> > already read about it in the mailing list).
> > 
> > Thanks, Fabrizio
> > 
> > 
> > 
> > 
> 
> 
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Backup retention policy

2013-05-16 Thread Fabrizio Cuseo
Hello people.

I am seeing that Proxmox is missing a very important feature regarding backup 
retention.

An example could be:

- Last week,  N backup daily
- Last month, N backup weekly
- Last year,  N backup monthly

So, with a total of 9 backups, I could have:

- The last 3 daily backups
- The last 3 weekly backups
- The last 3 monthly backups

Another nice feature could be a "pool" selection of VMs to back up, so that adding a 
VM to the pool inserts it into the backup schedule.

Including the machine name in the backup file name would also be useful (I have 
already read about it on the mailing list).
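
For comparison, today the closest built-in knob I know of is the maxfiles setting, 
which only keeps the N most recent backups per VM and has no daily/weekly/monthly 
tiers; roughly:

  vzdump 104 --storage mfscluster2 --compress lzo --maxfiles 3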

Thanks, Fabrizio 




-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] 3.0 RC1 Clone on different node / local storage

2013-05-11 Thread Fabrizio Cuseo
Hello Martin.

Sorry, I was wrong.
I am cloning from a SHARED storage (ceph) on node 1 to LOCAL storage on node 3. 
But when I choose a local storage as the target, it is the local storage of 
the source node that gets used, not the local storage of the destination node. 
All of this is done via the GUI.

Regards, Fabrizio 


- Messaggio originale -
> > Subject: [PVE-User] 3.0 RC1 Clone on different node / local storage
> > 
> > Hello there.
> > 
> > I'm trying a new test cluster of 3 nodes with ceph and PVE 3.0Rc1.
> > 
> > I have a VM on Node 1 with local storage. If I clone it choosing
> > Node 2 and local storage, it clones the disk image and puts the conf file
> > on Node 2, but the image file is on the local storage of Node 1, so I
> > need to move it manually (rsync -aS).
> 
> Cloning local storage to a remote node (local storage) does not work.
> As soon as you select a remote node for target, it will show a red
> error ".. is not allowed for this action".
> So the question is, how do you trigger this action? Via the GUI, it's not
> possible.
> 
> Martin
> 
> 
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] 3.0 RC1 Clone on different node / local storage

2013-05-11 Thread Fabrizio Cuseo
Hello there.

I'm trying a new test cluster of 3 nodes with ceph and PVE 3.0Rc1.

I have a VM on Node 1 with local storage.
If I clone it choosing Node 2 and local storage, it clones the disk image and puts 
the conf file on Node 2, but the image file ends up on the local storage of Node 1, 
so I need to move it manually (rsync -aS).
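
(The manual move is basically this, assuming the default 'local' directory storage 
layout; <vmid> is a placeholder:

  rsync -aS /var/lib/vz/images/<vmid>/ node2:/var/lib/vz/images/<vmid>/

run on Node 1, followed by deleting the source copy once the clone works.)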

I think that it is a bug. Choosing a local storage in a clone operation should mean 
the local storage of the destination node, not the local storage of the source node.

Regards, Fabrizio 




-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] how to migrate vm's from a almost dead proxmox node?

2013-05-11 Thread Fabrizio Cuseo
IF your disk images are all on the NFS storage, you only need to copy the VM 
configuration files (/etc/pve/qemu-server/*.conf) from the dead server to the 
surviving server, AFTER powering down all the VMs (an offline operation).

This is valid for KVM VMs... I don't know about containers (which I don't use).
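
Since /etc/pve is the clustered filesystem, the conf can also simply be moved 
between the per-node directories from the surviving node; a sketch with the names 
from the mail below (pm3 = failing node, pm1 = surviving node, VM 816), assuming 
the cluster still has quorum:

  # on pm1: reassign the VM, then start it there
  mv /etc/pve/nodes/pm3/qemu-server/816.conf /etc/pve/nodes/pm1/qemu-server/
  qm start 816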

Good luck, Fabrizio 


- Messaggio originale -
> Hey there,
> 
> I'm responsible for a Proxmox-cluster consisting of two nodes and
> around
> 20 virtual machines, shared storage comes via NFS, but no HA. The
> nodes
> are starting from a embedded usb 4GB ATP module: no local hard disks.
> 
> The filesystem of one node became corrupted because the embedded ATP
> stick failed (i/o errors), the filesystem is mounted read-only now.
> The
> virtual machines still running fine on this node, i am able to login
> via
> ssh or via the web interface.
> 
> I want to move the vm's to the other node now, but this is not
> possible
> (^= readonly fs):
> 
> > root@pm3:~# grep 'clusternode ' /etc/pve/cluster.conf
> >   
> >   
> > root@pm3:~#
> > root@pm3:~# LANG=C qm migrate 816 pm1 --online
> > unable to create output file
> > '/var/log/pve/tasks/D/UPID:pm3:00023A0C:1ADAE768:518E1E9D:qmigrate:816:root@pam:'
> > - Read-only file system
> > root@pm3:~# LANG=C qm stop 816
> > unable to create output file
> > '/var/log/pve/tasks/0/UPID:pm3:00023A8D:1ADB20BE:518E1F30:qmstop:816:root@pam:'
> > - Read-only file system
> 
> I need to find a way to (online or offline) migrate the virtual
> machines
> to the other node.
> 
> I thought to remove the broken node from the cluster and then startup
> the virtual machine by hand on the surviving cluster, similar to this
> post:
> http://forum.proxmox.com/threads/8017-Migrate-CT-from-dead-cluster-node
> 
> Any hints?
> 
> 
> 
> Freundliche Grüße / Best Regards
> 
>  Lutz Willek
> 
> --
> creating IT solutions
> Lutz Willek science + computing ag
> Senior Systems Engineer Geschäftsstelle Berlin
> IT Services Berlin  Friedrichstraße 187
> phone +49(0)30 2007697-21   10117 Berlin, Germany
> fax   +49(0)30 2007697-11   www.science-computing.de
> --
> Vorstandsvorsitzender/Chairman of the board of management:
> Gerd-Lothar Leonhart
> Vorstand/Board of Management:
> Dr. Bernd Finkbeiner, Michael Heinrichs,
> Dr. Arno Steitz, Dr. Ingrid Zech
> Vorsitzender des Aufsichtsrats/
> Chairman of the Supervisory Board:
> Philippe Miltin
> Sitz/Registered Office: Tuebingen
> Registergericht/Registration Court: Stuttgart
> Registernummer/Commercial Register No.: HRB 382196
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 3.0 RC1 released!

2013-05-08 Thread Fabrizio Cuseo
Wow ! Great !

What about the Live Storage Migration ? Is it planned in this release ? 

Best Regards, Fabrizio Cuseo




- Messaggio originale -
Da: "Martin Maurer" 
A: "proxmoxve (pve-user@pve.proxmox.com)" , 
pve-de...@pve.proxmox.com
Inviato: Mercoledì, 8 maggio 2013 9:30:28
Oggetto: [PVE-User] Proxmox VE 3.0 RC1 released!

Hi all,

we just released Proxmox VE 3.0 RC1 (release candidate). It's based on the 
great Debian 7.0 release (Wheezy) and introduces a great new feature set:

http://pve.proxmox.com/wiki/VM_Templates_and_Clones

Under the hood, many improvements and optimizations are done, most important is 
the replacement of Apache2 by our own event driven API server.

A big Thank-you to our active community for all feedback, testing, bug 
reporting and patch submissions.

Release notes
http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.0

Download
http://www.proxmox.com/downloads/proxmox-ve/17-iso-images

Install Proxmox VE on Debian Wheezy
http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy

Upgrade from any 2.3 to 3.0
http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0

All RC1 installations can be updated to 3.0 stable without any problems (apt).
__
Best regards,

Martin Maurer
Proxmox VE project leader

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph storage in proxmox nodes

2013-03-15 Thread Fabrizio Cuseo
I have done it by creating a VM on the local storage of each node, mounting two 
or more LVM storages for ceph.

Eric Abreu Alamo  ha scritto:

> Hello to all people
>
> 
> 
> Lately I wonder whether it would be possible to build a proxmox processing and
>storage cluster with the same servers. For example, if I have 4 servers
>(nodes), can I install proxmox and ceph on each cluster node and
>use the same processing nodes as the cluster's storage servers? I have been thinking
>to install debian, install and configure ceph, and later install proxmox.
>Is that possible?
>
> 
>Sorry my english
>
> 
>Thank's in Advance
>
> 
> 
>
>
>
>
>___
>pve-user mailing list
>pve-user@pve.proxmox.com
>http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

-- Sent from my Android mobile with K-9 Mail.
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE 2.3 Ceph live backup

2013-02-15 Thread Fabrizio Cuseo
Hello people.

I am trying a 3-host cluster with ceph storage for disk images.

I have installed PVE 2.2 and upgraded with the latest pvetest packages (3 days ago).

How can I use Live Backup? 

If I try to back up a VM (I only use qemu VMs), I only have the usual choices 
(snapshot, suspend, stop), and the backup fails with this error:

 INFO: starting new backup job: vzdump 104 --remove 0 --mode snapshot 
--compress lzo --storage mfscluster2 --node nodo03
 INFO: Starting Backup of VM 104 (qemu)
 INFO: status = running
 ERROR: Backup of VM 104 failed - no such volume 'CephCluster:vm-104-disk-1'
 INFO: Backup job finished with errors
 TASK ERROR: job errors
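
(For the record, a quick way to check what the storage plugin can actually see, a 
sketch using the storage ID from above:

  pvesm list CephCluster

and, where the ceph client is configured, 'rbd ls rbd' to compare with what the 
pool really contains.)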
 

Thanks in advance, Fabrizio 


-- 
---
Fabrizio Cuseo - mailto:f.cu...@panservice.it
Direzione Generale - Panservice InterNetWorking
Servizi Professionali per Internet ed il Networking
Panservice e' associata AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it  mailto:i...@panservice.it
Numero verde nazionale: 800 901492
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user