Re: [PVE-User] Proxmox and firewall

2013-06-11 Thread Julien Groselle
Hi Michael,

Thank you for your SMART answer!
We will write our bash script now to enable the firewall on our two Proxmox
hosts.

Have a good day! ;-)

*JG*

2013/6/11 Michael Rasmussen 

> On Tue, 11 Jun 2013 17:24:30 +0200
> Julien Groselle  wrote:
>
> > Hello again,
> >
> > In our company, we set up heavy firewalls on every server.
> > So, after many tests on Proxmox with an open firewall, it's time to put
> > the servers in production.
> > Before this step, we have to configure our iptables rules:
> >
> > Here is a partial output of my 'netstat -lnpute':
> > tcp   0  0 127.0.0.1:85            0.0.0.0:*   LISTEN   0   35730752   433645/pvedaemon
> > tcp   0  0 0.0.0.0:8006            0.0.0.0:*   LISTEN   33  35730876   433690/pveproxy
> > udp   0  0 192.168.100.187:5404    0.0.0.0:*            0   13381511   4501/corosync
> > udp   0  0 192.168.100.187:5405    0.0.0.0:*            0   13381512   4501/corosync
> > udp   0  0 239.192.1.240:5405      0.0.0.0:*            0   13381508   4501/corosync
> >
> > Do I just have to open tcp/8006 and all of udp/540*? Or are there other
> > ports that Proxmox needs to use?
> > I'm sure that SSH has to be open between the two nodes, but what
> > else?
> >
> I run the following script at boot on every host. Every host has 2 NICs
> in a bond and has a number of VLANs and bridges configured. The hosts
> only have an IP configured on vmbr0 (default vlan0), on a LAN for shared
> storage (vlan20), and on a LAN for migration (vlan30). Everything is
> connected through a managed switch. vlan20 is accessible by all
> storage nodes and all hosts. vlan30 is only accessible by hosts. The
> only access to the hosts is via vlan0.
>
> cat /etc/iptables.sh
> #!/bin/sh
>
> iptables -F INPUT
>
> # Block all input on vmbr0 except
> # https (8006)
> iptables -A INPUT -i vmbr0 -p tcp --dport 8006 -m state --state NEW -j ACCEPT
> # vnc-console (5900-5910)
> iptables -A INPUT -i vmbr0 -p tcp -m multiport --dports 5900:5910 -m state --state NEW -j ACCEPT
> # apcups (udp:3551)
> iptables -A INPUT -i vmbr0 -p udp --dport 3551 -m state --state NEW -j ACCEPT
>
> # Related traffic to the above
> iptables -A INPUT -i vmbr0 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
> iptables -A INPUT -i vmbr0 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
>
> # Drop everything else
> iptables -A INPUT -i vmbr0 -j DROP
>
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael  rasmussen  cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir  datanom  net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir  miras  org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> * Omnic looks at his 33.6k link and then looks at Joy
> * Mercury cuddles his cable modem.. (=:]
>


Re: [PVE-User] Proxmox and firewall

2013-06-11 Thread Michael Rasmussen
On Tue, 11 Jun 2013 17:24:30 +0200
Julien Groselle  wrote:

> Hello again,
> 
> In our company, we set up heavy firewalls on every server.
> So, after many tests on Proxmox with an open firewall, it's time to put
> the servers in production.
> Before this step, we have to configure our iptables rules:
> 
> Here is a partial output of my 'netstat -lnpute':
> tcp   0  0 127.0.0.1:85            0.0.0.0:*   LISTEN   0   35730752   433645/pvedaemon
> tcp   0  0 0.0.0.0:8006            0.0.0.0:*   LISTEN   33  35730876   433690/pveproxy
> udp   0  0 192.168.100.187:5404    0.0.0.0:*            0   13381511   4501/corosync
> udp   0  0 192.168.100.187:5405    0.0.0.0:*            0   13381512   4501/corosync
> udp   0  0 239.192.1.240:5405      0.0.0.0:*            0   13381508   4501/corosync
> 
> Do I just have to open tcp/8006 and all of udp/540*? Or are there other
> ports that Proxmox needs to use?
> I'm sure that SSH has to be open between the two nodes, but what
> else?
> 
I run the following script at boot on every host. Every host has 2 NICs
in a bond and has a number of VLANs and bridges configured. The hosts
only have an IP configured on vmbr0 (default vlan0), on a LAN for shared
storage (vlan20), and on a LAN for migration (vlan30). Everything is
connected through a managed switch. vlan20 is accessible by all
storage nodes and all hosts. vlan30 is only accessible by hosts. The
only access to the hosts is via vlan0.

cat /etc/iptables.sh 
#!/bin/sh

iptables -F INPUT

# Block all input on vmbr0 except
# https (8006)
iptables -A INPUT -i vmbr0 -p tcp --dport 8006 -m state --state NEW -j ACCEPT
# vnc-console (5900-5910)
iptables -A INPUT -i vmbr0 -p tcp -m multiport --dports 5900:5910 -m state --state NEW -j ACCEPT
# apcups (udp:3551)
iptables -A INPUT -i vmbr0 -p udp --dport 3551 -m state --state NEW -j ACCEPT

# Related traffic to the above
iptables -A INPUT -i vmbr0 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i vmbr0 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything else
iptables -A INPUT -i vmbr0 -j DROP
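
If you also want to cover the corosync and SSH traffic from your netstat
output, a minimal sketch might look like this - untested here, and the cluster
subnet (192.168.100.0/24) and multicast group (239.192.1.240) are assumptions
taken from your output, so adjust them to your setup. These rules would have to
be inserted before the final DROP:

# corosync cluster communication (udp 5404-5405, unicast and multicast)
iptables -A INPUT -i vmbr0 -p udp -m multiport --dports 5404,5405 -s 192.168.100.0/24 -j ACCEPT
iptables -A INPUT -i vmbr0 -p udp -d 239.192.1.240 -j ACCEPT
# ssh between the cluster nodes (cluster sync and migration)
iptables -A INPUT -i vmbr0 -p tcp --dport 22 -s 192.168.100.0/24 -m state --state NEW -j ACCEPT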


-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
* Omnic looks at his 33.6k link and then looks at Joy
* Mercury cuddles his cable modem.. (=:]




Re: [PVE-User] Full privilege list.

2013-06-11 Thread Dietmar Maurer
> A little question about privileges:
> As there is a lack of information in the documentation regarding the
> privileges list, can we assume that the user "administrator" has all the
> privileges, or are there other ones?

The role 'Administrator' has all privileges.
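
For example, to give a user every privilege you can assign that role at the top
of the permission tree - a sketch, where "joe@pve" is a placeholder user; see
'pveum help aclmod' for the exact syntax of your version:

pveum aclmod / -user joe@pve -role Administrator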




[PVE-User] Proxmox and firewall

2013-06-11 Thread Julien Groselle
Hello again,

In our company, we set up heavy firewalls on every server.
So, after many tests on Proxmox with an open firewall, it's time to put
the servers in production.
Before this step, we have to configure our iptables rules:

Here is a partial output of my 'netstat -lnpute':
tcp   0  0 127.0.0.1:85            0.0.0.0:*   LISTEN   0   35730752   433645/pvedaemon
tcp   0  0 0.0.0.0:8006            0.0.0.0:*   LISTEN   33  35730876   433690/pveproxy
udp   0  0 192.168.100.187:5404    0.0.0.0:*            0   13381511   4501/corosync
udp   0  0 192.168.100.187:5405    0.0.0.0:*            0   13381512   4501/corosync
udp   0  0 239.192.1.240:5405      0.0.0.0:*            0   13381508   4501/corosync

Do I just have to open tcp/8006 and all of udp/540*? Or are there other
ports that Proxmox needs to use?
I'm sure that SSH has to be open between the two nodes, but what
else?

Thank you again.

JG.


[PVE-User] Full privilege list.

2013-06-11 Thread Julien Groselle
Hello everyone,

A little question about privileges:
As there is a lack of information in the documentation regarding the
privileges list, can we assume that the user "administrator" has all the
privileges, or are there other ones?

Thank you.

JG.


Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-11 Thread Martin Maurer
Just created a new wiki page - http://pve.proxmox.com/wiki/Storage_Migration

It's short, as the usage is so easy and self-explanatory, but feel free to improve it.

Best Regards,

Martin Maurer

mar...@proxmox.com
http://www.proxmox.com



Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-11 Thread Alexandre DERUMIER
>>Have you tested the migration from or to Sheepdog? 

Yes, it works.

I have tested from <-> to all storage plugins.



- Original message -

From: "Frédéric Massot"
To: pve-user@pve.proxmox.com
Sent: Tuesday, 11 June 2013 13:05:01
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

On 10/06/2013 18:57, Michael Rasmussen wrote:
> On Mon, 10 Jun 2013 18:49:32 +0200 (CEST) 
> Fabrizio Cuseo wrote: 
> 
>> Yes, backup works fine. 
>> I will try to set up two different Ceph storages, or move between different
>> kinds of storage, and I'll publish my results.
>> 
> I have tried the following, which all work:
> local -> nfs 
> local -> lvm 
[...]
> nexenta -> zfs 
> nexenta -> nexenta 

Have you tested the migration from or to Sheepdog? 


-- 
== 
| FRÉDÉRIC MASSOT | 
| http://www.juliana-multimedia.com | 
| mailto:frede...@juliana-multimedia.com | 
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | 
===Debian=GNU/Linux=== 


Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-11 Thread Frédéric Massot

On 10/06/2013 18:57, Michael Rasmussen wrote:

On Mon, 10 Jun 2013 18:49:32 +0200 (CEST)
Fabrizio Cuseo  wrote:


Yes, backup works fine.
I will try to set up two different Ceph storages, or move between different
kinds of storage, and I'll publish my results.


I have tried the following, which all work:
local ->  nfs
local ->  lvm

[...]

nexenta ->  zfs
nexenta ->  nexenta


Have you tested the migration from or to Sheepdog?


--
==
|  FRÉDÉRIC MASSOT   |
| http://www.juliana-multimedia.com  |
|   mailto:frede...@juliana-multimedia.com   |
| +33.(0)2.97.54.77.94  +33.(0)6.67.19.95.69 |
===Debian=GNU/Linux===


Re: [PVE-User] Possible to have both 2.3 and 3.0 pve in cluster?

2013-06-11 Thread Lutz Willek

On 11.06.2013 10:54, Lex Rivera wrote:

And migrate (offline migration is also ok) VMs between them?


Works for me (small testing setup).
But: not supported, not recommended, and don't complain when errors occur.


Freundliche Grüße / Best Regards

Lutz Willek

--
Vorstandsvorsitzender/Chairman of the board of management:
Gerd-Lothar Leonhart
Vorstand/Board of Management:
Dr. Bernd Finkbeiner, Michael Heinrichs, 
Dr. Arno Steitz, Dr. Ingrid Zech

Vorsitzender des Aufsichtsrats/
Chairman of the Supervisory Board:
Philippe Miltin
Sitz/Registered Office: Tuebingen
Registergericht/Registration Court: Stuttgart
Registernummer/Commercial Register No.: HRB 382196



[PVE-User] Possible to have both 2.3 and 3.0 pve in cluster?

2013-06-11 Thread Lex Rivera
And migrate (offline migration is also ok) VMs between them?


Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration ("qm move")

2013-06-11 Thread Fabrizio Cuseo
Hello Alexandre.
I have found the problem.

MooseFS needs a write-back cache to work; if I use the default NO CACHE
setting, the VM doesn't start.

So, if I move a powered-on disk from Ceph to MooseFS (remember that MooseFS is
locally mounted on the Proxmox host), the default "no cache" setting on the new
disk causes the problem.

So, having a cache setting on the destination disk can solve the issue.
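
A sketch of how to set that from the command line - the storage ID
"mfscluster2" and the volume name are guesses based on the log path above, so
adjust them to your configuration (the same can be done in the GUI by setting
the disk's cache option to "Write back"):

qm set 103 -virtio0 mfscluster2:103/vm-103-disk-1.raw,cache=writeback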

Thanks in advance, Fabrizio Cuseo


- Original message -
From: "Alexandre DERUMIER"
To: "Fabrizio Cuseo"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", "Martin Maurer"
Sent: Tuesday, 11 June 2013 5:54:36
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage
migration ("qm move")

No problem to read/write to the Ceph storage?

(Also note that the last pvetest update upgraded librbd; I don't know if it's related.)

Have you tried to stop/start the VM and then try again?




- Original message -

From: "Fabrizio Cuseo"
To: "Alexandre DERUMIER"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", "Martin Maurer"
Sent: Monday, 10 June 2013 19:16:57
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

Yes; it hangs at the beginning.

From MooseFS to MooseFS (both locally mounted), qcow2 to qcow2 works fine.

Regards, Fabrizio


- Original message -
From: "Alexandre DERUMIER"
To: "Fabrizio Cuseo"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", "Martin Maurer"
Sent: Monday, 10 June 2013 19:11:13
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage
migration ("qm move")

I just checked your logs; it seems that the mirroring job doesn't start at all?
It's hanging at the beginning, right?


- Original message -

From: "Fabrizio Cuseo"
To: "Alexandre DERUMIER"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", "Martin Maurer"
Sent: Monday, 10 June 2013 18:49:32
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

Yes, backup works fine.
I will try to set up two different Ceph storages, or move between different
kinds of storage, and I'll publish my results.

Thanks for now...
Fabrizio


- Original message -
From: "Alexandre DERUMIER"
To: "Fabrizio Cuseo"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", "Martin Maurer"
Sent: Monday, 10 June 2013 18:39:29
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage
migration ("qm move")

>>create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
>>Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
>>size=21474836480
>>TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
>>have die. Maybe do you have bad sectors? at 
>>/usr/share/perl5/PVE/QemuServer.pm line 4690.

It seems that the block mirror job has failed.
It's possibly a bad sector on the virtual disk on Ceph.
Does Proxmox backup work fine for this Ceph disk? (It should also die if it's
really a bad sector.)
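
If you want to exercise the whole virtual disk, the simplest test is to run a
backup of the VM - a sketch, assuming VM 103 from this thread and the default
backup storage:

vzdump 103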




- Original message -

From: "Fabrizio Cuseo"
To: "Alexandre DERUMIER"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", "Martin Maurer"
Sent: Monday, 10 June 2013 14:33:23
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

Hello.
I am trying to move from a Ceph disk to a local shared (MooseFS) disk: this is
the error I see.

Regards, Fabrizio


create full clone of drive virtio0 (CephCluster:vm-103-disk-1)
Formatting '/mnt/mfscluster2/images/103/vm-103-disk-1.raw', fmt=raw 
size=21474836480
TASK ERROR: storage migration failed: mirroring error: mirroring job seem to 
have die. Maybe do you have bad sectors? at /usr/share/perl5/PVE/QemuServer.pm 
line 4690.


- Original message -
From: "Alexandre DERUMIER"
To: "Fabrizio Cuseo"
Cc: "proxmoxve (pve-user@pve.proxmox.com)", pve-de...@pve.proxmox.com,
"Martin Maurer"
Sent: Monday, 10 June 2013 14:21:13
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage
migration ("qm move")

The wiki is not yet updated, but:

From the GUI, you have a new "Move disk" button on your VM's Hardware tab
(works offline or online).

The command line is: qm move_disk <vmid> <disk> <storage>
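
For example, with the VM from this thread, moving its first virtio disk to a
storage named "local" (a hypothetical target - check "qm help move_disk" for
the exact options of your version):

qm move_disk 103 virtio0 local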


- Original message -

From: "Fabrizio Cuseo"
To: "Martin Maurer"
Cc: "proxmoxve (pve-user@pve.proxmox.com)",
pve-de...@pve.proxmox.com
Sent: Monday, 10 June 2013 12:38:08
Subject: Re: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

Hello Martin.
I have upgraded my test cluster; where can I find any docs regarding storage
migration?

Thanks in advance, Fabrizio


- Original message -
From: "Martin Maurer"
To: "proxmoxve (pve-user@pve.proxmox.com)",
pve-de...@pve.proxmox.com
Sent: Monday, 10 June 2013 10:33:15
Subject: [PVE-User] Updates for Proxmox VE 3.0 - including storage migration
("qm move")

Hi all,

We just uploaded a bunch of packages to our pvetest repository
(http://pve.proxmox.com/wiki/Package_repositories), including a lot of bug
fixes, code cleanups, qemu 1.4.2 and also a new quite useful feature - storage
migration ("qm move").