Re: [PVE-User] Content listing

2014-02-03 Thread Eneko Lacunza

Hi Luis,

Where do you see the listing problem: on node 1 only, or on all of the nodes?

Regards

On 03/02/14 19:00, Luis G. Coralle wrote:

Hi, I have a 3-node cluster running Proxmox VE 3.0.
I had to do maintenance on node 1, so I moved its VMs to nodes 2 and 3. Then
I reinstalled node 1 from scratch and moved back the VMs I had moved to the
other nodes.
So far so good, but the content listing of storage "local" (and others)
on node 1 shows up empty, even though the physical location (/var/lib/vz)
does contain the images.
What could be happening? I gave the new node 1 the same name as the old node 1.
Thanks




--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



Re: [PVE-User] Planning an HA infrastructure : Hardware

2014-02-03 Thread Adam Thompson
 

On 2014-02-03 10:23, Philippe Schwarz wrote:

> 1. A pair of Dell MD1200 (disk enclosure with 12 SAS disks in a
> striped mirror) connected with SAS cables to each of the blades
> (M620) in an M1000E enclosure.
>
> Pros:
> - Simple
> - Max IOPS
> - Each storage is local from Proxmox's point of view (am I wrong?)
>
> Cons:
> - Difficult to scale: up to three blades it could be easy, but after 4
>   or 5 blades there won't be enough ports on the SAS disk enclosure.
>
> 2. The same pair of MD1200 connected to one blade, acting as a ZFS SAN
> for the other blades and exporting the storage content (NFS or iSCSI)
> over an external 10 GbE connection.
>
> Pros:
> - Easy to scale
> - Euh... ZFS ;-)
>
> Cons:
> - More complicated
> - Less IOPS due to the 10 GbE link
> - More expensive (one 10 GbE NIC per blade + one for the SAN + one 10 GbE switch)

Neither solution will work, since there's no way to physically connect
external SAS disks to an M620 blade.

The M620 does not have external SAS ports, and it also lacks a PCIe slot
for a SAS HBA.

The only M1000 blade that can connect to a SAS enclosure would be an M610x
with an add-in PERC card. This would be an extremely strange configuration,
to say the least, and is not supported.

Blade servers are explicitly *not* supported with the PERC H800 cards, as
documented here:
http://www.dell.com/learn/us/en/04/campaigns/dell-raid-controllers?c=us&l=en&s=bsd&cs=04&redirect=1.

However, even if it were possible to connect these components together, the
first solution still would not work. The MD1200 does not support
multi-initiator configurations; the dual (not triple) SAS ports can only be
connected to two SAS ports on the same PERC card, and the PERC card must
explicitly support the MD1200.

Sorry to burst your bubble, but you need to go back to square one with
this... there's so much wrong with your solution that I can't even suggest
how to fix it!

BTW, if you already have an M1000e chassis, you should probably be looking
at the EqualLogic PS 4110M iSCSI array instead. If you don't already own an
M1000e chassis, purchasing one to hold only three blades is... well...
stupid. You could buy 1U servers (and two MD1200s, and a 10GE switch) to do
the same thing for around 1/4 the price.

If you really want blade-like equipment, look at the C6220 instead. That
will give you four servers in a smaller footprint that doesn't require an
electrician to hook up. Add whatever SAS or 10GE or ... cards you want;
each node has a PCIe slot as well as two(?) mezzanine slots.

-Adam Thompson

athom...@athompso.net 



[PVE-User] Content listing

2014-02-03 Thread Luis G. Coralle
Hi, I have a 3-node cluster running Proxmox VE 3.0.
I had to do maintenance on node 1, so I moved its VMs to nodes 2 and 3. Then
I reinstalled node 1 from scratch and moved back the VMs I had moved to the
other nodes.
So far so good, but the content listing of storage "local" (and others)
on node 1 shows up empty, even though the physical location (/var/lib/vz)
does contain the images.
What could be happening? I gave the new node 1 the same name as the old node 1.
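A quick way to compare the two views (assuming the default "local" storage
and paths; adjust the names if your setup differs) would be something like:

  pvesm status                  # is the storage defined and active on this node?
  pvesm list local              # the listing the GUI content tab is built from
  ls -l /var/lib/vz/images/     # the VM images that actually exist on disk
  cat /etc/pve/storage.cfg      # the storage definition and its content types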
Thanks

-- 
Luis G. Coralle


Re: [PVE-User] My suggestion about explicit name in bridge configuration

2014-02-03 Thread Diaolin

On 2014-02-03 18:42, Dietmar Maurer wrote:

>> Just to understand:
>>
>> I want to write the "comment" from the interface, and I can't find a way
>> to do it...
>>
>> I've seen that the interface respects comments I write manually in the
>> interfaces file.
>
> You need to extend the API schema to allow that.

Perfect...

That was the old discussion, but I can't find the place to do it...

Do you have an example?

Tx, Diaolin


---
S’à destacà l’ultima föia dal bósch nét
crodàda l’ei, solàgna, ‘n mèzz ai sàssi
e ‘ntant fis-ciava ‘n zìfol de oseleti
a tegnìr vìo ‘l pensér che vèn matìna
[Diaolin]


Re: [PVE-User] My suggestion about explicit name in bridge configuration

2014-02-03 Thread Dietmar Maurer
> Just to understand:
>
> I want to write the "comment" from the interface, and I can't find a way to
> do it...
>
> I've seen that the interface respects comments I write manually in the
> interfaces file.

You need to extend the API schema to allow that.


Re: [PVE-User] My suggestion about explicit name in bridge configuration

2014-02-03 Thread Diaolin

Just to understand:

I want to write the "comment" from the interface, and
I can't find a way to do it...

I've seen that the interface respects comments
I write manually in the interfaces file.

Any hint?

Diaolin

---
S’à destacà l’ultima föia dal bósch nét
crodàda l’ei, solàgna, ‘n mèzz ai sàssi
e ‘ntant fis-ciava ‘n zìfol de oseleti
a tegnìr vìo ‘l pensér che vèn matìna
[Diaolin]


Re: [PVE-User] Planning an HA infrastructure : Hardware

2014-02-03 Thread Eneko Lacunza

Hi Philippe,

Please take into account that I have never administered such a system
(only simpler ones so far).


On 03/02/14 16:23, Philippe Schwarz wrote:

> On 24/01/2014 08:30, Philippe Schwarz wrote:
> Asked many questions and had a few answers (thanks to elacunza).
> But a single thread seemed to be a no-go, so I'm going to split it.
>
> Hardware-related questions:
>
> I've got different offers from my reseller. Among many solutions, I've
> got two in the last race:
>
> 1. A pair of Dell MD1200 (disk enclosure with 12 SAS disks in a
> striped mirror) connected with SAS cables to each of the blades
> (M620) in an M1000E enclosure.
>
> Pros:
> - Simple
> - Max IOPS
> - Each storage is local from Proxmox's point of view (am I wrong?)
>
> Cons:
> - Difficult to scale: up to three blades it could be easy, but after 4
>   or 5 blades, there won't be enough ports on the SAS disk enclosure.

The MD1200's 12-disk limit seems to fit quite nicely with 4-5 blades (2-3
mirrored disks each). Maybe you can scale out by having multiple Proxmox
clusters, each with its own blades and disk enclosures. This gives you a
cheaper start and more I/O bandwidth.

I'd choose this option, but I don't know what growth you expect for the
cluster... It would also help to know what type of I/O you expect. If you
expect high I/O, then you should consider fewer VMs per disk...

In my experience, the limit you usually hit first is disk I/O, especially
with spinning disks...


Just my 0.01¢

Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
  943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



[PVE-User] Planning an HA infrastructure : Hardware

2014-02-03 Thread Philippe Schwarz

On 24/01/2014 08:30, Philippe Schwarz wrote:
Asked many questions and had a few answers (thanks to elacunza).
But a single thread seems to be a no-go, so I'm going to split it.

Hardware-related questions:

I've got different offers from my reseller. Among many solutions, I've
got two in the last race:

1. A pair of Dell MD1200 (disk enclosure with 12 SAS disks in a
striped mirror) connected with SAS cables to each of the blades
(M620) in an M1000E enclosure.

Pros:
- Simple
- Max IOPS
- Each storage is local from Proxmox's point of view (am I wrong?)

Cons:
- Difficult to scale: up to three blades it could be easy, but after 4
  or 5 blades, there won't be enough ports on the SAS disk enclosure.


2. The same pair of MD1200 connected to one blade, acting as a ZFS SAN
for the other blades and exporting the storage content (NFS or iSCSI)
over an external 10 GbE connection.

Pros:
- Easy to scale
- Euh... ZFS ;-)


Cons:
- More complicated
- Less IOPS due to the 10 GbE link
- More expensive (one 10 GbE NIC per blade + one for the SAN + one 10 GbE switch)



Yes, I'm aware of the SPOF formed by the SAN or the enclosure, but I can
live with it and have no budget to double this.


What are your opinions on one or the other of these solutions?

PS: It could be HP hardware instead of Dell.

Thanks
Best regards.



Re: [PVE-User] problems with 3.2-beta

2014-02-03 Thread Angel Docampo

  
  
On 02/02/14 19:00, Adam Thompson wrote:

> (This isn't really new...) SPICE continues to be a major PITA when
> running Ubuntu 12.04 LTS as the management client. Hmm, I just
> found a PPA with virt-viewer packages that work. I should update
> the Wiki with that info, too.

Do you have a version with working SPICE sound? It seems that sound
support is broken in Ubuntu.

--
Angel Docampo
Datalab Tecnologia, s.a.
Castillejos, 352 - 08025 Barcelona
Tel. 93 476 69 14 - Ext: 706
Mob. 670.299.381



Re: [PVE-User] problems with 3.2-beta

2014-02-03 Thread Martin Maurer
Hello Adam,

Thanks for your feedback. For help with your migration or HA issues, please
submit ONE thread for each single problem; multiple complex issues in one
thread will lead to non-answers.

Here I will just answer your Ceph feedback:

> 3. The Wiki page on setting up CEPH Server doesn't mention that you can
> do most of the setup from within the GUI.  Since I have write access
> there, I guess I should fix it myself :-).

[Martin Maurer]
The wiki tells you at which step you can start using the GUI:
http://pve.proxmox.com/wiki/Ceph_Server#Creating_Ceph_Monitors

The video tutorial also shows what is possible via the GUI.
 
> Ceph speeds are barely acceptable (10-20MB/sec) but that's typical of
> Ceph in my experience so far, even with caching turned on. (Still a bit
> of a letdown compared to Sheepdog's 300MB/sec burst throughput,
> though.)

[Martin Maurer]
20 MB? If you report benchmark results, you need to specify what you measured.
Our Ceph test cluster - explained and described in the wiki page - got about
260 MB/second (with replication 3) read and write speed inside a single KVM
guest, e.g. Windows (testing with CrystalDiskMark). Ceph is not designed for
maximum single-client performance; it is designed to scale out, which means
you can get good performance with a large number of VMs, and you can always
add more servers to increase storage capacity and speed.

You can do a simple benchmark of your Ceph setup using the rados command.

First, create a new pool via the GUI; e.g. I name it test2 with replication 2.
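If you prefer the command line, roughly the same can be done with the
standard ceph tools (the pg_num of 128 below is just an example value;
pick one that fits your number of OSDs):

> ceph osd pool create test2 128      # create the pool with 128 placement groups
> ceph osd pool set test2 size 2      # replication factor 2, as in the GUI example
> ceph osd pool get test2 size        # verify the setting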

Now, run a write test:

> rados -p test2 bench 60 write --no-cleanup

  Total writes made:   5742
  Write size:          4194304
  Bandwidth (MB/sec):  381.573

Now, do a read test:

> rados -p test2 bench 60 seq

  Total reads made:    5742
  Read size:           4194304
  Bandwidth (MB/sec):  974.951

The most important factors for performance are:
- a 10 Gbit network
- at least 4 OSDs per node (and at least 3 nodes)
- fast SSDs for the journal
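
A quick way to sanity-check a node against that list (eth0 and osd.0 below
are just placeholders for your storage NIC and one of your OSDs):

> ethtool eth0 | grep Speed                 # is the storage network really at 10000Mb/s?
> ceph osd tree                             # OSDs per host, and their up/in status
> ls -l /var/lib/ceph/osd/ceph-0/journal    # where osd.0's journal actually lives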


> One thing I'm not sure of is OSD placement... if I have two drives per
> host dedicated to Ceph (and thus two OSDs), and my pool "size" is 2,
> does that mean a single node failure could render some data
> unreachable?

[Martin Maurer]
Size 2 means that your data is stored on at least two nodes, so if you lose
one node, your data is still accessible from the others.
Take a look at the Ceph docs; there is a lot of explanation there.
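
You can also check the placement yourself; with the default CRUSH rule,
replicas of a placement group are spread across different hosts (test2 is
just the example pool from above):

> ceph osd pool get test2 size     # replica count of the pool
> ceph osd crush rule dump         # the default rule uses "chooseleaf ... type host"
> ceph pg dump | head              # which OSDs each placement group maps to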

> I've adjusted my "size" to 3 just in case, but I don't
> understand how this works.  Sheepdog guarantees that multiple copies of
> an object won't be stored on the same host for exactly this reason, but
> I can't tell what Ceph does.

[Martin Maurer]
Size=3 will be the new default setting for Ceph (the change will come with
the Firefly release, afaik).

Via the GUI you can easily stop and start, add and remove OSDs, so you can
see how the cluster behaves in all scenarios.
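
From the command line, a harmless way to watch that behaviour on a test
cluster is to mark a single OSD out and back in (osd.0 below is just an
example id):

> ceph osd out 0     # take osd.0 out of the data distribution
> ceph -s            # watch the placement groups remap and recover
> ceph osd in 0      # bring the OSD back in once you have seen enough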

Martin




Re: [PVE-User] My suggestion about explicit name in bridge configuration

2014-02-03 Thread Diaolin

On 2014-01-31 15:31, Dietmar Maurer wrote:

>> As proposed by Dietmar, I will add a new label in the network
>> configuration, but I have a problem.
>
> The network parser already has the ability to parse comments:
>
>   iface eth0 ...
>   # This is a comment
>   # with 2 lines
>
> Why don't you want to use that feature?


It's the same...

I have not seen it in the network parser -
how can I use it?

I need to add a field in the interface,
and write it to and read it from the interfaces file.

Another question:

ison/network has standard options -
where can I find the definitions?
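
For reference, the closest I get is grepping the installed Perl modules for
the existing option names (paths and names as on my install, they may differ):

  grep -rln "bridge_ports" /usr/share/perl5/PVE/
  grep -rln "read_etc_network_interfaces" /usr/share/perl5/PVE/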

Tx, Diaolin


---
S’à destacà l’ultima föia dal bósch nét
crodàda l’ei, solàgna, ‘n mèzz ai sàssi
e ‘ntant fis-ciava ‘n zìfol de oseleti
a tegnìr vìo ‘l pensér che vèn matìna
[Diaolin]