…because they won't write too many things.
- Original Message -
From: "Lindsay Mathieson"
To: "proxmoxve"
Sent: Monday, 25 April 2016 10:41:21
Subject: Re: [PVE-User] Ceph or Gluster
On 25/04/2016 5:24 PM, Eneko Lacunza wrote:
> So, here you have the reason for your bad write performance with Ceph.
On 25/04/16 at 10:41, Lindsay Mathieson wrote:
On 25/04/2016 5:24 PM, Eneko Lacunza wrote:
So, here you have the reason for your bad write performance with Ceph.
You have to carefully choose the journal SSDs, otherwise you may be
better off even without SSDs... (yes, I made this very mistake too!)
What brand/model?
Yah I know :( Intel 530's
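[The Intel 530 result above is the classic symptom: the Ceph journal issues small synchronous writes, and many consumer SSDs collapse under O_DSYNC load even though their headline sequential numbers look fine. A common way to qualify a candidate journal SSD is a sync-write fio run; this is a sketch, assuming fio is installed and using an arbitrary scratch-file path — point --filename at the raw device for a realistic but data-destroying test:]

```shell
# 4k sequential writes with a sync per write -- roughly the Ceph journal pattern.
# DC-grade SSDs sustain this; many consumer drives drop to a few MB/s.
fio --name=journal-test --filename=/var/tmp/journal-test.bin --size=256M \
    --rw=write --bs=4k --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
    --runtime=30 --time_based
rm -f /var/tmp/journal-test.bin
```

[Compare the reported bandwidth against a run without --sync=1 on the same drive; a large gap is the red flag.]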
On 22/04/16 12:42, Mohamed Sadok Ben Jazia wrote:
Hello list,
In order to set up a highly scalable Proxmox infrastructure with a number of
clusters, I plan to use a distributed storage system, and I have some
questions.
1- I have a choice between Ceph and Gluster; which is better for Proxmox?
We use…
Hi,
On 23/04/16 at 02:36, Lindsay Mathieson wrote:
On 23/04/2016 7:50 AM, Brian :: wrote:
With NVME journals on a 3 node 4 OSD cluster
Well your hardware is rather better than mine :) I'm just using
consumer grade SSD's for journals which won't have anywhere near the
performance of NVME.
Hi,
On 22/04/16 at 23:50, Brian :: wrote:
With NVME journals on a 3 node 4 OSD cluster if I do a quick dd of a
1GB file on a VM I can see 2.34Gbps on the storage network straight
away so if I was only using 1Gbps here the network would be a
bottleneck. If I perform the same in 2 VMs traffic hits 4.19Gbps on
the storage network.
--
From: "Lindsay Mathieson"
To: "proxmoxve"
Sent: Friday, 22 April 2016 16:02:19
Subject: Re: [PVE-User] Ceph or Gluster
On 22/04/2016 11:31 PM, Brian :: wrote:
> 10Gbps or faster at a minimum or you will have pain. Even using 4
> nodes with 4 spinner disks in each node
Also, what sort of iowait percentages are you seeing?
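[For reference, iowait can be sampled with no extra packages by reading /proc/stat directly; iostat -x from the sysstat package gives a friendlier live per-device view. A minimal sketch:]

```shell
# Field 6 of the aggregate "cpu" line in /proc/stat is cumulative iowait jiffies.
# Sample it twice and diff to turn the counter into a rate.
awk '/^cpu /{print "cumulative iowait jiffies:", $6}' /proc/stat
```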
On 23/04/2016 7:50 AM, Brian :: wrote:
Would be very interested in hearing more about your gluster setup.. I
don't know anything about it - how many nodes are involved?
I'd be interested in your ceph setup as well
- Version?
- Rolled out using proxmox tools? (pveceph etc)
- underlying filesystem?
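[For context, a pveceph rollout of the kind asked about above is mostly per-node; a rough sketch for the PVE 4.x tooling, where the subnet and device names are placeholders:]

```shell
# On every node: install the ceph packages PVE ships
pveceph install
# Once, on the first node: write ceph.conf pointing at the storage network
pveceph init --network 10.10.10.0/24
# On each monitor node
pveceph createmon
# Per data disk: create an OSD, with its journal on a separate SSD/NVME
pveceph createosd /dev/sdb -journal_dev /dev/nvme0n1
```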
On 23/04/2016 7:50 AM, Brian :: wrote:
With NVME journals on a 3 node 4 OSD cluster
Well your hardware is rather better than mine :) I'm just using consumer
grade SSD's for journals which won't have anywhere near the performance
of NVME
if I do a quick dd of a
1GB file on a VM I can see 2.34Gbps on the storage network
Hi Lindsay,
With NVME journals on a 3 node 4 OSD cluster if I do a quick dd of a
1GB file on a VM I can see 2.34Gbps on the storage network straight
away so if I was only using 1Gbps here the network would be a
bottleneck. If I perform the same in 2 VMs traffic hits 4.19Gbps on
the storage network.
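[A note on reproducing Brian's number: a plain dd largely measures the guest page cache, not the storage path. A minimal sketch with a flush, using an arbitrary scratch path:]

```shell
# conv=fdatasync makes dd flush before printing its MB/s figure,
# so the result reflects the storage backend rather than RAM.
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm -f /var/tmp/ddtest
```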
On 22/04/2016 11:31 PM, Brian :: wrote:
10Gbps or faster at a minimum or you will have pain. Even using 4
nodes with 4 spinner disks in each node and you will be maxing out
1Gbps network.
Can't say I saw that on our cluster.
- 3 Nodes
- 3 OSD's per Node
- SSD journals for each OSD.
- 2*1G Eth
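[Whether 2*1G is enough is easy to measure directly on the storage network; a sketch assuming iperf3 is installed on two nodes (the hostname is a placeholder):]

```shell
# On the first storage node: run a server that exits after one test
iperf3 -s -1
# On a second node: 10-second TCP throughput test over the storage network
iperf3 -c node1.storage.example -t 10
```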
As it will be a large infrastructure, the number of nodes will increase
while demand is increasing.
The network will be mainly needed for backups (not sure whether to make a node
for backups only), and also for VM and LXC migration to use nodes in an
optimal way.
A good bandwidth is needed while moving co…
Hi Mohamed,
On 22/04/16 at 15:00, Mohamed Sadok Ben Jazia wrote:
Thank you Eneko,
I read in proxmox forum that distributed storage needs 10GBit or faster on
the local network and a dedicated network.
Could you detail your used infrastructure to see if it matches those
conditions?
We only have…
Hi Mohamed
10Gbps or faster at a minimum or you will have pain. Even using 4
nodes with 4 spinner disks in each node and you will be maxing out
1Gbps network. For any backfills or adding new OSDs you don't want to
be waiting on 1Gbps ethernet speeds.
Dedicated 10Gbps network for ceph communication…
Thank you Eneko,
I read in proxmox forum that distributed storage needs 10GBit or faster on
the local network and a dedicated network.
Could you detail your used infrastructure to see if it matches those
conditions?
On 22 April 2016 at 12:06, Eneko Lacunza wrote:
> Hi Mohamed,
>
> On 22/04/16
Hi Mohamed,
On 22/04/16 at 12:42, Mohamed Sadok Ben Jazia wrote:
Hello list,
In order to set up a highly scalable Proxmox infrastructure with a number of
clusters, I plan to use a distributed storage system, and I have some
questions.
1- I have a choice between Ceph and Gluster; which is better for Proxmox?
Hello list,
In order to set up a highly scalable Proxmox infrastructure with a number of
clusters, I plan to use a distributed storage system, and I have some
questions.
1- I have a choice between Ceph and Gluster; which is better for Proxmox?
2- Is it better to install one of those systems on the nodes…