If you don't need LACP you could use round-robin bonding mode.
With 4 x 1Gbit NICs you can get a bandwidth of 4Gbit per TCP connection.
Either create trunks on stacked switches (e.g. Avaya) or use single
switches (e.g. HP 1810-24) and a locally administered MAC address per
node/bond. The latter is some…
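For reference, round-robin (balance-rr) with Debian-style ifupdown would
look roughly like this; interface names and the address are made up, so
adjust to your hardware:

  # /etc/network/interfaces -- 4 x 1Gbit round-robin bond (sketch)
  auto bond0
  iface bond0 inet static
      address 192.168.10.11
      netmask 255.255.255.0
      bond-slaves eth0 eth1 eth2 eth3
      bond-mode balance-rr    # stripe packets across all four links
      bond-miimon 100         # link monitoring interval in ms

The striping is what makes more than 1Gbit per TCP connection possible in
the first place, at the cost of possible packet reordering within a flow.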
The only way I know to actually extend the reserved space is using the
method described here:
https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
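In short, that approach shrinks the number of host-visible sectors with
hdparm (a Host Protected Area), so the controller can use the hidden space
as extra spare area. Roughly, with a hypothetical device and sector count:

  # show current and native max sector counts
  hdparm -N /dev/sdX
  # expose only part of the drive; the 'p' prefix makes it permanent
  hdparm -N p390000000 /dev/sdX

Best done right after a secure erase, so the controller knows the
now-hidden sectors hold no live data.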
Hi Dominik,
On 04/29/15 19:06, Dominik Hannen wrote:
> I had planned to use at maximum 80GB of the available 250GB.
> 1 x 16GB OS
> 4 x 8, 12 or 16GB partitions for osd-journals.
>
> For a total SSD usage of 19.2%, 25.6% or 32%
> and over-provisioning of 80.8%, 74.4% or 68%.
>
> I am relatively certain …
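A layout like the one quoted above is easy to script with sgdisk, leaving
everything past the last partition untouched as spare area (sizes per the
16GB-journal variant; device name hypothetical):

  sgdisk -n 1:0:+16G -c 1:os       /dev/sdX
  sgdisk -n 2:0:+16G -c 2:journal0 /dev/sdX
  sgdisk -n 3:0:+16G -c 3:journal1 /dev/sdX
  sgdisk -n 4:0:+16G -c 4:journal2 /dev/sdX
  sgdisk -n 5:0:+16G -c 5:journal3 /dev/sdX
  # remaining ~170GB of the 250GB drive stays unpartitioned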
FWIW, I tried using some 256G MX100s with Ceph and had horrible performance
issues within a month or two. I was seeing 100% utilization with high
latency but only 20 MB/s writes. I had a number of S3500s in the same pool
that were dramatically better. Which is to say that they were actually
fast.
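For anyone wanting to reproduce that observation, the per-device numbers
come straight from iostat (sysstat package); the columns to watch are
%util, await and wMB/s:

  iostat -x sdX 1   # extended per-device stats every second; device name hypothetical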
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Hannen
> Sent: 29 April 2015 00:30
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Cost- and Powerefficient OSD-Nodes
>
>
> It's all about the total latency per operation. Most IO sizes over 10Gb
> don't make much difference to the round trip time. But comparatively even
> 128KB IOs over 1Gb take quite a while. For example, ping a host with a
> payload of 64k over 1Gb and 10Gb networks and look at the difference in
> round trip times.
>
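To put rough numbers on that: 65535 bytes take about 0.5 ms per direction
just to serialize onto a 1Gbit link, versus about 0.05 ms at 10Gbit, before
any switching latency. A quick check (target address hypothetical):

  ping -c 10 -s 65507 192.168.10.11   # 65507 is the largest ICMP payload ping allows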
I haven't used them myself but switching silicon is getting pretty cheap
nowadays:
http://whiteboxswitch.com/products/edge-core-as5610-52x
There are similar products (basically the same Broadcom ASIC) from Quanta
and I think Supermicro announced one recently as well.
They're not as plug-and-play …
> FYI, most Juniper switches hash LAGs on IP+port, so you'd get somewhat
> better performance than you would with simple MAC or IP hashing. 10G is
> better if you can afford it, though.
Interesting, I just read up on the topic; those Juniper switches seem to
be a nice pick then.
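On the Linux side, the bond only benefits from that if its own transmit
hash also includes layer 3+4; with Debian-style ifupdown that is one extra
line in the bond stanza (a sketch):

  bond-mode 802.3ad               # LACP
  bond-xmit-hash-policy layer3+4  # hash on src/dst IP and port, like the switch

(layer3+4 is not strictly 802.3ad-conformant for fragmented traffic, but it
is the usual choice for exactly this.)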
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Hannen
> Sent: 28 April 2015 17:08
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Cost- and Powerefficient OSD-Nodes
>
>> Interconnect as currently planned:
>> 4 x 1Gbit LACP Bonds over a pair of MLAG-capable switches (planned:
>> EX3300)
> If you can do 10Gb networking it's really worth it. I found that with 1Gb,
> latency affects your performance before you max out the bandwidth. We got
> some Supermicro servers …
> > 2 x (2 x 1Gbit) was on my mind with cluster/public separated, if the
> > performance of 4 x 1Gbit LACP would not deliver.
> > Regarding source-IP/dest-IP hashing with LACP: wouldn't it be sufficient
> > to give each osd-process its own IP for cluster/public then?
> I'm not sure this is supported …
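If it is supported, the relevant knobs would be the per-daemon address
options in ceph.conf, something like this sketch (addresses hypothetical,
and whether LACP hashing then actually spreads the flows is exactly the
open question above):

  [osd.0]
      public addr  = 192.168.10.11
      cluster addr = 192.168.20.11
  [osd.1]
      public addr  = 192.168.10.12
      cluster addr = 192.168.20.12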
We tested the m500 960GB for journaling and found that at most it could
journal 3 spinner OSDs. I'd strongly recommend you avoid the Crucial
consumer drives based on our testing/usage. We ended up journaling those to
the spinner itself and getting better performance. Also, I wouldn't trust
their power-loss protection…
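For completeness, journaling to the spinner itself is just the Ceph default
when you don't point the journal elsewhere; spelled out in ceph.conf it
would be (a sketch):

  [osd]
      osd journal = /var/lib/ceph/osd/$cluster-$id/journal   # file on the OSD's own disk
      osd journal size = 5120                                # journal size in MB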
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Hannen
> Sent: 28 April 2015 15:30
> To: Jake Young
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Cost- and Powerefficient OSD-Nodes
>
>> Interconnect as currently planned:
>> 4 x 1Gbit LACP Bonds over a pair of MLAG-capable switches (planned: EX3300)
> One problem with LACP is that it will only allow you to have 1Gbps between
> any two IPs or MACs (depending on your switch config). This will most
> likely limit the throughput of …
Hi Dominik,
Answers inline
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dominik Hannen
> Sent: 28 April 2015 10:35
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Cost- and Powerefficient OSD-Nodes
>
Hi ceph-users,
I am currently planning a cluster and would like some input specifically about
the storage-nodes.
The non-OSD services will be running on more powerful systems.
Interconnect as currently planned:
4 x 1Gbit LACP Bonds over a pair of MLAG-capable switches (planned: EX3300)
So far I…