> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Daniel Gryniewicz
> Sent: 11 July 2016 13:38
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] OSPF to the host
>
On 07/11/2016 08:23 AM, Saverio Proto wrote:
> I'm looking at the Dell S-ON switches, which we can get in a Cumulus
> version. Any pros and cons of using Cumulus vs. old-school switch OSes you
> may have come across?
Nothing to declare here. Once configured properly, the hardware works
as expected. I never used Dell; I used switches from …
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Saverio Proto
> Sent: 09 June 2016 11:38
> To: n...@fisk.me.uk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] OSPF to the host
>
> Has anybody had any experience with running the network routed down all the
> way to the host?
>
Hello Nick,
Yes, at SWITCH.ch we run OSPF unnumbered on the switches and on the
hosts. Each server has two NICs, and we are able to plug the servers into
any port on the fabric and OSPF will make the m…
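For the curious, a minimal sketch of what the host side of such an
unnumbered OSPF setup could look like in FRR syntax (the interface names,
router-id, and loopback address are assumptions, not SWITCH.ch's actual
config):

```
! frr.conf (hypothetical): the host speaks OSPF point-to-point on both
! NICs and advertises a /32 on its loopback, so it can be plugged into
! any fabric port and remain reachable at the same address.
interface lo
 ip address 10.0.0.11/32
 ip ospf area 0.0.0.0
!
interface eth0
 ip ospf network point-to-point
 ip ospf area 0.0.0.0
!
interface eth1
 ip ospf network point-to-point
 ip ospf area 0.0.0.0
!
router ospf
 ospf router-id 10.0.0.11
```

Because the physical ports carry no meaningful addresses of their own,
moving a cable only changes which adjacency forms; the /32 stays put.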
Hi,
regarding clustered VyOS on KVM: in theory this sounds like a safe plan,
but it will come with a significant performance penalty because of all the
context switches, and even with PCI passthrough you will still see
increased latency.
Docker/LXC/LXD, on the other hand, does not share the context-switch…
> OTOH, running ceph on dynamically routed networks will put your routing
> daemon (e.g. bird) in a SPOF position...
>
I run a somewhat large estate with either BGP or OSPF attachment; Ceph
is happy with either of them, and I have never had issues with the
routing daemons (after setting them …
Hi,
Regarding single points of failure of the daemon on the host, I was thinking
about doing a clustered setup with e.g. VyOS on KVM machines on the host,
letting them handle all the OSPF stuff as well. I have not done any
performance benchmarks, but it should at least be possible. Maybe even possib…
We do the same thing: OSPF between the ToR switches, BGP to all of the hosts,
with each one advertising its own /32 (each has 2 NICs).
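A minimal sketch of that per-host BGP arrangement, in FRR syntax using BGP
unnumbered peering (the ASN, addresses, and interface names here are
assumptions for illustration):

```
! frr.conf (hypothetical): the host peers with whatever sits on the
! other end of each NIC and announces only its own loopback /32.
interface lo
 ip address 192.0.2.11/32
!
router bgp 65011
 bgp router-id 192.0.2.11
 neighbor eth0 interface remote-as external
 neighbor eth1 interface remote-as external
 address-family ipv4 unicast
  network 192.0.2.11/32
```

With both sessions up, the fabric sees two equal-cost paths to the /32 and
can ECMP across them; if one NIC or ToR dies, the other path simply remains.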
On Mon, Jun 6, 2016 at 6:29 AM, Luis Periquito wrote:
> Nick,
>
> TL;DR: works brilliantly :)
>
> Where I work we have all of the ceph nodes (and a lot of other stuff) …
Hi,
for IPoIB this is probably the only way to efficiently use dual-port
HCAs. Since IPoIB can - AFAIK - only do bonding in active-passive mode,
it won't distribute traffic across both ports the way Ethernet
link aggregation would.
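To illustrate the limitation (a hypothetical ifupdown fragment; interface
names and addresses are assumed): with IPoIB the bond has to run in
active-backup mode, so the second port carries no traffic until the first
fails, whereas per-host routing with ECMP can use both ports at once.

```
# /etc/network/interfaces (hypothetical IPoIB bond):
auto bond0
iface bond0 inet static
    address 10.2.0.21/24
    bond-slaves ib0 ib1
    bond-mode active-backup   # the only bonding mode IPoIB supports
    bond-miimon 100
```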
OTOH, running ceph on dynamically routed networks will put your routing
daemon (e.g. bird) in a SPOF position...
> -Original Message-
> From: Luis Periquito [mailto:periqu...@gmail.com]
> Sent: 06 June 2016 14:30
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] OSPF to the host
>
> Nick,
>
> TL;DR: works brilliantly :)
Excellent, just what I wanted to hear!!!
Nick,
TL;DR: works brilliantly :)
Where I work we have all of the ceph nodes (and a lot of other stuff) using
OSPF and BGP server attachment. With that we're able to implement solutions
like anycast addresses for the radosgw service, removing the need to add
load balancers.
The biggest issues …
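Returning to the anycast point above, a sketch of how it might look (the
service address and ASN are made up): every radosgw host announces the same
/32, the fabric ECMPs client traffic across whichever hosts are currently
announcing it, and when a gateway dies its announcement is withdrawn and
traffic shifts to the survivors with no load balancer involved.

```
! frr.conf fragment on each radosgw host (hypothetical addresses):
interface lo
 ip address 198.51.100.80/32   ! shared anycast address for radosgw
!
router bgp 65021
 neighbor eth0 interface remote-as external
 address-family ipv4 unicast
  network 198.51.100.80/32
```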
Hi All,
Has anybody had any experience with running the network routed down all the
way to the host?
I know the standard way most people configure their OSD nodes is to bond
the two NICs, which then talk via a VRRP gateway, and from then on the
networking is all Layer 3. T…
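For reference, the conventional setup being contrasted might look like
this (a hypothetical ifupdown fragment; the addresses and the VRRP virtual
IP are assumptions):

```
# /etc/network/interfaces (hypothetical conventional OSD node):
auto bond0
iface bond0 inet static
    address 10.1.0.21/24
    gateway 10.1.0.1        # VRRP virtual IP shared by the ToR pair
    bond-slaves eth0 eth1
    bond-mode 802.3ad       # LACP across both NICs
    bond-miimon 100
```

Here the host is purely Layer 2 up to the ToR pair and depends on VRRP for
gateway failover, which is exactly the dependency the routed-to-the-host
designs in this thread remove.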