I was fascinated as well. This is how it should be done ☺

We are in the middle of ordering, and I saw the note that they use
single-socket systems for the OSDs due to latency issues. I have only seen
dual-socket systems in the OSD setups here. Is single-socket something you
should aim for with new SSD clusters?
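On Nick's question below about the 9 µs write numbers: that figure looks more
consistent with buffered I/O than with a full RBD round trip. A quick sketch
(not from the talk; the temp-file path and iteration count are made up) of how
much the page cache flatters write latency on any Linux box:

```python
import os
import tempfile
import time


def median_write_latency(path, fsync=False, iters=200):
    """Time 4 KiB writes; with fsync=False they usually just hit the page cache."""
    buf = b"\0" * 4096
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    samples = []
    try:
        for _ in range(iters):
            t0 = time.perf_counter()
            os.write(fd, buf)
            if fsync:
                os.fsync(fd)  # force the write down to stable storage
            samples.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
    samples.sort()
    return samples[len(samples) // 2]


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "latency-probe")
    buffered = median_write_latency(path, fsync=False)
    synced = median_write_latency(path, fsync=True)
    print(f"buffered: {buffered * 1e6:.1f} us, fsync: {synced * 1e6:.1f} us")
```

Buffered writes typically report single-digit microseconds because they only
copy into the page cache; the fsync'd case is closer to what the storage
actually costs. If the benchmark in the slides only syncs 1 write in 100, the
reported median would look like the buffered case.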

Regards,
Josef

On Sat, 30 Jan 2016 09:43 Nick Fisk <n...@fisk.me.uk> wrote:

> Yes, thank you very much. I've just finished going through this and found
> it very interesting. The dynamic nature of the infrastructure from top to
> bottom is fascinating, especially the use of OSPF per container.
>
> One question, though: are those latency numbers for writes on Ceph correct?
> 9 µs is very fast. Or is it something to do with the 1-in-100 buffered
> nature of the test?
>
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Gregory Farnum
> > Sent: 29 January 2016 21:25
> > To: Patrick McGarry <pmcga...@redhat.com>
> > Cc: Ceph Devel <ceph-de...@vger.kernel.org>; Ceph-User <ceph-
> > us...@ceph.com>
> > Subject: Re: [ceph-users] Ceph Tech Talk - High-Performance Production
> > Databases on Ceph
> >
> > This is super cool — thanks, Thorvald, for the realistic picture of how
> > databases behave on rbd!
> >
> > On Thu, Jan 28, 2016 at 11:56 AM, Patrick McGarry <pmcga...@redhat.com>
> > wrote:
> > > Hey cephers,
> > >
> > > Here are the links to both the video and the slides from the Ceph Tech
> > > Talk today. Thanks again to Thorvald and Medallia for stepping forward
> > > to present.
> > >
> > > Video: https://youtu.be/OqlC7S3cUKs
> > >
> > > Slides:
> > > http://www.slideshare.net/Inktank_Ceph/2016jan28-high-performance-production-databases-on-ceph-57620014
> > >
> > >
> > > --
> > >
> > > Best Regards,
> > >
> > > Patrick McGarry
> > > Director Ceph Community || Red Hat
> > > http://ceph.com || http://community.redhat.com
> > > @scuttlemonkey || @ceph
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> > > in the body of a message to majord...@vger.kernel.org
> > > More majordomo info at http://vger.kernel.org/majordomo-info.html
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
