On Wed, Nov 27, 2013 at 04:34:00PM +0100, Gregory Farnum wrote:
> On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson <mark.nel...@inktank.com> wrote:
> > On 11/27/2013 09:25 AM, Gregory Farnum wrote:
> >>
> >> On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
> >> <jens-christian.fisc...@switch.ch> wrote:
> >>>>
> >>>> The largest group of threads is those from the network messenger -- in
> >>>> the current implementation it creates two threads per process the
> >>>> daemon is communicating with. That's two threads for each OSD it
> >>>> shares PGs with, and two threads for each client which is accessing
> >>>> any data on that OSD.
> >>>
> >>>
> >>> If I read your statement right, then 1000 threads still seem excessive,
> >>> no? (with 24 OSDs, there are at most 2 * 23 threads to the other OSDs, plus
> >>> some threads to the clients)...
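
A minimal toy sketch (C++, not the actual Ceph messenger code) of the pattern
Greg describes: one reader plus one writer thread per peer connection. The 23
OSD peers alone only account for 46 threads; every attached client connection
adds two more on top of that.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Toy illustration only: one "reader" and one "writer" thread per peer,
// as described above. Build with: g++ -std=c++11 -pthread toy.cc
int main() {
    const int peers = 23;                 // the 23 other OSDs in a 24-OSD cluster
    std::atomic<bool> running{true};
    std::vector<std::thread> threads;

    for (int i = 0; i < peers; ++i) {
        // "reader": the real daemon would block in recv() on the peer socket here
        threads.emplace_back([&running] {
            while (running)
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
        });
        // "writer": the real daemon would drain an outgoing message queue here
        threads.emplace_back([&running] {
            while (running)
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
        });
    }

    std::cout << "threads for " << peers << " peer connections: "
              << threads.size() << "\n";  // prints 46; each client adds 2 more

    running = false;
    for (auto &t : threads)
        t.join();
}
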
> >>
> >>
> >> Well, it depends on how many clients you have. ;) I think the default
> >> settings also have ~12 internal working threads (but I don't recall
> >> exactly). The thread count definitely is not related to the number of
> >> PGs it hosts (directly, anyway -- more PGs can lead to more OSD peers
> >> and so more messenger threads). Keep in mind that if you have clients
> >> connecting and then disconnecting repeatedly (eg, the rados tool),
> >> each instance counts as a client and the connection has to time out
> >> (15 minutes) before its threads get cleaned up.
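
Putting very rough numbers on that description (the two-threads-per-connection
rule and the ~12 internal workers are from above; the client counts and churn
rate below are made-up figures, only to show how quickly the 15-minute timeout
inflates the total):

#include <iostream>

// Back-of-the-envelope estimate following the description above. The "12
// internal worker threads" figure and the client-churn model are assumptions
// for illustration, not measured values.
long estimate_osd_threads(int osd_peers,
                          int steady_clients,
                          double short_lived_clients_per_min,
                          int internal_workers = 12,
                          int idle_timeout_min = 15) {
    // Each open connection costs one reader and one writer thread. Short-lived
    // clients (e.g. repeated runs of the rados tool) keep their two threads
    // alive until the idle timeout expires, so they pile up.
    double lingering = short_lived_clients_per_min * idle_timeout_min;
    return internal_workers
         + 2L * osd_peers
         + 2L * steady_clients
         + static_cast<long>(2 * lingering);
}

int main() {
    // hypothetical numbers: 23 OSD peers, 100 long-lived clients,
    // ~30 short-lived clients per minute
    std::cout << estimate_osd_threads(23, 100, 30) << " threads (rough)\n";
    // 12 + 46 + 200 + 2*30*15 = 1158 -- in the neighbourhood of ~1000 threads
}
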
> >
> >
> > So I am woefully ignorant as to why/how we are doing things here, but is
> > there any reason we are spawning new threads for each client connection
> > rather than using a thread pool like we do in other areas?
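
For reference, the alternative being asked about would look roughly like this:
a fixed-size pool of workers pulling from one shared queue, so the thread count
stays constant no matter how many connections exist. A generic sketch only, not
how the Ceph messenger (or its existing thread pools) is actually written:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(size_t n) {
        for (size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> g(mtx_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto &w : workers_) w.join();
    }
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> g(mtx_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cv_.wait(lk, [this] { return stopping_ || !jobs_.empty(); });
                if (stopping_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // handle one message, whichever connection it came from
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool stopping_ = false;
};

int main() {
    ThreadPool pool(8);   // worker count fixed, independent of client count
    for (int i = 0; i < 100; ++i)
        pool.submit([] { /* handle one incoming message here */ });
}   // ~ThreadPool drains the queue and joins the 8 workers
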
> 
> Because it's harder and scales a bajillion times farther than people
> think it does. 
It may scale 'farther', but not faster.

Thousands of threads talking to each other, managing messages, managing queues,
managing locks ...
... all of this takes time: hundreds of microseconds and hundreds of system calls
for _ONE_ single client write
(Bug #6366 / long TAT, due to the long residence time in the Ceph code)
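
To make the per-hop cost concrete, here is a small measurement sketch: two
threads handing a token back and forth through a mutex and condition variable,
the way a message moves from one queue to the next. Each wake-up goes through
futex system calls on Linux, and a client write that crosses several such
thread boundaries pays that price at every hop. Absolute numbers depend on the
machine; the sketch only shows the order of magnitude:

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    const int hops = 100000;
    std::mutex mtx;
    std::condition_variable cv;
    int turn = 0;            // 0: main's turn, 1: worker's turn

    std::thread worker([&] {
        std::unique_lock<std::mutex> lk(mtx);
        for (int i = 0; i < hops; ++i) {
            cv.wait(lk, [&] { return turn == 1; });
            turn = 0;
            cv.notify_one();
        }
    });

    auto t0 = std::chrono::steady_clock::now();
    {
        std::unique_lock<std::mutex> lk(mtx);
        for (int i = 0; i < hops; ++i) {
            turn = 1;
            cv.notify_one();
            cv.wait(lk, [&] { return turn == 0; });
        }
    }
    auto t1 = std::chrono::steady_clock::now();
    worker.join();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    // Each iteration is two hand-offs (main -> worker -> main).
    std::cout << "average cost per thread hand-off: "
              << ns / (2.0 * hops) / 1000.0 << " us\n";
}
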

Regards,
-Dieter


> Rather spend the dev time on new features and things,
> but we will have to address it eventually.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com