On Tue, Oct 31, 2017 at 07:53:47AM +0800, Qu Wenruo wrote:
> 
> 
> On 2017年10月31日 01:14, Liu Bo wrote:
> > First and foremost, here are the problems we have right now,
> > 
> > a) %thread_pool is configurable via a mount option; however, for
> >    those 'workers' that really need concurrency, there are at most
> >    "min(num_cpus+2, 8)" threads running to process work, and the
> >    value can't be tuned after mount because they're tagged with
> >    NO_THRESHOLD.
> > 
> > b) For THRESHOLD workers, btrfs adjusts concurrency by starting
> >    with the minimum max_active and calling
> >    btrfs_workqueue_set_max() to grow it on demand, but it also has
> >    an upper limit, "min(num_cpus+2, 8)", which means at most that
> >    many threads can run at the same time.
> 
> I also wondered about this when using the kernel workqueue to replace
> the btrfs workqueue.
> However, at that time octa-core CPUs were still hard to find in servers.
> 
> > 
> > In fact, kernel workqueues can be created on demand and destroyed
> > once no work needs to be processed.  The small max_active limitation
> > (8) from btrfs has prevented us from utilizing all available cpus,
> > which defeats the reason we chose workqueues in the first place.
> 
> 
> Yep, I'm also wondering whether we should keep the old 8-thread upper
> limit, especially now that octa-core CPUs are getting cheaper and
> cheaper (thanks to AMD and its Ryzen).
> 
> > 
> > What this patch does:
> > 
> > - Resizing %thread_pool is removed entirely, as it is no longer
> >   needed, while its mount option is kept for compatibility.
> > 
> > - The default "0" is passed when allocating a workqueue, so the
> >   maximum number of in-flight works is typically 256.  All fields
> >   for limiting max_active are removed, including current_active,
> >   limit_active, thresh, etc.
> 
> A benchmark would make this more persuasive, especially one run on a
> low-frequency but high-thread-count CPU.
> 
> The idea and the patch look good to me.
> 
> Looking forward to some benchmark result.
>

OK, I will work out some numbers.  I'm also going to make these
workqueues global across all btrfs filesystems on a system: unbound
workqueues already share kthreads belonging to all available NUMA
nodes, so there is no need to keep them per-fs.
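For reference, the allocation described above boils down to passing
max_active = 0 so the core workqueue code applies its own default
(WQ_DFL_ACTIVE, currently 256).  A minimal sketch, not the actual
patch; the name "btrfs-worker" and the variable are illustrative only:

```c
#include <linux/workqueue.h>

static struct workqueue_struct *btrfs_worker_wq;

static int btrfs_wq_sketch_init(void)
{
	/*
	 * max_active = 0 lets the workqueue core pick its default
	 * concurrency limit (WQ_DFL_ACTIVE == 256 in-flight works).
	 * WQ_UNBOUND work items are served by a kthread pool shared
	 * across NUMA nodes, so one global workqueue suffices.
	 */
	btrfs_worker_wq = alloc_workqueue("btrfs-worker",
					  WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!btrfs_worker_wq)
		return -ENOMEM;
	return 0;
}
```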

Thanks,

-liubo