On Mon, Jun 22, 2020 at 12:53:29PM -0700, Paul E. McKenney wrote:
> On Mon, Jun 22, 2020 at 09:04:06PM +0200, Uladzislau Rezki wrote:
> > > >
> > > > Very good. When does kfree_rcu() and friends move out of kernel/rcu?
> > > >
> > > Do you mean to move the whole logic of kfree_rcu() from top to down
> > > to mm/?
> >
> > I do mean exactly that.
> >
> > That was my goal some years back when Rao Shoaib was making the first
> > attempt along these lines.
On Fri, Jun 19, 2020 at 05:46:52PM +0200, Uladzislau Rezki wrote:
> On Thu, Jun 18, 2020 at 02:34:27PM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 18, 2020 at 11:17:09PM +0200, Uladzislau Rezki wrote:
> > > >
> > > > 	trace_rcu_invoke_kfree_bulk_callback(
> > > > 		rcu_state.name, bkvhead[i]->nr_records,
> > > > 		bkvhead[i]->records);
> > > > 	if (i == 0)
> > > > 		kfree_bulk(bkvhead[i]->nr_records,
> > > > 			bkvhead[i]->records);
> > > > 	else
> > > > 		vfree_bulk(bkvhead[i]->nr_records,
> > > > 			bkvhead[i]->records);
On Thu, Jun 18, 2020 at 10:35:57PM +0200, Uladzislau Rezki wrote:
> On Thu, Jun 18, 2020 at 12:03:59PM -0700, Paul E. McKenney wrote:
> but i do not have a strong opinion here, even though i tend to
> say that it would be odd. Having just vfree_bulk(), i think
> would be enough, as a result the cod
On Thu, Jun 18, 2020 at 12:03:59PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 18, 2020 at 08:34:48PM +0200, Uladzislau Rezki wrote:
> > > > >
> > > > > I suspect that he would like to keep the tracing.
> > > > >
> > > > > It might be worth trying the branches, given that they would be constant
> > > > > and indexed by "i". The compiler might well remove the indirection.
> > > > >
> > > > > The compiler guys brag about doing so, which of course might o
On Thu, Jun 18, 2020 at 10:35:27AM -0700, Matthew Wilcox wrote:
> On Thu, Jun 18, 2020 at 07:30:49PM +0200, Uladzislau Rezki wrote:
> > > I'd suggest:
> > >
> > > 	rcu_lock_acquire(&rcu_callback_map);
> > > 	trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
> > > 		bkvhead[i]->nr_records, bkvhead[i]->records);
On Thu, Jun 18, 2020 at 11:37:51AM -0700, Matthew Wilcox wrote:
> On Thu, Jun 18, 2020 at 08:23:33PM +0200, Uladzislau Rezki wrote:
> > > +void vfree_bulk(size_t count, void **addrs)
> > > +{
> > > +	unsigned int i;
> > > +
> > > +	BUG_ON(in_nmi());
> > > +	might_sleep_if(!in_interrupt());
> > > +
> > > +	for (i = 0; i < count; i++) {
> > > +		void *addr = addrs[i];
On Thu, Jun 18, 2020 at 11:15:41AM -0700, Matthew Wilcox wrote:
> On Thu, Jun 18, 2020 at 07:56:23PM +0200, Uladzislau Rezki wrote:
> > If we mix pointers, then we can do free per pointer only. I mean in that
> > case we will not be able to use the kfree_bulk() interface for freeing SLAB
> > memory and the code would be converted to something like:
> >
> > 	while (nr_objects_i
On Thu, Jun 18, 2020 at 07:35:20PM +0200, Uladzislau Rezki wrote:
> > >
> > > I don't think that replacing direct function calls with indirect function
> > > calls is a great suggestion with the current state of play around branch
> > > prediction.
> > >
> > > I'd suggest:
> > >
> > > 	rcu_lock_acquire(&rcu_callback_map);
> > > 	trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
> > > 		bkvhead[i]->nr_records, bkvhead[i]->records);
On Thu, Jun 18, 2020 at 10:32:06AM -0700, Paul E. McKenney wrote:
> On Thu, Jun 18, 2020 at 07:25:04PM +0200, Uladzislau Rezki wrote:
> > > > +	// Handle two first channels.
> > > > +	for (i = 0; i < FREE_N_CHANNELS; i++) {
> > > > +		for (; bkvhead[i]; bkvhead[i] = bnext) {
> > > > +			bnext = bkvhead[i]->next;
> > > > +			debug_rcu_bhead_unqueue(bkvhead[i]);
> > > > +
> > > > +			rcu_lock_acquire(&rcu_callback_map);
> > >
> > > Not an emergency, but did you look into replacing this "if" statement
> > > with an array of pointers to functions implementing the legs of the
> > > "if" statement? If nothing else, this would greatly reduce indentation.
> >
> > I don't think that replacing direct function calls with indirect function
> > calls is a great suggestion with the current state of play around branch
> > prediction.
On Wed, Jun 17, 2020 at 05:52:14PM -0700, Matthew Wilcox wrote:
> On Wed, Jun 17, 2020 at 04:46:09PM -0700, Paul E. McKenney wrote:
> > > +	// Handle two first channels.
> > > +	for (i = 0; i < FREE_N_CHANNELS; i++) {
> > > +		for (; bkvhead[i]; bkvhead[i] = bnext) {
> > > +			bnext = bkvhead[i]->next;
> > > +			debug_rcu_bhead_unqueue(bkvhead[i]);
On Mon, May 25, 2020 at 11:47:53PM +0200, Uladzislau Rezki (Sony) wrote:
> To do so, we use an array of kvfree_rcu_bulk_data structures.
> It consists of two elements:
> - index number 0 corresponds to slab pointers.
> - index number 1 corresponds to vmalloc pointers.
>
> Keeping vmalloc pointers separated from slab pointers makes
> it possible to invoke the right freeing API for the right
> kind of pointer.