On 12/11/2012 07:37 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Dec 11, 2012 at 07:32:13PM +0530, Srivatsa S. Bhat wrote:
>> On 12/11/2012 07:17 PM, Tejun Heo wrote:
>>> Hello, Srivatsa.
>>>
>>> On Tue, Dec 11, 2012 at 06:43:54PM +0530, Srivatsa S. Bhat wrote:
>>>> This approach (of using synchronize_sched()) also looks good. It is simple,
>>>> yet effective, but unfortunately inefficient at the writer side (because
>>>> he'll have to wait for a full synchronize_sched()).
On 12/11/2012 07:17 PM, Tejun Heo wrote:
> Hello, Srivatsa.
>
> On Tue, Dec 11, 2012 at 06:43:54PM +0530, Srivatsa S. Bhat wrote:
>> This approach (of using synchronize_sched()) also looks good. It is simple,
>> yet effective, but unfortunately inefficient at the writer side (because
>> he'll have to wait for a full synchronize_sched()).

While
On 12/10/2012 10:54 PM, Oleg Nesterov wrote:
> On 12/10, Srivatsa S. Bhat wrote:
>>
>> On 12/10/2012 01:52 AM, Oleg Nesterov wrote:
>>> On 12/10, Srivatsa S. Bhat wrote:
>>>>
>>>> On 12/10/2012 12:44 AM, Oleg Nesterov wrote:
>>>>> But yes, it is easy to blame somebody else's code ;) And I can't suggest
>>>>> something better at least right now.
On 12/10/2012 10:58 PM, Oleg Nesterov wrote:
> On 12/10, Srivatsa S. Bhat wrote:
>>
>> On 12/10/2012 02:43 AM, Oleg Nesterov wrote:
>>> Damn, sorry for noise. I missed this part...
>>>
>>> On 12/10, Srivatsa S. Bhat wrote:
>>>>
>>>> On 12/10/2012 12:44 AM, Oleg Nesterov wrote:
>>>>> the latency. And I guess something like kick_all_cpus_sync() is "too heavy".
On 12/10/2012 11:45 PM, Oleg Nesterov wrote:
> On 12/10, Srivatsa S. Bhat wrote:
>>
>> On 12/10/2012 02:27 AM, Oleg Nesterov wrote:
>>> However. If this is true, then compared to preempt_disable/stop_machine
>>> livelock is possible. Probably this is fine, we have the same problem with
>>> get_online_cpus().
Damn, sorry for noise. I missed this part...
On 12/10, Srivatsa S. Bhat wrote:
>
> On 12/10/2012 12:44 AM, Oleg Nesterov wrote:
> > the latency. And I guess something like kick_all_cpus_sync() is "too heavy".
>
> I hadn't considered that. Thinking of it, I don't think it would help us..
> It won't get rid
On 12/07, Srivatsa S. Bhat wrote:
>
> 4. No deadlock possibilities
>
> Per-cpu locking is not the way to go if we want to have relaxed rules
> for lock-ordering. Because, we can end up in circular-locking dependencies
> as explained in https://lkml.org/lkml/2012/12/6/290

OK, but this assumes
On 12/10/2012 12:44 AM, Oleg Nesterov wrote:
> On 12/07, Srivatsa S. Bhat wrote:
>>
>> Per-cpu counters can help solve the cache-line bouncing problem. So we
>> actually use the best of both: per-cpu counters (no-waiting) at the reader
>> side in the fast-path, and global rwlocks in the slowpath.
>>
>> [ Fastpath = no writer is active; Slowpath = a writer
On 12/10, Srivatsa S. Bhat wrote:
>
> On 12/10/2012 12:44 AM, Oleg Nesterov wrote:
>
> > But yes, it is easy to blame somebody else's code ;) And I can't suggest
> > something better at least right now. If I understand correctly, we can not
> > use, say, synchronize_sched() in _cpu_down() path
>
> We can't
On 12/08/2012 12:01 AM, Tejun Heo wrote:
> Hello, Srivatsa.
>
> On Fri, Dec 07, 2012 at 11:54:01PM +0530, Srivatsa S. Bhat wrote:
>>> lg_lock doesn't do local nesting and I'm not sure how big a deal that
>>> is as I don't know how many should be converted. But if nesting is an
>>> absolute necessity, it would be much better to implement generic
On 12/07/2012 11:46 PM, Tejun Heo wrote:
> Hello, again.
>
> On Fri, Dec 07, 2012 at 09:57:24AM -0800, Tejun Heo wrote:
>> possible. Also, I think the right approach would be auditing each
>> get_online_cpus_atomic() callsites and figure out proper locking order
>> rather than implementing a construct this unusual especially as
>> hunting down the
On 12/07/2012 11:27 PM, Tejun Heo wrote:
> On Fri, Dec 07, 2012 at 11:08:13PM +0530, Srivatsa S. Bhat wrote:
>> 4. No deadlock possibilities
>>
>> Per-cpu locking is not the way to go if we want to have relaxed rules
>> for lock-ordering. Because, we can end up in circular-locking dependencies
>> as explained in https://lkml.org/lkml/2012/12/6/290