I'm getting a kernel panic with your patch:

-- panic
-- mount_block_root
-- mount_root
-- prepare_namespace
-- kernel_init_freeable

It gives me an unknown block device for the same config file I used on
other builds.  Since my test runs in a KVM guest with a ramdisk, I'm
still checking whether there are any differences between this build
and the other ones, but I don't think there are.

Any chance that prepare_namespace() might be breaking mount_root()?
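
For reference, the call path as I read it (condensed from init/main.c
and init/do_mounts.c; a simplified sketch, details may differ per
kernel version):

	/*
	 * kernel_init_freeable()           init/main.c
	 *   -> prepare_namespace()         init/do_mounts.c: parses root=,
	 *                                  waits for the root device
	 *     -> mount_root()
	 *       -> mount_block_root()      tries each compiled-in fs and
	 *                                  panic()s ("unknown-block") when
	 *                                  the root device cannot be found
	 */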

Tks

On Wed, Jun 11, 2014 at 9:14 PM, Eric W. Biederman
<ebied...@xmission.com> wrote:
> "Paul E. McKenney" <paul...@linux.vnet.ibm.com> writes:
>
>> On Wed, Jun 11, 2014 at 04:12:15PM -0700, Eric W. Biederman wrote:
>>> "Paul E. McKenney" <paul...@linux.vnet.ibm.com> writes:
>>>
>>> > On Wed, Jun 11, 2014 at 01:46:08PM -0700, Eric W. Biederman wrote:
>>> >> On the chance that it is the dropping of the old nsproxy (which
>>> >> calls synchronize_rcu in switch_task_namespaces) that is causing
>>> >> you problems, I have attached a patch that changes from
>>> >> rcu_read_lock to task_lock for code that calls task_nsproxy from a
>>> >> different task.  The code should be safe, and it should be an
>>> >> unquestionable performance improvement, but I have only
>>> >> compile-tested it.
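>>> >>
>>> >> To illustrate the shape of the change (a sketch with a hypothetical
>>> >> caller, not a hunk from the patch itself):
>>> >>
>>> >>	/* before: nsproxy is read under RCU */
>>> >>	rcu_read_lock();
>>> >>	nsproxy = task_nsproxy(tsk);
>>> >>	if (nsproxy)
>>> >>		net = get_net(nsproxy->net_ns);
>>> >>	rcu_read_unlock();
>>> >>
>>> >>	/* after: nsproxy is read under task_lock, so the freeing side
>>> >>	 * no longer has to wait out an RCU grace period */
>>> >>	task_lock(tsk);
>>> >>	nsproxy = tsk->nsproxy;
>>> >>	if (nsproxy)
>>> >>		net = get_net(nsproxy->net_ns);
>>> >>	task_unlock(tsk);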
>>> >>
>>> >> If you can try the patch, it will tell us whether the problem is
>>> >> the rcu access in switch_task_namespaces (the only one I am aware
>>> >> of in network namespace creation) or whether the problematic rcu
>>> >> case is somewhere else.
>>> >>
>>> >> If nothing else, knowing which rcu accesses are causing the
>>> >> slowdown seems important at the end of the day.
>>> >>
>>> >> Eric
>>> >>
>>> >
>>> > If this is the culprit, another approach would be to use workqueues from
>>> > RCU callbacks.  The following (untested, probably does not even build)
>>> > patch illustrates one such approach.
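>>> >
>>> > Roughly this shape (sketch only; it assumes struct nsproxy grows
>>> > rcu_head and work_struct members, and the names are made up):
>>> >
>>> >	static void put_nsproxy_workfn(struct work_struct *work)
>>> >	{
>>> >		struct nsproxy *ns = container_of(work, struct nsproxy, work);
>>> >
>>> >		free_nsproxy(ns);	/* may sleep here, unlike in an RCU callback */
>>> >	}
>>> >
>>> >	static void put_nsproxy_rcu(struct rcu_head *head)
>>> >	{
>>> >		struct nsproxy *ns = container_of(head, struct nsproxy, rcu);
>>> >
>>> >		INIT_WORK(&ns->work, put_nsproxy_workfn);
>>> >		schedule_work(&ns->work);
>>> >	}
>>> >
>>> >	/* instead of: synchronize_rcu(); free_nsproxy(ns); */
>>> >	call_rcu(&ns->rcu, put_nsproxy_rcu);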
>>>
>>> For reference, the only reason we are using rcu_read_lock today for
>>> nsproxy is an old lock-ordering problem that does not exist anymore.
>>>
>>> I can say that in some workloads setns is a bit heavy today because
>>> of the synchronize_rcu, and setns is more important than I had
>>> previously thought, because pthreads break the classic unix ability
>>> to do things in your process after fork() (sigh).
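>>>
>>> The cost is this bit of switch_task_namespaces() in kernel/nsproxy.c
>>> (trimmed; the final put cannot be moved into an RCU callback because
>>> put_mnt_ns() may sleep, so we block in synchronize_rcu instead):
>>>
>>>	task_lock(p);
>>>	ns = p->nsproxy;
>>>	p->nsproxy = new;
>>>	task_unlock(p);
>>>
>>>	if (ns && atomic_dec_and_test(&ns->count)) {
>>>		synchronize_rcu();	/* wait out readers before teardown */
>>>		free_nsproxy(ns);
>>>	}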
>>>
>>> Today daemonize is gone, and notifying the parent process with a
>>> signal relies on task_active_pid_ns, which does not use nsproxy.  So
>>> the old lock-ordering problem/race is gone.
>>>
>>> That is the description of what was happening when the code switched
>>> from task_lock to rcu_read_lock to protect nsproxy.
>>
>> OK, never mind, then!  ;-)
>
> I appreciate you posting your approach.  I just figured I should do
> my homework and verify my fuzzy memory.
>
> Who knows, there might be different performance problems with my
> approach.  But I am hoping this is one of those happy instances where
> we can just make everything simpler.
>
> Eric



-- 
Rafael David Tinoco
Software Sustaining Engineer @ Canonical
Canonical Technical Services Engineering Team
# Email: rafael.tin...@canonical.com (GPG: 87683FC0)
# Phone: +55.11.9.6777.2727 (Americas/Sao_Paulo)
# LP: ~inaddy | IRC: tinoco | Skype: rafael.tinoco