There are some 21 mentions of 'manage_mutex' in the comments of
kernel/cpuset.c remaining after this patch is applied, but no such
mutex exists anymore.
Could you update the kernel/cpuset.c comments, Paul M., for this and
whatever other changes apply?
And then I pushed just one more patch:
task-containersv11-automatic-userspace-notification-of-idle-containers.patch
and the build died again:
$ make kernel/cgroup.o
  CHK     include/linux/version.h
  CHK     include/linux/utsrelease.h
  CALL    scripts/checksyscalls.sh
:1389:2: warning: #warn
I had to push down all of the following patches before I could get it to build:
task-containersv11-basic-task-container-framework.patch
task-containersv11-basic-task-container-framework-fix.patch
task-containersv11-basic-task-container-framework-containers-fix-refcount-bug.patch
Paul M:
This patch doesn't build for me in the following case. If I apply the
rest of the containersv11 patches, it builds, but if I happen to bisect
into this set of patches having applied only:
task-containersv11-basic-task-container-framework.patch
while using sn2_defconfig (with CONFIG_CG
Hello, Eric.
Eric W. Biederman wrote:
> Mostly I am thinking that any non-object model users should have
> their own dedicated wrapper layer. To help keep things consistent
> and to make it hard enough to abuse the system that people will
> find that it is usually easier to do it the righ
Patrick McHardy <[EMAIL PROTECTED]> writes:
> Maybe I can save you some time: we used to do down_trylock()
> for the rtnl mutex, so senders would simply return if someone
> else was already processing the queue *or* the rtnl was locked
> for some other reason. In the first case the process already
Eric W. Biederman wrote:
> Patrick McHardy <[EMAIL PROTECTED]> writes:
>
>>>Currently I don't fold the namespace into the hash so multiple
>>>namespaces using the same socket name will be guaranteed a hash
>>>collision.
>>
>>
>>That doesn't sound like a good thing :) Is there a reason for
>>not av
Eric W. Biederman wrote:
> Patrick McHardy <[EMAIL PROTECTED]> writes:
>
>
>>I'm wondering why this receive queue processing on unlock is still
>>necessary today, we don't do trylock in rtnetlink_rcv anymore, so
>>all senders will simply wait until the lock is released and then
>>process the queu
Patrick McHardy <[EMAIL PROTECTED]> writes:
> Eric W. Biederman wrote:
>> Because of the global nature of garbage collection, and because of the
>> cost of per namespace hash tables unix_socket_table has been kept
>> global. With a filter added on lookups so we don't see sockets from
>> the wrong
Patrick McHardy <[EMAIL PROTECTED]> writes:
> I'm wondering why this receive queue processing on unlock is still
> necessary today, we don't do trylock in rtnetlink_rcv anymore, so
> all senders will simply wait until the lock is released and then
> process the queue.
Good question, I should prob
Eric W. Biederman wrote:
> Because of the global nature of garbage collection, and because of the
> cost of per namespace hash tables unix_socket_table has been kept
> global. With a filter added on lookups so we don't see sockets from
> the wrong namespace.
>
> Currently I don't fold the namesap
Eric W. Biederman wrote:
> void rtnl_unlock(void)
> {
> - mutex_unlock(&rtnl_mutex);
> - if (rtnl && rtnl->sk_receive_queue.qlen)
> + struct net *net;
> +
> + /*
> + * Loop through all of the rtnl sockets until none of them (in
> + * a live network namespace) have queue