On Mon 23-09-19 16:23:40, Eric W. Biederman wrote:
>
> Michal,
>
> Thinking about this I have a hunch about what changed. I think at some
> point we changed from 4k to 8k kernel stacks. So I suspect if your
> client is seeing a lower threads-max it is because the size of the
> kernel data struc
> Michal,
> Thinking about this I have a hunch about what changed. I think at some
> point we changed from 4k to 8k kernel stacks. So I suspect if your
> client is seeing a lower threads-max it is because the size of the
> kernel data structures increased.
> Eric
Andrew, do you want me to send the patch, or can you grab it from here?
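For context on the autoscaling Eric is describing: at boot the kernel picks max_threads so that task structures and kernel stacks can only consume a bounded fraction of RAM, which is why doubling THREAD_SIZE roughly halves the default. A simplified sketch of that computation, modeled on set_max_threads() in kernel/fork.c (not the verbatim source):

/*
 * Boot-time threads-max autoscaling sketch: allow at most 1/8th of
 * RAM for thread structures. Doubling THREAD_SIZE (4k -> 8k stacks)
 * halves the resulting default.
 */
static void set_max_threads_sketch(unsigned long nr_pages)
{
	u64 threads = div64_u64((u64)nr_pages * PAGE_SIZE,
				(u64)THREAD_SIZE * 8UL);

	max_threads = clamp_t(u64, threads, MIN_THREADS, MAX_THREADS);
}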
On Sun 22-09-19 16:24:10, Eric W. Biederman wrote:
> Michal Hocko writes:
>
> > From 711000fdc243b6bc68a92f9ef0017ae495086d39 Mon Sep 17 00:00:00 2001
> > From: Michal Hocko
> > Date: Sun, 22 Sep 2019 08:45:28 +0200
> > Subj
Michal Hocko writes:
> From 711000fdc243b6bc68a92f9ef0017ae495086d39 Mon Sep 17 00:00:00 2001
> From: Michal Hocko
> Date: Sun, 22 Sep 2019 08:45:28 +0200
> Subject: [PATCH] kernel/sysctl.c: do not override max_threads provided by
> userspace
>
> Partially revert 16db3d3f1170 ("kernel/sysctl.c:
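What 16db3d3f1170 changed, mechanically: writes to threads-max started going through a bounds-checked handler, so values outside the kernel-computed window are rejected with -EINVAL instead of being accepted verbatim. A hedged sketch of that handler pattern (simplified; see sysctl_max_threads() in kernel/fork.c for the real code, and the truncated patch above for what Michal relaxes):

int sysctl_max_threads_sketch(struct ctl_table *table, int write,
			      void __user *buffer, size_t *lenp, loff_t *ppos)
{
	struct ctl_table t;
	int threads = max_threads;
	int min = MIN_THREADS;
	int max = MAX_THREADS;	/* the disputed bound */
	int ret;

	t = *table;
	t.data = &threads;
	t.extra1 = &min;	/* writes below min fail with -EINVAL */
	t.extra2 = &max;	/* writes above max fail with -EINVAL */

	ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
	if (ret || !write)
		return ret;

	max_threads = threads;
	return 0;
}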
Heinrich Schuchardt writes:
> Did this patch when applied to the customer's kernel solve any problem?
>
> WebSphere MQ is a messaging application. If it hits the current limits
> of threads-max, there is a bug in the software or in the way that it has
> been set up at the customer. Instead of mes
On 9/22/19 8:58 AM, Michal Hocko wrote:
> On Thu 19-09-19 14:33:24, Eric W. Biederman wrote:
>> Michal Hocko writes:
>>
>>> On Tue 17-09-19 12:26:18, Eric W. Biederman wrote:
> [...]
>>>> Michal is it a very small effect your customers are seeing?
>>>> Is it another bug somewhere else?
>>>
>>> I a
On Thu 19-09-19 14:33:24, Eric W. Biederman wrote:
> Michal Hocko writes:
>
> > On Tue 17-09-19 12:26:18, Eric W. Biederman wrote:
[...]
> >> Michal is it a very small effect your customers are seeing?
> >> Is it another bug somewhere else?
> >
> > I am still trying to get more information. Repor
On Thu, 19 Sep 2019 09:59:11 +0200 Michal Hocko wrote:
> On Wed 18-09-19 09:15:41, Michal Hocko wrote:
> > On Tue 17-09-19 12:26:18, Eric W. Biederman wrote:
> [...]
> > > b) Not being able to bump threads_max to the physical limit of
> > >the machine is very clearly a regression.
> >
> > ..
Michal Hocko writes:
> On Tue 17-09-19 12:26:18, Eric W. Biederman wrote:
>> Michal Hocko writes:
>>
>> > On Tue 17-09-19 17:28:02, Heinrich Schuchardt wrote:
>> >>
>> >> On 9/17/19 12:03 PM, Michal Hocko wrote:
>> >> > Hi,
>> >> > I have just stumbled over 16db3d3f1170 ("kernel/sysctl.c: thre
On Wed 18-09-19 09:15:41, Michal Hocko wrote:
> On Tue 17-09-19 12:26:18, Eric W. Biederman wrote:
[...]
> > b) Not being able to bump threads_max to the physical limit of
> >the machine is very clearly a regression.
>
> ... exactly this part. The changelog of the respective patch doesn't
> re
On Tue 17-09-19 12:26:18, Eric W. Biederman wrote:
> Michal Hocko writes:
>
> > On Tue 17-09-19 17:28:02, Heinrich Schuchardt wrote:
> >>
> >> On 9/17/19 12:03 PM, Michal Hocko wrote:
> >> > Hi,
> >> > I have just stumbled over 16db3d3f1170 ("kernel/sysctl.c: threads-max
> >> > observe limits")
Michal Hocko writes:
> On Tue 17-09-19 17:28:02, Heinrich Schuchardt wrote:
>>
>> On 9/17/19 12:03 PM, Michal Hocko wrote:
>> > Hi,
>> > I have just stumbled over 16db3d3f1170 ("kernel/sysctl.c: threads-max
>> > observe limits") and I am really wondering what is the motivation behind
>> > the pa
On Tue 17-09-19 17:28:02, Heinrich Schuchardt wrote:
>
> On 9/17/19 12:03 PM, Michal Hocko wrote:
> > Hi,
> > I have just stumbled over 16db3d3f1170 ("kernel/sysctl.c: threads-max
> > observe limits") and I am really wondering what is the motivation behind
> > the patch. We've had a customer notic
On 9/17/19 12:03 PM, Michal Hocko wrote:
> Hi,
> I have just stumbled over 16db3d3f1170 ("kernel/sysctl.c: threads-max
> observe limits") and I am really wondering what is the motivation behind
> the patch. We've had a customer noticing the threads_max autoscaling
> > differences between 3.12 and 4.
Guenter Roeck writes:
> On Thu, Jun 01, 2017 at 02:36:38PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>>
>> > On Thu, Jun 01, 2017 at 12:08:58PM -0500, Eric W. Biederman wrote:
>> >> Guenter Roeck writes:
>> >> >
>> >> > I think you nailed it. If I drop CLONE_NEWPID from the repr
On Thu, Jun 01, 2017 at 02:36:38PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
>
> > On Thu, Jun 01, 2017 at 12:08:58PM -0500, Eric W. Biederman wrote:
> >> Guenter Roeck writes:
> >> >
> >> > I think you nailed it. If I drop CLONE_NEWPID from the reproducer I get
> >> > a zombie pro
Guenter Roeck writes:
> On Thu, Jun 01, 2017 at 12:08:58PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>> >
>> > I think you nailed it. If I drop CLONE_NEWPID from the reproducer I get
>> > a zombie process.
>> >
>> > I guess the only question left is if zap_pid_ns_processes() shoul
On Thu, Jun 01, 2017 at 12:08:58PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
> >
> > I think you nailed it. If I drop CLONE_NEWPID from the reproducer I get
> > a zombie process.
> >
> > I guess the only question left is if zap_pid_ns_processes() should (or
> > could)
> > somehow de
Guenter Roeck writes:
>
> I think you nailed it. If I drop CLONE_NEWPID from the reproducer I get
> a zombie process.
>
> I guess the only question left is if zap_pid_ns_processes() should (or could)
> somehow detect that situation and return instead of waiting forever.
> What do you think ?
Any
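For readers without the source at hand, the tail of zap_pid_ns_processes() under discussion looked roughly like this at the time (paraphrased from kernel/pid_namespace.c of that era, from memory, not verbatim):

	/* SIGKILL has already been queued for every task in the namespace. */

	/* Reap the children we can reap ourselves. */
	do {
		clear_thread_flag(TIF_SIGPENDING);
		rc = sys_wait4(-1, NULL, __WALL, NULL);
	} while (rc != -ECHILD);

	/*
	 * Wait for everything else to go away. A task that must be
	 * reaped from outside the namespace (e.g. a child ptraced by an
	 * outside tracer) keeps nr_hashed elevated, and nothing in this
	 * loop can force that reaping, hence the indefinite wait.
	 */
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (pid_ns->nr_hashed == init_pids)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);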
Guenter Roeck writes:
> On 05/12/2017 01:03 PM, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>>
>>> Hi Eric,
>>>
>>> On Fri, May 12, 2017 at 12:33:01PM -0500, Eric W. Biederman wrote:
>>>> Guenter Roeck writes:
>>>>> Hi Eric,
>>>>>
>>>>> On Fri, May 12, 2017 at 08:26:27AM -0500, Eric
On 05/12/2017 01:03 PM, Eric W. Biederman wrote:
> Guenter Roeck writes:
>> Hi Eric,
>> On Fri, May 12, 2017 at 12:33:01PM -0500, Eric W. Biederman wrote:
>>> Guenter Roeck writes:
>>>> Hi Eric,
>>>> On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
>>>>> Vovo Yang writes:
>>>>>> On Fri, May 12, 2017
Guenter Roeck writes:
> Hi Eric,
>
> On Fri, May 12, 2017 at 12:33:01PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>>
>> > Hi Eric,
>> >
>> > On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
>> >> Vovo Yang writes:
>> >>
>> >> > On Fri, May 12, 2017 at 7:19 AM,
Hi Eric,
On Fri, May 12, 2017 at 12:33:01PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
>
> > Hi Eric,
> >
> > On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
> >> Vovo Yang writes:
> >>
> >> > On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
> >> > wrote:
> >
Guenter Roeck writes:
> Hi Eric,
>
> On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
>> Vovo Yang writes:
>>
>> > On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
>> > wrote:
>> >> Guenter Roeck writes:
>> >>
>> >>> What I know so far is
>> >>> - We see this condition on
Hi Eric,
On Fri, May 12, 2017 at 08:26:27AM -0500, Eric W. Biederman wrote:
> Vovo Yang writes:
>
> > On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
> > wrote:
> >> Guenter Roeck writes:
> >>
> >>> What I know so far is
> >>> - We see this condition on a regular basis in the field. Regular
Vovo Yang writes:
> On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
> wrote:
>> Guenter Roeck writes:
>>
>>> What I know so far is
>>> - We see this condition on a regular basis in the field. Regular is
>>> relative, of course - let's say maybe 1 in a Million Chromebooks
>>> per day repor
On Fri, May 12, 2017 at 7:19 AM, Eric W. Biederman
wrote:
> Guenter Roeck writes:
>
>> On Thu, May 11, 2017 at 04:25:23PM -0500, Eric W. Biederman wrote:
>>> Guenter Roeck writes:
>>>
>>> > On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
>>> >> Guenter Roeck writes:
>>> >>
>>
Guenter Roeck writes:
> On Thu, May 11, 2017 at 04:25:23PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>> > As an add-on to my previous mail: I added a function to count
>> > the number of threads in the pid namespace, using next_pidmap().
>> > Even though nr_hashed == 2, only the
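The counting Guenter mentions can be done by walking the namespace's pid bitmap with next_pidmap() (the pre-IDR pid allocator API); a hypothetical debug helper along those lines, with a made-up name:

/* Hypothetical helper: count the pids currently allocated in a namespace. */
static int count_ns_pids(struct pid_namespace *pid_ns)
{
	int count = 0;
	int nr = next_pidmap(pid_ns, 0);

	while (nr > 0) {
		count++;
		nr = next_pidmap(pid_ns, nr);
	}
	return count;
}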
Guenter Roeck writes:
> On Thu, May 11, 2017 at 04:25:23PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>>
>> > On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
>> >> Guenter Roeck writes:
>> >>
>> >> > Hi all,
>> >> >
>> >> > the test program attached below almo
On Thu, May 11, 2017 at 04:25:23PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
>
> > On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
> >> Guenter Roeck writes:
> >>
> >> > Hi all,
> >> >
> >> > the test program attached below almost always results in one of the ch
Guenter Roeck writes:
> On Thu, May 11, 2017 at 03:23:17PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>>
>> > On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
>> >> Guenter Roeck writes:
>> >>
>> >> > Hi all,
>> >> >
>> >> > the test program attached below almo
Guenter Roeck writes:
> On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
>> Guenter Roeck writes:
>>
>> > Hi all,
>> >
>> > the test program attached below almost always results in one of the child
>> > processes being stuck in zap_pid_ns_processes(). When this happens, I can
On Thu, May 11, 2017 at 03:23:17PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
>
> > On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
> >> Guenter Roeck writes:
> >>
> >> > Hi all,
> >> >
> >> > the test program attached below almost always results in one of the ch
On Thu, May 11, 2017 at 12:31:21PM -0500, Eric W. Biederman wrote:
> Guenter Roeck writes:
>
> > Hi all,
> >
> > the test program attached below almost always results in one of the child
> > processes being stuck in zap_pid_ns_processes(). When this happens, I can
> > see from test logs that nr_h
Guenter Roeck writes:
> Hi all,
>
> the test program attached below almost always results in one of the child
> processes being stuck in zap_pid_ns_processes(). When this happens, I can
> see from test logs that nr_hashed == 2 and init_pids==1, but there is only
> a single thread left in the pid
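Guenter's actual test program is not reproduced in these previews. The sketch below is a reconstruction of the scenario the thread converges on (a task inside the namespace kept unreapable by a tracer outside it); the pid guess and the timings are made up for illustration:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <unistd.h>

static char stack[64 * 1024];

/* init of the new pid namespace: fork one child, then exit. */
static int ns_init(void *arg)
{
	if (fork() == 0) {	/* pid 2 inside the namespace */
		pause();
		_exit(0);
	}
	sleep(1);		/* let the outer process attach first */
	_exit(0);		/* init's exit triggers zap_pid_ns_processes() */
}

int main(void)
{
	pid_t init_pid = clone(ns_init, stack + sizeof(stack),
			       CLONE_NEWPID | SIGCHLD, NULL);
	pid_t inner;

	if (init_pid < 0) {
		perror("clone (CLONE_NEWPID needs root)");
		return 1;
	}
	/*
	 * Crude guess at the inner child's *global* pid: on a quiet
	 * system it is allocated right after init_pid. A real test
	 * would dig it out of /proc instead.
	 */
	inner = init_pid + 1;

	usleep(200 * 1000);	/* give ns_init time to fork */
	ptrace(PTRACE_ATTACH, inner, NULL, NULL);
	/*
	 * Never detach, never wait. When ns_init dies, the kernel
	 * SIGKILLs the traced child, but its zombie cannot be reaped
	 * until we, the tracer, wait for it, so nr_hashed never drops
	 * back to init_pids and ns_init is stuck in
	 * zap_pid_ns_processes().
	 */
	pause();
	return 0;
}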
On 9/4/05, Petter Shappen <[EMAIL PROTECTED]> wrote:
> As we all know, the kernel maintains a data structure for the
> process (the PCB), and also for the thread. Because the latter is
> smaller than the former, thread switching is faster than process
not really. They just share some bits (like: addres
On Thu, Jun 21, 2001 at 01:10:31AM +0200, J . A . Magallon wrote:
>
> On 20010621 Stephen Satchell wrote:
> >
> >By the way, I'm surprised no one has mentioned that a synonym for "thread"
> >is "lightweight process".
> >
>
> In Linux. Perhaps this is the fault.
> In IRIX, you have sprocs and threa
On 20010621 Stephen Satchell wrote:
>
>By the way, I'm surprised no one has mentioned that a synonym for "thread"
>is "lightweight process".
>
In Linux. Perhaps this is the fault.
In IRIX, you have sprocs and threads. sprocs have independent pids and you
can control what you share (mappings, fd ta
J.D. Bakker <[EMAIL PROTECTED]> wrote:
> At 13:42 -0600 20-06-2001, Charles Cazabon wrote:
> >Rodrigo Ventura <[EMAIL PROTECTED]> wrote:
> > > BTW, I have a question: Can the availability of dual-CPU boards for
> > > intel and amd processors, rather than tri- or quadra-CPU boards, be
> > > explain
"J.D. Bakker" wrote:
>
> At 13:42 -0600 20-06-2001, Charles Cazabon wrote:
> >Rodrigo Ventura <[EMAIL PROTECTED]> wrote:
> > > BTW, I have a question: Can the availability of dual-CPU boards for intel
> >> and amd processors, rather than tri- or quadra-CPU boards, be explained with
> >> the fa
At 13:42 -0600 20-06-2001, Charles Cazabon wrote:
>Rodrigo Ventura <[EMAIL PROTECTED]> wrote:
> > BTW, I have a question: Can the availability of dual-CPU boards for intel
>> and amd processors, rather than tri- or quadra-CPU boards, be explained with
>> the fact that the performance degrades s
I thought one only refers to LWPs when talking about kernel-level threads,
not user-space ones?
Ognen
On Wed, 20 Jun 2001, Stephen Satchell wrote:
> By the way, I'm surprised no one has mentioned that a synonym for "thread"
> is "lightweight process".
>
> Satch
--
Ognen Duzlevski
Plant Biotech
At 08:48 PM 6/20/01 +0200, Martin Devera wrote:
>BTW, is it not possible to implement threads as a subset of a process?
>Like a thread list pointed to from task_struct. It'd contain
>thread_structs plus additional scheduler data.
>A thread could be much smaller than a process.
>
>Probably there is another p
Rodrigo Ventura <[EMAIL PROTECTED]> wrote:
>
> BTW, I have a question: Can the availability of dual-CPU boards for intel
> and amd processors, rather than tri- or quadra-CPU boards, be explained with
> the fact that the performance degrades significantly for three or more CPUs?
> Or is there a te
> "Mike" == Mike Kravetz <[EMAIL PROTECTED]> writes:
Mike> Note that in the 2 and 4 CPU cases, the run queue length is
Mike> approx 2x the number of CPUs and the scheduler seems to
Mike> perform reasonably well with respect to locking. In the 8
Mike> CPU case, the number of ta
> Threads are processes that share more
BTW, is it not possible to implement threads as a subset of a process?
Like a thread list pointed to from task_struct. It'd contain
thread_structs plus additional scheduler data.
A thread could be much smaller than a process.
Probably there is an
I would take exception to the following statements in the FAQ:
"However, the Linux scheduler is designed to work well with a small
number of running threads. Best results are obtained when the number
of running threads equals the number of processors."
I agree that the Linux scheduler is design
On Wed, 20 Jun 2001, bert hubert wrote:
> Rounding up, it may be worth repeating what I think Alan said some months
> ago:
>
> Threads are processes that share more
... and for the absolute majority of programmers, additional shared objects mean
additional fsckup sources. I do
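Concretely, "share more" is spelled out in the clone() flags: pass CLONE_VM and friends and you get what other systems call a thread; omit them and you get a classic forked process. A small userspace illustration of the LinuxThreads-era flag set (my sketch, not library code):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_counter;	/* visible to the child only with CLONE_VM */

static int worker(void *arg)
{
	shared_counter = 42;	/* with CLONE_VM this writes the parent's memory */
	return 0;
}

int main(void)
{
	char *stack = malloc(64 * 1024);

	/* A "thread": a process sharing memory, files, fs state and signals. */
	pid_t tid = clone(worker, stack + 64 * 1024,
			  CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
			  NULL);

	waitpid(tid, NULL, 0);
	printf("shared_counter = %d\n", shared_counter);	/* prints 42 */
	free(stack);
	return 0;
}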
Scott Long <[EMAIL PROTECTED]> writes:
> I can also use the LDT to point to thread-specific segments. IMHO this
> is much better than the stack trick used by linuxthreads. The problem
Modern LinuxThreads (glibc 2.2) also uses modify_ldt for thread local data
(much to the pain of the IA64 and x
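For the curious, the modify_ldt technique looks roughly like this on 32-bit x86 (compile with -m32). This is a sketch of the mechanism, not glibc's actual code; the selector encoding is the standard index<<3 | table-indicator | RPL:

#define _GNU_SOURCE
#include <asm/ldt.h>		/* struct user_desc */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static int thread_data[1024];	/* this thread's "private" block */

int main(void)
{
	struct user_desc desc = {
		.entry_number = 0,
		.base_addr    = (unsigned long)thread_data,
		.limit        = sizeof(thread_data) - 1,
		.seg_32bit    = 1,
	};
	unsigned short sel = (0 << 3) | (1 << 2) | 3;	/* LDT entry 0, RPL 3 */
	int v;

	thread_data[0] = 1234;

	/* Install an LDT entry whose base is this thread's private block. */
	if (syscall(SYS_modify_ldt, 1, &desc, sizeof(desc)) != 0) {
		perror("modify_ldt");
		return 1;
	}
	asm volatile("movw %w0, %%gs" : : "q"(sel));

	/* %gs:0 now names thread_data[0] without knowing its address. */
	asm volatile("movl %%gs:0, %0" : "=r"(v));
	printf("%d\n", v);	/* prints 1234 */
	return 0;
}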
[EMAIL PROTECTED] said:
> I'm trying to do something a bit unorthodox: I want to share the
> address space between threads, but I want a certain region of the
> address space to be writeable only for a particular thread -- for all
> other threads this region is read-only.
UML does this in a some
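With a shared mm this cannot work directly: page protections live in the address space, which is exactly what the threads share. The usual workaround is to not share the mm and share the memory explicitly instead, so each "thread" (really a process) can mprotect its own view. A sketch of that approach (my illustration, not necessarily what UML does internally):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* One shared backing object, mapped by both processes. */
	char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (fork() == 0) {
		/* Child has its own page tables: this changes only its view. */
		mprotect(region, 4096, PROT_READ);
		region[0] = 'x';	/* faults here: read-only in the child */
		_exit(0);
	}

	wait(NULL);				/* child died of SIGSEGV, by design */
	strcpy(region, "parent still writes");	/* parent's view unaffected */
	printf("%s\n", region);
	return 0;
}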
On 03.07 Ying Chen wrote:
> 2. We ran a multi-threaded application using the Linux pthread library on 2-way
> SMP and UP intel platforms (with both 2.2 and 2.4 kernels). We see a
> significant increase in context switching when moving from UP to SMP, and
> high CPU usage with no performance gain in tu
lock_kernel is a special case and will not block when you call it in
order to create a new kernel thread. Look at the implementation of
lock_kernel if you have any doubts (this is true for the 2.2 kernels. I
don't know it by heart for the 2.4 kernel).
Reto
"M.Kiran Babu" wrote:
>
> si
On Fri, Nov 10, 2000 at 08:33:29PM +0530, M.Kiran Babu wrote:
> Sir,
> I got some doubts in kernel
> programming. I am using Linux 6.1 version. I want to use threads in
Linux kernel versions are now running up to 2.4.0*; what is
that 6.1? Some distribution? Which?
Wh