Thanks Sree!

Just to confirm: by closure, do you mean shutting down idle connections?

Regards,
Deepak

On Tue, Aug 8, 2017 at 8:14 PM, Sree Kuchibhotla <sr...@google.com> wrote:

> Hi Deepak,
> grpc core internally creates two thread pools:
> - Timer thread pool
> <https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/timer_manager.c>
> (to execute timers/alarms): a max of 2 threads, typically just one.
> - Executor thread pool
> <https://github.com/grpc/grpc/blob/master/src/core/lib/iomgr/executor.c#L81>:
> a dedicated thread pool (with a max of 2 * the number of cores on your
> machine) that handles executing closures.
>
> So this is probably what you are seeing.  Currently we have not exposed a
> way to configure these thread pool sizes, but we might add one in the future.
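>
> As a rough sanity check (just a standalone sketch, not part of gRPC), you
> can count the threads of the server process on Linux and compare against
> the 2 * cores executor cap. If your machine has 8 cores, that cap is 16,
> which plus the timer thread and your own main thread is roughly the 18
> threads you observed:
>
> #include <fstream>
> #include <iostream>
> #include <string>
> #include <thread>
>
> // Hypothetical helper: drop it into the server (e.g. call it after
> // BuildAndStart()) to see how many threads grpc core has spawned.
> // Linux only: it reads the "Threads:" field of /proc/self/status.
> int CountThreads() {
>   std::ifstream status("/proc/self/status");
>   std::string line;
>   while (std::getline(status, line)) {
>     if (line.compare(0, 8, "Threads:") == 0) {
>       return std::stoi(line.substr(8));
>     }
>   }
>   return -1;
> }
>
> int main() {
>   std::cout << "threads in this process: " << CountThreads() << "\n";
>   std::cout << "executor thread cap (2 * cores): "
>             << 2 * std::thread::hardware_concurrency() << "\n";
> }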
>
> thanks,
> -Sree
>
>
> On Thursday, August 3, 2017 at 3:32:28 PM UTC-7, deepako...@gmail.com
> wrote:
>>
>> Hi,
>>
>> I am planning to implement a service that has very low scale, i.e. it
>> would serve only a handful of clients. I want to keep resource usage to a
>> minimum and am therefore trying to use a single thread for all clients.
>> After reading the gRPC documentation, the async model seems to be the way
>> to go. But when I tried the greeter_async_server example in C++, I see
>> that it creates multiple threads (18 in my case), even though it uses a
>> single thread to service all clients (which is what I want).
>> Is there a way to avoid creating so many threads in the async model?
>>
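>> For reference, what I am running is essentially the example's single
>> completion-queue loop, condensed roughly as below (error handling and
>> shutdown omitted; it assumes the generated helloworld.grpc.pb.h and the
>> SayHello RPC from the examples):
>>
>> #include <memory>
>> #include <string>
>>
>> #include <grpc++/grpc++.h>
>> #include "helloworld.grpc.pb.h"
>>
>> using grpc::Server;
>> using grpc::ServerAsyncResponseWriter;
>> using grpc::ServerBuilder;
>> using grpc::ServerCompletionQueue;
>> using grpc::ServerContext;
>> using grpc::Status;
>> using helloworld::Greeter;
>> using helloworld::HelloReply;
>> using helloworld::HelloRequest;
>>
>> // One outstanding RPC; its address doubles as the completion-queue tag.
>> class CallData {
>>  public:
>>   CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
>>       : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
>>     Proceed();
>>   }
>>
>>   void Proceed() {
>>     if (status_ == CREATE) {
>>       status_ = PROCESS;
>>       // Ask gRPC to deliver the next SayHello request to this object.
>>       service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
>>                                 this);
>>     } else if (status_ == PROCESS) {
>>       // Spawn a new CallData to accept the next incoming request.
>>       new CallData(service_, cq_);
>>       reply_.set_message("Hello " + request_.name());
>>       status_ = FINISH;
>>       responder_.Finish(reply_, Status::OK, this);
>>     } else {
>>       delete this;
>>     }
>>   }
>>
>>  private:
>>   Greeter::AsyncService* service_;
>>   ServerCompletionQueue* cq_;
>>   ServerContext ctx_;
>>   HelloRequest request_;
>>   HelloReply reply_;
>>   ServerAsyncResponseWriter<HelloReply> responder_;
>>   enum CallStatus { CREATE, PROCESS, FINISH };
>>   CallStatus status_;
>> };
>>
>> int main() {
>>   Greeter::AsyncService service;
>>   ServerBuilder builder;
>>   builder.AddListeningPort("0.0.0.0:50051",
>>                            grpc::InsecureServerCredentials());
>>   builder.RegisterService(&service);
>>   std::unique_ptr<ServerCompletionQueue> cq = builder.AddCompletionQueue();
>>   std::unique_ptr<Server> server = builder.BuildAndStart();
>>
>>   // Single application thread: every RPC below is serviced by this loop.
>>   new CallData(&service, cq.get());
>>   void* tag;
>>   bool ok;
>>   while (cq->Next(&tag, &ok) && ok) {
>>     static_cast<CallData*>(tag)->Proceed();
>>   }
>>   return 0;
>> }
>>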
>> bash-4.2$ ps -ax | grep async
>>  1425 pts/5    Sl+    0:27 ./greeter_async_server
>>
>> top - 15:15:02 up 153 days,  1:17, 13 users,  load average: 0.00, 0.00, 0.04
>> Threads:  18 total,   0 running,  18 sleeping,   0 stopped,   0 zombie
>> %Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
>> KiB Mem : 49457112 total, 45066340 free,  3105740 used,  1285032 buff/cache
>> KiB Swap:  2097148 total,  2095108 free,     2040 used. 45603564 avail Mem
>>
>>   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
>>  1425 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.00 greeter_async_s
>>  1426 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:12.08 greeter_async_s
>>  1428 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:01.04 greeter_async_s
>>  1429 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.81 greeter_async_s
>>  1430 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.99 greeter_async_s
>>  1431 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:01.09 greeter_async_s
>>  1432 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.77 greeter_async_s
>>  1433 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:01.02 greeter_async_s
>>  1434 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.99 greeter_async_s
>>  1435 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:01.08 greeter_async_s
>>  1436 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.83 greeter_async_s
>>  1437 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:01.06 greeter_async_s
>>  1438 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.91 greeter_async_s
>>  1439 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.79 greeter_async_s
>>  1440 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:01.06 greeter_async_s
>>  1444 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.83 greeter_async_s
>>  1445 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.75 greeter_async_s
>>  1446 deojha    20   0  244532   7716   5164 S  0.0  0.0   0:00.76 greeter_async_s
>>
>>
>> Under load, it uses a single thread (1425 below) to service all clients:
>>
>> top - 15:15:42 up 153 days,  1:18, 13 users,  load average: 0.22, 0.05, 0.05
>> Threads:  18 total,   0 running,  18 sleeping,   0 stopped,   0 zombie
>> %Cpu(s):  6.3 us,  7.5 sy,  0.0 ni, 85.7 id,  0.0 wa,  0.0 hi,  0.5 si,  0.0 st
>> KiB Mem : 49457112 total, 45056008 free,  3110176 used,  1290928 buff/cache
>> KiB Swap:  2097148 total,  2095108 free,     2040 used. 45593760 avail Mem
>>
>>   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
>>  1425 deojha    20   0  244640   8308   5548 S 19.6  0.0   0:02.21 greeter_async_s
>>  1426 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:12.09 greeter_async_s
>>  1428 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:01.04 greeter_async_s
>>  1429 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.81 greeter_async_s
>>  1430 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.99 greeter_async_s
>>  1431 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:01.09 greeter_async_s
>>  1432 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.77 greeter_async_s
>>  1433 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:01.02 greeter_async_s
>>  1434 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.99 greeter_async_s
>>  1435 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:01.08 greeter_async_s
>>  1436 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.83 greeter_async_s
>>  1437 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:01.06 greeter_async_s
>>  1438 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.91 greeter_async_s
>>  1439 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.79 greeter_async_s
>>  1440 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:01.06 greeter_async_s
>>  1444 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.83 greeter_async_s
>>  1445 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.75 greeter_async_s
>>  1446 deojha    20   0  244640   8308   5548 S  0.0  0.0   0:00.76 greeter_async_s
>>
>> Regards,
>> Deepak
>>
>


-- 
Thanks & Regards

Deepak Ojha
