[grpc-io] Re: gRPC C++ connection failure (getaddrinfo) and deadlock

2017-08-09 Thread Amit Waisel
Hi Nicolas,
Thank you for your help.
I agree that this behavior is disturbing. I also expected getaddrinfo to 
return at some point, but in this case it never did.
I should note that it happened on a single VM and never reproduced on any 
other machine. I haven't encountered this bug since.

As far as I understand the design of gRPC's thread pool, from debugging the 
process I found that the RPC function invoked the ClientReader constructor, 
which issued an asynchronous name-resolution operation and waited for its 
completion. An arbitrary thread from the pool picked up the resolve task and 
eventually called getaddrinfo. Because this call *never returned*, the pool 
thread never completed the task, so the ClientReader constructor never 
finished waiting for the asynchronous name resolution.

I hope this clears things up a bit.

On Wednesday, August 9, 2017 at 12:57:56 AM UTC+3, Nicolas Noble wrote:
>
> Having getaddrinfo() not return is disturbing. While it's true that all 
> of the OS' DNS resolution functions are synchronous, and will block until 
> the OS comes back with a response, it's usually expected that the OS 
> returns *eventually,* either with an error (such as a timeout) or with 
> some results. Not returning at all is neither sane nor expected behavior.
>
> Now your phrasing is a bit confusing. Are you saying that the DNS 
> resolution thread is stuck resolving an address? Or do you think it 
> somehow did resolve, but left the rest of the library confused and stuck?
>
> On Monday, April 24, 2017 at 3:53:56 AM UTC-7, Amit Waisel wrote:
>>
>> I have a C++ client that connects to a C# server. The connection is 
>> made by an RPC function (called 
>> *InitializeStream()*) that sends a single request and receives a stream 
>> of responses from the server. This RPC function is executed with 'max' 
>> timeout (if the server is unavailable, a later call to stream->Read() 
>> will return an error, which is good enough for me).
>> I encountered a weird bug on one VM. (I couldn't reproduce it on any 
>> other machine, but it reproduces easily on that VM.) On that single VM, 
>> the *InitializeStream()* RPC function never returns.
>>
>> Further debugging of this issue revealed the following:
>>
>>1. The main thread (thread #1) is blocked inside InitializeStream(), in 
>>*grpc_iocp_work()*. The exact line is *iocp_windows.c@83* - inside Windows's 
>>*GetQueuedCompletionStatus()* function.
>>As far as I understand, here we wait for task completion with an 
>>unlimited timeout (I used the 'max' timeout).
>>  [External Code]
>>  Test.exe!grpc_iocp_work(grpc_exec_ctx * exec_ctx, gpr_timespec deadline) Line 83 C
>>  Test.exe!grpc_pollset_work(grpc_exec_ctx * exec_ctx, grpc_pollset * pollset, grpc_pollset_worker * * worker_hdl, gpr_timespec now, gpr_timespec deadline) Line 140 C
>>  Test.exe!grpc_completion_queue_pluck(grpc_completion_queue * cc, void * tag, gpr_timespec deadline, void * reserved) Line 614 C
>>  Test.exe!grpc::CoreCodegen::grpc_completion_queue_pluck(grpc_completion_queue * cq, void * tag, gpr_timespec deadline, void * reserved) Line 70 C++
>>  Test.exe!grpc::CompletionQueue::Pluck(grpc::CompletionQueueTag * tag) Line 230 C++
>>  Test.exe!grpc::ClientReader::ClientReader<TestRequest>(grpc::ChannelInterface * channel, const grpc::RpcMethod & method, grpc::ClientContext * context, const test::InitMessage & request) Line 151 C++
>>  Test.exe!test::testInterface::Stub::InitializeStreamRaw(grpc::ClientContext * context, const test::InitMessage & request) Line 46 C++
>>  Test.exe!test::testInterface::Stub::InitializeStream(grpc::ClientContext * context, const test::InitMessage & request) Line 86 C++
>>  Test.exe!WinMain(HINSTANCE__ * __formal, HINSTANCE__ * __formal, char * __formal, int __formal) Line 17 C++
>>  [External Code]
>>
>>2. One of gRPC's thread-pool threads (thread #2) called 
>>*do_request_thread()* in *resolve_address_windows.c@153*, which called 
>>*grpc_blocking_resolve_address()* (a blocking function, as its name 
>>implies), which in turn called *getaddrinfo()* - and that call never 
>>returns!
>>
>> My guess is that thread #1 waits (in *GetQueuedCompletionStatus()*) for 
>> thread #2's task to complete. *getaddrinfo()* never returns, so 
>> *GetQueuedCompletionStatus()* keeps blocking, and the main thread is stuck.
>>
>> Have you encountered this error before? Do you have any idea what I can 
>> do (besides adding a timeout to the function, which I consider a 
>> workaround rather than a solution)?
>> I use gRPC v1.2.0 for both C++ and C#.
>>
>> Thanks
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googleg

[grpc-io] Reflection in grpc++ async server

2017-08-09 Thread christian
Hi,

Is reflection possible in a C++ async server?
I'd assume we need to change our CompletionQueue tags to support answering 
the reflection requests?

Best,


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a40856b9-4603-45b5-af5c-fe1a3da131b1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: [Java] gRPC failure Connection reset by peer with inactivity.

2017-08-09 Thread 'Doug Fawley' via grpc.io
On Friday, August 4, 2017 at 7:44:54 AM UTC-7, cr2...@gmail.com wrote:
>
> Any thoughts on what is happening here ?  I know not a lot of details for 
> you to go on :(
>

It's possible this is caused by a proxy between the client and server. If 
that is the case, and the disconnects are not desired, client-side keepalive 
settings should be used to prevent them.
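
For reference, configuring client-side keepalive on a Netty-based Java 
channel looks roughly like the minimal sketch below. This is not a 
configuration taken from this thread; the host, port, and interval values 
are illustrative only.

import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

// Build a plaintext channel that sends HTTP/2 keepalive pings so that an
// idle proxy does not silently drop the TCP connection.
ManagedChannel channel = NettyChannelBuilder.forAddress("example-host", 50051)
    .usePlaintext(true)                      // plaintext, as in the setup described in this thread
    .keepAliveTime(5, TimeUnit.MINUTES)      // send a ping after 5 minutes without activity (illustrative)
    .keepAliveTimeout(20, TimeUnit.SECONDS)  // consider the connection dead if the ping is not acked
    .build();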

On the gRPC server-side, there is a "max idle" setting:

https://github.com/grpc/grpc-go/blob/master/keepalive/keepalive.go#L45

It defaults to infinity/disabled, but you should make sure it's not being 
set unintentionally.  If this is the cause, client-side keepalive will not 
help -- keepalive uses pings, but gRPC's idleness detection considers only 
active streams (RPCs).

> The Java client is moving to the very latest version.  Are there concerns 
> with compatibility we need to keep in mind with GO server side ?  
>

There should not be compatibility concerns between gRPC 1.x versions across 
languages.  Please file an issue if you encounter any.
 
Thanks,
Doug

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/acf39324-7260-4a32-b885-eef20977582b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] [Java] gRPC failure Connection reset by peer with inactivity.

2017-08-09 Thread 'Eric Anderson' via grpc.io
On Fri, Aug 4, 2017 at 7:44 AM,  wrote:

> I have users that are claiming everything is working fine except if
> there's no activity on that connection for about 20 minutes. Then they see:
>
> gRPC failure=Status{code=UNAVAILABLE, description=null,
> cause=java.io.IOException: Connection reset by peer
>

To others seeing this, there are two places you can see this error: in a
Status or in the logs. In the next grpc-java release (1.6) these errors will
be squelched from being logged, but you can still see them in the Status.

> I've asked them to try the keepAliveTime and keepAliveTimeout and I'm not
> sure yet if they've done that yet.
>

Something ("in the network") is killing the connection after 20 minutes;
that may be proxies, firewalls, or routers. Most of the time keepalive will
fix this. There is a chance that the TCP connection is being killed because
of its age instead of inactivity; keepalive won't fix that.

> The server side is GO 1.40
> ...
> So barrage of questions:
>
> Any thoughts on what is happening here ?  I know not a lot of details for
> you to go on :(
>

You mentioned the client configuration. How about the server? Is it
using MaxConnectionAge? It's possible that there's a bug in the
implementation and it isn't gracefully shutting down the connection. (Or
maybe the Grace timeout is exceeded.)

(Note that MaxConnectionIdle has "idleness" defined at a higher level; it
is the time since the last RPC completed, not the last network activity. So
it is unlikely to cause a problem.)

> The Java client is moving to the very latest version.  Are there concerns
> with compatibility we need to keep in mind with GO server side ?
>

No. Things should work fine.

> Are there timeouts were the underlying connections are closed due to
> inactivity?  I would assume they'd reconnect under covers if so, would the
> keepAlive help here ?  Other options to try?
>

The Channel should reconnect automatically (and if it isn't, that's a really
big bug; please file an issue). However, when the connection dies, any RPCs
on that connection fail with the Status you see. But if you did another RPC
immediately after, gRPC should attempt to reconnect and everything should
"just work."

> I noticed on ManagedChannel getState and notifyWhenStateChanged  These
> have @ExperimentalApi and *Warning*: this API is not yet implemented by
> the gRPC
> So I assume they really can't be used in latest version to auto retry
> setting up connections when they get disconnected?
>

The reconnect part of that API is for when you want a connection to be
available but aren't sending RPCs. The requestConnection of getState(true)
"acts" like you sent an RPC (so it brings up a TCP connection if there isn't
one) without actually sending an RPC. Even if it were implemented, it
wouldn't be necessary here.

And yes, the API is still unimplemented. In 1.6 more of it will be plumbed,
but it still may not be functioning quite right. The API really only has
two uses though: 1) notify the application that the Channel is unhealthy
and 2) allow the application to cheaply (without sending an RPC) cause a
connection to be established.
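
For what it's worth, once that API is functional (as noted above, it is not 
fully implemented in this release), usage would look roughly like the sketch 
below. Here channel is assumed to be an io.grpc.ManagedChannel, and the 
println is just illustrative:

import io.grpc.ConnectivityState;

// Use 2): ask the channel to establish a connection without sending an RPC.
ConnectivityState state = channel.getState(/* requestConnection= */ true);

// Use 1): get a callback when the state changes, e.g. to report an
// unhealthy channel. Re-register inside the callback to keep watching.
channel.notifyWhenStateChanged(state, () -> {
    System.out.println("channel left state " + state
        + ", now " + channel.getState(false));
});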

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CA%2B4M1oOdi3RWfKBV%2BkZt57MYZ6GyLtvDSskvwCByZ1HqX60Gdg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.




[grpc-io] Re: [Java] gRPC failure Connection reset by peer with inactivity.

2017-08-09 Thread cr22rc
Thanks All !

I finally got some more details: they were NOT running with keepalive on, 
and yes, the connection was going through a proxy. Turning keepalive on did 
seem to fix their issues. Thanks for the other answers.



On Friday, August 4, 2017 at 10:44:54 AM UTC-4, cr2...@gmail.com wrote:
>
> Hi,
> I have code that's using the futureStub and using NettyChannelBuilder with 
> no other properties set other than usePlaintext(true);  I have users that 
> are claiming everything is working fine except if there's no activity on 
> that connection for about 20 minutes. Then they see:
>
> gRPC failure=Status{code=UNAVAILABLE, description=null, 
> cause=java.io.IOException: Connection reset by peer
>
> I've asked them to try the keepAliveTime and keepAliveTimeout and I'm not 
> sure yet if they've done that yet.
>
> The server side is GO 1.40
>
> Client  Older version we are moving to later in next release:
> com.google.protobuf » protobuf-java  3.1.0
> io.grpc » grpc-netty1.3.0
> io.grpc » grpc-protobuf1.3.0
> io.grpc » grpc-stub1.3.0
>
> So barrage of questions:
>
> Any thoughts on what is happening here ?  I know not a lot of details for 
> you to go on :(
>
> The Java client is moving to the very latest version.  Are there concerns 
> with compatibility we need to keep in mind with GO server side ?  
>
> Are there timeouts were the underlying connections are closed due to 
> inactivity?  I would assume they'd reconnect under covers if so, would the 
> keepAlive help here ?  Other options to try?
>
> I noticed on ManagedChannel getState and notifyWhenStateChanged  These 
> have @ExperimentalApi and *Warning*: this API is not yet implemented by 
> the gRPC 
> So I assume they really can't be used in latest version to auto retry 
> setting up connections when they get disconnected? 
> 
>
>
>
>
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f63df8d1-9135-412c-ab47-fe99cc434fc1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] SSL error with GRPC Java

2017-08-09 Thread 'Eric Gribkoff' via grpc.io
This is for an OSGi bundle? It looks like the errors you're getting are
internal to Netty, and indicate that your bundle is not correctly adding
netty-tcnative to the classpath. I don't have any experience with OSGi, but
you may be able to get help with the class loading issue at
https://github.com/netty/netty.
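
One way to see the underlying class-loading problem from inside the bundle 
is to ask Netty directly why OpenSSL support is unavailable. This is only a 
diagnostic sketch, not a fix:

import io.netty.handler.ssl.OpenSsl;

if (!OpenSsl.isAvailable()) {
    // Prints the root cause of the UnsatisfiedLinkError, e.g. which
    // netty-tcnative native library could not be loaded or found.
    OpenSsl.unavailabilityCause().printStackTrace();
}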

Eric

On Sat, Aug 5, 2017 at 8:58 PM,  wrote:

>
> JDK version : 1.8u77
>
> proto3.0.3 version
>
> I have tried incorporating SSL into current application. Please find below
> approaches we have tried.
> 1) OpenSSL Static approach
>
> We have added the io.netty.tcnative-boringssl-static and io.netty.handler
> bundles to com.pelco.vms.pelcotools.application.bnd and
>
> Tried the below code snippet (added to RPCHandler) :
>
>
> SslContext sslContext = SslContextBuilder.forServer(certificatePemFile,
>privateKeyPemFile)
>.sslProvider(SslProvider.OPENSSL)
>.build();
> server = NettyServerBuilder.forAddress(new
> InetSocketAddress(InetAddress.getLoopbackAddress(), 8443))
>.addService(service)
>.sslContext(sslContext)
>.build()
>.start();
>
>
> But we are receiving the below exception while building the SslContext.
>
> *java.lang.UnsatisfiedLinkError: failed to load the required native
> library*
> *at
> io.netty.handler.ssl.OpenSsl.ensureAvailability(OpenSsl.java:311)*
> *at
> io.netty.handler.ssl.ReferenceCountedOpenSslContext.<init>(ReferenceCountedOpenSslContext.java:230)*
> *at
> io.netty.handler.ssl.OpenSslContext.<init>(OpenSslContext.java:43)*
> *at
> io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:347)*
> *at
> io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:335)*
> *at
> io.netty.handler.ssl.SslContext.newServerContextInternal(SslContext.java:421)*
> *at
> io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:441)*
> *at
> com.pelco.vms.pelcotools.handlers.RPCHandler.start(RPCHandler.java:105)*
> *at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)*
> *at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown
> Source)*
> *at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)*
> *at java.lang.reflect.Method.invoke(Unknown Source)*
> *at
> org.apache.felix.scr.impl.helper.BaseMethod.invokeMethod(BaseMethod.java:222)*
> *at
> org.apache.felix.scr.impl.helper.BaseMethod.access$500(BaseMethod.java:37)*
> *at
> org.apache.felix.scr.impl.helper.BaseMethod$Resolved.invoke(BaseMethod.java:615)*
> *at
> org.apache.felix.scr.impl.helper.BaseMethod.invoke(BaseMethod.java:499)*
> *at
> org.apache.felix.scr.impl.helper.ActivateMethod.invoke(ActivateMethod.java:295)*
> *at
> org.apache.felix.scr.impl.manager.SingleComponentManager.createImplementationObject(SingleComponentManager.java:302)*
> *at
> org.apache.felix.scr.impl.manager.SingleComponentManager.createComponent(SingleComponentManager.java:113)*
> *at
> org.apache.felix.scr.impl.manager.SingleComponentManager.getService(SingleComponentManager.java:866)*
> *at
> org.apache.felix.scr.impl.manager.SingleComponentManager.getServiceInternal(SingleComponentManager.java:833)*
> *at
> org.apache.felix.scr.impl.manager.AbstractComponentManager.activateInternal(AbstractComponentManager.java:724)*
> *at
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:954)*
> *at
> org.apache.felix.scr.impl.manager.DependencyManager$SingleStaticCustomizer.addedService(DependencyManager.java:915)*
> *at
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1215)*
> *at
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.customizerAdded(ServiceTracker.java:1136)*
> *at
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.trackAdding(ServiceTracker.java:945)*
> *at
> org.apache.felix.scr.impl.manager.ServiceTracker$AbstractTracked.track(ServiceTracker.java:881)*
> *at
> org.apache.felix.scr.impl.manager.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:1167)*
> *at
> org.apache.felix.scr.impl.BundleComponentActivator$ListenerInfo.serviceChanged(BundleComponentActivator.java:120)*
> *at
> org.apache.felix.framework.util.EventDispatcher.invokeServiceListenerCallback(EventDispatcher.java:987)*
> *at
> org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.j

[grpc-io] [Java] Override authority

2017-08-09 Thread vadim . ivanou
Hello,

What's the right way to override authority for a channel or stub?

I see there is a withAuthority method in CallOptions, but there is no 
withAuthority method in AbstractStub, and I cannot pass CallOptions to 
generated stubs.

There is an overrideAuthority method in the ManagedChannelBuilder class, but 
its comment says it "Should only be used by tests".

In gRPC C# I can do something like that:

var channel = new Channel(
"myproxy:4140",
ChannelCredentials.Insecure,
new []{new ChannelOption(ChannelOptions.DefaultAuthority, 
"original-authority")});

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/02c1f525-b8d0-4969-a5c0-8c601b3f5db8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: [Java] Override authority

2017-08-09 Thread Ryan Michela
I've used ManagedChannelBuilder.overrideAuthority() to point to a proxy 
without issue. I don't know if that's "supported", but it works.
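
A minimal Java sketch of that approach, roughly mirroring the C# snippet in 
the question (the proxy address and authority strings are the example values 
from the question, not a tested setup):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder.forTarget("myproxy:4140")
    .usePlaintext(true)                       // the C# example uses insecure channel credentials
    .overrideAuthority("original-authority")  // authority presented to the server instead of myproxy:4140
    .build();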

On Wednesday, August 9, 2017 at 1:56:05 PM UTC-7, vadim@gmail.com wrote:
>
> Hello,
>
> What's the right way to override authority for a channel or stub?
>
> I see there is withAuthority method in CallOptions, but there is no 
> withAuthority method in AbstractStub and I cannot pass CallOptions to 
> generated stubs.
>
> There is overrideAuthority method in ManagedChannelBuilder class, but the 
> comment says "Should only used by tests".
>
> In gRPC C# I can do something like that:
>
> var channel = new Channel(
> "myproxy:4140",
> ChannelCredentials.Insecure,
> new []{new ChannelOption(ChannelOptions.DefaultAuthority, 
> "original-authority")});
>
> Thanks.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c966356b-761f-4437-a4f7-a2e0068a881b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: Single threaded async server in C++

2017-08-09 Thread Deepak Ojha
Thanks Sree!

Just to confirm: by closure, do you mean shutting down idle connections?

Regards,
Deepak

On Tue, Aug 8, 2017 at 8:14 PM, Sree Kuchibhotla  wrote:

> Hi Deepak,
> grpc core internally creates two sets of thread pools:
> - Timer thread pool (to execute timers/alarms): max of 2 threads, 
> typically just one.
> - Executor thread pool: a dedicated thread pool (with a maximum of 
> 2 * number of cores on your machine) that handles executing closures.
>
> So this is what you are perhaps seeing.  Currently we have not exposed a
> way to configure these thread pool sizes, but we might add that in the
> future.
>
> thanks,
> -Sree
>
>
> On Thursday, August 3, 2017 at 3:32:28 PM UTC-7, deepako...@gmail.com
> wrote:
>>
>> Hi,
>>
>> I am planning to implement a service that has very low scale i.e. it
>> would service only a handful of clients.
>> I want to keep resource usage to minimal and thus trying to use a single
>> thread for all clients. After reading
>> gRPC documentation it seems async model is the way to go. But when I
>> tried greeter_async_server example
>>  in cpp, I see it creates multiple threads(18 in my case) although it
>> uses a single thread to service all clients(which I want)
>> .
>> Is there a way to avoid creation of so many threads in async model?
>>
>> bash-4.2$ ps -ax | grep async
>>  1425 pts/5Sl+0:27 ./greeter_async_server
>>
>> top - 15:15:02 up 153 days,  1:17, 13 users,  load average: 0.00, 0.00,
>> 0.04
>> Threads:  18 total,   0 running,  18 sleeping,   0 stopped,   0 zombie
>> %Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,
>>  0.0 st
>> KiB Mem : 49457112 total, 45066340 free,  3105740 used,
>>  1285032 buff/cache
>> KiB Swap:  2097148 total,  2095108 free, 2040 used. 45603564 avail Mem
>>
>>   PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND
>>  1425 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.00
>> greeter_async_s
>>  1426 deojha20   0  244532   7716   5164 S  0.0  0.0   0:12.08
>> greeter_async_s
>>  1428 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.04
>> greeter_async_s
>>  1429 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.81
>> greeter_async_s
>>  1430 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.99
>> greeter_async_s
>>  1431 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.09
>> greeter_async_s
>>  1432 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.77
>> greeter_async_s
>>  1433 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.02
>> greeter_async_s
>>  1434 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.99
>> greeter_async_s
>>  1435 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.08
>> greeter_async_s
>>  1436 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.83
>> greeter_async_s
>>  1437 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.06
>> greeter_async_s
>>  1438 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.91
>> greeter_async_s
>>  1439 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.79
>> greeter_async_s
>>  1440 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.06
>> greeter_async_s
>>  1444 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.83
>> greeter_async_s
>>  1445 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.75
>> greeter_async_s
>>  1446 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.76
>> greeter_async_s
>>
>>
>> Uses single thread to service all clients.
>>
>> top - 15:15:42 up 153 days,  1:18, 13 users,  load average: 0.22, 0.05,
>> 0.05
>> Threads:  18 total,   0 running,  18 sleeping,   0 stopped,   0 zombie
>> %Cpu(s):  6.3 us,  7.5 sy,  0.0 ni, 85.7 id,  0.0 wa,  0.0 hi,  0.5 si,
>>  0.0 st
>> KiB Mem : 49457112 total, 45056008 free,  3110176 used,  1290928
>> buff/cache
>> KiB Swap:  2097148 total,  2095108 free, 2040 used. 45593760 avail Mem
>>
>>   PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND
>>  1425 deojha20   0  244640   8308   5548 S 19.6  0.0   0:02.21
>> greeter_async_s
>>  1426 deojha20   0  244640   8308   5548 S  0.0  0.0   0:12.09
>> greeter_async_s
>>  1428 deojha20   0  244640   8308   5548 S  0.0  0.0   0:01.04
>> greeter_async_s
>>  1429 deojha20   0  244640   8308   5548 S  0.0  0.0   0:00.81
>> greeter_async_s
>>  1430 deojha20   0  244640   8308   5548 S  0.0  0.0   0:00.99
>> greeter_async_s
>>  1431 deojha20   0  244640   8308   5548 S  0.0  0.0   0:01.09
>> greeter_async_s
>>  1432 deojha20   0  244640   8308   5548 S  0.0  0.0   0:00.77
>> greeter_async_s
>>  1433 deojha20   0  244640   8308   5548 S  0.0  0.0   0:01.02
>> greeter_async_s
>>  1434 deojha20   0  244640   8308   5548 S  0.0  0.0   0:00.99
>> greeter_async_s
>>  1435 deojha20   0

[grpc-io] Re: Single threaded async server in C++

2017-08-09 Thread 'Sree Kuchibhotla' via grpc.io
Hi Deepak,
By closure I meant grpc_closure, i.e. the callback functions which contain 
most of the logic inside grpc core.

thanks,
Sree

On Wed, Aug 9, 2017 at 5:23 PM, Deepak Ojha 
wrote:

> Thanks Sree!
>
> Just to confirm: by closure, do you mean shutting down idle connections?
>
> Regards,
> Deepak
>
> On Tue, Aug 8, 2017 at 8:14 PM, Sree Kuchibhotla  wrote:
>
>> Hi Deepak,
>> grpc core internally creates two sets of thread pools:
>> - Timer thread pool (to execute timers/alarms): max of 2 threads, 
>> typically just one.
>> - Executor thread pool: a dedicated thread pool (with a maximum of 
>> 2 * number of cores on your machine) that handles executing closures.
>>
>> So this is what you are perhaps seeing.  Currently we have not exposed a
>> way to configure these thread pool sizes, but we might add that in the
>> future.
>>
>> thanks,
>> -Sree
>>
>>
>> On Thursday, August 3, 2017 at 3:32:28 PM UTC-7, deepako...@gmail.com
>> wrote:
>>>
>>> Hi,
>>>
>>> I am planning to implement a service that has very low scale i.e. it
>>> would service only a handful of clients.
>>> I want to keep resource usage to minimal and thus trying to use a single
>>> thread for all clients. After reading
>>> gRPC documentation it seems async model is the way to go. But when I
>>> tried greeter_async_server example
>>>  in cpp, I see it creates multiple threads(18 in my case) although it
>>> uses a single thread to service all clients(which I want)
>>> .
>>> Is there a way to avoid creation of so many threads in async model?
>>>
>>> bash-4.2$ ps -ax | grep async
>>>  1425 pts/5Sl+0:27 ./greeter_async_server
>>>
>>> top - 15:15:02 up 153 days,  1:17, 13 users,  load average: 0.00, 0.00,
>>> 0.04
>>> Threads:  18 total,   0 running,  18 sleeping,   0 stopped,   0 zombie
>>> %Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,
>>>  0.0 st
>>> KiB Mem : 49457112 total, 45066340 free,  3105740 used,
>>>  1285032 buff/cache
>>> KiB Swap:  2097148 total,  2095108 free, 2040 used. 45603564 avail
>>> Mem
>>>
>>>   PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+
>>> COMMAND
>>>  1425 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.00
>>> greeter_async_s
>>>  1426 deojha20   0  244532   7716   5164 S  0.0  0.0   0:12.08
>>> greeter_async_s
>>>  1428 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.04
>>> greeter_async_s
>>>  1429 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.81
>>> greeter_async_s
>>>  1430 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.99
>>> greeter_async_s
>>>  1431 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.09
>>> greeter_async_s
>>>  1432 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.77
>>> greeter_async_s
>>>  1433 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.02
>>> greeter_async_s
>>>  1434 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.99
>>> greeter_async_s
>>>  1435 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.08
>>> greeter_async_s
>>>  1436 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.83
>>> greeter_async_s
>>>  1437 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.06
>>> greeter_async_s
>>>  1438 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.91
>>> greeter_async_s
>>>  1439 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.79
>>> greeter_async_s
>>>  1440 deojha20   0  244532   7716   5164 S  0.0  0.0   0:01.06
>>> greeter_async_s
>>>  1444 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.83
>>> greeter_async_s
>>>  1445 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.75
>>> greeter_async_s
>>>  1446 deojha20   0  244532   7716   5164 S  0.0  0.0   0:00.76
>>> greeter_async_s
>>>
>>>
>>> Uses single thread to service all clients.
>>>
>>> top - 15:15:42 up 153 days,  1:18, 13 users,  load average: 0.22, 0.05,
>>> 0.05
>>> Threads:  18 total,   0 running,  18 sleeping,   0 stopped,   0 zombie
>>> %Cpu(s):  6.3 us,  7.5 sy,  0.0 ni, 85.7 id,  0.0 wa,  0.0 hi,  0.5 si,
>>>  0.0 st
>>> KiB Mem : 49457112 total, 45056008 free,  3110176 used,  1290928
>>> buff/cache
>>> KiB Swap:  2097148 total,  2095108 free, 2040 used. 45593760 avail
>>> Mem
>>>
>>>   PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+
>>> COMMAND
>>>  1425 deojha20   0  244640   8308   5548 S 19.6  0.0   0:02.21
>>> greeter_async_s
>>>  1426 deojha20   0  244640   8308   5548 S  0.0  0.0   0:12.09
>>> greeter_async_s
>>>  1428 deojha20   0  244640   8308   5548 S  0.0  0.0   0:01.04
>>> greeter_async_s
>>>  1429 deojha20   0  244640   8308   5548 S  0.0  0.0   0:00.81
>>> greeter_async_s
>>>  1430 deojha20   0  244640   8

Re: [grpc-io] Compiling grpc on MSYS breaks at PROTOC

2017-08-09 Thread Nicolas Noble
Maybe; without more details on the error, it's going to be difficult to
diagnose and fix the issue.

On Aug 8, 2017 01:35, "Thomas Schober"  wrote:

> I managed to get it compiling. I removed the option "generate_mock_code"
> in the line with the error, and it compiled / generated without any problem.
> I was also able to compile and run the helloWorld example. Maybe there is a
> bug in the script here?
>
> Am Montag, 7. August 2017 19:55:58 UTC+2 schrieb Nicolas Noble:
>>
>> Try running with the environment variable V=1. There's something odd with
>> this error line, since it seems to mean you're not using the plugin that
>> just got compiled. It may be a matter of a missing file extension.
>>
>> On Thu, Aug 3, 2017 at 6:03 AM, Thomas Schober 
>> wrote:
>>
>>> Hi,
>>>
>>> I managed to get things compiling on MSYS2 by removing -Werror from
>>> the Makefile. There were a lot of dllimport warnings which would otherwise
>>> be treated as errors. After I removed the -Werror compiler flag it compiles
>>> until it moves on to building the protobuf things. There it hangs here:
>>>
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_cpp_plugin
>>> [HOSTCXX] Compiling src/compiler/csharp_plugin.cc
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_csharp_plugin
>>> [HOSTCXX] Compiling src/compiler/node_plugin.cc
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_node_plugin
>>> [HOSTCXX] Compiling src/compiler/objective_c_plugin.cc
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_objective_c_plugin
>>> [HOSTCXX] Compiling src/compiler/php_plugin.cc
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_php_plugin
>>> [HOSTCXX] Compiling src/compiler/python_plugin.cc
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_python_plugin
>>> [HOSTCXX] Compiling src/compiler/ruby_plugin.cc
>>> [HOSTLD]  Linking /home/tsb/grpc/bins/opt/grpc_ruby_plugin
>>> [PROTOC]  Generating protobuf CC file from src/proto/grpc/health/v1/healt
>>> h.proto
>>> [GRPC]Generating gRPC's protobuf service CC file from
>>> src/proto/grpc/health/v1/health.proto
>>> [PROTOC]  Generating protobuf CC file from src/proto/grpc/testing/echo_me
>>> ssages.proto
>>> [GRPC]Generating gRPC's protobuf service CC file from
>>> src/proto/grpc/testing/echo_messages.proto
>>> [PROTOC]  Generating protobuf CC file from src/proto/grpc/testing/
>>> echo.proto
>>> [GRPC]Generating gRPC's protobuf service CC file from
>>> src/proto/grpc/testing/echo.proto
>>> --grpc_out: src/proto/grpc/testing/echo.proto: Invalid parameter:
>>> generate_mock_code=true;C
>>> make: *** [Makefile:2381: /home/tsb/grpc/gens/src/proto/grpc/testing/
>>> echo.grpc.pb.cc] Error 1
>>>
>>> Could anyone have a look at this? I can't even find
>>> generate_mock_code=true anywhere in the entire grpc directory.
>>>
>>> My gcc info is as follows:
>>>
>>> $ gcc -v
>>> Using built-in specs.
>>> COLLECT_GCC=C:\msys64\mingw64\bin\gcc.exe
>>> COLLECT_LTO_WRAPPER=C:/msys64/mingw64/bin/../lib/gcc/x86_64-
>>> w64-mingw32/6.2.0/lto-wrapper.exe
>>> Target: x86_64-w64-mingw32
>>> Configured with: ../gcc-6.2.0/configure --prefix=/mingw64
>>> --with-local-prefix=/mingw64/local --build=x86_64-w64-mingw32
>>> --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32
>>> --with-native-system-header-dir=/mingw64/x86_64-w64-mingw32/include
>>> --libexecdir=/mingw64/lib --enable-bootstrap --with-arch=x86-64
>>> --with-tune=generic --enable-languages=c,lto,c++,objc,obj-c++,fortran,ada
>>> --enable-shared --enable-static --enable-libatomic --enable-threads=posix
>>> --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-time=yes
>>> --disable-libstdcxx-pch --disable-libstdcxx-debug
>>> --disable-isl-version-check --enable-lto --enable-libgomp
>>> --disable-multilib --enable-checking=release --disable-rpath
>>> --disable-win32-registry --disable-nls --disable-werror --disable-symvers
>>> --with-libiconv --with-system-zlib --with-gmp=/mingw64 --with-mpfr=/mingw64
>>> --with-mpc=/mingw64 --with-isl=/mingw64 --with-pkgversion='Rev2, Built by
>>> MSYS2 project' --with-bugurl=https://sourceforge.net/projects/msys2
>>> --with-gnu-as --with-gnu-ld
>>> Thread model: posix
>>> gcc version 6.2.0 (Rev2, Built by MSYS2 project)
>>>
>>> Any idea what I am doing wrong here?
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "grpc.io" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to grpc-io+u...@googlegroups.com.
>>> To post to this group, send email to grp...@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/grpc-io.
>>> To view this discussion on the web visit https://groups.google.com/d/ms
>>> gid/grpc-io/5e290759-454e-4dd6-992d-75d5e8df44aa%40googlegroups.com
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>> --
> You received this