[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-18 Thread eleanore . jin
Hi Kun, 

Based on your input, once a new server (server 3 listening on 9097, once it 
is ready, LB will be notified and updated, however I see from the log, 
server3 is in ready state, but the request is never routed to server3, can 
you please suggest where should I look into the issue? Thanks a lot!


[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver is 
notified there is a change in Registry... 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver is 
notified there is a change in Registry... 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver is 
notified there is a change for interested service , refreshing now... 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver is 
notified there is a change for interested service , refreshing now... 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver 
Refreshing ... 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver 
Refreshing ... 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) trying to 
resolve 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) trying to 
resolve 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver 
resolve completed 
[RegistryNameResolver] (registry-refresher-safe-2-thread-1) NameResolver 
resolve completed 
[RegistryNameResolver] (registry-name-resolver-safe-6-thread-1) Resolving 
for service name: 
[RegistryNameResolver] (registry-name-resolver-safe-6-thread-1) Resolving 
for service name:  
[RegistryNameResolver] (registry-name-resolver-safe-6-thread-1) Notifying 
with address size: 2, list: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/127.0.0.1:9097], attrs={}]] 
[RegistryNameResolver] (registry-name-resolver-safe-6-thread-1) Notifying 
with address size: 2, list: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/127.0.0.1:9097], attrs={}]] 
[io.grpc.internal.ManagedChannelImpl] 
(registry-name-resolver-safe-6-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/127.0.0.1:9097], attrs={}]], config={} 
[io.grpc.internal.ManagedChannelImpl] 
(registry-name-resolver-safe-6-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/127.0.0.1:9097], attrs={}]], config={} 
[io.grpc.internal.ManagedChannelImpl] 
(registry-name-resolver-safe-6-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] 
io.grpc.internal.InternalSubchannel-8 created for 
[[addrs=[localhost/127.0.0.1:9097], attrs={}]] 
[io.grpc.internal.ManagedChannelImpl] 
(registry-name-resolver-safe-6-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] 
io.grpc.internal.InternalSubchannel-8 created for 
[[addrs=[localhost/127.0.0.1:9097], attrs={}]] 
[io.grpc.internal.InternalSubchannel] 
(registry-name-resolver-safe-6-thread-1) 
[io.grpc.internal.InternalSubchannel-8] Created 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9097 
[io.grpc.internal.InternalSubchannel] 
(registry-name-resolver-safe-6-thread-1) 
[io.grpc.internal.InternalSubchannel-8] Created 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9097 
[io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9097 is ready 
[io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9097 is ready 

On Thursday, January 17, 2019 at 4:35:29 PM UTC-8, eleano...@gmail.com 
wrote:
>
> Got it! Thanks a lot
>
> On Thursday, January 17, 2019 at 2:35:54 PM UTC-8, Kun Zhang wrote:
>>
>> You don't need to worry about the timing. As soon as the Subchannel 
>> becomes ready, RoundRobinLoadBalancer should notice that by yet another 
>> call to updateBalancingState() and add it to the round-robin list. If 
>> you continue debugging, you should be able to see that.
>>
>> On Wednesday, January 16, 2019 at 1:44:41 PM UTC-8, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Kun, 
>>>
>>> I am trying to debug further, in 
>>> io.grpc.util.RoundRobinLoadBalancerFactory::handleResolvedAddressGroups 
>>> will be called if the NameResolver.Listener::onAddress is called, 
>>>
>>> inside handleResolvedAddressGroups method, it is calling 
>>> updateBalancingState(getAggregatedState(), 
>>> getAggregatedError()); where it seems in getAggregatedState(),
>>> it is not returning the subchannel state as READY, sometimes connecting, 
>>> sometimes idle.
>>>
>>> Then in updateBalancingState(), it will only put those subchannel's 
>>> state with READY in the activeList. 
>>>
>>> So just wonder is there anyway to ensure the sub channel is READY when 
>>> updating the loadbalancer ?
>>>
>>> On Wednesday, January 16, 2019 at 12:50:04 PM UTC-8, 

[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-17 Thread eleanore . jin
Got it! Thanks a lot

On Thursday, January 17, 2019 at 2:35:54 PM UTC-8, Kun Zhang wrote:
>
> You don't need to worry about the timing. As soon as the Subchannel 
> becomes ready, RoundRobinLoadBalancer should notice that by yet another 
> call to updateBalancingState() and add it to the round-robin list. If you 
> continue debugging, you should be able to see that.
>
> On Wednesday, January 16, 2019 at 1:44:41 PM UTC-8, eleano...@gmail.com 
> wrote:
>>
>> Hi Kun, 
>>
>> I am trying to debug further, in 
>> io.grpc.util.RoundRobinLoadBalancerFactory::handleResolvedAddressGroups 
>> will be called if the NameResolver.Listener::onAddress is called, 
>>
>> inside handleResolvedAddressGroups method, it is calling 
>> updateBalancingState(getAggregatedState(), 
>> getAggregatedError()); where it seems in getAggregatedState(),
>> it is not returning the subchannel state as READY, sometimes connecting, 
>> sometimes idle.
>>
>> Then in updateBalancingState(), it will only put those subchannel's state 
>> with READY in the activeList. 
>>
>> So just wonder is there anyway to ensure the sub channel is READY when 
>> updating the loadbalancer ?
>>
>> On Wednesday, January 16, 2019 at 12:50:04 PM UTC-8, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Kun, 
>>>  
>>> I did see that the new server3 (listening on 9097) has its 
>>> InternalSubchannel gets created:
>>>
>>>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
>>> [io.grpc.internal.InternalSubchannel-20] 
>>> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
>>> ready
>>>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
>>> [io.grpc.internal.InternalSubchannel-20] 
>>> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
>>> ready
>>>
>>> On Wednesday, January 9, 2019 at 10:18:47 AM UTC-8, eleano...@gmail.com 
>>> wrote:

 Hi, 

 in my java gRPC client, when I create the ManagedChannel, I am passing 
 my custom NameResolver, and using RoundRobinLoadBalancer. When my 
 NameResolver is notified with a change to the server list (new server 
 added), it will call Listener.onAddress and pass the updated the list.

 I see from the Log: the onAddress is called from 
 NameResolverListenerImpl, (9097 is the new server address added)

 resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
 [addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}


 however, the traffic is not coming to the new server, did I miss 
 anything?


 Thanks a lot!







-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/488e7bc4-4171-4c2d-a7d4-0521ed3fa369%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-16 Thread eleanore . jin
Hi Kun, 

I am trying to debug further, in 
io.grpc.util.RoundRobinLoadBalancerFactory::handleResolvedAddressGroups 
will be called if the NameResolver.Listener::onAddress is called, 

inside handleResolvedAddressGroups method, it is calling 
updateBalancingState(getAggregatedState(), 
getAggregatedError()); where it seems in getAggregatedState(),
it is not returning the subchannel state as READY, sometimes connecting, 
sometimes idle.

Then in updateBalancingState(), it will only put those subchannel's state 
with READY in the activeList. 

So just wonder is there anyway to ensure the sub channel is READY when 
updating the loadbalancer ?

On Wednesday, January 16, 2019 at 12:50:04 PM UTC-8, eleano...@gmail.com 
wrote:
>
> Hi Kun, 
>  
> I did see that the new server3 (listening on 9097) has its 
> InternalSubchannel gets created:
>
>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
> [io.grpc.internal.InternalSubchannel-20] 
> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
> ready
>  [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
> [io.grpc.internal.InternalSubchannel-20] 
> io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is 
> ready
>
> On Wednesday, January 9, 2019 at 10:18:47 AM UTC-8, eleano...@gmail.com 
> wrote:
>>
>> Hi, 
>>
>> in my java gRPC client, when I create the ManagedChannel, I am passing my 
>> custom NameResolver, and using RoundRobinLoadBalancer. When my NameResolver 
>> is notified with a change to the server list (new server added), it will 
>> call Listener.onAddress and pass the updated the list.
>>
>> I see from the Log: the onAddress is called from 
>> NameResolverListenerImpl, (9097 is the new server address added)
>>
>> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
>> [addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}
>>
>>
>> however, the traffic is not coming to the new server, did I miss anything?
>>
>>
>> Thanks a lot!
>>
>>
>>
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8c400fe6-7d61-4c3f-ba15-81c8530b13e8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-16 Thread eleanore . jin
Hi Kun, 
 
I did see that the new server3 (listening on 9097) has its 
InternalSubchannel gets created:

 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
[io.grpc.internal.InternalSubchannel-20] 
io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is ready
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-9) 
[io.grpc.internal.InternalSubchannel-20] 
io.grpc.netty.NettyClientTransport-21 for localhost/127.0.0.1:9097 is ready

On Wednesday, January 9, 2019 at 10:18:47 AM UTC-8, eleano...@gmail.com 
wrote:
>
> Hi, 
>
> in my java gRPC client, when I create the ManagedChannel, I am passing my 
> custom NameResolver, and using RoundRobinLoadBalancer. When my NameResolver 
> is notified with a change to the server list (new server added), it will 
> call Listener.onAddress and pass the updated the list.
>
> I see from the Log: the onAddress is called from NameResolverListenerImpl, 
> (9097 is the new server address added)
>
> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
> [addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}
>
>
> however, the traffic is not coming to the new server, did I miss anything?
>
>
> Thanks a lot!
>
>
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8ac923ac-0dd1-4165-8fde-859635c37678%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-16 Thread eleanore . jin
Hi Kun, 

please see the logs below: 
 [io.grpc.internal.ManagedChannelImpl] (pool-3-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] Created with target 
a_ECHOSERVICE_echo_1_0_3
 [io.grpc.internal.ManagedChannelImpl] (pool-3-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] Created with target 
a_ECHOSERVICE_echo_1_0_3
 [io.grpc.internal.ManagedChannelImpl] (pool-3-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] Exiting idle mode
 [io.grpc.internal.ManagedChannelImpl] (pool-3-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] Exiting idle mode
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Created with target localhost:9096
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Created with target localhost:9096
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Exiting idle mode
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Exiting idle mode
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] 
io.grpc.internal.InternalSubchannel-8 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]]
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] 
io.grpc.internal.InternalSubchannel-8 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]]
 [io.grpc.internal.InternalSubchannel] (grpc-default-executor-0) 
[io.grpc.internal.InternalSubchannel-8] Created 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096
 [io.grpc.internal.InternalSubchannel] (grpc-default-executor-0) 
[io.grpc.internal.InternalSubchannel-8] Created 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is ready
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is ready
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdownNow() called
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdownNow() called
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdown() called
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdown() called
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Shutting down
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Shutting down
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is being 
shutdown with status Status{code=UNAVAILABLE, description=Channel shutdown 
invoked, cause=null}
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is being 
shutdown with status Status{code=UNAVAILABLE, description=Channel shutdown 
invoked, cause=null}
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] 
io.grpc.internal.InternalSubchannel-10 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}]]
 [io.grpc.internal.ManagedChannelImpl] (name-resolver-safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] 
io.grpc.internal.InternalSubchannel-10 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}]]
 

[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-15 Thread eleanore . jin
Hi Kun, 

please see log below

[io.grpc.internal.ManagedChannelImpl] (pool-3-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] Exiting idle mode
[io.grpc.internal.ManagedChannelImpl] (pool-3-thread-1) 
[io.grpc.internal.ManagedChannelImpl-4] Exiting idle mode
[io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Created with target localhost:9096
[io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Created with target localhost:9096
[io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Exiting idle mode
[io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Exiting idle mode
[io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] resolved address: 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] 
io.grpc.internal.InternalSubchannel-8 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]]
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-6] 
io.grpc.internal.InternalSubchannel-8 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]]
 [io.grpc.internal.InternalSubchannel] (grpc-default-executor-0) 
[io.grpc.internal.InternalSubchannel-8] Created 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096
 [io.grpc.internal.InternalSubchannel] (grpc-default-executor-0) 
[io.grpc.internal.InternalSubchannel-8] Created 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is ready
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is ready
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdownNow() called
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdownNow() called
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdown() called
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] shutdown() called
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Shutting down
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-6] Shutting down
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is being 
shutdown with status Status{code=UNAVAILABLE, description=Channel shutdown 
invoked, cause=null}
 [io.grpc.internal.InternalSubchannel] (grpc-default-worker-ELG-3-3) 
[io.grpc.internal.InternalSubchannel-8] 
io.grpc.netty.NettyClientTransport-9 for localhost/127.0.0.1:9096 is being 
shutdown with status Status{code=UNAVAILABLE, description=Channel shutdown 
invoked, cause=null}
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-10] Created with target localhost:9097
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-10] Created with target localhost:9097
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-10] Exiting idle mode
 [io.grpc.internal.ManagedChannelImpl] (safe-5-thread-1) 
[io.grpc.internal.ManagedChannelImpl-10] Exiting idle mode
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-10] resolved address: 
[[addrs=[localhost/127.0.0.1:9097], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9097], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-10] resolved address: 
[[addrs=[localhost/127.0.0.1:9097], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9097], attrs={}]], config={}
 [io.grpc.internal.ManagedChannelImpl] (grpc-default-executor-0) 
[io.grpc.internal.ManagedChannelImpl-10] 
io.grpc.internal.InternalSubchannel-12 created for 
[[addrs=[localhost/127.0.0.1:9097], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9097], attrs={}]]
 

Re: [grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-15 Thread eleanore . jin
Hi Eric, 

one more question, when the subchannel gets updated from a channel, how 
about the Streams that is created from the channel? I assume that the 
stream is for a particular tcp connection, meaning a particular subchannel?

On Tuesday, January 15, 2019 at 10:04:09 AM UTC-8, eleano...@gmail.com 
wrote:
>
> Hi Eric, 
>
>
> Thanks a lot for the reply, actually I do have my custom NameResolver, and 
> upon changes for the server list, NameResolver will be notified. And I do 
> have the RoundRobinLoadBalancer
>
> configured, please see code below.
>
>
> ManagedChannel channel = ManagedChannelBuilder.forTarget(...)
>  .executor(channelExecutor)
> .nameResolverFactory(new Factory() {
>   public NameResolver newNameResolver(URI targetUri, Attributes params) {
> return new MyCustomNameResolver(*...*);
>   }
>
>   @Override
>   public String getDefaultScheme() {
> return null;
>   }
> })
> .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
> .usePlaintext()
> .enableRetry()
> .build();
>
> channel.getState(true);
>
>
> On Tuesday, January 15, 2019 at 8:12:16 AM UTC-8, Eric Anderson wrote:
>>
>> It looks like you are re-creating channels when the backends change. That 
>> is unfortunate; I would encourage you to instead create a NameResolver that 
>> will provide updated server addresses when they change. That will prevent 
>> needing to shut down perfectly good connections and avoids you having to 
>> deal with many races when swapping out the Channel.
>>
>> Are you sure you are using RoundRobin? The last channel would likely only 
>> send RPCs to 9095 if it was using the default PickFirst.
>>
>> On Fri, Jan 11, 2019 at 2:43 PM  wrote:
>>
>>> Hi Kun, 
>>>
>>> please see below the logs from the gRPC client, so server1 
>>> (localhost:9095) is running first, then the client start making requests, 
>>> afterward, I started up server2 (localhost:9096), then I see the following 
>>> logs, and the request is not sent to server2. 
>>>
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Created with target localhost:9095
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Created with target localhost:9095
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Exiting idle mode
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Exiting idle mode
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
>>> [addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]], config={}
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
>>> [addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]], config={}
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> io.grpc.internal.InternalSubchannel-14 created for [[addrs=[localhost/
>>> 127.0.0.1:9095], attrs={}], [addrs=[localhost/0:0:0:0:0:0:0:1:9095], 
>>> attrs={}]]
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> io.grpc.internal.InternalSubchannel-14 created for [[addrs=[localhost/
>>> 127.0.0.1:9095], attrs={}], [addrs=[localhost/0:0:0:0:0:0:0:1:9095], 
>>> attrs={}]]
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> shutdownNow() called
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> shutdownNow() called
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> shutdown() called
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> shutdown() called
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Shutting down
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Shutting down
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>>  
>>> Created with target localhost:9096
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>>  
>>> Created with target localhost:9096
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Terminated
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>>  
>>> Terminated
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>>  
>>> Exiting idle mode
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>>  
>>> Exiting idle mode
>>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>>  
>>> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
>>> 

Re: [grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-15 Thread eleanore . jin


Hi Eric, 


Thanks a lot for the reply, actually I do have my custom NameResolver, and upon 
changes for the server list, NameResolver will be notified. And I do have the 
RoundRobinLoadBalancer

configured, please see code below.


ManagedChannel channel = ManagedChannelBuilder.forTarget(...)
 .executor(channelExecutor)
.nameResolverFactory(new Factory() {
  public NameResolver newNameResolver(URI targetUri, Attributes params) {
return new MyCustomNameResolver(*...*);
  }

  @Override
  public String getDefaultScheme() {
return null;
  }
})
.loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
.usePlaintext()
.enableRetry()
.build();

channel.getState(true);


On Tuesday, January 15, 2019 at 8:12:16 AM UTC-8, Eric Anderson wrote:
>
> It looks like you are re-creating channels when the backends change. That 
> is unfortunate; I would encourage you to instead create a NameResolver that 
> will provide updated server addresses when they change. That will prevent 
> needing to shut down perfectly good connections and avoids you having to 
> deal with many races when swapping out the Channel.
>
> Are you sure you are using RoundRobin? The last channel would likely only 
> send RPCs to 9095 if it was using the default PickFirst.
>
> On Fri, Jan 11, 2019 at 2:43 PM > wrote:
>
>> Hi Kun, 
>>
>> please see below the logs from the gRPC client, so server1 
>> (localhost:9095) is running first, then the client start making requests, 
>> afterward, I started up server2 (localhost:9096), then I see the following 
>> logs, and the request is not sent to server2. 
>>
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Created with target localhost:9095
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Created with target localhost:9095
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Exiting idle mode
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Exiting idle mode
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
>> [addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]], config={}
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
>> [addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]], config={}
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> io.grpc.internal.InternalSubchannel-14 created for [[addrs=[localhost/
>> 127.0.0.1:9095], attrs={}], [addrs=[localhost/0:0:0:0:0:0:0:1:9095], 
>> attrs={}]]
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> io.grpc.internal.InternalSubchannel-14 created for [[addrs=[localhost/
>> 127.0.0.1:9095], attrs={}], [addrs=[localhost/0:0:0:0:0:0:0:1:9095], 
>> attrs={}]]
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> shutdownNow() called
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> shutdownNow() called
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> shutdown() called
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> shutdown() called
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Shutting down
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Shutting down
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> Created with target localhost:9096
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> Created with target localhost:9096
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Terminated
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12]
>>  
>> Terminated
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> Exiting idle mode
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> Exiting idle mode
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
>> [addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
>> [addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
>> [io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16]
>>  
>> io.grpc.internal.InternalSubchannel-18 created for [[addrs=[localhost/
>> 

[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-11 Thread eleanore . jin
Hi Kun, 

please see below the logs from the gRPC client, so server1 (localhost:9095) 
is running first, then the client start making requests, afterward, I 
started up server2 (localhost:9096), then I see the following logs, and the 
request is not sent to server2. 

[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Created with target localhost:9095
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Created with target localhost:9095
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Exiting idle mode
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Exiting idle mode
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
io.grpc.internal.InternalSubchannel-14 created for 
[[addrs=[localhost/127.0.0.1:9095], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]]
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
io.grpc.internal.InternalSubchannel-14 created for 
[[addrs=[localhost/127.0.0.1:9095], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9095], attrs={}]]
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
shutdownNow() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
shutdownNow() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
shutdown() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
shutdown() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Shutting down
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Shutting down
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
Created with target localhost:9096
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
Created with target localhost:9096
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Terminated
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-12] 
Terminated
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
Exiting idle mode
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
Exiting idle mode
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
io.grpc.internal.InternalSubchannel-18 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]]
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
io.grpc.internal.InternalSubchannel-18 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/0:0:0:0:0:0:0:1:9096], attrs={}]]

[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
shutdownNow() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
shutdownNow() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
shutdown() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
shutdown() called
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
Shutting down
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-16] 
Shutting down
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-4] 
resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
[addrs=[localhost/127.0.0.1:9096], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-4] 
resolved address: [[addrs=[localhost/127.0.0.1:9095], attrs={}], 
[addrs=[localhost/127.0.0.1:9096], attrs={}]], config={}
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-4] 
io.grpc.internal.InternalSubchannel-20 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}]]
[io.grpc.internal.ManagedChannelImpl][io.grpc.internal.ManagedChannelImpl-4] 
io.grpc.internal.InternalSubchannel-20 created for 
[[addrs=[localhost/127.0.0.1:9096], attrs={}]]

Re: [grpc-io] enable client side keepalive but seeing server side initiate the ping

2019-01-10 Thread eleanore . jin
Hi Eric, 

Thanks for the reply, but I only enabled ping on client side, and only 
configured server to allow client sending pings, but the server is not 
configured to send pings.

On Thursday, January 10, 2019 at 5:07:09 PM UTC-8, Eric Anderson wrote:
>
> The client sends a keepalive ping after X time since the last read. The 
> server does similar. If the client receives the server's ping before it 
> does its own keepalive ping, that resets the "time since last read" timer; 
> the server's keepalive is enough for the client to know the connection is 
> still good.
>
> On Mon, Jan 7, 2019 at 4:56 PM > wrote:
>
>>
>> Hi, 
>>
>> I have enabled client side keepalive and also on the server side, enable 
>> permission to send ping via NettyServerBuilder.permitKeepAliveTime(), 
>>
>> however, what I see from the wireshark, the keepalive ping seems to be 
>> initiated from server side (gRPC server listens on 9096): 
>>
>> [image: Screen Shot 2019-01-07 at 4.29.01 PM.png]
>>
>>
>> Any ideas why this happens ?
>>
>> Thanks a lot!
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to grpc-io+u...@googlegroups.com .
>> To post to this group, send email to grp...@googlegroups.com 
>> .
>> Visit this group at https://groups.google.com/group/grpc-io.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/b6cca740-92f2-47b8-9c8b-d0bbadd825b1%40googlegroups.com
>>  
>> 
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/2a30061d-bb97-43f9-9408-6ea35d4251e9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-10 Thread eleanore . jin
Hi Kun, 

Thanks for your reply, I did see that new SubChannel gets created for the 
new server,  do you mean that so long as the new server's subchannel gets 
created, it should take effect immediately, meaning the new server should 
also get the traffic?

Thanks a lot!

On Thursday, January 10, 2019 at 4:00:28 PM UTC-8, Kun Zhang wrote:
>
> Can you find logs from InternalSubchannel that mention the new server?
> If the new server can not be connected, round-robin won't use it.
>
> On Wednesday, January 9, 2019 at 10:18:47 AM UTC-8, eleano...@gmail.com 
> wrote:
>>
>> Hi, 
>>
>> in my java gRPC client, when I create the ManagedChannel, I am passing my 
>> custom NameResolver, and using RoundRobinLoadBalancer. When my NameResolver 
>> is notified with a change to the server list (new server added), it will 
>> call Listener.onAddress and pass the updated the list.
>>
>> I see from the Log: the onAddress is called from 
>> NameResolverListenerImpl, (9097 is the new server address added)
>>
>> resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
>> [addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}
>>
>>
>> however, the traffic is not coming to the new server, did I miss anything?
>>
>>
>> Thanks a lot!
>>
>>
>>
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a705d4f7-428e-4891-a015-cabbe2d4de90%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] gRPC java (1.16) RoundRobinLoadBalancer is not able to load balancing to the newly added server

2019-01-09 Thread eleanore . jin
Hi, 

in my java gRPC client, when I create the ManagedChannel, I am passing my 
custom NameResolver, and using RoundRobinLoadBalancer. When my NameResolver 
is notified with a change to the server list (new server added), it will 
call Listener.onAddress and pass the updated the list.

I see from the Log: the onAddress is called from NameResolverListenerImpl, 
(9097 is the new server address added)

resolved address: [[addrs=[localhost/127.0.0.1:9096], attrs={}], 
[addrs=[localhost/127.0.0.1:9097], attrs={}]], config={}


however, the traffic is not coming to the new server, did I miss anything?


Thanks a lot!





-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f0e6a181-e8b5-4d5f-ba3f-3ba3f5eb6579%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] enable client side keepalive but seeing server side initiate the ping

2019-01-07 Thread eleanore . jin

Hi, 

I have enabled client side keepalive and also on the server side, enable 
permission to send ping via NettyServerBuilder.permitKeepAliveTime(), 

however, what I see from the wireshark, the keepalive ping seems to be 
initiated from server side (gRPC server listens on 9096): 

[image: Screen Shot 2019-01-07 at 4.29.01 PM.png]


Any ideas why this happens ?

Thanks a lot!

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b6cca740-92f2-47b8-9c8b-d0bbadd825b1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: java LB round-robin has 30 minutes blank window before re-resolve

2018-11-29 Thread eleanore . jin
Hi Carl, 

Thanks for the reply:

1. how do I kill the instances: docker stop the container of the gRPC 
server.  is this what you meant by 'pull the plug?'
2. when you say: DNS is pull based, so we implemented as a timer based 
refresh, but it isn't desirable, so you mean the DNS refresh will be called 
periodically? If so, what is the configuration for the period? is it hard 
coded in the code (can you please point to me the class) or it is 
configurable (if so, please also point to me how I can configure it)?

For my project, it is just I extends io.grpc.NameResolver and overwrite 
getServiceAuthority, start(Listener listener), refresh and shutdown 
methods. 


Thanks a lot!

On Thursday, November 29, 2018 at 1:14:37 PM UTC-8, Carl Mastrangelo wrote:
>
> Responses inline
>
> On Wednesday, November 28, 2018 at 2:23:13 PM UTC-8, eleano...@gmail.com 
> wrote:
>>
>> Here is the test case:
>>
>> I have implemented my custom NameResolver, and using RoundRobinLoadBalancer 
>> in managedChannelBuilder. 
>>
>> 1. initially has 4 instances running (serverA, serverB, serverC, serverD)
>>
>> 2. then kill 2 instances (serverC, serverD), then serverA and serverB 
>> continues serving the request
>>
>
> Do you mean gracefully shutdown, or just pull the plug?  gRPC has no way 
> of knowing the latter case, which means you need to turn on keep-alives in 
> the channel.
>  
>
>>
>> 3. then create 2 more instances (serverE, serverF), only serverA and 
>> serverB continues serving the request, since the NameResolver::refresh is 
>> only triggered due to connection failures or GOAWAY signal.
>>
>
> Name resolvers are meant to be push based.   It is expected that some 
> other service will notify your name resolver when new servers enter the 
> pool.   DNS is pull based, so we implemented as a timer based refresh, but 
> it isn't desirable.  If in your custom resolver you pull, then you'll have 
> to use a timer like DNS does. 
>  
>
>>
>> 4. then kill serverA and serverB, there is 30 minutes blank window, that 
>> gRPC seems not doing anything, then after 30 minutes NameResolver::refresh 
>> is triggered and the messages are served by serverE and serverF. (seems no 
>> messaging loss).
>>
>> Can someone please suggest why there is a 30 minutes blank window, and is 
>> there anyway we can configure it to be shorter?
>>
>> Thanks a lot!
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/cea7815a-f269-473d-afe0-d50b6e363faf%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] java LB round-robin has 30 minutes blank window before re-resolve

2018-11-28 Thread eleanore . jin
Here is the test case:

I have implemented my custom NameResolver, and using RoundRobinLoadBalancer 
in managedChannelBuilder. 

1. initially has 4 instances running (serverA, serverB, serverC, serverD)

2. then kill 2 instances (serverC, serverD), then serverA and serverB 
continues serving the request

3. then create 2 more instances (serverE, serverF), only serverA and 
serverB continues serving the request, since the NameResolver::refresh is 
only triggered due to connection failures or GOAWAY signal.

4. then kill serverA and serverB, there is 30 minutes blank window, that 
gRPC seems not doing anything, then after 30 minutes NameResolver::refresh 
is triggered and the messages are served by serverE and serverF. (seems no 
messaging loss).

Can someone please suggest why there is a 30 minutes blank window, and is 
there anyway we can configure it to be shorter?

Thanks a lot!

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7bab2d9a-b697-4b32-a9cc-a268d51a4564%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] grpc java biDirectional streaming client retry when onError is called

2018-09-21 Thread eleanore . jin
Hi, 

My project uses gRPC bi-directional streaming call. We are not using grpc 
LB, but have our own logic implemented to select which server. So when I 
kill one of the server, client StreamObserver.onError is called and the 
original message is lost. 

I just wonder is there anyway to obtain the original message, so that we 
can re-select another server to retry?

Thanks a lot!


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/5c0e5c0c-eb5c-4fe1-a110-55da5f8fd6f3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: java Authentication API custom implementation

2018-09-13 Thread eleanore . jin
Hi Penn, 

Thanks a lot! will take a look!

On Thursday, September 13, 2018 at 2:55:52 PM UTC-7, Penn (Dapeng) Zhang 
wrote:
>
>
>
> On Thursday, September 13, 2018 at 2:03:36 PM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi 
>>
>> my current project is using gRPC (*java*) for service communications. We 
>> have our own way of authenticate and authorize the client request. Reading 
>> from https://grpch
>>
>>
>> *Q1: is this the interface that I should implement?*
>>
>> public interface CallCredentials {}
>>
>>
>> That's right if you implement custom authentication. If you are using 
> Google-auth, use io.grpc.auth.MoreCallCredentals.from(googleCreds).
>  
>
>> *Q2: given the code example in this class (see below), it seems it will 
>> carry credential information for rpc calls, *
>>
>> *is there a way I can pass the credentials when creating the channel?*
>>
>>
>> FooGrpc.FooStub stub = FooGrpc.newStub(channel);
>> response = stub.withCallCredentials(creds).bar(request);
>>
>> To create a channel carrying creds, you might need to use channel = 
> io.grpc.ClientInterceptors.intercept(channel, callOptionsInterceptor), and 
> implement a ClientInterceptor callOptionsInterceptor that injects creds 
> into the callOptions.
>  
>
>>
>> *Q3: how should I plugin the custom authentication/authorization mechanism 
>> on gRPC server side?*
>>
>>
> For server side, the answer can be found here:
> https://github.com/grpc/grpc-java/issues/4842 
>
>>
>> Thanks a lot!
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ee8a8277-81e6-49d2-bb20-1d7a912836bd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] java Authentication API custom implementation

2018-09-13 Thread eleanore . jin
Hi 

my current project is using gRPC (*java*) for service communications. We 
have our own way of authenticate and authorize the client request. Reading 
from https://grpch


*Q1: is this the interface that I should implement?*

public interface CallCredentials {}


*Q2: given the code example in this class (see below), it seems it will carry 
credential information for rpc calls, *

*is there a way I can pass the credentials when creating the channel?*


FooGrpc.FooStub stub = FooGrpc.newStub(channel);
response = stub.withCallCredentials(creds).bar(request);


*Q3: how should I plugin the custom authentication/authorization mechanism on 
gRPC server side?*


Thanks a lot!

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/16c44599-682e-490b-875f-ddaf4e960095%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Re: gRPC Java 1.14.0 Released

2018-08-24 Thread eleanore . jin
Hi Eric, 

Thanks a lot, I got it!

On Friday, August 24, 2018 at 3:34:49 PM UTC-7, Eric Anderson wrote:
>
> On Fri, Aug 24, 2018 at 9:05 AM > wrote:
>
>> If I understand you correctly, at certain point in time, there will only 
>> be 1 thread processing the callback, and there will NEVER be multiple 
>> threads processing the callbacks concurrently. 
>> If this is the case, what is the point of having the executor() 
>> configuration in ChannelBuilder and ServerBuilder?
>>
>
> There will only be 1 thread processing *that one RPC's* callback. 
> Multiple threads can be processing callbacks, but for different RPCs.
>
> Also, it's common for applications to already have thread pools sitting 
> around and want to reuse them instead of creating yet-more-threads.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/bc5f41a8-7d40-41eb-8577-7c3b8522e475%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Re: gRPC Java 1.14.0 Released

2018-08-24 Thread eleanore . jin
Hi Eric, 

Thanks a lot for the explanation. If I understand you correctly, at certain 
point in time, there will only be 1 thread processing the callback, and 
there will NEVER be multiple threads processing the callbacks concurrently. 
If this is the case, what is the point of having the executor() 
configuration in ChannelBuilder and ServerBuilder? 

Thanks a lot!

On Friday, August 24, 2018 at 7:17:22 AM UTC-7, Eric Anderson wrote:
>
> On Thu, Aug 23, 2018 at 3:00 PM > wrote:
>
>> my grpc client and server are doing bi-directional streaming, in the 
>> StreamObserver.onNext() the client passed to server, its just print out the 
>> response from the server.
>> And on the client side, when creating the channel, I passed a 
>> fixedThreadPool with 5 threads. And I see from client side, the results get 
>> printed by 5 threads.  So that means 5 threads are accessing the same 
>> StreamObserver object, but as you mentioned StreamObserver is not thread 
>> safe?
>>
>> Since the onNext() is just System.out.println(), maybe the threads do not 
>> access the StreamObserver concurrently. but what if the logic of process 
>> the response takes time, and when thread1 hasn't finished with its onNext() 
>> call, the next response arrives, and another thread trying to process it, 
>> is there any consequence of this scenario?
>>
>
> We make sure to call it only from one thread at a time. We'll continue 
> re-using a thread for delivering callbacks if more work is coming. But if 
> there's a period of no callbacks we'll return from the thread. The next 
> time we need to do callbacks a different thread may be chosen.
>
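In other words, callbacks for a single RPC are serialized but not pinned to 
one thread. If per-message work is slow, one option is to hand it off to your 
own executor so callback delivery isn't blocked; a hedged sketch, where 
Response and process() are hypothetical:

import io.grpc.stub.StreamObserver;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ResponseObserver implements StreamObserver<Response> {
  private final ExecutorService worker = Executors.newSingleThreadExecutor();

  @Override
  public void onNext(Response r) {
    // gRPC never invokes onNext/onError/onCompleted for THIS call
    // concurrently, though successive callbacks may land on different threads.
    worker.submit(() -> process(r));
  }

  @Override
  public void onError(Throwable t) { worker.shutdown(); }

  @Override
  public void onCompleted() { worker.shutdown(); }

  private void process(Response r) { System.out.println(r); }
}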



[grpc-io] Re: gRPC Java 1.14.0 Released

2018-08-23 Thread eleanore . jin

Hi Carl, 

Thanks for the reply! I have a question regarding this:

my gRPC client and server are doing bi-directional streaming; in the 
StreamObserver.onNext() that the client passed to the server, it just prints 
out the response from the server.
And on the client side, when creating the channel, I passed a 
fixedThreadPool with 5 threads. And I see on the client side that the results 
get printed by 5 threads.  So that means 5 threads are accessing the same 
StreamObserver object, but as you mentioned, StreamObserver is not thread 
safe?

Since the onNext() is just System.out.println(), maybe the threads do not 
access the StreamObserver concurrently. But what if the logic that processes 
the response takes time: the next response arrives while thread1 hasn't 
finished its onNext() call, and another thread tries to process it. Is there 
any consequence in this scenario?

Thanks a lot!

On Thursday, August 23, 2018 at 2:38:07 PM UTC-7, Carl Mastrangelo wrote:
>
> You can see the change here: 
> https://github.com/grpc/grpc-java/commit/defb955f3ab233e11d960a42495ca955306d57a4. 
> StreamObserver wraps a ClientCall.
> On Thursday, August 23, 2018 at 1:09:55 PM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi Carl, 
>>
>> what about StreamObserver thread safety? can you please point to me the 
>> documentation if it exists?
>>
>> Thanks a lot!
>>
>> On Tuesday, July 31, 2018 at 11:18:59 AM UTC-7, Carl Mastrangelo wrote:
>>>
>>> Notice: This is expected to be the last version supporting Java 6. 
>>> Comment on #3961 if this causes you trouble. Android API level 14 support 
>>> will be unchanged.
>>> Dependencies
>>>
>>>- Updated to Netty 4.1.27 and Netty TCNative 2.0.12
>>>- gRPC is now regularly tested with JDK 9 and 10
>>>
>>> API Changes
>>>
>>>- OkHttpChannelBuilder#negotiationType is now deprecated
>>>- Made protobuf, protobuf-lite, and protobuf-nano classes final.
>>>
>>> New Features
>>>
>>>    - Channel Tracing now records State Changes
>>>- Stubs now have an RpcMethod annotation for use with annotation 
>>>processors
>>>    - Added support for providing List<EquivalentAddressGroup> to 
>>>    LoadBalancer Subchannels, in addition to the option of providing a 
>>>    single EquivalentAddressGroup (EAG). This prevents the need for 
>>>    LoadBalancers to "flatten" a List<EquivalentAddressGroup> into a single 
>>>    EquivalentAddressGroup, which loses/confuses the EAG's Attributes. 
>>>    NameResolvers can now specify Attributes in an EAG and expect that 
>>>    the values are passed to gRPC's core. Future work will add List<EAG>
>>>- InProcessSocketAddress now has a useful toString() method
>>>- AndroidChannelBuilder is now easier to build
>>>- RoundRobinLoadBalancer now scales better when using stickiness
>>>
>>> Behavior Changes
>>>
>>>- gRPCLB no longer depends on having a Service Config
>>>
>>> Bug Fixes
>>>
>>>    - Fix regression that broke Java 9 ALPN support. This fixes the 
>>>    error "SunJSSE selected, but Jetty NPN/ALPN unavailable" (#4620)
>>>    - Fixed a bug with gRPC LB parsing SRV DNS records (6dbe392)
>>>    - enterIdle() will exit idle mode if channel is still in use (#4665)
>>>    - TransmitStatusRuntimeExceptionInterceptor now avoids accidentally 
>>>    double closing the call.
>>>
>>> Documentation
>>>
>>>- Clarified StreamObserver interaction with thread safety
>>>
>>> Thanks to all our Contributors:
>>>
>>>- @DmPanov 
>>>- @groakley  - Grant Oakley
>>>- @jbingham-google  - Jonathan 
>>>Bingham
>>>- @jyane  - Shohei Kamimori
>>>- @kay  - Doug Lawrie
>>>- @marcoferrer  - Marco Ferrer
>>>- @njhill  - Nick Hill
>>>- @PunKeel  - Maxime Guerreiro
>>>- @songya  - Yang Song
>>>- @sullis 
>>>- @werkt  - George Gensure
>>>
>>>
>>>
>>> See https://github.com/grpc/grpc-java/releases/tag/v1.14.0 
>>>
>>



[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-20 Thread eleanore . jin
Hi Carl, 

It is hard to show my code as I have a wrapper API on top of gRPC. 

However, are you suggesting that using 1 TCP connection per stream should 
be faster than using 1 TCP connection for all streams?

On Monday, August 20, 2018 at 11:14:43 AM UTC-7, Carl Mastrangelo wrote:
>
> Can you show your code?   This may just be a threading problem.  
>
> On Saturday, August 18, 2018 at 9:02:59 PM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi Srini, 
>>
>> The way I do it:
>> for a single connection:
>> 1. send 1 request via the request StreamObserver, to let the initial 
>> connection get established 
>> 2. start the timer, send 10,000 requests
>> 3. end the timer when all results are seen from the response 
>> StreamObserver.onNext() that the client passed to the server; the logic is 
>> just System.out.println
>>
>> for multiple connections:
>> 1. send 1 request for each channel created, to let the initial connection 
>> get established
>> 2. start the timer, send 1,000 per connection, total 10 connections, so 
>> total 10,000 requests
>> 3. end the timer when all the results are seen from the response 
>> StreamObserver.onNext() that the client passed to the server, for all 
>> connections; the logic is just System.out.println
>>
>> Thanks!
>>
>> On Saturday, August 18, 2018 at 8:37:22 PM UTC-7, Srini Polavarapu wrote:
>>>
>>> Could you provide some stats on your observation and how you are 
>>> measuring this? Two streams sharing a connection, vs. separate connections, 
>>> could be faster for these reasons:
>>> - One less socket to service: fewer system calls, less context switching, 
>>> fewer cache misses, etc.
>>> - Better batching of data from different streams on a single connection, 
>>> resulting in better connection utilization and a larger average packet 
>>> size on the wire.
>>>
>>> On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi Carl, 

 Thanks for the very detailed explanation! My question is why I observed 
 that using a separate TCP connection per stream was SLOWER!

 If a single TCP connection for multiple streams is faster (regardless of 
 the reason), will the connection get saturated, e.g. by too many streams 
 sending on the same TCP connection?


 On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>
> I may have misinterpreted your question; are you asking why gRPC 
> prefers to use a single connection, or why you observed using a separate 
> TCP connection per stream was faster?
>
> If the first, the reason is that the number of TCP connections may be 
> limited.   For example, making gRPC requests from the browser may limit 
> how many connections can exist.   Also, a proxy between the client and 
> server may limit the number of connections.   Connection setup and teardown 
> is slower due to the TCP 3-way handshake, so gRPC (really HTTP/2) prefers 
> to reuse a connection.
>
> If the second, then I am not sure.   If you are benchmarking with 
> Java, I strongly recommend using the JMH benchmarking framework.  It's 
> difficult to set up, but it provides the most accurate, most believable 
> benchmark results.
>
> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi Carl, 
>>
>> Thanks for the explanation; however, that still does not explain why 
>> using a single tcp connection for multiple StreamObservers is faster than 
>> using 1 tcp connection per stream. 
>>
>> On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo 
>> wrote:
>>>
>>> gRPC does connection management for you.  If you don't have any 
>>> active RPCs, it will not actively create connections for you.  
>>>
>>> You can force gRPC to create a connection eagerly by calling 
>>> ManagedChannel.getState(true), which requests that the channel enter the 
>>> ready state. 
>>>
>>> Do note that in Java, class loading is done lazily, so you may be 
>>> measuring connection time plus classload time if you only measure on 
>>> the 
>>> first connection.
>>>
>>> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi, 

 I am doing some experiment with gRPC java to determine the right 
 gRPC call type to use. 

 here is my finding:

 creating 4 sets of StreamObservers (1 for the client to send requests, 1 
 for the server to send responses), sending on the same channel is slightly 
 faster than sending on 1 channel per stream.
 I have already eliminated the time of creating the initial tcp 
 connection by making an initial call to let the connection be 
 established, then starting the timer. 
 I just wonder why this is the case?

 Thanks!




[grpc-io] how to use grpc java load balancing library with a list of server ip address directly

2018-08-20 Thread eleanore . jin
Hi, 

I would like to use the grpc-java load balancing library. I looked at the 
example code; it looks like this:

public HelloWorldClient(String zkAddr) {
  this(ManagedChannelBuilder.forTarget(zkAddr)
      .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
      .nameResolverFactory(new ZkNameResolverProvider())
      .usePlaintext(true));
}


I would like to pass ManagedChannelBuilder a list of service IP addresses 
directly, rather than let a NameResolver resolve them for me, while still 
using .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance()).

Is there a way to do it?


Thanks a lot!
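(One way to do this, as a hedged sketch rather than an official recipe: write 
a tiny NameResolver that just reports a fixed address list, then keep the 
round-robin factory. The calls below match the 1.14-era NameResolver 
contract; StaticNameResolverFactory and the addresses are made-up names:

import io.grpc.*;
import io.grpc.util.RoundRobinLoadBalancerFactory;
import java.net.InetSocketAddress;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

class StaticNameResolverFactory extends NameResolver.Factory {
  private final List<EquivalentAddressGroup> addresses = new ArrayList<>();

  StaticNameResolverFactory(List<InetSocketAddress> socketAddresses) {
    for (InetSocketAddress addr : socketAddresses) {
      addresses.add(new EquivalentAddressGroup(addr));
    }
  }

  @Override
  public NameResolver newNameResolver(URI targetUri, Attributes params) {
    return new NameResolver() {
      @Override public String getServiceAuthority() { return "static"; }
      @Override public void start(Listener listener) {
        // Report the fixed list once; round-robin picks among these.
        listener.onAddresses(addresses, Attributes.EMPTY);
      }
      @Override public void shutdown() {}
    };
  }

  @Override public String getDefaultScheme() { return "static"; }
}

// Usage (the target string is ignored by this resolver, but must be a URI):
// ManagedChannel channel = ManagedChannelBuilder.forTarget("static:///foo")
//     .nameResolverFactory(new StaticNameResolverFactory(Arrays.asList(
//         new InetSocketAddress("10.0.0.1", 9096),
//         new InetSocketAddress("10.0.0.2", 9096))))
//     .loadBalancerFactory(RoundRobinLoadBalancerFactory.getInstance())
//     .usePlaintext(true)
//     .build();
)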



[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-18 Thread eleanore . jin
The single connection time for the whole process is about 2.75 sec, and the 
multiple connection time is about 3.5 sec. I ran this test multiple times, 
and the single connection is always a bit faster than the multiple 
connections.

On Saturday, August 18, 2018 at 8:37:22 PM UTC-7, Srini Polavarapu wrote:
>
> Could you provide some stats on your observation and how you are measuring 
> this? Two streams sharing a connection, vs. separate connections, could be 
> faster for these reasons:
> - One less socket to service: fewer system calls, less context switching, 
> fewer cache misses, etc.
> - Better batching of data from different streams on a single connection, 
> resulting in better connection utilization and a larger average packet size 
> on the wire.
>
> On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com wrote:
>>
>> Hi Carl, 
>>
>> Thanks for the very detailed explanation! My question is why I observed 
>> that using a separate TCP connection per stream was SLOWER!
>>
>> If a single TCP connection for multiple streams is faster (regardless of 
>> the reason), will the connection get saturated, e.g. by too many streams 
>> sending on the same TCP connection?
>>
>>
>> On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>>>
>>> I may have misinterpreted your question; are you asking why gRPC 
>>> prefers to use a single connection, or why you observed using a separate 
>>> TCP connection per stream was faster?
>>>
>>> If the first, the reason is that the number of TCP connections may be 
>>> limited.   For example, making gRPC requests from the browser may limit 
>>> how many connections can exist.   Also, a proxy between the client and 
>>> server may limit the number of connections.   Connection setup and teardown 
>>> is slower due to the TCP 3-way handshake, so gRPC (really HTTP/2) prefers 
>>> to reuse a connection.
>>>
>>> If the second, then I am not sure.   If you are benchmarking with Java, 
>>> I strongly recommend using the JMH benchmarking framework.  It's difficult 
>>> to set up, but it provides the most accurate, most believable benchmark 
>>> results.
>>>
>>> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi Carl, 

 Thanks for the explanation; however, that still does not explain why 
 using a single tcp connection for multiple StreamObservers is faster than 
 using 1 tcp connection per stream. 

 On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo wrote:
>
> gRPC does connection management for you.  If you don't have any active 
> RPCs, it will not actively create connections for you.  
>
> You can force gRPC to create a connection eagerly by calling 
> ManagedChannel.getState(true), which requests that the channel enter the 
> ready state. 
>
> Do note that in Java, class loading is done lazily, so you may be 
> measuring connection time plus classload time if you only measure on the 
> first connection.
>
> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi, 
>>
>> I am doing some experiment with gRPC java to determine the right gRPC 
>> call type to use. 
>>
>> here is my finding:
>>
>> creating 4 sets of StreamObservers (1 for the client to send requests, 1 
>> for the server to send responses), sending on the same channel is slightly 
>> faster than sending on 1 channel per stream.
>> I have already eliminated the time of creating the initial tcp connection 
>> by making an initial call to let the connection be established, then 
>> starting the timer. 
>>
>> I just wonder why this is the case?
>>
>> Thanks!
>>
>>



[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-18 Thread eleanore . jin
Hi Srini, 

The way I do it:
for a single connection:
1. send 1 request via the request StreamObserver, to let the initial 
connection get established 
2. start the timer, send 10,000 requests
3. end the timer when all results are seen from the response 
StreamObserver.onNext() that the client passed to the server; the logic is 
just System.out.println

for multiple connections:
1. send 1 request for each channel created, to let the initial connection 
get established
2. start the timer, send 1,000 per connection, total 10 connections, so 
total 10,000 requests
3. end the timer when all the results are seen from the response 
StreamObserver.onNext() that the client passed to the server, for all 
connections; the logic is just System.out.println

Thanks!

On Saturday, August 18, 2018 at 8:37:22 PM UTC-7, Srini Polavarapu wrote:
>
> Could you provide some stats on your observation and how you are measuring 
> this? Two streams sharing a connection, vs. separate connections, could be 
> faster for these reasons:
> - One less socket to service: fewer system calls, less context switching, 
> fewer cache misses, etc.
> - Better batching of data from different streams on a single connection, 
> resulting in better connection utilization and a larger average packet size 
> on the wire.
>
> On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com wrote:
>>
>> Hi Carl, 
>>
>> Thanks for the very detailed explanation! My question is why I observed 
>> that using a separate TCP connection per stream was SLOWER!
>>
>> If a single TCP connection for multiple streams is faster (regardless of 
>> the reason), will the connection get saturated, e.g. by too many streams 
>> sending on the same TCP connection?
>>
>>
>> On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>>>
>>> I may have misinterpreted your question; are you asking why gRPC 
>>> prefers to use a single connection, or why you observed using a separate 
>>> TCP connection per stream was faster?
>>>
>>> If the first, the reason is that the number of TCP connections may be 
>>> limited.   For example, making gRPC requests from the browser may limit 
>>> how many connections can exist.   Also, a proxy between the client and 
>>> server may limit the number of connections.   Connection setup and teardown 
>>> is slower due to the TCP 3-way handshake, so gRPC (really HTTP/2) prefers 
>>> to reuse a connection.
>>>
>>> If the second, then I am not sure.   If you are benchmarking with Java, 
>>> I strongly recommend using the JMH benchmarking framework.  It's difficult 
>>> to set up, but it provides the most accurate, most believable benchmark 
>>> results.
>>>
>>> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi Carl, 

 Thanks for the explanation; however, that still does not explain why 
 using a single tcp connection for multiple StreamObservers is faster than 
 using 1 tcp connection per stream. 

 On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo wrote:
>
> gRPC does connection management for you.  If you don't have any active 
> RPCs, it will not actively create connections for you.  
>
> You can force gRPC to create a connection eagerly by calling 
> ManagedChannel.getState(true), which requests that the channel enter the 
> ready state. 
>
> Do note that in Java, class loading is done lazily, so you may be 
> measuring connection time plus classload time if you only measure on the 
> first connection.
>
> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
> wrote:
>>
>> Hi, 
>>
>> I am doing some experiment with gRPC java to determine the right gRPC 
>> call type to use. 
>>
>> here is my finding:
>>
>> creating 4 sets of StreamObservers (1 for the client to send requests, 1 
>> for the server to send responses), sending on the same channel is slightly 
>> faster than sending on 1 channel per stream.
>> I have already eliminated the time of creating the initial tcp connection 
>> by making an initial call to let the connection be established, then 
>> starting the timer. 
>>
>> I just wonder why this is the case?
>>
>> Thanks!
>>
>>



[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-17 Thread eleanore . jin
Hi Carl, 

Thanks for the very detailed explanation! My question is why I observed 
that using a separate TCP connection per stream was SLOWER!

If a single TCP connection for multiple streams is faster (regardless of 
the reason), will the connection get saturated, e.g. by too many streams 
sending on the same TCP connection?


On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>
> I may have misinterpreted your question; are you asking why gRPC prefers 
> to use a single connection, or why you observed using a separate TCP 
> connection per stream was faster?
>
> If the first, the reason is that the number of TCP connections may be 
> limited.   For example, making gRPC requests from the browser may limit 
> how many connections can exist.   Also, a proxy between the client and 
> server may limit the number of connections.   Connection setup and teardown 
> is slower due to the TCP 3-way handshake, so gRPC (really HTTP/2) prefers 
> to reuse a connection.
>
> If the second, then I am not sure.   If you are benchmarking with Java, I 
> strongly recommend using the JMH benchmarking framework.  It's difficult to 
> set up, but it provides the most accurate, most believable benchmark results.
>
> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com wrote:
>>
>> Hi Carl, 
>>
>> Thanks for the explanation; however, that still does not explain why 
>> using a single tcp connection for multiple StreamObservers is faster than 
>> using 1 tcp connection per stream. 
>>
>> On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo wrote:
>>>
>>> gRPC does connection management for you.  If you don't have any active 
>>> RPCs, it will not actively create connections for you.  
>>>
>>> You can force gRPC to create a connection eagerly by calling 
>>> ManagedChannel.getState(true), which requests that the channel enter the 
>>> ready state. 
>>>
>>> Do note that in Java, class loading is done lazily, so you may be 
>>> measuring connection time plus classload time if you only measure on the 
>>> first connection.
>>>
>>> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
>>> wrote:

 Hi, 

 I am doing some experiment with gRPC java to determine the right gRPC 
 call type to use. 

 here is my finding:

 creating 4 sets of StreamObservers (1 for the client to send requests, 1 for 
 the server to send responses), sending on the same channel is slightly faster 
 than sending on 1 channel per stream.
 I have already eliminated the time of creating the initial tcp connection by 
 making an initial call to let the connection be established, then starting 
 the timer. 

 I just wonder why this is the case?

 Thanks!
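(Following up on the JMH suggestion above, a skeleton of what such a 
benchmark could look like; GreeterGrpc/HelloRequest/HelloReply are the stock 
hello-world stubs, used here as placeholders:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class UnaryCallBenchmark {
  private ManagedChannel channel;
  private GreeterGrpc.GreeterBlockingStub stub;

  @Setup
  public void setUp() {
    channel = ManagedChannelBuilder.forAddress("localhost", 9096)
        .usePlaintext(true)
        .build();
    stub = GreeterGrpc.newBlockingStub(channel);
    stub.sayHello(HelloRequest.getDefaultInstance()); // warm up the connection
  }

  @TearDown
  public void tearDown() throws InterruptedException {
    channel.shutdownNow().awaitTermination(5, TimeUnit.SECONDS);
  }

  @Benchmark
  public HelloReply unaryCall() {
    // JMH handles warmup iterations, forking, and timing, so classload
    // and connection-setup costs don't pollute the measurement.
    return stub.sayHello(HelloRequest.getDefaultInstance());
  }
}
)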





[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-17 Thread eleanore . jin
Hi Carl, 

Thanks for the explanation; however, that still does not explain why using 
a single tcp connection for multiple StreamObservers is faster than using 
1 tcp connection per stream. 

On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo wrote:
>
> gRPC does connection management for you.  If you don't have any active 
> RPCs, it will not actively create connections for you.  
>
> You can force gRPC to create a connection eagerly by calling 
> ManagedChannel.getState(true), which requests that the channel enter the 
> ready state. 
>
> Do note that in Java, class loading is done lazily, so you may be 
> measuring connection time plus classload time if you only measure on the 
> first connection.
>
> On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com wrote:
>>
>> Hi, 
>>
>> I am doing some experiment with gRPC java to determine the right gRPC 
>> call type to use. 
>>
>> here is my finding:
>>
>> creating 4 sets of StreamObservers (1 for the client to send requests, 1 for 
>> the server to send responses), sending on the same channel is slightly faster 
>> than sending on 1 channel per stream.
>> I have already eliminated the time of creating the initial tcp connection by 
>> making an initial call to let the connection be established, then starting 
>> the timer. 
>>
>> I just wonder why this is the case?
>>
>> Thanks!
>>
>>



[grpc-io] Re: gRPCLB in gRPC Java 1.14

2018-08-17 Thread eleanore . jin
Thanks a lot for the clarification

On Friday, August 17, 2018 at 12:29:15 PM UTC-7, Carl Mastrangelo wrote:
>
> Yes, that is correct.   Load Balancing is done on a per-call basis.  Once 
> an RPC has been assigned to a backend, it will continue on that backend.
>
> On Friday, August 17, 2018 at 9:20:02 AM UTC-7, eleano...@gmail.com wrote:
>>
>> Hi, 
>>
>> I just wonder how gRPC LB handles the bi-directional stream. Once it 
>> picks which server instance will serve the streaming request, will it then 
>> continue the streaming request only with that particular server?
>>
>> On Tuesday, July 31, 2018 at 11:30:41 AM UTC-7, Carl Mastrangelo wrote:
>>>
>>> In release 1.14, it is now possible to use gRPC LB, gRPC's full featured 
>>> load balancer client.  This is an experimental feature that contacts a gRPC 
>>> LB server to get load balancing data.  
>>>
>>> To get started, you will need to set the JVM flag 
>>> "-Dio.grpc.internal.DnsNameResolverProvider.enable_grpclb=true", and 
>>> include the grpc-grpclb artifact on your class path.  This enables using 
>>> DNS SRV records to point to gRPCLB servers when doing load balancing.  
>>>
>>> The DNS entries need to be in a specific format to be usable.   For a 
>>> service called "api.service.com", It should look something like this:
>>>
>>> A api.service.com - 127.0.0.1
>>> AAAA api.service.com - ::1
>>> SRV _grpclb._tcp.api.service.com - lb.service.com
>>> A  lb.service.com - 192.168.0.1
>>>
>>>
>>> gRPC will check for an SRV record with the prefix "_grpclb._tcp"   on 
>>> the target you provide to the channel.  If present, gRPC will use the 
>>> addresses of THAT domain as balancer addresses.  In LB parlance, 
>>> lb.service.com is a *balancer* address, while api.service.com is a 
>>> *backend* address.   Balanacer addresses must speak the gRPCLB protocol (as 
>>> *backend* address.   Balancer addresses must speak the gRPCLB protocol (as 
>>>
>>> There will be upcoming documentation on the exact way to configure this, 
>>> but this is being announced here for interested parties to try it out and 
>>> answer any questions.
>>>
>>



[grpc-io] Re: gRPCLB in gRPC Java 1.14

2018-08-17 Thread eleanore . jin
Hi, 

I just wonder how gRPC LB handles the bi-directional stream. Once it 
picks which server instance will serve the streaming request, will it then 
continue the streaming request only with that particular server?

On Tuesday, July 31, 2018 at 11:30:41 AM UTC-7, Carl Mastrangelo wrote:
>
> In release 1.14, it is now possible to use gRPC LB, gRPC's full featured 
> load balancer client.  This is an experimental feature that contacts a gRPC 
> LB server to get load balancing data.  
>
> To get started, you will need to set the JVM flag 
> "-Dio.grpc.internal.DnsNameResolverProvider.enable_grpclb=true", and 
> include the grpc-grpclb artifact on your class path.  This enables using 
> DNS SRV records to point to gRPCLB servers when doing load balancing.  
>
> The DNS entries need to be in a specific format to be usable.   For a 
> service called "api.service.com", It should look something like this:
>
> A api.service.com - 127.0.0.1
> AAAA api.service.com - ::1
> SRV _grpclb._tcp.api.service.com - lb.service.com
> A  lb.service.com - 192.168.0.1
>
>
> gRPC will check for an SRV record with the prefix "_grpclb._tcp"   on the 
> target you provide to the channel.  If present, gRPC will use the addresses 
> of THAT domain as balancer addresses.  In LB parlance, lb.service.com is 
> a *balancer* address, while api.service.com is a *backend* address.  
>  Balancer addresses must speak the gRPCLB protocol (as defined in the 
> proto).
>
> There will be upcoming documentation on the exact way to configure this, 
> but this is being announced here for interested parties to try it out and 
> answer any questions.
>
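Putting the quoted announcement together, the client-side wiring would be 
just a DNS target plus the flag; a hedged sketch, with host and port as 
placeholders:

// Run the JVM with:
//   -Dio.grpc.internal.DnsNameResolverProvider.enable_grpclb=true
// and put the grpc-grpclb artifact on the classpath. Then:
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

ManagedChannel channel = ManagedChannelBuilder
    .forTarget("dns:///api.service.com")  // SRV _grpclb._tcp.api.service.com is consulted
    .build();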



[grpc-io] gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-17 Thread eleanore . jin
Hi, 

I am doing some experiment with gRPC java to determine the right gRPC call 
type to use. 

here is my finding:

creating 4 sets of StreamObservers (1 for the client to send requests, 1 for 
the server to send responses), sending on the same channel is slightly faster 
than sending on 1 channel per stream.
I have already eliminated the time of creating the initial tcp connection by 
making an initial call to let the connection be established, then starting 
the timer. 

I just wonder why this is the case?

Thanks!



Re: [grpc-io] Re: tcp connection management

2018-08-08 Thread eleanore . jin
Hi Eric, 

Thanks a lot! Yes, I was indeed creating a new channel every time for the 
unary call; after changing to use the same channel, I only see 1 tcp 
connection created. 


On Wednesday, August 8, 2018 at 9:47:53 AM UTC-7, Eric Gribkoff wrote:
>
> There should only be a single TCP connection when sending five unary 
> calls. Can you post a code sample of how you are testing this? It sounds 
> like you might be re-creating the gRPC channel for each call, which would 
> create a separate TCP connection for each RPC. You should create only one 
> channel, and use this to send multiple RPCs over the same TCP connection.
>
> Eric
>
>
> On Tue, Aug 7, 2018 at 4:22 PM > wrote:
>
>> BTW, I am using grpc-java
>>
>> On Tuesday, August 7, 2018 at 4:21:53 PM UTC-7, eleano...@gmail.com 
>> wrote:
>>>
>>>
>>> Hi, 
>>>
>>> I am doing an experiment to decide whether my application should choose 
>>> unary calls or bi-directional streaming. Here is what I observe by enabling 
>>> debug logging:
>>>
>>> for unary calls, a tcp connection is created per call: 
>>>
>>> client side, single thread making 5 calls in a for loop: total 5 tcp 
>>> connections - using blocking stub
>>> client side, multi-threaded making 5 calls at the same time: total 5 tcp 
>>> connections - using blocking stub
>>> bi-directional streaming making 5 requests: total 1 tcp connection - 
>>> using async stub
>>>
>>> So does that mean for unary calls, a new tcp connection is always created? 
>>> Can you please confirm this behaviour?
>>>
>>> Thanks!
>>>
>
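The fix, in sketch form: build the channel once and reuse it for every unary 
call, so all of them share one TCP connection. FooGrpc/FooRequest are 
hypothetical generated stubs:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class UnaryClient {
  public static void main(String[] args) {
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 9096)  // host/port are placeholders
        .usePlaintext(true)
        .build();
    FooGrpc.FooBlockingStub stub = FooGrpc.newBlockingStub(channel);
    for (int i = 0; i < 5; i++) {
      stub.bar(FooRequest.getDefaultInstance()); // same connection each time
    }
    channel.shutdown();
  }
}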



[grpc-io] Re: tcp connection management

2018-08-07 Thread eleanore . jin
Sorry, what do you mean?

On Tuesday, August 7, 2018 at 4:31:34 PM UTC-7, pizzas...@gmail.com wrote:
>
> Bvfdc



[grpc-io] Re: tcp connection management

2018-08-07 Thread eleanore . jin
BTW, I am using grpc-java

On Tuesday, August 7, 2018 at 4:21:53 PM UTC-7, eleano...@gmail.com wrote:
>
>
> Hi, 
>
> I am doing an experiment to decide whether my application should choose 
> unary calls or bi-directional streaming. Here is what I observe by enabling 
> debug logging:
>
> for unary calls, a tcp connection is created per call: 
>
> client side, single thread making 5 calls in a for loop: total 5 tcp 
> connections - using blocking stub
> client side, multi-threaded making 5 calls at the same time: total 5 tcp 
> connections - using blocking stub
> bi-directional streaming making 5 requests: total 1 tcp connection - using 
> async stub
>
> So does that mean for unary calls, a new tcp connection is always created? 
> Can you please confirm this behaviour?
>
> Thanks!
>



[grpc-io] tcp connection management

2018-08-07 Thread eleanore . jin

Hi, 

I am doing an experiment to decide whether my application should choose 
unary calls or bi-directional streaming. Here is what I observe by enabling 
debug logging:

for unary calls, a tcp connection is created per call: 

client side, single thread making 5 calls in a for loop: total 5 tcp 
connections - using blocking stub
client side, multi-threaded making 5 calls at the same time: total 5 tcp 
connections - using blocking stub
bi-directional streaming making 5 requests: total 1 tcp connection - using 
async stub

So does that mean for unary calls, a new tcp connection is always created? 
Can you please confirm this behaviour?

Thanks!
