[grpc-io] Re: Benchmark data for gRPC + xDS. vs envoy

2021-08-17 Thread 'Srini Polavarapu' via grpc.io
Hi,

The gRPC team did a one-time perf benchmark to get a general idea. A 
comprehensive and continuous benchmarking plan is on the roadmap. In the ad 
hoc test, we tested the gRPC 1.30 C++ xDS stack against Envoy compiled 
with -c opt and -fno-omit-frame-pointer from the 1.14.1 tag. Envoy was run 
with logging turned off entirely and with the default concurrency setting, 
which creates one thread per CPU; this resulted in messages being balanced 
across 8 threads in our setup. We were interested in the cost of a query 
in terms of CPU-seconds, i.e., how much CPU time is required on the client 
side (client process + sidecar) to transmit a single request. Load was 
varied from 1K to 22K QPS with a 1K-byte payload.
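The CPU-seconds-per-query metric is simple to compute; a sketch with made-up numbers (none of these figures come from the test described above):

```python
def cpu_seconds_per_query(cpu_seconds_used: float, wall_seconds: float, qps: float) -> float:
    """Client-side CPU cost of one request: total CPU time burned by the
    client process (plus sidecar, if any) divided by requests sent."""
    return cpu_seconds_used / (wall_seconds * qps)

# Hypothetical numbers for illustration only: 30 CPU-seconds consumed
# over a 10-second run at 22,000 QPS.
cost = cpu_seconds_per_query(30.0, 10.0, 22_000)
```

Comparing this number for proxyless gRPC against client-plus-Envoy-sidecar at the same QPS gives the relative networking cost.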

Since this was not a comprehensive test and real-world mileage depends on 
many things, we don't want to publish data from this test, but in general 
you can expect to see 1.5-3x CPU savings in networking cost, i.e., the more 
network-intensive your application is, the higher the benefit. We didn't 
test latency or memory utilization, but you can find latency data in 
Istio's benchmarking.
On Monday, August 16, 2021 at 9:50:14 AM UTC-7 Gaurav Poothia wrote:

> Hello,
> I saw a talk by Mark Roth from EnvoyCon about gRPC proxyless mesh having 
> superior QPS per CPU-second and latency compared to Envoy, all of which is 
> of course expected.
>
> Can anyone please share results/setup from benchmarks around these two 
> metrics? 
> It would be great to understand perf benefits more deeply.
>
> Thanks!
> Gaurav
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d8f8f249-8132-442f-bb63-d974c2ea1b36n%40googlegroups.com.


[grpc-io] Re: Slides for xDS + gRPC talk

2021-08-12 Thread 'Srini Polavarapu' via grpc.io
Hi,

I am able to access the slides 
here: 
https://static.sched.com/hosted_files/kccnceu2021/2a/Service%20Mesh%20with%20gRPC%20and%20xDS.pdf
Not sure why this link doesn't work for you.

On Thursday, August 12, 2021 at 11:10:04 AM UTC-7 Gaurav Poothia wrote:

> Hi,
> I was trying to follow the very interesting hyperlinked references from 
> the slides in this recent talk
> https://www.youtube.com/watch?v=cGJXkZ7jiDk
>
> But the KubeCon page has a stale slides link, so there is no way to get at them:
>
> https://kccnceu2021.sched.com/event/iE8o/xds-in-grpc-for-service-mesh-megan-yahya-google-llc
>
> Maybe someone from Google can CC the presenter and/or help repost them 
> elsewhere/attach?
>
> Thanks in advance!
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/70d8c9e1-0a82-4757-b0b3-cfdd1cb301ddn%40googlegroups.com.


[grpc-io] Re: gRPC load balancing

2021-08-10 Thread 'Srini Polavarapu' via grpc.io
Hi,

When you say a request from a client is sent to all servers at once, what 
is your expected behavior with the responses? If only the first response is 
accepted and the others are discarded, then this is similar to hedging. 
Hedging is available in gRPC Java but not in other languages. If you expect 
all responses to be processed, then such functionality doesn't exist in 
gRPC.
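For the "first response wins" case in languages without hedging, the fan-out can be built by hand on top of ordinary stubs. A minimal sketch in plain Python; the per-server RPCs are stubbed out as placeholder callables, since no real gRPC stubs are assumed here:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def call_all_take_first(rpc_fns):
    """Issue the same request against every server concurrently and return
    whichever response arrives first; the rest are cancelled if still
    queued (with real gRPC calls, you would also call call.cancel())."""
    with ThreadPoolExecutor(max_workers=len(rpc_fns)) as pool:
        futures = [pool.submit(fn) for fn in rpc_fns]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        return next(iter(done)).result()

# Placeholder "RPCs" standing in for stub.Method(request) on three servers.
winner = call_all_take_first([lambda: "s1", lambda: "s2", lambda: "s3"])
```

If all responses must be processed, the same loop simply waits for every future instead of the first.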

On Thursday, August 5, 2021 at 9:20:46 AM UTC-7 Sergey S wrote:

> Hello! 
> Please tell me if it is possible to configure the balancer so that you can 
> send a request not to a random server, but to all that the balancer sees, 
> or to any specific one. I will clarify the situation. I have 1 client, 1 
> balancer and 3 servers. I need to send a request from the client to all 3 
> servers at once. Is there such a possibility?
> Thank you very much! 
> With best wishes
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/daf0da0f-0a4e-4e42-8f53-fc66ac307adcn%40googlegroups.com.


[grpc-io] Re: Missing protocol buffers submodule commit

2021-07-28 Thread 'Srini Polavarapu' via grpc.io
Must be a temporary issue. I see both commits fine.

On Wednesday, July 28, 2021 at 9:04:54 AM UTC-7 mark...@gmail.com wrote:

> The head of the grpc repository is pointing at a missing protocol buffers 
> commit:
>
>
> https://github.com/google/protobuf/tree/436bd7880e458532901c58f4d9d1ea23fa7edd52
>
> If I go back to the last release where protocol buffers were updated, it 
> also points to a missing commit:
>
>
> https://github.com/google/protobuf/tree/d7e943b8d2bc444a8c770644e73d090b486f8b37
>
> Is there something happening with the protocol buffers repository?
>
> Mark
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/af1b5c43-d3b6-4db8-8d68-0318effd39edn%40googlegroups.com.


[grpc-io] Re: grpc proxyless using istio

2021-07-28 Thread 'Srini Polavarapu' via grpc.io
Hi,

AFAIK, Istio doesn't yet support gRPC clients. I believe they have it on 
the roadmap but please confirm with the Istio community.

On Sunday, July 25, 2021 at 2:11:49 AM UTC-7 tiz...@gmail.com wrote:

> Hi
> I'm looking for examples of using Istio as an xDS server. I want my gRPC
> client to get its routing configuration from that server. Does anyone know 
> of such an example?
>
> Thanks!
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/db40c644-d545-49b1-abda-c9144228ca09n%40googlegroups.com.


[grpc-io] Re: Connection management and load balancing

2021-02-22 Thread 'Srini Polavarapu' via grpc.io
Hi Vitaly,

Please see this post if you are planning to use gRPCLB. gRPC has moved 
away from the gRPCLB protocol; instead, gRPC is adopting the xDS protocol. 
A number of xDS features, including round robin LB, are already supported 
in gRPC. This project might be useful to you, but I think it is blocked on 
this issue. This other project might be useful too.

On Friday, February 19, 2021 at 3:47:22 PM UTC-8 vitaly@gmail.com wrote:

> Thanks Srini,
>
> I haven't tested option 2 yet. I would expect, though, that since the 
> client is unaware of what is happening, we should see some request 
> failures/latency spikes until a new connection is established. That's why 
> I would consider it mostly for disaster prevention rather than for general 
> connection balancing.
> I'm actually now more interested in exploring option 4, as it looks like 
> we can achieve a safe setup if we keep a proxy in front of the servers and 
> expose a separate proxy port for each server.
> Can someone recommend a good open-source grpclb implementation? I've found 
> bsm/grpclb, which looks reasonable, but wasn't sure if there is anything 
> else available.
>
> On Friday, February 19, 2021 at 12:50:17 PM UTC-8 Srini Polavarapu wrote:
>
>> Hi,
>>
>> Option 3 is ideal, but since you don't have that or option 4 
>> available, option 2 is worth exploring. Are the concerns with option 2 
>> based on experiments you have done, or is it just a hunch? This comment 
>> has some relevant info that you could use.  
>>
>> On Thursday, February 18, 2021 at 7:06:37 PM UTC-8 vitaly@gmail.com 
>> wrote:
>>
>>>
>>> Hey folks,
>>>
>>> I'm trying to solve a problem of even load (or at least connection) 
>>> distribution between  grpc clients and our backend servers.
>>>
>>> First of all let me describe our setup:
>>> We are using network load balancing (L4) in front of our grpc servers.
>>> Clients will see one endpoint (LB) and connect to it. This means that 
>>> standard client-side load balancing features like round robin wouldn't 
>>> work as there will only be one sub-channel for client-server communication.
>>>
>>> One issue with this approach can be demonstrated by the following 
>>> example:
>>> Let's say we have 2 servers running and 20 clients connect to them. At 
>>> the beginning, since we go through the network load balancer, connections 
>>> will be distributed evenly (or close to that), so we'll roughly have 50% of 
>>> connections to each server. Now let's assume these servers reboot one after 
>>> another, like in a deployment. What would happen is that server that comes 
>>> up first would get all 20 worker connections and server that comes up later 
>>> would have zero. This situation won't change unless client or server would 
>>> drop a connection periodically or more clients request connections.
>>>
>>> I've considered a few options for solving this:
>>> 1. Connection management on the client side - do something to reset the 
>>> channel (like [enterIdle](
>>> https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#enterIdle)
>>>  
>>> in grpc-java). Downside - it seems that this feature has been developed for 
>>> android and I can't find similar functionality in grpc-go.
>>> 2. Connection management on the server side - drop connections 
>>> periodically on the server. Downside - this approach looks less graceful 
>>> than the client side one and may impact request latency and result in 
>>> request failures on the client side.
>>> 3. Use request based grpc-aware L7 LB, this way client would connect to 
>>> the LB, which would fan out requests to the servers. Downside - I've been 
>>> told by our infra guys that it is hard to implement in our setup due to the 
>>> way we use TLS and manage certificates.
>>> 4. Expose our servers outside and use grpc-lb or client side load 
>>> balancing. Downside - it seems less secure and would make it harder to 
>>> protect against DDoS attacks if we go this route. I think this downside 
>>> makes this approach unviable.
>>>
>>> My bias is towards going with option 3 and doing request based load 
>>> balancing because it allows much more fine grained control based on load, 
>>> but since our infra can not support it at the moment, I might be forced to 
>>> use option 1 or 2 in the short to mid term. Option 2 I like the least, as 
>>> it might result in latency spikes and errors on the client side.
>>>
>>> My questions are:
>>> 1. Which approach is generally preferable? 
>>> 2. Are there other options to consider?
>>> 3. Is it possible to influence grpc channel state in grpc-go, which 
>>> would trigger 

[grpc-io] Re: Connection management and load balancing

2021-02-19 Thread 'Srini Polavarapu' via grpc.io
Hi,

Option 3 is ideal, but since you don't have that or option 4 available, 
option 2 is worth exploring. Are the concerns with option 2 based on 
experiments you have done, or is it just a hunch? This comment has some 
relevant info that you could use.  
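On option 2: server-initiated connection recycling doesn't have to be an abrupt drop. The gRPC C-core exposes max-connection-age channel args (this is what the A9 server-side connection management proposal cited later in this thread describes); the server sends GOAWAY after the age limit, and clients reconnect and re-resolve. A sketch of the options as they would be passed to a Python grpc.server(...); the values are illustrative, not recommendations:

```python
# Channel args understood by the gRPC C-core: recycle server connections
# after ~5 minutes, with a 10-second grace window for in-flight RPCs,
# so clients periodically reconnect through the L4 LB and re-balance.
server_options = [
    ("grpc.max_connection_age_ms", 5 * 60 * 1000),
    ("grpc.max_connection_age_grace_ms", 10 * 1000),
]
# Usage (assuming grpcio is installed):
#   server = grpc.server(futures.ThreadPoolExecutor(), options=server_options)
```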

On Thursday, February 18, 2021 at 7:06:37 PM UTC-8 vitaly@gmail.com 
wrote:

>
> Hey folks,
>
> I'm trying to solve a problem of even load (or at least connection) 
> distribution between  grpc clients and our backend servers.
>
> First of all let me describe our setup:
> We are using network load balancing (L4) in front of our grpc servers.
> Clients will see one endpoint (LB) and connect to it. This means that 
> standard client-side load balancing features like round robin wouldn't 
> work as there will only be one sub-channel for client-server communication.
>
> One issue with this approach can be demonstrated by the following example:
> Let's say we have 2 servers running and 20 clients connect to them. At the 
> beginning, since we go through the network load balancer, connections will 
> be distributed evenly (or close to that), so we'll roughly have 50% of 
> connections to each server. Now let's assume these servers reboot one after 
> another, like in a deployment. What would happen is that server that comes 
> up first would get all 20 worker connections and server that comes up later 
> would have zero. This situation won't change unless client or server would 
> drop a connection periodically or more clients request connections.
>
> I've considered a few options for solving this:
> 1. Connection management on the client side - do something to reset the 
> channel (like [enterIdle](
> https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#enterIdle)
>  
> in grpc-java). Downside - it seems that this feature has been developed for 
> android and I can't find similar functionality in grpc-go.
> 2. Connection management on the server side - drop connections 
> periodically on the server. Downside - this approach looks less graceful 
> than the client side one and may impact request latency and result in 
> request failures on the client side.
> 3. Use request based grpc-aware L7 LB, this way client would connect to 
> the LB, which would fan out requests to the servers. Downside - I've been 
> told by our infra guys that it is hard to implement in our setup due to the 
> way we use TLS and manage certificates.
> 4. Expose our servers outside and use grpc-lb or client side load 
> balancing. Downside - it seems less secure and would make it harder to 
> protect against DDoS attacks if we go this route. I think this downside 
> makes this approach unviable.
>
> My bias is towards going with option 3 and doing request based load 
> balancing because it allows much more fine grained control based on load, 
> but since our infra can not support it at the moment, I might be forced to 
> use option 1 or 2 in the short to mid term. Option 2 I like the least, as 
> it might result in latency spikes and errors on the client side.
>
> My questions are:
> 1. Which approach is generally preferable? 
> 2. Are there other options to consider?
> 3. Is it possible to influence grpc channel state in grpc-go, which would 
> trigger resolver and balancer to establish a new connection similar to what 
> enterIdle does in java? From what I see in the [clientconn.go](
> https://github.com/grpc/grpc-go/blob/master/clientconn.go) there is no 
> option to change the channel state to idle or trigger a reconnect in some 
> other way.
> 4. Is there a way to implement server side connection management cleanly 
> without impacting client-side severely?
>
> Here are links that I find useful for some context:
> grpc/load-balancing.md at master · grpc/grpc
> proposal/A9-server-side-conn-mgt.md at master · grpc/proposal
> proposal/A8-client-side-keepalive.md at master · grpc/proposal
> grpc/keepalive.md at master · grpc/grpc
>
>
> Sorry for the long read,
> Vitaly
>


[grpc-io] Re: Minimum gRPC stack requirement to support gNMI proto 070

2020-11-11 Thread 'Srini Polavarapu' via grpc.io
Hi,

A quick look at gNMI seems to indicate it needs streaming and reflection in 
gRPC, so 1.0.0 should be fine. That said, why not upgrade to the latest 
version, v1.33.0, which is backward compatible with 1.0.0, and get all the 
bug fixes and enhancements made since the 1.0.0 release? FYI, the gRPC 
project supports only the last two releases, in case you run into any 
issues with an old version.

On Monday, November 9, 2020 at 11:16:14 PM UTC-8 shikhach...@gmail.com 
wrote:

> Hello, 
>
> I am using gRPC stack version 1.0.0. Is there any need to upgrade the 
> stack if I want to use gNMI 070?
>
> -Thanks
> Shikha
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f75d8a99-8c3d-495f-99e5-16efaec75287n%40googlegroups.com.


[grpc-io] Re: Is there have "Resolver" and "Balancer" interface in Python ?

2020-11-11 Thread 'Srini Polavarapu' via grpc.io
Unfortunately, no. This is a long-pending item that is resource-starved. 
See https://github.com/grpc/grpc/pull/16617 
and https://github.com/grpc/grpc/issues/11685.


On Sunday, November 8, 2020 at 7:53:25 PM UTC-8 stef...@gmail.com wrote:

> Hello guys.
>
> I found the "Resolver" and "Balancer" interfaces in Java and Go. Are they 
> available in Python?
> If not, what can I do? I plan to use them with ZooKeeper for service 
> discovery.
>
> Can I build a custom c-core version for Python?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3d95d058-ab2d-419d-b6d2-c745828f53d1n%40googlegroups.com.


[grpc-io] Re: Load balancing while using gRPC

2020-11-08 Thread 'Srini Polavarapu' via grpc.io
You can specify client-side round robin load balancing like this:

 opts = [("grpc.lb_policy_name", "round_robin")]
 channel = grpc.insecure_channel('<host>:<port>', opts)

If your target address resolves to multiple addresses, requests are load 
balanced in a round robin fashion. Other than pick-first, this is the only 
built-in load balancing policy the gRPC client supports.
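If the target resolves to only one address, round_robin has nothing to spread over. For quick experiments, the C-core also accepts an explicit comma-separated address list via the ipv4: target scheme; a sketch with placeholder addresses:

```python
# round_robin policy plus a multi-address ipv4: target, which the C-core
# resolver turns into one subchannel per address (addresses are made up).
options = [("grpc.lb_policy_name", "round_robin")]
target = "ipv4:10.0.0.1:50051,10.0.0.2:50051,10.0.0.3:50051"
# Usage (assuming grpcio is installed):
#   channel = grpc.insecure_channel(target, options)
```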

On Sunday, November 1, 2020 at 6:28:53 PM UTC-8 stef...@gmail.com wrote:

> Hi Srini Polavarapu,
>
> How can I use client-side load balancing in Python?
> Would you please show me an example? 
>
> Thanks!
>
>
> On Monday, February 11, 2019 at 12:02:22 PM UTC+8, the following was written:
>
>> Since you want to use an in-line load balancer, take a look at the NGINX 
>> proxy, which supports gRPC. In such a topology, DNS points to your LB IP, 
>> which gRPC clients connect to. The LB will then load balance to the 
>> backends. The gRPC client is not aware of how many backends are present, 
>> since it only talks to the LB in the middle. 
>>
>>
>> On Wednesday, February 6, 2019 at 2:50:06 PM UTC-8, ankitpa...@gmail.com 
>> wrote:
>>>
>>> Hello 
>>>
>>> I am exploring load balancing while using gRPC. So far I have gone 
>>> through the Python quick start guide (hello world example), and it was 
>>> an easy example to follow. I have the same example running on two of my 
>>> servers. 
>>>
>>> Now I am searching for a similar quick start with load balancing. I went 
>>> over https://grpc.io/blog/loadbalancing and similar pages, which made 
>>> sense to me, but at the same time I felt overwhelmed and can't figure 
>>> out how and where to start. 
>>>
>>> I want to start exercising simple code, just like the Python hello 
>>> world, with load balancing enabled (when the client says "hello", the 
>>> load balancer decides which server to forward the request to, and the 
>>> respective server responds with "world").
>>>
>>> If you are aware of any such tutorials or reference links, please help 
>>> me out.
>>>
>>> thanks
>>> Ankit 
>>>
>>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f953089a-dd6b-4f1c-847e-272c830467abn%40googlegroups.com.


[grpc-io] Re: "Could not contact DNS servers" even setting GRPC_DNS_RESOLVER=native

2020-08-05 Thread 'Srini Polavarapu' via grpc.io
DNS is the default name resolver in gRPC. gRPC C++ uses c-ares as the 
default library to resolve DNS unless you set GRPC_DNS_RESOLVER=native. 
Check that you have a valid nameserver in /etc/resolv.conf. It is strange 
that GRPC_DNS_RESOLVER=native did not help, since that should just use the 
system name resolver. What was the error in that case? Does replacing 
localhost with 127.0.0.1 in the channel target help?
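One way to narrow this down on the target is to check whether the system resolver works at all, independent of gRPC. Python's stdlib getaddrinfo (the same facility a native resolver relies on) makes a quick probe; a sketch:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if the system resolver can turn host into at least
    one address (via /etc/hosts or the nameservers in /etc/resolv.conf)."""
    try:
        return len(socket.getaddrinfo(host, None)) > 0
    except socket.gaierror:
        return False

# localhost should resolve even without a DNS server, via the hosts file;
# a failure here points at the system configuration, not gRPC.
ok = can_resolve("localhost")
```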

On Tuesday, August 4, 2020 at 4:25:36 AM UTC-7 belanke...@gmail.com wrote:

> I have cross-compiled the code successfully for QNX ARM 7.0 and am able 
> to generate the binaries.
>
> The first thing I want to do is run the helloworld code on the target, so 
> I built the helloworld code available in the cpp folder (since I am using 
> C++). 
>
> I copied only two files, greeter_server and greeter_client, to the target 
> machine. I am able to start the server using greeter_server successfully. 
> But when I run greeter_client in another window, I get a failure message:
>
> 14: DNS resolution failed
> Greeter received: RPC failed
>
> Here are the logs after enabling GRPC_TRACE=ALL, GRPC_VERBOSITY=DEBUG.
>
> Please help me identify the issue.
>
> We do not have a DNS server. I tried using GRPC_DNS_RESOLVER=native, but 
> it did not help. 
>
> Do we need DNS? If it's not DNS, then what is failing?
>
>
>
> Thank you in advance.
>
>
> -Darshan  Belanke
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/0b409a28-b820-4f1b-a281-806109d657e6n%40googlegroups.com.


[grpc-io] Re: At times new channel get stuck in resolver

2020-04-22 Thread 'Srini Polavarapu' via grpc.io
Have you tried newer versions of gRPC to reproduce this issue? 1.14 is 
fairly old.
On Monday, April 20, 2020 at 11:45:23 AM UTC-7 grpc_user wrote:

> Hi!
>
> We are running grpc version 1.14 and we hit a strange condition (at times, 
> not easily reproducible). When attempting to create a new channel, the 
> library never progresses beyond the next stage.
>
> client_channel/resolver/dns/native/dns_resolver.cc:284 ref 1 -> 2 
> dns-resolving
> EXECUTOR try to schedule 0x55b81a368fa0 (short) to thread 5
>
>
> And that's about it.
>
> Wondering if anyone could shed some light onto this behavior.
>
> Thanks!
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f9e0bfe6-a3c2-415d-a11e-489b2880c005%40googlegroups.com.


Re: [grpc-io] gRPC for large data transfer

2020-04-22 Thread 'Srini Polavarapu' via grpc.io
FWIW, gRPC can support message sizes up to 4GB (minus a few bytes), but 
you may be limited to 2GB or less due to protobuf library limitations. For 
example, the Python protobuf plugin default is 64MB, which can be 
increased. See https://github.com/grpc/grpc/issues/19221.
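When a payload approaches these limits, the usual pattern is a streaming RPC that sends the data in pieces rather than one giant message. A minimal sketch of the chunking side in plain Python; the streaming stub call is shown only as a comment, with hypothetical names:

```python
def chunked(payload: bytes, chunk_size: int = 64 * 1024):
    """Yield fixed-size slices of payload, each far below the
    gRPC/protobuf message-size limits."""
    for i in range(0, len(payload), chunk_size):
        yield payload[i:i + chunk_size]

# Usage with a client-streaming RPC (names hypothetical):
#   response = stub.Upload(Chunk(data=c) for c in chunked(big_blob))
data = b"x" * 200_000
parts = list(chunked(data))
```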

On Tuesday, April 21, 2020 at 7:54:05 AM UTC-7 Dean Hiller wrote:

> Having worked at Twitter (heavy Thrift!), I would say it doesn't really 
> matter, as long as you deliver it in pieces. I would not deliver one HUGE 
> object (HUGE is relative); instead, I might call N times to deliver the 
> whole thing. This has big advantages in many cases, and when done right, 
> servers can remain stateless. My 2 cents is that 80% of your perf issue 
> will be in 20% of the code, and there is a good chance it's not the 
> protocol, unless you have been optimizing your server for a while. 
> (Donald Knuth: premature optimization is the root of all evil.)
>
> On Mon, Apr 20, 2020 at 7:29 PM Philip  wrote:
>
>> I have a large amount of request data to be submitted to a remote server 
>> API (something like machine learning feature data).
>> Should I use Thrift or gRPC (protobuf behind it) for better 
>> performance/security, etc.?
>>
>> Thank you.
>>
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/07228c47-81c2-57bb-f196-7fb4588e68ea%40list.199903.xyz
>> .
>>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9dddba95-dae6-4687-99de-bdc0b5a32773%40googlegroups.com.


[grpc-io] Re: create channel credential from access token only (python)

2020-04-22 Thread 'Srini Polavarapu' via grpc.io
I believe credentials can be added to an insecure channel only if the 
channel is local (e.g. UDS). See more details here: 
https://github.com/grpc/grpc/pull/20875
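A common workaround when TLS is not in play is to skip channel credentials entirely and attach the token as per-call metadata on an insecure channel. Note this sends the token in cleartext, so it is only reasonable on a trusted or local link. A sketch (the helper is illustrative, not a gRPC API):

```python
def bearer_metadata(token: str):
    """Build per-call metadata carrying a bearer token, mirroring what
    grpc.access_token_call_credentials attaches on a secure channel."""
    return (("authorization", "Bearer " + token),)

# Usage (assuming grpcio is installed; names hypothetical):
#   channel = grpc.insecure_channel(endpoint)
#   stub.Method(request, metadata=bearer_metadata(token))
md = bearer_metadata("abc123")
```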

On Tuesday, April 21, 2020 at 2:35:29 PM UTC-7 davidk...@gmail.com wrote:

> I am creating a channel  as follows:
> call_credentials = grpc.access_token_call_credentials(token)
> root_credentials = grpc.ssl_channel_credentials(certificate)
> credentials = grpc.composite_channel_credentials(root_credentials, 
> call_credentials)
> channel = grpc.secure_channel(endpoint, credentials=credentials)
>
> Suppose I am not using SSL (no certificate) but just an access token; how 
> do I create a channel as above?
> Thanks.
>   
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/fd67d015-0af7-4a34-9b24-d8a34ac724f6%40googlegroups.com.


[grpc-io] Re: Intermediate Certificates not sending

2020-04-22 Thread 'Srini Polavarapu' via grpc.io
Have you tried another gRPC language with the same certs/keys to isolate 
this issue to the gRPC-Go implementation? You can find examples in other 
languages here: https://grpc.io/docs/guides/auth/

On Wednesday, April 22, 2020 at 8:13:54 AM UTC-7 edward...@lacity.org wrote:

> -- Golang app server TLS connections to mobile clients --
>
> Everything is working except the FULL CHAIN of trust is not being sent.
>
> I created a pfx file (full identity file) converted it to PEM, loaded it 
> into a Go app (code below) and it works great except the INTERMEDIATE 
> certificates are not being sent as part of the chain of trust.
>
> I've tried all the examples I can find, but none have resolved my issue.
>
> I'm also using online TLS checker tools that mostly check web servers, I'm 
> not sure if better tools exist for testing pure gRPC connections besides 
> other one-off gRPC apps.
>
> Again, this is a pure gRPC, non-web related connection.  Below is a 
> snippet of code that is 99% working with comodo TLS certs, I'm concerned 
> that my issue may be with the CertPool and how it gets passed to 
> tls.Config.  I'm following the examples but something is not working; also, 
> it's not entirely obvious whether an event hook is required to fetch and 
> unwind the CertPool or if the TLS libs can unwind everything in the proper 
> order: host_key, [INTERMEDIATES], RootCA_key; I have to assume so.
>
>
> // Load the certificates from disk
> //
> certificate, err := tls.LoadX509KeyPair(crt, key)
> if err != nil {
>return fmt.Errorf("could not load server key pair: %s", err)
> } else {
>log.Println("loaded key pair")
> }
>
> // Read FullChain file from disk
> //
> CACert, err := ioutil.ReadFile(ca)
> if err != nil {
>return fmt.Errorf("could not read CACert certificate: %s", err)
> } else {
>log.Println("Found Cert Bundle")
> }
>
> // Create a certificate pool to hold certificates from authorities
> //
> certPool, _ := x509.SystemCertPool()
>
> // Append the client certificates from the CA
> //
> if ok := certPool.AppendCertsFromPEM(CACert); !ok {
>log.Println("- Error: Not able to Append Certs to CertPool -")
> } else {
>log.Println("Loaded PEM certs")
> }
>
> // TLS configuration object
> //
> tlsConfig := &tls.Config{
>
>RootCAs: certPool,
>
>Certificates: []tls.Certificate{certificate},
>
>CipherSuites: []uint16{
>   tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
>   tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
>   tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
>   tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
>   tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
>   tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
>   tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
>   tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
>},
>
>PreferServerCipherSuites: true,
>
>// Forbid all TLS below 1.2
>MinVersion: tls.VersionTLS12,
> }
>
> s := grpc.NewServer(
>grpc.Creds(credentials.NewTLS(tlsConfig)),
>grpc.KeepaliveParams(
>   keepalive.ServerParameters{
>  Time:(time.Duration((300) * time.Second)),
>  Timeout: (time.Duration(10) * time.Second),
>   },
>),
>grpc.KeepaliveEnforcementPolicy(
>   keepalive.EnforcementPolicy{
>  MinTime: (time.Duration((300) * time.Second)),
>  PermitWithoutStream: true,
>   },
>),
> )
>
> [... start listening boilerplate...]
>
>
>
>
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/26ae91c2-d639-48bd-8ad3-9f8dd2491410%40googlegroups.com.


[grpc-io] Re: TLS chain of trust and Golang

2020-04-22 Thread 'Srini Polavarapu' via grpc.io
For chain of trust to work, you must ensure that the server is presenting 
the right cert to the client and the client has Comodo root cert in its 
trust store. Ensure that you are using the right cert file in the client 
dial "creds, _ := credentials.NewClientTLSFromFile(certFile, "")". This 
cert file must have Comodo root cert. Similarly, ensure that the server is 
using the correct cert and private key file in creds, _ := 
credentials.NewServerTLSFromFile(certFile, keyFile).
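In both of these cases, a common root cause is that the server's cert file contains only the leaf: for the full chain to be presented, the file passed to the server credentials must contain the leaf certificate followed by the intermediates, in order. Building that file is just PEM concatenation; a sketch with hypothetical file names:

```python
def build_chain_pem(*pem_paths: str) -> bytes:
    """Concatenate PEM files (leaf first, then intermediates) into the
    full-chain bytes a server certificate file should contain."""
    return b"".join(open(p, "rb").read() for p in pem_paths)

# Usage (hypothetical paths):
#   chain = build_chain_pem("server.crt", "intermediate.crt")
#   open("fullchain.crt", "wb").write(chain)
```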

On Thursday, April 16, 2020 at 5:07:20 PM UTC-7 mauricio...@lacity.org 
wrote:

> We implemented a gRPC server in Golang and we're using a Comodo wildcard 
> certificate. Everything was going along well until we were audited and 
> told the chain of trust on our gRPC ports could not be verified. We have 
> looked at tons of example configs and code samples, but we can't seem to 
> clear this issue. We are using testssl.sh to test our TLS config, and no 
> matter what we do it keeps giving us chain of trust issues. 
>
> We started with a basic server with self-signed certs, but we were dinged 
> for self-signing. We moved to the Comodo cert, but now we have a chain of 
> trust issue. Any pointers appreciated. 
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7ede9efd-d8fb-4fb1-902a-869b29c61ca2%40googlegroups.com.


Re: [grpc-io] Optimizing gRPC for localhost

2019-12-11 Thread 'Srini Polavarapu' via grpc.io
Currently there is no plan to do this, and I don't think it has a lot of 
value. One option is to write your own interceptors on both sides to 
bypass the HTTP/2 stack. You'll have to handle deadlines, flow control, 
multiplexing, etc. on your own. Essentially, you'll be implementing your 
own transport between the two processes. 

On Wednesday, December 11, 2019 at 9:46:48 AM UTC-8, Andrey Tcherepanov 
wrote:
>
> Any plans to look into that, or it is not of any importance?
>
> On Tuesday, December 10, 2019 at 8:14:37 AM UTC-7, Nicolas Noble wrote:
>>
>> We don't have that sort of optimization at the moment, no. Even if you 
>> use unix domain sockets, it still goes through the whole process. 
>>
>> On Mon, Dec 9, 2019 at 10:56 PM Gautham Banasandra  
>> wrote:
>>
>>> Hi all,
>>>
>>> I'm using gRPC to communicate between a go and C++ process running on 
>>> the same node. The C++ process hosts the gRPC server and the go process is 
>>> the client. The go process makes a lot of gRPC calls hosted by the gRPC 
>>> server in the C++ process in a blocking manner.
>>> I collected a CPU profile (see below) of the go process and I see that 
>>> about 50% of CPU is spent in gRPC. Out of which, only about 20% is spent in 
>>> I/O. I assume that the remaining 30% is spent in marshalling/unmarshalling 
>>> the messages. Given that all the communication is going to be limited to 
>>> just one node, is there any way that I could tune gRPC to get better 
>>> performance? For example, is there a way to avoid the overhead in 
>>> constructing HTTP2 messages? Essentially, what I'm looking for is a way to 
>>> use gRPC for Inter Process Communication.
>>>
>>> [image: cpu profile.jpg]
>>> Thanks,
>>> --Gautham
>>>
>>


Re: [grpc-io] Report vulnerability

2019-08-21 Thread 'Srini Polavarapu' via grpc.io
Hi,

Thanks for reaching out. Please follow the CVE process here:
https://github.com/grpc/proposal/blob/master/P4-grpc-cve-process.md.

Thanks.

On Wed, Aug 21, 2019 at 3:19 AM  wrote:

> Hi,
>
> Our team has recently discovered a Null Pointer Dereference security
> vulnerability in gRPC.
>
> How do we disclose it and open a CVE.
>
> Thanks!
>


[grpc-io] Re: is it possible to use grpc's loadbalancing framework for active/passive?

2019-08-05 Thread 'Srini Polavarapu' via grpc.io
You can look into enabling keepalive on the channel, which will detect 
failures of the underlying TCP connection and attempt to reconnect. If you 
have multiple TCP connections, gRPC will pick the first available 
connection by default and switch to another one when that one fails.
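As a configuration sketch, keepalive in grpc-go is set through the keepalive package when dialing; the values below are illustrative, not recommendations:

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// Ping the server after 30s of inactivity; treat the connection as dead
// if the ping is not acknowledged within 10s.
var kacp = keepalive.ClientParameters{
	Time:                30 * time.Second,
	Timeout:             10 * time.Second,
	PermitWithoutStream: true, // send pings even when there are no active RPCs
}

// conn, err := grpc.Dial(target, grpc.WithKeepaliveParams(kacp), ...)
```

Keep in mind that servers enforce a minimum ping interval; an overly aggressive client keepalive can get the connection closed by the server.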

On Monday, August 5, 2019 at 2:19:19 AM UTC-7, Elhanan Maayan wrote:
>
> hi.. i was wondering if it's possible to configure grpc's api to an 
> active/passive config, so that if one packet doesn't come from one source 
> in a defined time, it would automatically switch to another source
>


Re: [grpc-io] connect grpc with rest API

2019-07-27 Thread 'Srini Polavarapu' via grpc.io
Check out Cloud Endpoints or grpc-gateway.

On Sat, Jul 27, 2019 at 3:23 AM abdelrahman hamdy <
hamdyabdelrahman...@gmail.com> wrote:

> how to connect grpc with rest API?
>


Re: [grpc-io] Re: Pushback in unidirectional streaming RPC's

2019-07-19 Thread 'Srini Polavarapu' via grpc.io
Enabling flowctl debug tracing might show some useful logs when, say, the 
client is not consuming at all while the server keeps generating. See
https://github.com/grpc/grpc/blob/master/doc/environment_variables.md
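Concretely, the tracing is controlled by environment variables that must be set before the client process starts (since gRPC Python wraps C-core, the C-core tracers apply); the client filename below is just a placeholder:

```shell
# Turn on C-core flow-control (and HTTP/2) tracing for the process we
# are about to start; both variables must be set before gRPC is loaded.
export GRPC_VERBOSITY=DEBUG
export GRPC_TRACE=flowctl,http
# then run the client under test, e.g.: python your_client.py
```

The resulting logs are verbose, so it is worth narrowing GRPC_TRACE to just the tracers of interest.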



On Fri, Jul 19, 2019 at 1:03 PM Yonatan Zunger  wrote:

> I have no idea what would be involved in attaching ASAN to Python, and
> suspect it may be "exciting," so I'm trying to see first if gRPC has any
> monitoring capability around its buffers.
>
> One thing I did notice while reading through the codebase was unittests
> like this one about exceeding buffer sizes -- that does seem to trigger
> an ABORTED response, but the test was fairly hard to understand (not much
> commenting there...).
> Am I right in thinking that if this 4MB buffer is overflowed, that's
> somehow going to happen?
>
> On Fri, Jul 19, 2019 at 12:59 PM Lidi Zheng  wrote:
>
>> Hi Yonatan,
>>
>> In gRPC Python side, the consumption of message is sequential, and won't
>> be kept in memory.
>> If you recall the batch operations, only if a message is sent to
>> application, will gRPC Python start another RECV_MESSAGE operation.
>> It's unlikely that the problem resided in Python space.
>>
>> In C-Core space, AFAIK for each TCP read, the size is 4MiB per channel.
>> I think we have flow control both in TCP level and HTTP2 level.
>>
>> For debugging, did you try to use ASAN? For channel arg, I can only find
>> "GRPC_ARG_TCP_READ_CHUNK_SIZE" and "GRPC_ARG_MAX_RECEIVE_MESSAGE_LENGTH"
>> that might be related to your case.
>>
>> Lidi Zheng
>>
>> On Fri, Jul 19, 2019 at 12:48 PM Yonatan Zunger  wrote:
>>
>>> Maybe a more concrete way of asking this question: Let's say we have a
>>> Python gRPC client making a response-streaming request to some gRPC server.
>>> The server starts to stream back responses. If the client fails to consume
>>> data as fast as the server generates it, I'm trying to figure out where the
>>> data would accumulate, and which memory allocator it would be using.
>>> (Because Python heap profiling won't see calls to malloc())
>>>
>>> If I'm understanding correctly:
>>>
>>> * The responses are written by the server to the network socket at the
>>> server's own speed (no pushback controlling it);
>>> * These get picked up by the kernel network device on the client, and
>>> get pulled into userspace ASAP by the event loop, which is in the C layer
>>> of the gRPC client. This is stored in a grpc_byte_buffer and builds up
>>> there.
>>> * The Python client library exposes a response iterator, which is
>>> ultimately a _Rendezvous object; its iteration is implemented in
>>> _Rendezvous._next(), which calls cygrpc.ReceiveMessageOperation, which is
>>> what drains data from the grpc_byte_buffer and passes it to the protobuf
>>> parser, which creates objects in the Python memory address space and
>>> returns them to the caller.
>>>
>>> This means that if the client were to drain the iterator more slowly,
>>> data would accumulate in the grpc_byte_buffer, which is in the C layer and
>>> not visible to (e.g.) Python heap profiling using the PEP445 malloc hooks.
>>>
>>> If I am understanding this correctly, is there any way (without doing a
>>> massive amount of plumbing) to monitor the state of the byte buffer, e.g.
>>> with some gRPC debug parameter? And is there any mechanism in the C layer
>>> which limits the size of this buffer, doing something like failing the RPC
>>> if the buffer size exceeds some threshold?
>>>
>>> Yonatan
>>>
>>> On Thu, Jul 18, 2019 at 5:27 PM Yonatan Zunger  wrote:
>>>
 Hi everyone,

 I'm trying to debug a mysterious memory blowout in a Python batch job,
 and one of the angles I'm exploring is that this may have to do with the
 way it's reading data. This job is reading from bigtable, which is
 ultimately fetching the actual data with a unidirectional streaming "read
 rows" RPC. This takes a single request and returns a sequence of data
 chunks, the higher-level client reshapes this into an iterator over the
 individual data cells, and those are consumed by the higher-level program,
 so that the next response proto is consumed once the program is ready to
 parse it.

 Something I can't remember about gRPC internals: What, if anything, is
 the pushback mechanism in unidirectional streaming? In the zero-pushback
 case, it would seem that a server could yield results at any speed, which
 would be accepted by the client and stored in gRPC's internal buffers until
 it got read by the client code, which could potentially cause a large
 memory blowout if the server wrote faster than the client read. Is this in
 fact the case? If so, is there any good way to instrument and detect if
 it's happening? (Some combination of gRPC 

Re: [grpc-io] Re: Can the gRPC be used by the client to transmit only unidirect messages to Server.

2019-05-29 Thread 'Srini Polavarapu' via grpc.io
I am not clear about the requirements here, but setting the deadline to
now() won't work. The server will see that the deadline has already passed
and not process the request at all.

On Wed, May 29, 2019 at 2:58 PM 'Juanli Shen' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Sorry, what do you mean by "undirect" transfer? Do you mean "unidirect"?
>
> We have four kinds of RPCs considering if the server or client is
> streaming the messages, namely, unary RPC, server streaming RPC, client
> streaming RPC, and bidirectional streaming RPC, as detailed in
> https://grpc.io/docs/guides/concepts/.
>
> From your description, I guess what you want to use is client streaming
> RPC.
>
> On Wednesday, May 22, 2019 at 7:12:52 PM UTC-7, 윤석영 wrote:
>>
>> I know that gRPC supports undirect transfer by stream.
>> But I would like to ask if I can transmit a unidirect message by not
>> stream transmission.
>>
>> There is a way to set an empty response message in the result of Google,
>> but I understood that the server actually transmits the response message.
>>
>> I would like to make sure that Client does not wait for the response
>> itself and does not use the resource itself to receive the response message.
>> I am also considering setting the deadline to "now()", but I am not sure
>> if this is the right way.
>>
>>


[grpc-io] Re: server side streaming

2019-04-05 Thread 'Srini Polavarapu' via grpc.io
Your understanding is correct. You may want to consider using streaming 
instead of a repeated field if the aggregate response size is very large, 
which can cause out-of-memory or flow-control issues in your application. 
Using unary for a large repeated response has no big benefit over 
streaming. Even in the case of a large unary response, the HTTP/2 transport 
will break it into smaller frames and stream it to the client; it is 
reassembled into a single response before being presented to the application.

In gRPC, client always initiates the RPC which translates to client always 
initiating the stream.

On Friday, April 5, 2019 at 12:44:46 PM UTC-7, chirag shah wrote:
>
> I think I found some clarification which is like...
>
> In general, if your use case would allow the client to process the 
> incoming messages one at a time, the stream is the better choice. 
> If your client will just be blocking until all of the messages arrive and 
> then processing them in aggregate, the repeated field may be appropriate.
>
> So looks like both the approaches are correct.
>
> In that case, in the gRPC  no matter which kind of streaming we are doing  
> (i.e.,  client-side,  server-side or bidirectional)   my understanding is 
> the HTTP/2 stream that gets  created underneath is always initiated by the 
> client.   Server is not creating the HTTP/2 stream. 
>
> Am I correct ?
>
>
>
> Thanks.
>
> On Friday, April 5, 2019 at 1:38:06 PM UTC-5, chirag shah wrote:
>>
>> Hello ,
>>
>>
>> In gRPC we have 4 typical ways of client-server communication.  Let’s 
>> pick server-streaming.
>>
>> As we know Server streaming meaning a single client request triggers 
>> multiple response from the server.  I wanted to zoom into this line.
>>
>> Let’s say following is one such method in the service of my 
>> protocol-buffer file.
>>
>> *rpc ListFeatures(Rectangle) returns (stream Feature)*
>>
>>  
>>
>> This method obtains the Features available within the given Rectangle.
>>
>> Results are  streamed rather than returned at once (e.g. in a response 
>> message with a  repeated field), as the rectangle may have  huge number of 
>> features.
>>
>> But that is exactly what I am not following.
>>
>> Just because server wants to send more than one Feature object, that 
>> should not be a qualification for using Stream (I can do it with Unary call 
>> too)
>>
>> If server wants to send multiple feature objects, in  my proto-buffer 
>> file, I can create a wrapper message object like 
>>
>>message FeatureResponse {
>>
>>   repeated Feature features = 1;
>>
>> }
>>
>>  
>>
>> message Feature {
>>
>>string url = 1;
>>
>>string title = 2;
>>
>>   }
>>
>>  
>>
>> And now server can expose  *rpc ListFeatures(Rectangle) returns 
>> (FeatureResponse) This is Unary call.*
>>
>>  
>>
>>  
>>
>> My understanding about using Server-side-Streaming RPC call is *when the 
>> server does not have all the complete data right when the RPC call  came 
>> from the client (Or expecting more and more data along with the time)*
>>
>> So when client call method *ListFeatures*, server prepares 
>> FeatureResponse and stuff  as many Features as possible at that point of 
>> time and push it out to the client on the HTTP2 stream initiated by the 
>> client.
>>
>> It however, knows that after some time (for eg., 15 min) he is going to 
>> get another set of Features  object to send out.
>>
>> So that time it will use the SAME logical HTTP2 stream to push out those 
>> new objects.
>>
>>  
>>
>> Am I correct ?  If not how can we realize above business situation where 
>> server for eg., has to push out the latest stock prices every 30 min .
>>
>>  
>>
>> Really appreciate your help demystifying this concept.
>>
>>  
>>
>> Thanks.
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>>  
>>
>


Re: [grpc-io] Re: Evolution of gRPCLB

2019-03-06 Thread 'Srini Polavarapu' via grpc.io
That's correct, but note that the xDS spec already exists, as linked above; 
it is not something new that the gRPC team is proposing.

On Saturday, March 2, 2019 at 8:44:45 PM UTC-8, Rama Rao wrote:
>
> Srini,
>
> Does it mean that if control plane implements the new xDS api that gRPC 
> team is going propose - gRPC services can make use of it for load balancing 
> (optionally control planes can also implement and configure 
> LoadReportService, which would be used for more intelligent load 
> balancing?) Is that right?
> Thanks,
> Rama
>
> On Sat, Mar 2, 2019 at 5:20 AM 'Srini Polavarapu' via grpc.io <
> grp...@googlegroups.com > wrote:
>
>>
>> On Friday, March 1, 2019 at 10:18:28 AM UTC-8, blazej...@gmail.com wrote:
>>>
>>>
>>> Do you work on both sides of xDS protocol, or only on the client-side 
>>> implementation?
>>>
>>
>> gRPC team does not work on the LB server side of xDS. The goal here is to 
>> be able to use existing and future xDS compatible LB servers as mentioned 
>> above. 
>>
>


[grpc-io] Re: Evolution of gRPCLB

2019-03-01 Thread 'Srini Polavarapu' via grpc.io

On Friday, March 1, 2019 at 10:18:28 AM UTC-8, blazej...@gmail.com wrote:
>
>
> Do you work on both sides of xDS protocol, or only on the client-side 
> implementation?
>

The gRPC team does not work on the LB server side of xDS. The goal here is 
to be able to use existing and future xDS-compatible LB servers, as 
mentioned above. 


[grpc-io] Re: Future of GRPC-LB

2019-02-28 Thread 'Srini Polavarapu' via grpc.io
Details are posted now.

On Wednesday, February 27, 2019 at 9:58:18 PM UTC-8, Srini Polavarapu wrote:
>
> Please expect a post on this soon. It will have the details you are 
> looking for but a detailed gRFC will come later. We can continue discussion 
> on that post.
>
> Thanks. 
>
> On Monday, February 25, 2019 at 12:39:27 PM UTC-8, blazej...@gmail.com 
> wrote:
>>
>> The thing is, that we have implemented a server-side LB which speaks 
>> grpclb and this whole machinery seems to work pretty well - so we wanted to 
>> deploy it any day now. What is more, we wanted to make it open-source and 
>> we had some ideas to develop it further. So, could you be a little bit more 
>> specific on those topics?
>>
>> 1) How long (approximately) will grpclb be still supported? It would be 
>> fair to have some time to migrate to the new solution.
>> 2) How will this new solution look like? I can't find any information 
>> about it, either on grpc's GitHub (no docs) or on this group. When I try to 
>> google "grpc load balancing" I still hit docs about grpclb.
>> 3) Is this XDS solution going to work out-of-the-box, or, similarly to 
>> grpclb we will have to implement some part on our own (I mean, in grpclb we 
>> had to implement server-side of the protocol - how does it work with XDS)?
>>
>> On Monday, February 25, 2019 at 19:19:48 UTC+1, Carl Mastrangelo wrote:
>>>
>>> Like Penn said, you can turn it on (it's experimental), but will 
>>> eventually be replaced.  The flag itself is pretty simple, but the rest of 
>>> the machinery needs to be set up properly for it to work.  We (gRPC 
>>> maintainers) are not comfortable supporting this yet, hence the extra 
>>> effort to turn it on.   The gRPCLB Load Balancer is experimental, so we 
>>> will likely remove it at some point.  We will give a notice in one of the 
>>> upcoming releases that it is deprecated, and then remove it the release 
>>> after.   Since the replacement isn't yet ready, it has not been removed.
>>>
>>> Sorry to be so non-committal, but it seems like XDS is a better long 
>>> term LB solution, and we don't want to support two competing 
>>> implementations.
>>>
>>> On Saturday, February 23, 2019 at 12:48:43 AM UTC-8, blazej...@gmail.com 
>>> wrote:

 And what about SRV records lookup: now I have to set this flag:

 io.grpc.internal.DnsNameResolverProvider.enable_grpclb
>

 to true, and there was a commit some time ago which enabled it by 
 default: 
 https://github.com/grpc/grpc-java/commit/c729a0f76b244da9f4aebc40896b2fb891d1b5c4
  
 and now it has been reverted: 
 https://github.com/grpc/grpc-java/pull/5232 - how it is eventually 
 going to be? 


 On Friday, February 22, 2019 at 21:16:54 UTC+1, Penn (Dapeng) 
 Zhang wrote:
>
> Neither grpclb nor xds will be enabled by default, grpclb need be 
> explicitly enabled by a service config or a ManagedChannelBuilder option, 
> and xds need be explicitly enabled by a service config.  Grpclb will 
> eventually be replaced by xds based solution in the future, but the 
> grpc-grpclb  maven 
> artifact will stay and work for a long time (for as many new releases as 
> possible). When grpclb is not available for a new grpc release, your 
> client 
> can still automatically switch to a fallback loadbalancer (pick_first).
>
> On Friday, February 22, 2019 at 8:52:16 AM UTC-8, blazej...@gmail.com 
> wrote:
>>
>> What is the status of GRPCLB - are there any plans to enable it by 
>> default and finish the experimental stage (we want to start using it in 
>> production), or opposite, you plan to abandon it? I am confused, because 
>> I've read this PR: https://github.com/grpc/grpc-java/pull/5232:
>>
>> SRV has not yet been enabled in a release. 
>>>
>> *Since work is rapidly underway to replace GRPC-LB with a service 
>> config+XDS-based solution, there's now thoughts that we won't ever 
>> enable grpclb by default* (but may allow it to be automatically 
>> enabled when using GoogleDefaultChannel or similar). Since things 
>> are being worked out, disable it.
>>
>>
>> It will be really helpful to us to know, what is the plan for it :)
>>
>


[grpc-io] Evolution of gRPCLB

2019-02-28 Thread 'Srini Polavarapu' via grpc.io


If you are using gRPCLB, please read on:

gRPC team is working on evolving the current gRPCLB functionality. We will 
be moving away from our custom load balancing protocol and adopting the xDS 
Protocol based on the Envoy xDS API. This will allow interoperability with 
open source control planes that support the xDS API, such as Istio Pilot, 
go-control-plane and java-control-plane. Other improvements include a more 
flexible and improved load balancing policy configuration and load 
reporting based on LRS 
(load reporting service).

The client-side implementation of xDS load balancing plugin will not be 
compatible with the current gRPCLB protocol. Hence, the current gRPCLB 
implementation can be considered deprecated. We are actively working on the 
implementation of the new protocol. Expect to see a lot of progress in the 
coming quarters, including a gRFC on the new design. If you are relying on 
the current implementation in any way, please comment here so we can figure 
out an appropriate time to remove it after the new implementation is ready.


[grpc-io] Re: Future of GRPC-LB

2019-02-27 Thread 'Srini Polavarapu' via grpc.io
Please expect a post on this soon. It will have the details you are looking 
for but a detailed gRFC will come later. We can continue discussion on that 
post.

Thanks. 

On Monday, February 25, 2019 at 12:39:27 PM UTC-8, blazej...@gmail.com 
wrote:
>
> The thing is, that we have implemented a server-side LB which speaks 
> grpclb and this whole machinery seems to work pretty well - so we wanted to 
> deploy it any day now. What is more, we wanted to make it open-source and 
> we had some ideas to develop it further. So, could you be a little bit more 
> specific on those topics?
>
> 1) How long (approximately) will grpclb be still supported? It would be 
> fair to have some time to migrate to the new solution.
> 2) How will this new solution look like? I can't find any information 
> about it, either on grpc's GitHub (no docs) or on this group. When I try to 
> google "grpc load balancing" I still hit docs about grpclb.
> 3) Is this XDS solution going to work out-of-the-box, or, similarly to 
> grpclb we will have to implement some part on our own (I mean, in grpclb we 
> had to implement server-side of the protocol - how does it work with XDS)?
>
> On Monday, February 25, 2019 at 19:19:48 UTC+1, Carl Mastrangelo wrote:
>>
>> Like Penn said, you can turn it on (it's experimental), but will 
>> eventually be replaced.  The flag itself is pretty simple, but the rest of 
>> the machinery needs to be set up properly for it to work.  We (gRPC 
>> maintainers) are not comfortable supporting this yet, hence the extra 
>> effort to turn it on.   The gRPCLB Load Balancer is experimental, so we 
>> will likely remove it at some point.  We will give a notice in one of the 
>> upcoming releases that it is deprecated, and then remove it the release 
>> after.   Since the replacement isn't yet ready, it has not been removed.
>>
>> Sorry to be so non-committal, but it seems like XDS is a better long term 
>> LB solution, and we don't want to support two competing implementations.
>>
>> On Saturday, February 23, 2019 at 12:48:43 AM UTC-8, blazej...@gmail.com 
>> wrote:
>>>
>>> And what about SRV records lookup: now I have to set this flag:
>>>
>>> io.grpc.internal.DnsNameResolverProvider.enable_grpclb

>>>
>>> to true, and there was a commit some time ago which enabled it by 
>>> default: 
>>> https://github.com/grpc/grpc-java/commit/c729a0f76b244da9f4aebc40896b2fb891d1b5c4
>>>  
>>> and now it has been reverted: 
>>> https://github.com/grpc/grpc-java/pull/5232 - how it is eventually 
>>> going to be? 
>>>
>>>
>>> On Friday, February 22, 2019 at 21:16:54 UTC+1, Penn (Dapeng) 
>>> Zhang wrote:

 Neither grpclb nor xds will be enabled by default, grpclb need be 
 explicitly enabled by a service config or a ManagedChannelBuilder option, 
 and xds need be explicitly enabled by a service config.  Grpclb will 
 eventually be replaced by xds based solution in the future, but the 
 grpc-grpclb  maven 
 artifact will stay and work for a long time (for as many new releases as 
 possible). When grpclb is not available for a new grpc release, your 
 client 
 can still automatically switch to a fallback loadbalancer (pick_first).

 On Friday, February 22, 2019 at 8:52:16 AM UTC-8, blazej...@gmail.com 
 wrote:
>
> What is the status of GRPCLB - are there any plans to enable it by 
> default and finish the experimental stage (we want to start using it in 
> production), or opposite, you plan to abandon it? I am confused, because 
> I've read this PR: https://github.com/grpc/grpc-java/pull/5232:
>
> SRV has not yet been enabled in a release. 
>>
>> *Since work is rapidly underway to replace GRPC-LB with a service 
>> config+XDS-based solution, there's now thoughts that we won't ever 
>> enable grpclb by default* (but may allow it to be automatically 
>> enabled when using GoogleDefaultChannel or similar). Since things 
>> are being worked out, disable it.
>
>
> It will be really helpful to us to know, what is the plan for it :)
>



[grpc-io] gRPC-Core Release 1.19.0

2019-02-27 Thread 'Srini Polavarapu' via grpc.io
This is the 1.19.0 (gold) release announcement for gRPC-Core and the 
wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. The latest 
release notes are here.

This release contains refinements, improvements, and bug fixes, with 
highlights listed below.
Core
   
   - Fix c-ares on Windows "DNS resolution failure" triggered by logging. (#18092)
   - Disable c-ares on Android (Backport #18046). (#18050)
   - Ignore reserved bit in WINDOW_UPDATE frame. (#17950)
   - Set c-ares as the default resolver. (#17897)
   - Add period at end of metadata.google.internal to prevent unnecessary DNS lookups. (#17598)
   - Decrease verbosity of ALTS platform check to avoid a spam log message. (#17874)
   - Fix windows localhost address sorting bug. (#17790)
   - Re-enable c-ares as the default resolver; but don't turn on SRV queries. (#17723)
   - Remove filters from subchannel args. (#17629)
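
Since c-ares is now the default resolver, anyone who needs to revert can use the GRPC_DNS_RESOLVER environment variable. A minimal Python sketch, assuming it is set before the library initializes (the variable name is from gRPC's environment-variable docs):

```python
import os

# Revert to the native (getaddrinfo-based) resolver. This must be set
# before gRPC is initialized, since the resolver is chosen at startup.
os.environ["GRPC_DNS_RESOLVER"] = "native"

# Any gRPC channel created after this point would use the native resolver.
print(os.environ["GRPC_DNS_RESOLVER"])
```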

C++
   
   - Register for cq avalanching when interceptors are going to be run. (#17806)
   - Add a caching interceptor to the keyvaluestore example. (#17689)
   - Enable per-channel subchannel pool. (#17513)
   - Fix build with bazel 0.21. (#17684)
   - Switch the default DNS resolver from native to c-ares. (#16862)
   - Modifying semantics for GetSendMessage and GetSerializedSendMessage. Also adding ModifySendMessage. (#17630)
   - Add interceptor methods to fail recv msg for hijacked rpcs and set recv message to nullptr on failure. (#17179)
   - Add interceptor method to fail hijacked send messages and get status on POST_SEND_MESSAGE. (#17220)
   - New Experimental Interception API - GetSendMessage and GetSerializedSendMessage. (#17609)

C#
   
   - Upgrade System.Interactive.Async to 3.2.0. (#16745)
   - Refactor ServerServiceDefinition and move it to Grpc.Core.Api nuget. (#17889)
   - Allow passing null implementation to generated BindService overload using ServiceBinderBase. (#17837)
   - Move public types needed for server implementation to Grpc.Core.Api. (#17778)

Objective-C
   
   - Disable c-ares on iOS. (#17894)
   - Added support for tvOS. (#17731)
   - Fixing a few thread safety issues in gRPC Objective-C library. (#17578)
   - Rolling out new API for gRPC Objective-C library. (#16190)

Python
   
   - grpc_prefork(): check grpc_is_initialized before creating execctx. (#17996)
   - [gRPC] Enable Python 3 for Bazel to Run Tests. (#17644)
   - Escalate the failure of protoc execution. (#17734)
   - Remove dependency of grpc.framework.foundation.callable_util. (#17543)

Ruby
   
   - Disable service config resolution with c-ares by default, for now. (#17998)
   - Ruby: refactor init/shutdown logic to avoid using atexit; fix windows. (#17997)
   - Ruby tooling: respect user toolchain overrides. (#17606)



[grpc-io] Re: SEGFAULT in greeter_client

2019-02-13 Thread 'Srini Polavarapu' via grpc.io
Bumped the priority of the GitHub issue to P1. Let's track it over there.

On Wednesday, February 13, 2019 at 8:36:09 PM UTC-8, Gautham B A wrote:
>
> I had filed it long back - https://github.com/grpc/grpc/issues/17807 .
>
> On Thursday, 14 February 2019 00:06:05 UTC+5:30, Carl Mastrangelo wrote:
>>
>> Hi, can you file an issue on gRPC's GitHub issue tracker?  
>> https://github.com/grpc/grpc/issues/new
>>
>> On Thursday, January 24, 2019 at 7:37:05 AM UTC-8, Gautham B A wrote:
>>>
>>> Hi all,
>>>
>>> I just cloned and built gRPC 
>>> (SHA 9ed8734efb9b1b2cd892942c2c6dd57e903ce719). I'm getting SEGFAULT when I 
>>> try to run greeter_client in C++. It SEGFAULTs when the RPC call is made -
>>>
>>> Status status = stub_->SayHello(&context, request, &reply);
>>>
>>> Here's how I'm building greeter_client -
>>>
>>> cmake_minimum_required(VERSION 3.13)
>>> project(HelloWorld)
>>>
>>> set(CMAKE_CXX_STANDARD 17)
>>>
>>> set(GRPC_BUILD_DIR
>>> /Users/gautham/projects/github/grpc)
>>>
>>> set(LIB_GRPC
>>> ${GRPC_BUILD_DIR}/libs/opt/libgpr.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libaddress_sorting.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_cronet.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_error_details.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_reflection.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc++_unsecure.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc_cronet.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpc_unsecure.dylib
>>> ${GRPC_BUILD_DIR}/libs/opt/libgrpcpp_channelz.dylib
>>> )
>>>
>>> set(LIB_PROTOBUF
>>> 
>>> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf-lite.17.dylib
>>> 
>>> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf-lite.dylib
>>> 
>>> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf.17.dylib
>>> 
>>> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotobuf.dylib
>>> 
>>> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotoc.17.dylib
>>> ${GRPC_BUILD_DIR}/third_party/protobuf/src/.libs/libprotoc.dylib
>>> )
>>>
>>> include_directories(
>>> ${GRPC_BUILD_DIR}/include
>>> )
>>>
>>> add_executable(greeter_client
>>> greeter_client.cc
>>> helloworld.grpc.pb.cc
>>> helloworld.pb.cc
>>> )
>>>
>>> target_link_libraries(greeter_client
>>> ${LIB_GRPC}
>>> ${LIB_PROTOBUF}
>>> )
>>>
>>> Here's the coredump -
>>> * thread #1, stop reason = signal SIGSTOP
>>>   * frame #0: 0x7fffa253a19e libsystem_kernel.dylib`poll + 10
>>> frame #1: 0x00010e6c01a6 
>>> libgrpc.dylib`pollset_work(pollset=, 
>>> worker_hdl=0x7fff519dded8, deadline=) at 
>>> ev_poll_posix.cc:1063 [opt]
>>> frame #2: 0x00010e6e5999 
>>> libgrpc.dylib`cq_pluck(cq=0x7fad6240ae40, tag=0x7fff519de200, 
>>> deadline=, reserved=) at completion_queue.cc:1282 
>>> [opt]
>>> frame #3: 0x00010e22c4d1 
>>> greeter_client`grpc::CompletionQueue::Pluck(grpc::internal::CompletionQueueTag*)
>>>  
>>> + 161
>>> frame #4: 0x00010e22b810 
>>> greeter_client`grpc::internal::BlockingUnaryCallImpl<helloworld::HelloRequest, 
>>> helloworld::HelloReply>::BlockingUnaryCallImpl(grpc::ChannelInterface*, 
>>> grpc::internal::RpcMethod const&, grpc::ClientContext*, 
>>> helloworld::HelloRequest const&, helloworld::HelloReply*) + 704
>>> frame #5: 0x00010e22b4ed 
>>> greeter_client`grpc::internal::BlockingUnaryCallImpl<helloworld::HelloRequest, 
>>> helloworld::HelloReply>::BlockingUnaryCallImpl(grpc::ChannelInterface*, 
>>> grpc::internal::RpcMethod const&, grpc::ClientContext*, 
>>> helloworld::HelloRequest const&, helloworld::HelloReply*) + 61
>>> frame #6: 0x00010e228921 greeter_client`grpc::Status 
>>> grpc::internal::BlockingUnaryCall<helloworld::HelloRequest, 
>>> helloworld::HelloReply>(grpc::ChannelInterface*, grpc::internal::RpcMethod 
>>> const&, grpc::ClientContext*, helloworld::HelloRequest const&, 
>>> helloworld::HelloReply*) + 81
>>> frame #7: 0x00010e2288c5 
>>> greeter_client`helloworld::Greeter::Stub::SayHello(grpc::ClientContext*, 
>>> helloworld::HelloRequest const&, helloworld::HelloReply*) + 85
>>> frame #8: 0x00010e226ecb 
>>> greeter_client`GreeterClient::SayHello(std::__1::basic_string<char, 
>>> std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 235
>>> frame #9: 0x00010e226c05 greeter_client`main + 469
>>> frame #10: 0x7fffa240a235 libdyld.dylib`start + 1
>>> frame #11: 0x7fffa240a235 libdyld.dylib`start + 1
>>>
>>> I'm using macOS Sierra 10.12.6
>>>
>>> Compiler -
>>> clang
>>> Apple LLVM version 9.0.0 (clang-900.0.39.2)
>>> Target: x86_64-apple-darwin16.7.0
>>> Thread model: posix
>>>
>>> Can anyone please help me?
>>>
>>> Thanks,
>>> --Gautham
>>>
>>>


Re: [grpc-io] PHP and Python client fails SSL connection

2019-02-10 Thread 'Srini Polavarapu' via grpc.io
You are very likely running into this issue. The fix will be available in 
gRPC release 1.19.0. You can try the nightly packages 
from https://packages.grpc.io/ or wait for the 1.19 RC coming out early next 
week.
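
Independent of that bug, the workaround Stanley mentions below is passed as channel options. A sketch of just the option tuples (key names are the standard gRPC channel-arg names; "cert-common-name" is a placeholder for the certificate's actual CN):

```python
# Channel options that make hostname verification match the cert's CN.
# "cert-common-name" stands in for the real common name of the cert.
options = (
    ("grpc.ssl_target_name_override", "cert-common-name"),
    ("grpc.default_authority", "cert-common-name"),
)

# These would be passed as grpc.secure_channel(target, creds, options=options).
print(dict(options))
```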

On Wednesday, February 6, 2019 at 10:52:41 AM UTC-8, jis...@wepay.com wrote:
>
> Yes, we have tried to use that option but does not change anything. Here 
> are the grpc version we are using.
>
> For PHP we are using the packages php56-php-pecl-grpc (version 1.17.0) and 
> php56-php-pecl-protobuf (version 3.6.1) and openssl (version 1.0.2k-fips). 
> The Java dropwizard-grpc version for the server is 1.1.3-1. The Java 
> grpc-netty, grpc-protobuf, and grpc-stub versions for the Java client is 
> 1.13.1.
>
> On Tuesday, February 5, 2019 at 6:03:02 PM UTC-8, Stanley Cheung wrote:
>>
>> Did you try supplying the "grpc.ssl_target_name_override" key to the 
>> options?
>>
>> On Tue, Feb 5, 2019 at 4:01 PM jisooh via grpc.io <
>> grp...@googlegroups.com> wrote:
>>
>>> Hello,
>>>
>>>
>>> We are currently facing an issue with trying to connect our PHP gRPC 
>>> client with SSL to our Java gRPC server. The gRPC service we are trying to 
>>> connect to is running on a service mesh (linkerd/namerd), and the call 
>>> first hits a linkerd instance that routes to the service.
>>>
>>>
>>> When we run a Java client using the trusted certificate, it is able to 
>>> connect to the server; however, with a Python and PHP client, the SSL 
>>> connection fails even with the same cert.
>>>
>>>
>>> Java client code:
>>>
>>>
>>> ManagedChannel channel = NettyChannelBuilder.forAddress(host, port) 
>>> .overrideAuthority(‘cert-
>>> common-name’) 
>>> .sslContext(GrpcSslContexts.
>>> forClient().trustManager(new File(‘path/to/cert’)).build()) 
>>> .build();
>>>
>>>
>>>
>>> Python code:
>>>
>>>
>>> credentials = grpc.ssl_channel_credentials(open('path/to/cert', 'rb').read())
>>> channel = grpc.secure_channel(host + ':' + str(port), credentials, options=((
>>> 'grpc.default_authority', 'cert-common-name',),))
>>>
>>>
>>>
>>> PHP code:
>>>
>>>
>>>
>>> $channel_credentials = \Grpc\ChannelCredentials::createSsl(
>>> file_get_contents(‘path/to/cert’));
>>> $channel = new \Grpc\Channel($hostname, 
>>> [ 
>>> 'grpc_target_persist_bound' => 2, 
>>> 'grpc.default_authority' => ‘cert-common-name’, 
>>> 'credentials' => $channel_credentials
>>> ]);
>>>
>>>
>>>
>>> We are interested in fixing the problem for PHP at the moment. Our PHP 
>>> client runs in a CentOS 7 docker container with nginx + php-fpm.
>>>
>>>
>>> We have tried to make the OS trust the certificate by using 
>>> update-ca-trust. Running *openssl s_client -connect host:port* returns:
>>>

 verify error:num=2:unable to get issuer certificate
>>>
>>>
>>> We receive the following error when calling the server with the created 
>>> client for PHP:
>>>
>>>
>>> ssl_transport_security.cc:1229] Handshake failed with fatal error 
 SSL_ERROR_SSL: error:107d:SSL 
 routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
>>>
>>>
>>> With the gRPC logs, we can see that the connection fails when it tries 
>>> to call the security handshake.
>>>
>>>
>>> We are not sure why the Java client is able to connect to the server 
>>> while the PHP and Python clients cannot with the same cert.
>>>
>>>
>>> Has anyone ran into these issues before? It would be helpful if anyone 
>>> has some information on this as this is currently a high priority blocker 
>>> for us.
>>>
>>>
>>> Thank you.
>>>
>>>
>>



[grpc-io] Re: Load balancing while using gRPC

2019-02-10 Thread 'Srini Polavarapu' via grpc.io
Since you want to use an in-line load balancer, take a look at the NGINX proxy 
, which supports gRPC. In 
such a topology, the DNS points to your LB IP, which gRPC clients connect 
to. The LB will then load balance to the backends. The gRPC client is not aware of how 
many backends are present since it only talks to the LB in the middle. 
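
A minimal sketch of that topology as an NGINX config fragment, using the stock grpc_pass directive (available in NGINX 1.13.10+); the backend addresses and port are hypothetical:

```nginx
# Hypothetical in-line LB: clients dial the LB's IP on :50051 and
# NGINX balances gRPC (HTTP/2) calls across the two backends.
http {
    upstream grpc_backends {
        server 10.0.0.11:50051;
        server 10.0.0.12:50051;
    }
    server {
        listen 50051 http2;
        location / {
            grpc_pass grpc://grpc_backends;
        }
    }
}
```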


On Wednesday, February 6, 2019 at 2:50:06 PM UTC-8, ankitpa...@gmail.com 
wrote:
>
> Hello 
>
> I am exploring Load balancing  while using gRPC . So far i have gone 
> through python quick start guide (hello world example); and it was easy to 
> follow example. I have the same example running on two of my servers. 
>
> Now I am in search for similar quick start with load balancing.  I went 
> over https://grpc.io/blog/loadbalancing and similar pages which made 
> sense to me but  at the same time i felt overwhelmed and cant figure out 
> how and where to start. 
>
> I want to start exercising simple code just like python hello world with 
> load balancing enabled (when client say "hello"; load balancer decides 
> which server to forward the request and respective server responds with 
> "world" )
>
> if you are aware of any such tutorials or reference links please help me 
> out
>
> thanks
> Ankit 
>



[grpc-io] Re: CERTIFICATE_VERIFY_FAILED in OpenSuse leap 15.0

2019-01-30 Thread 'Srini Polavarapu' via grpc.io
The root CA cert used by the Speech library is bundled with gRPC. To 
point to your own root CA cert, you can try setting the env variable 
"GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/your/path/to/CAcert"
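
In Python, that variable can also be set in-process, as long as it happens before the first channel is created; a sketch (the path is a placeholder, as above):

```python
import os

# Must be set before the first gRPC channel is created, because the
# roots file is read when the library initializes its SSL stack.
os.environ["GRPC_DEFAULT_SSL_ROOTS_FILE_PATH"] = "/your/path/to/CAcert"

print(os.environ["GRPC_DEFAULT_SSL_ROOTS_FILE_PATH"])
```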

On Monday, January 28, 2019 at 3:25:23 AM UTC-8, rohan@matrixcomsec.com 
wrote:
>
> Hi all,
> I am using the openSuse leap 15.0 to develop the Google speech recognition 
> program. I am using the cpp-doc-samples at 
> https://github.com/GoogleCloudPlatform/cpp-docs-samples
> as the starting point. I am trying to make the tests as `make run_tests` 
> but I am getting stuck at the SSL handshake.
>
> The error shown is: 
> E0128 15:20:51.1915766087156 ssl_transport_security.cc:1233] Handshake 
> failed with fatal error SSL_ERROR_SSL: error:107d:SSL routines:
> OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
>
> I have checked that my browser is working properly with *https://* pages. 
> I would like to mention that my network is monitored and handled by 
> security framework which comes with its SSL (CA) certificates.
>
> I am guessing that the security framework is the cause of above error. I 
> am not sure which certificate to update though. I am assuming that the 
> Google cpp sample mentioned above would be pointing to some certificate to 
> do the SSL handshake, but is not getting it. so I will have to manually 
> give those CA certificates on the location or may be change the location to 
> point on these certificates.
>
> Can someone help me on how to do that. (its OK if you have nothing to 
> refer regarding OpenSuse) I am looking for mainly the File/ certificate 
> that the program needs and where to find it.
>
> Regards to all, Thank you
>



[grpc-io] Re: gRPC Python DNS Resolution

2019-01-25 Thread 'Srini Polavarapu' via grpc.io
From your description it looks like you are destroying all the old servers 
and bringing up a completely new set of servers with new IPs. The gRPC 
client is still seeing old IPs in the cached DNS, none of which are 
available. It will try to connect to these unavailable IPs until the 
deadline is reached. 
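
The caching behavior described above can be observed at the OS level with the standard library alone; a sketch, assuming localhost resolves on the machine:

```python
import socket

# Each lookup goes through the OS resolver, which may serve cached
# entries until the record's TTL expires -- this stale view is what the
# gRPC client sees right after a redeploy swaps out all the backend IPs.
infos = socket.getaddrinfo("localhost", 50051, proto=socket.IPPROTO_TCP)
ips = sorted({info[4][0] for info in infos})
print(ips)
```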

On Thursday, January 24, 2019 at 10:30:02 AM UTC-8, as...@brilliant.tech 
wrote:
>
> Sure, I understand that part. But what I didn't understand was why I 
> continued to get DEADLINE_EXCEEDED errors after the UNAVAILABLE errors? I 
> would have though that since the old instances were terminated, I wouldn't 
> be able to even maintain a connection to the old instances, so 
> theoretically I would keep trying to reconnect and eventually open up a 
> connection to the new instance.
>
> On Wednesday, January 23, 2019 at 11:29:35 AM UTC-8, Srini Polavarapu 
> wrote:
>>
>> That's right. When a subchannel goes down, the channel re-resolves DNS in 
>> round-robin LB. Depending on your DNS TTL, the OS may still be returning 
>> cached DNS entry which might still contain the server IP that went down. 
>> Regardless, depending on DNS updates will not fully meet your LB 
>> requirements. This is because gRPC client does not periodically re-resolve 
>> DNS. This means when new backends are added, gRPC client will not know 
>> about those. See this  and 
>> this .
>>
>> On Sunday, January 20, 2019 at 9:27:49 PM UTC-8, as...@brilliant.tech 
>> wrote:
>>>
>>> I'm trying to set up a python gRPC simple client and server, where the 
>>> client uses round robin load balancing against a single DNS record, where I 
>>> have multiple servers (instances) in the DNS record.
>>>
>>> In the beginning, I'm able to connect and issue queries fine, but when I 
>>> try a re-deploy of my servers, I get some weird behavior that I was hoping 
>>> would be resolved automatically by the client library. In my re-deploy, I 
>>> first bring up new servers, set the DNS record to the IPs of the new 
>>> servers, and then destroy the old servers. Everything seems to work until I 
>>> destroy the old servers, at which point I get a couple of 
>>> UNAVAILABLE_ERRORs followed by DEADLINE_EXCEEDED until I kill the client. 
>>> From what I understand, when the sub-channels go down (i.e. the server 
>>> instances are killed), the channel should re-resolve the DNS record and 
>>> attempt to re-connect to the new instances. Am I interpreting this 
>>> incorrectly? Is there some channel and/or server option I need to set in 
>>> order for this to work?
>>>
>>> Sample client below:
>>>
>>>   channel = grpc.insecure_channel("localhost:1", 
>>> options=(("grpc.lb_policy_name", "round_robin"),))
>>>   fut = grpc.channel_ready_future(channel)
>>>   fut.result()
>>>   print("done waiting")
>>>   stub = test_pb2_grpc.TestStub(channel)
>>>   while True:
>>> try:
>>>   print(stub.Test(request, timeout=5))
>>> except grpc.RpcError as e:
>>>   print("{} {}".format(time.time(), e))
>>> time.sleep(0.5)
>>>
>>>



[grpc-io] Re: Calling gRPC Client from an External Django Application

2019-01-16 Thread 'Srini Polavarapu' via grpc.io
Are you able to use the examples shown here 
 to write an 
independent gRPC server?

On Tuesday, January 15, 2019 at 8:58:46 AM UTC-8, asadh...@gmail.com wrote:
>
> I am following the post with link:
>
> https://blog.codeship.com/using-grpc-in-python/
>
> Is there a way to call this gRPC client from an external Django/Flask 
> project? The example shows a Flask app that's integrated with the gRPC code 
> in the same project but I wish to decouple these 2 and deploy them 
> independently. Any help would be much appreciated. Thanks.
>



[grpc-io] Re: Unable to connect to remote grpc server using InsecureChannelCredentials from local client

2019-01-08 Thread 'Srini Polavarapu' via grpc.io
You may need to enable HTTP/2 in the NGINX config. gRPC uses HTTP/2. The 
proxy may be downgrading to HTTP/1.1, which a gRPC server doesn't support.
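
A sketch of the relevant NGINX directives, assuming a gRPC-aware NGINX (1.13.10+); the backend address is hypothetical:

```nginx
server {
    # "http2" on the listen directive is what prevents the HTTP/1.1 downgrade.
    listen 50051 http2;
    location / {
        # Forward gRPC traffic to the hypothetical backend gRPC server.
        grpc_pass grpc://localhost:50052;
    }
}
```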

On Wednesday, December 26, 2018 at 5:38:11 PM UTC-8, Gopalakrishna Hegde 
wrote:
>
> I tried that with no success. I experimented a bit and noticed that
> 1. If I do not install nginx on that machine and run the grpc server then 
> everything works fine. 
> 2. If I install nginx then the grpc server is not reachable. Even if I 
> stop and disable nginx service using systemctl grpc server is not reachable.
>
> Is this something to do with Nginx? Latest Nginx supports forwarding grpc 
> requests using server block configuration but I am yet to try that option.  
> Trying to get grpc server working standalone.
>
>
> On Wednesday, 26 December 2018 12:18:35 UTC+8, Gopalakrishna Hegde wrote:
>>
>> Hi,
>>   I have a grpc server running on Digital Ocean Droplet which has 
>> a public IP address. The grpc server is using Nodejs and is running on 
>> "localhost:50051". I am running C++ client on my local PC and 
>> using InsecureChannelCredentials() to create a channel to the remote grpc 
>> server running on the droplet. However, the client is not able to connect 
>> to the server and always gets Status Code 14 with "Connect failed" or 
>> "Socket closed" message. If both server and client run on my local PC then 
>> everything works fine. What could be the issue? Can't the client connect to 
>> remote server using InsecureChannelCredentials() as credentials? Really 
>> appreciate any help. Below is the code block for server and client.
>>
>> --- Server Side Nodejs code running on Digital ocean 
>> droplet--
>>var server = new grpc.Server();
>>   const service_handlers = {
>> Hello: HelloService,
>>   };
>>
>>
>>   server.addService(service.TestServiceService, service_handlers);
>>
>>   const server_uri = 'localhost:' + server_port;
>>   console.log(`Creating grpc server and binding to ${server_uri}`);
>>   var ret = server.bind(server_uri, 
>> grpc.ServerCredentials.createInsecure());
>>   console.log(ret);
>>
>> -
>>
>> - Client side C++ code --
>> auto creds = grpc::InsecureChannelCredentials();
>> std::string server_addr_ = "<public IP of droplet>:50051";
>> stub_ = TestService::NewStub(
>> grpc::CreateChannel(server_addr_, creds));
>> -
>>
>> Thanks, Gopal
>>
>>
>>



Re: [grpc-io] How to solve the problem reported by Mergeable when open a PR to GRPC

2018-12-24 Thread 'Srini Polavarapu' via grpc.io
You can ignore this error. Once the PR is accepted, a maintainer will apply
appropriate labels to pass this check. The labels mentioned here are needed
to figure out whether the PR needs to be noted in the release notes.

On Sun, Dec 23, 2018 at 11:30 PM Tao Tse  wrote:

> Mergeable has found the following failed checks
>
>- ((Please include release note: yes *AND* Please include a language
>label) *OR* Please include release note: no)
>
> ---
>
> As the notice above, I have no idea about this description.
> Would someone give some guidelines to solve this problem?
>
> Thanks!
>
>



[grpc-io] Re: GRPC with istio for public clients and internal applications

2018-12-02 Thread 'Srini Polavarapu' via grpc.io
gRPC clients have only round-robin and pick-first built-in LB policies 
available to them. You are probably better off using Istio ingress LB to 
share LB policy unless you want to run your own gRPCLB service and 
implement your LB policy in there.

On Wednesday, November 14, 2018 at 8:35:36 PM UTC-8, Isuru Samaraweera 
wrote:
>
> Hi,
> I am going to expose gRPC services to public clients using Istio ingress 
> and proxy load balancing.
> In addition to public grpc clients I want internal applications to use the 
> same grpc cluster with a common load balancing policy for both public 
> clients and internal applications.
>
> Should the internal applications use Istio ingress to achieve a shared load 
> balancing policy, or can it be done via a separate client-side load 
> balancing mechanism only for internal apps? What is the recommended approach?
>
> Thanks,
> Isuru
>



Re: [grpc-io] Re: Does gRPC use only http2? tcpdump from a particular client does not show it as http2

2018-11-29 Thread 'Srini Polavarapu' via grpc.io

>
>
> What does gRPC rely on for HTTP/2 capability? (any tool in the OS 
> environment, or is HTTP/2 support built into gRPC?)
>

gRPC Python has a built-in HTTP/2 stack. 
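
The negotiation Josh describes happens via TLS protocol negotiation; a stdlib sketch of how a client advertises "h2" via ALPN (the successor to NPN), with no network I/O:

```python
import ssl

# gRPC's built-in HTTP/2 stack advertises "h2" during the TLS handshake;
# a server that also speaks HTTP/2 selects it, otherwise the connection
# falls back to whatever else was offered (gRPC offers nothing else).
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2"])
print(type(ctx).__name__)
```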


On Thursday, November 29, 2018 at 10:13:27 AM UTC-8, Josh Humphries wrote:
>
> The main gRPC libraries *only* use HTTP/2. As you saw, they negotiate the 
> same protocol during NPN step of TLS handshake: "h2". It is more likely 
> that whatever analysis tool you used in the first case did not recognize 
> "h2" as the HTTP/2 protocol, so treated it as an unknown application 
> protocol.
>
> 
> *Josh Humphries*
> jh...@bluegosling.com 
>
>
> On Thu, Nov 29, 2018 at 1:42 AM > wrote:
>
>> Thanks for the prompt response.
>> We use Python grpcio 1.0.0. 
>> No, as I mentioned, the version will not be updated for now, as the network 
>> device I am talking about is already deployed in customer networks.
>> We have to make our application work with this for now.
>>
>> My question is more about: 
>> What does gRPC rely on for HTTP/2 capability? (any tool in the OS 
>> environment, or is HTTP/2 support built into gRPC?)
>> Why did the same version in another Ubuntu VM use HTTP/2, whereas in this 
>> specific env it did not?
>>
>>
>



Re: [grpc-io] How to set source ip address in grpc client application

2018-11-26 Thread 'Srini Polavarapu' via grpc.io
Specifying a source address is not supported in gRPC-Core, which Python 
wraps. Please file an issue on GitHub 
and give details on your use case. I see that a similar issue 
for C++ was created recently. You could just add your comments there.
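
Nathaniel's suggestion quoted below amounts to carrying the desired source address as custom metadata rather than binding the socket; a sketch with a hypothetical key name and a documentation IP address:

```python
# The key "x-client-source-ip" is made up for illustration; gRPC metadata
# keys must be lowercase. 203.0.113.7 is a reserved documentation address.
metadata = (("x-client-source-ip", "203.0.113.7"),)

# A stub call would then pass it as stub.Method(request, metadata=metadata),
# and the server reads it from the ServicerContext's invocation metadata.
print(dict(metadata))
```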

On Thursday, November 15, 2018 at 11:05:23 AM UTC-8, Carl Mastrangelo wrote:
>
> I believe the original questions was for Python, not Java.   There 
> currently isn't a way to get the currently bound address in Java *except* 
> via the Channelz service.   The issue you found is a tracking issue for an 
> experimental feature to customize connection setup.  The use case is 
> narrowly scoped, so I don't think it's what you are looking for.  
>
> On Wednesday, November 14, 2018 at 11:32:35 PM UTC-8, xing...@gmail.com 
> wrote:
>>
>>
>> found this, the feature seems under developing . 
>> https://github.com/grpc/grpc-java/issues/4900
>>
>> On Thursday, November 15, 2018 at 2:58:16 PM UTC+8, xing...@gmail.com 
>> wrote:
>>>
>>> Hi, 
>>>
>>> I am facing the same problem, does there any solutions?
>>>
>>>
>>> Thanks 
>>> xcc
>>>
>>> On Friday, February 23, 2018 at 2:13:29 PM UTC+8, dekum...@gmail.com 
>>> wrote:

 Hi,

 Is there way in grpc to bind to source ip address. 
 In scenario of multiple physical ecmp interface to reach server it's 
 better to use loopback interface source ip.

 Thanks,
 Deepak

 On Wednesday, October 18, 2017 at 8:43:09 AM UTC-7, Nathaniel Manista 
 wrote:
>
> On Wed, Oct 18, 2017 at 12:05 AM, GVA Rao  wrote:
>
>> I would like my grpc client application to carry specified source ip 
>> in case client has multiple hops to reach grpc server.
>> grpc insecure_channel rpc has only destination ip address i.e. 
>> server address field but not client source ip field.
>> Is there a way to set source ip address in grpc client application?  
>> If not in grpc is there way we can set source in python application and 
>> use insecure_channel 
>> as is?
>>
>
> This sounds like something that you would want to include in the 
> metadata you pass when invoking your RPCs.
> -Nathaniel
>




Re: [grpc-io] Re: python quick start "hello world" example fails

2018-11-19 Thread 'Srini Polavarapu' via grpc.io
Your connection issue is due to the proxy configuration on your machine. As 
you can see, gRPC is connecting to a proxy for localhost:
http_connect_handshaker.cc:300] Connecting to server localhost:50051 via 
HTTP proxy ipv4:10.19.8.225:912

Try setting http_proxy to empty (http_proxy=) or setting no_proxy=localhost.
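
In Python, that fix can be applied in-process before the channel is 
created; a sketch (the channel line is commented out since it is not part 
of the fix itself):

```python
import os

# gRPC Core consults the standard proxy environment variables when a
# channel is created, so adjust them before creating the channel: either
# clear the proxy entirely or exempt localhost via no_proxy.
os.environ.pop("http_proxy", None)
os.environ["no_proxy"] = "localhost,127.0.0.1"

# channel = grpc.insecure_channel("localhost:50051")  # created after the fix
```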

On Monday, November 12, 2018 at 1:49:22 PM UTC-8, ankitpa...@gmail.com 
wrote:
>
> Today I tried this example on a CentOS 7.5 machine and I had the same issue. 
>
> On Friday, November 9, 2018 at 3:54:50 PM UTC-8, Lidi Zheng wrote:
>
> Hi Ankit,
>
> Thanks for providing the trace log. I will look into it and update if I 
> find anything.
>
> Lidi Zheng
>
> On Fri, Nov 9, 2018 at 3:04 PM  wrote:
>
> here is server log; i killed server at the end by cntrl+c
>
> I1109 15:02:26.154260194   13636 ev_epoll1_linux.cc:116] grpc epoll 
> fd: 3
> D1109 15:02:26.154434159   13636 ev_posix.cc:169]Using polling 
> engine: epoll1
> D1109 15:02:26.154530558   13636 dns_resolver.cc:338]Using native 
> dns resolver
> I1109 15:02:26.155007186   13636 init.cc:153]
> grpc_init(void)
> I1109 15:02:26.155213454   13636 completion_queue.cc:474]
> grpc_completion_queue_create_internal(completion_type=0, polling_type=0)
> I1109 15:02:26.155388910   13636 init.cc:153]
> grpc_init(void)
> I1109 15:02:26.155498555   13636 server.cc:994]  
> grpc_server_create((nil), (nil))
> I1109 15:02:26.155608844   13636 server.cc:979]  
> grpc_server_register_completion_queue(server=0x10423b0, cq=0x10e4590, 
> reserved=(nil))
> I1109 15:02:26.156318834   13636 server_chttp2.cc:34]
> grpc_server_add_insecure_http2_port(server=0x10423b0, addr=[::]:50051)
> I1109 15:02:26.156495695   13636 socket_utils_common_posix.cc:310] 
> TCP_USER_TIMEOUT not supported for this platform
> I1109 15:02:26.156648253   13636 init.cc:153]
> grpc_init(void)
> I1109 15:02:26.156761086   13636 completion_queue.cc:474]
> grpc_completion_queue_create_internal(completion_type=0, polling_type=1)
> I1109 15:02:26.156907935   13636 server.cc:979]  
> grpc_server_register_completion_queue(server=0x10423b0, cq=0x10b5210, 
> reserved=(nil))
> I1109 15:02:26.157128476   13636 server.cc:1089]
>  grpc_server_start(server=0x10423b0)
> I1109 15:02:26.157323173   13636 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10b5210, deadline=gpr_timespec { tv_sec: 
> 1541804546, tv_nsec: 157310009, clock_type: 1 }, reserved=(nil))
> I1109 15:02:26.158580160   13636 init.cc:153]
> grpc_init(void)
> I1109 15:02:26.158879985   13636 init.cc:153]
> grpc_init(void)
> I1109 15:02:26.159027389   13636 call_details.cc:31]
>  grpc_call_details_init(cd=0x7fdfe3e076d0)
> I1109 15:02:26.159205197   13636 metadata_array.cc:29]  
>  grpc_metadata_array_init(array=0x7fdfe3dd52f0)
> I1109 15:02:26.159393998   13636 server.cc:1417]
>  grpc_server_request_call(server=0x10423b0, call=0x7fdfe3dc85e0, 
> details=0x7fdfe3e076d0, initial_metadata=0x7fdfe3dd52f0, 
> cq_bound_to_call=0x10e4590, cq_for_notification=0x10e4590, 
> tag=0x7fdfe3dd52c0)
> I1109 15:02:26.159928640   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804546, tv_nsec: 359917975, clock_type: 1 }, reserved=(nil))
> I1109 15:02:26.360656578   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804546, tv_nsec: 560638312, clock_type: 1 }, reserved=(nil))
> I1109 15:02:26.561583802   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804546, tv_nsec: 761562116, clock_type: 1 }, reserved=(nil))
> I1109 15:02:26.763345769   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804546, tv_nsec: 963336198, clock_type: 1 }, reserved=(nil))
> I1109 15:02:26.965018149   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804547, tv_nsec: 165008451, clock_type: 1 }, reserved=(nil))
> I1109 15:02:27.165804716   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804547, tv_nsec: 365797710, clock_type: 1 }, reserved=(nil))
> I1109 15:02:27.367464272   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804547, tv_nsec: 567454529, clock_type: 1 }, reserved=(nil))
> I1109 15:02:27.569251508   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804547, tv_nsec: 769241829, clock_type: 1 }, reserved=(nil))
> I1109 15:02:27.771057300   13641 completion_queue.cc:956]
> grpc_completion_queue_next(cq=0x10e4590, deadline=gpr_timespec { tv_sec: 
> 1541804547, tv_nsec: 

Re: [grpc-io] bidirectional communication on the same socket?

2018-11-03 Thread 'Srini Polavarapu' via grpc.io
I think the OP is asking whether the server can initiate a request to the 
client after the client (behind a firewall) has established a gRPC 
connection to the server. The answer is no. See the discussion here: 
https://github.com/grpc/grpc/issues/14101. It is possible in gRPC-Go in a 
convoluted way; see https://github.com/grpc/grpc-go/issues/484.
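
The "convoluted way" referenced above boils down to keeping a long-lived 
bidirectional stream open and treating server-sent messages as requests. A 
minimal sketch of the client-side plumbing only (no actual gRPC calls; a 
real client would pass request_iterator() to a bidi-streaming stub):

```python
import queue

# The client feeds outgoing messages through a queue-backed iterator; each
# message the server writes on its response stream is treated as an
# incoming "request", and the client's reply goes back into the queue.
send_queue = queue.Queue()

def request_iterator():
    while True:
        msg = send_queue.get()
        if msg is None:  # sentinel: close our half of the stream
            return
        yield msg

replies = []
def on_server_message(msg):
    # Simulate handling a server-initiated "request" by enqueueing a reply.
    reply = f"ack:{msg}"
    send_queue.put(reply)
    replies.append(reply)

on_server_message("do-work")
send_queue.put(None)
sent = list(request_iterator())
```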

On Wednesday, October 31, 2018 at 8:09:24 AM UTC-7, robert engels wrote:
>
> It is my understanding - but I may be mistaken here - that a gRPC 
> “session” is unidirectional unless you use streaming (that being said, a 
> server can send unsolicited “responses” - which may be treated like a 
> request by the client - to a client).
>
> My continued confusion is that I believe the standard gRPC session is 
> transient - in that it may close the connection after each request - and it 
> may send a subsequent request to a completely different server, or on a new 
> connection. The only time this is not the case is if the session is in 
> streaming mode.
>
>
> On Oct 31, 2018, at 9:53 AM, dan.b...@huawei.com  wrote:
>
> Not necessarily for streaming. For sending requests and receiving 
> synchronous or asynchronous responses. (unless I completely misunderstand 
> what streaming means)
>  
>
> On Wednesday, October 31, 2018 at 4:49:21 PM UTC+2, robert engels wrote:
>>
>> If you mean using the ‘streaming’ protocol, you can look at 
>> github.com/robaho/keydbr which uses a bi-directional stream and the code 
>> is fairly simple.
>>
>> On Oct 31, 2018, at 9:35 AM, dan.b...@huawei.com wrote:
>>
>> Once establishing a connection from one service (behind a firewall) to 
>> another service, 
>> is it correct that gRPC can now use the established connection to 
>> initiate requests on either side?
>>
>> If there's a pointer to some example demonstrating such bidirectional 
>> communication that would be great.
>>
>> Thanks,
>> Dan
>>



[grpc-io] Re: grpc multithreading

2018-10-26 Thread 'Srini Polavarapu' via grpc.io
Replied on the other thread you opened.

On Wednesday, October 24, 2018 at 10:06:15 PM UTC-7, rob.vai...@gmail.com 
wrote:
>
> Hi,
>
> How to write the program of grpc multithreading in python? please share 
> some references with me.
> Thank you
>



Re: [grpc-io] Re: python support multi-thread

2018-10-26 Thread 'Srini Polavarapu' via grpc.io
On the server you can provide a thread pool (from concurrent.futures) like this: 
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))

This will spawn a maximum of 10 concurrent threads to handle requests in 
parallel. See this example: 
https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_server.py

On the client side, you can create one channel and pass it to your worker 
threads to create stubs on that channel. Worker threads can then do 
concurrent RPCs on their own stubs while using the same channel 
(connection) to the server.
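
A sketch of that client-side pattern is below. The channel and stub classes 
here are stand-ins so the snippet is self-contained; with real gRPC you 
would create one grpc.insecure_channel and build one generated stub per 
worker thread on top of it:

```python
from concurrent import futures
import threading

class FakeChannel:
    """Stand-in for a single shared grpc.Channel (one connection)."""
    def __init__(self):
        self.lock = threading.Lock()
        self.calls = 0

class FakeStub:
    """Stand-in for a per-thread stub created on the shared channel."""
    def __init__(self, channel):
        self.channel = channel
    def SayHello(self, name):
        with self.channel.lock:
            self.channel.calls += 1  # count RPCs multiplexed on the channel
        return f"Hello, {name}!"

channel = FakeChannel()          # create once, share everywhere

def worker(name):
    stub = FakeStub(channel)     # each thread gets its own stub
    return stub.SayHello(name)

with futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(worker, ["a", "b", "c"]))
```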


On Friday, October 26, 2018 at 11:27:51 AM UTC-7, Carl Mastrangelo wrote:
>
> +Lidi
>
> On Fri, Oct 26, 2018 at 11:23 AM Vaibhav Bedi  > wrote:
>
>> Can you share some reference?
>>
>> On Fri, Oct 26, 2018 at 11:51 PM Vaibhav Bedi > > wrote:
>>
>>> My problem is
>>> I want to write the grpc python multithread code for the client-server 
>>> application, both client and server should use
>>> threads in order to handle multi-requests at the same time. The client 
>>> is simulating a gateway where it uploads data to the server. This data 
>>> should be an array of objects.
>>> The server is receiving these data and printing them in a multi-threaded 
>>> way.
>>>
>>> Thank you
>>>
>>> On Fri, Oct 26, 2018 at 11:46 PM 'Carl Mastrangelo' via grpc.io <
>>> grp...@googlegroups.com > wrote:
>>>
 Yes it does.  If you provide more information about what you want to 
 do, we can give a better answer.

 On Thursday, October 25, 2018 at 9:11:49 AM UTC-7, rob.vai...@gmail.com 
 wrote:
>
> hi
>  
> I want to know Is grpc python support multi-thread?
>

>>>
>>>
>>> -- 
>>> Sincerely,
>>> Vaibhav Bedi
>>> Email ID- rob.vai...@gmail.com 
>>> Contact Number-8950597710
>>> Website-http://www.vaibhavbedi.com/
>>>
>>
>>
>>
>



[grpc-io] Re: fork() support in Python!

2018-10-17 Thread 'Srini Polavarapu' via grpc.io
Hi Yonatan,

Your understanding is correct. The design takes care of not affecting RPCs 
in parent process due to FD shutdown in the child's post-fork handlers. 
Child process will recreate its own connection when a Python object (with a 
gRPC stub inside) inherited from the parent needs to be used. This should 
handle the case you described in your post. That was one of the use cases 
in our mind when deciding how to solve the fork issue. Please give it a try 
and let us know if you see any issues.
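
A minimal sketch of the pattern described above (POSIX-only; the 
child-side comments describe intended gRPC behavior per this thread, and 
the flag-before-import requirement is an assumption to verify against the 
release notes):

```python
import os

# Assumption: must be set before grpc is first imported for fork support
# to take effect.
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "1"

pid = os.fork()  # POSIX-only
if pid == 0:
    # Child: inherited stubs would lazily recreate their own connection on
    # first use; the parent's in-flight RPCs are unaffected.
    os._exit(0)

_, status = os.waitpid(pid, 0)
```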

-Srini 

On Wednesday, October 17, 2018 at 5:26:16 PM UTC-7, Yonatan Zunger wrote:
>
>
> Wow -- I just saw the notes for 16264, and that 1.15 now supports 
> fork() in Python. This is huge and great news!
>
> I just want to make sure I understand how this change works, and in 
> particular what the consequences of the shutdown of the core-level gRPC 
> resources in the child's post-fork handler means. The use case which is 
> (IMO) most important is where you create some kind of Python object which 
> has a gRPC stub inside it (e.g., a client object meant to talk to servers), 
> then fork() (often through use of the core Python multiprocessing library), 
> and use that object from within the child process as well. (This is 
> important because the multiprocessing library is the only built-in 
> parallelization mechanism that doesn't suffer from serialization due to the 
> GIL) The overhead cost, IIUC, would be essentially a restart of the core 
> resources, which is roughly equivalent to the cost of closing and reopening 
> the channel, but *not* of having to reboot all the wrapping objects, like 
> bigtable clients or whatever. (Which are notoriously slow to start up 
> because they also want to look up all sorts of metadata from the server 
> when they boot)
>
> I could imagine several gotchas here: for example, that the cancellation 
> of in-flight RPC's by the child during the reboot would also affect RPC's 
> in flight due to other threads, meaning that the client object has to be 
> entirely idle during the fork process.
>
> Am I understanding the new change correctly? What are the intended use 
> cases that it's meant to unlock?
>
> Yonatan
>



[grpc-io] Re: How to change DSCP value of gRPC packets

2018-10-16 Thread 'Srini Polavarapu' via grpc.io
Hi,

There is no API in gRPC Python to set the DSCP value. Please open an issue 
to track this, although it would be low priority for us.
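
For reference, outside of gRPC the DSCP value is set on a socket via the 
IP_TOS option (DSCP occupies the upper six bits of the TOS byte). A 
raw-socket illustration, since gRPC Python exposes no hook for it:

```python
import socket

# DSCP lives in the top 6 bits of the IP TOS byte, so shift left by 2.
# Plain-socket illustration only; this is not a gRPC API.
DSCP_EF = 46  # Expedited Forwarding

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```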

Thanks.

On Monday, October 15, 2018 at 1:52:25 PM UTC-7, sara...@arista.com wrote:
>
> Hello,
>
> I am writing a Python gRPC client. How can I set DSCP value of the IP 
> packets generated by the gRPC client?
>
> Thanks,
> Sarah
>



[grpc-io] Re: Python channel option documentation. (or at least example)

2018-10-16 Thread 'Srini Polavarapu' via grpc.io
Hi,

Thanks for bringing this up. This is definitely an area where we need 
improvement. Our first order of priority is to update the reference guide 
with the latest release and clean up beta APIs. We need to fix some broken 
scripts on our end; there is a tracking issue for this.

For channel args, the link you mentioned is a good reference for the 
various args. Python passes these args 
to the Core transparently. It'll be useful to add links to such docs in 
Python documentation. Feel free to submit PRs. Once the documentation is 
fixed, we can continue to improve the description of various APIs.

We would also like to add more examples, especially for 
common channel and call options like LB, timeout, keepalive etc. Feel free 
to submit PRs for these too.
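
As a concrete illustration, channel args are plain (name, value) tuples 
handed straight through to Core. The keys below are documented Core arg 
names; the values are illustrative, not recommendations:

```python
# Channel args are (key, value) tuples passed through to gRPC Core.
options = [
    ("grpc.lb_policy_name", "round_robin"),
    ("grpc.keepalive_time_ms", 30_000),
    ("grpc.max_receive_message_length", 16 * 1024 * 1024),
    ("grpc.enable_http_proxy", 0),
]

# channel = grpc.insecure_channel("localhost:50051", options=options)
```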


On Monday, October 15, 2018 at 9:32:05 PM UTC-7, ove...@gmail.com wrote:
>
> Hi,
>
> I use gRPC actively, especially gRPC python.
>
> I use several options to create channel.
> such as like 
>
>  grpc.insecure_channel(target="localhost:",
>   options=[("grpc.lb_policy_name", "round_robin"),
>...])
>
>
> But it is very hard to find which option is available and which value can 
> be assigned on it.
>
> I read several documents, which is even not sure up-to-date or correct one 
> to use it.
> such like
>
>- 
>
> https://github.com/grpc/grpc/tree/master/src/core/ext/filters/client_channel/lb_policy
>- https://github.com/grpc/grpc/blob/master/doc/load-balancing.md
>- https://github.com/grpc/grpc/blob/master/doc/service_config.md
>- 
>
> https://grpc.io/grpc/core/group__grpc__arg__keys.html#ga72c2b475e218ecfd36bb7d3551d0295b
>
>
> Is anyone know kind of clear and one shot document for usage of channel 
> option?
> If it does not exist, anybody know any plan for documentation or enhance 
> proposal?
> I could involve in the documentation if some correct sources.
>
> Thanks.
>



[grpc-io] Re: Is there a way to tell if Write failed because the msg is too big

2018-10-03 Thread 'Srini Polavarapu' via grpc.io
After the write fails, you have to call Finish() to get the status. See 
example here: https://grpc.io/docs/tutorials/async/helloasync-cpp.html


On Wednesday, September 26, 2018 at 12:17:05 PM UTC-7, dataman wrote:
>
>
> Thanks for the response! I am using the async Write. There is no status 
> returned in the call to Write itself. The Next call to the completion queue 
> has no status indication as well, except for the success/failure boolean 
> value.
>
>
> On Wednesday, September 26, 2018 at 2:05:44 PM UTC-4, Muxi Yan wrote:
>>
>> AFAIK the error detail should be included in the returned status of the 
>> call.
>>
>> On Thursday, September 20, 2018 at 7:22:49 AM UTC-7, dataman wrote:
>>>
>>>
>>> Hi all,
>>>
>>> I am working on an async gRPC client in C++. Due to server limitations 
>>> we need to cap the max send msg size in the client. When we try to send a 
>>> msg which is larger than the cap, the Write (async) fails, which is as 
>>> expected.
>>>
>>> Is there a way to know that the failure is due to the msg size and not 
>>> any other reason?
>>>
>>> Thanks!
>>>
>>



[grpc-io] Re: [gRPC-python] Any examples of using "fork" method of parallelism in gRPC-Python

2018-09-27 Thread 'Srini Polavarapu' via grpc.io
The fork support added in 1.15.0 in gRPC-Python is only for the client 
side. This means, on a gRPC-Python client, it is now possible to fork the 
process while an RPC is in progress. Typically, the multiprocessing module 
is used to distribute your work to child processes and get the results back. 
For example, you could be getting streamed responses from a server and 
handle the response in a child process without affecting the ongoing 
streaming RPC in the parent process. This was previously not supported. In 
this example, a datastore 
client forks using multiprocessing after it has created a gRPC connection 
and done an RPC. This didn't work prior to 1.15.0.
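
A sketch of distributing work to child processes with multiprocessing, as 
described above. The handle() function and payloads are placeholders; in a 
real client the child-side code could create its own channel and stub (the 
flag-before-import requirement is an assumption to verify):

```python
import multiprocessing as mp
import os

# Assumption: the flag must be set before grpc is first imported.
os.environ.setdefault("GRPC_ENABLE_FORK_SUPPORT", "1")

def handle(payload):
    # In a real client, this child-side code could create its own channel
    # and stub; work is distributed from the parent, not routed through it.
    return payload.upper()

# The "fork" start method matches the Linux scenario in the thread; since
# fork inherits memory rather than re-importing, no __main__ guard is
# needed for this snippet.
ctx = mp.get_context("fork")
with ctx.Pool(processes=2) as pool:
    results = pool.map(handle, ["a", "b"])
```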

On Wednesday, September 26, 2018 at 9:48:05 PM UTC-7, Nanda wrote:
>
> With the recent launch of grpc-python 1.15.0 version fork method has been 
> enabled with GRPC_ENABLE_FORK_SUPPORT=1 flag, to achieve parallelism. Is 
> there any tutorial/example explaining how to make use of it ? (We have been 
> using single thread approach until now, as performance was degrading with 
> multiple threads (owing to GIL). So desparately need the multi-process way 
> of achieving parallelism in grpc-python.) Wondering how the request routing 
> actually happens. From Master to the forked processes ?
>



Re: [grpc-io] Re: Are there any good example of grpc in the browser?

2018-09-23 Thread 'Srini Polavarapu' via grpc.io
Greg, a gRPC-web tutorial is in the works 
here https://github.com/grpc/grpc.github.io/pull/737. If you are interested 
please give your feedback in the PR. 

On Thursday, September 20, 2018 at 1:53:06 PM UTC-7, Spencer Fang wrote:
>
> Yes envoy proxy is an example of a supported proxy. gRPC on the web 
> browser uses the grpc-web protocol, which is different from the regular 
> grpc protocol. Here's an example of a config that makes envoy perform this 
> translation: 
> https://github.com/grpc/grpc-experiments/blob/master/grpc-zpages/docker/envoy/zprox.sh
>
>
>
> On Thu, Sep 20, 2018 at 9:27 AM Greg Keys  > wrote:
>
>> By proxy do you mean something like Envoy? I'm not opposed to using 
>> something like that, it's basically serves the same purpose of the router 
>> that I'm used to using from crossbar, so I'm able to reason about that 
>> fairly easily. 
>>
>> I'm also happy to hear that HTTP/2 works with TLS; everything we do is 
>> TLS, so that should be fine, so long as it multiplexes over a single TCP 
>> connection. The whole reason we're looking to move away from crossbar and 
>> websockets is the scaling issue.
>> We are currently dependent on the crossbar router which does not cluster 
>> or scale making it a single point of failure for us, but envoy appears to 
>> have scaling working quite nicely.
>>
>> I suppose what makes grpc-web so hard to reason about at the moment is 
>> the documentation; it's not very clear. The documentation lays out a simple 
>> Echo, EchoRequest, EchoResponse proto, but in the actual code they implement 
>> about 20 other methods (addLeft, addRight, etc.), so my mind immediately 
>> goes wtf.
>>
>> I guess I'll spend a little more time sorting through the code and try to 
>> come up with something simpler to digest, grpc-web examples are definitely 
>> lacking right now.
>>
>> On Wednesday, September 19, 2018 at 6:15:54 PM UTC-7, Carl Mastrangelo 
>> wrote:
>>>
>>> There are examples how to run in a browser, but they typically involve a 
>>> sidecar proxy.  Here is one example: 
>>> https://github.com/grpc/grpc-experiments/tree/master/grpc-zpages   The 
>>> full docs are here: https://github.com/grpc/grpc-web
>>>
>>> Browsers present two challenges to gRPC.  First, they only use HTTP/2 
>>> when using TLS or SSL, which a lot of websites don't.   Second, Browsers 
>>> don't expose the HTTP trailers that are needed to tell when the response is 
>>> done.  To get around these issues, we have gRPC-Web protocol, which 
>>> modifies the gRPC protocol slightly to be usable on HTTP/1.1.  The fetch() 
>>> API for Browsers was supposed to fix the latter problem, but it has not 
>>> been implemented by them, so we are kinda stuck with the work around until 
>>> they do.  
>>>
>>> Lastly, browsers use CORS when making RPCs across origin, which happens 
>>> when you serve your RPCs from a different port than you HTML.  This may 
>>> affect you depending on your setup.
>>>
>>>
>>> I guess all of this is to say that getting requests (or RPCs) to work in 
>>> the browser is much more complicated than it first appears, and 
>>> unfortunately we can't fix it for you.  The proxy solution, while more 
>>> complex, does solve a number of things you would have to otherwise do.
>>>
>>>
>>>
>>>
>>>
>>> On Wednesday, September 19, 2018 at 4:26:57 PM UTC-7, Greg Keys wrote:

 I was thrilled when I started looking at gRPC as an alternative to our 
 current implementation of websockets (crossbar.io) however that 
 enthusiasm is dwindling the more I look into it.

 Am I understanding this correctly that its primary strength is server-side 
 service-to-service communication? 

 My hope was to use it in the browser as well as service to service. But 
 from what I gather the browser implementation is still using http/1.1 and 
 it does not multiplex as a result and instead uses xhr

 I have not been able to find any good (simple) examples of gRPC in the 
 browser, I've found a couple but they are really complex to reason about, 
 compared to the server side examples which are
 typically really simple.

 Are there any good examples of gRPC in the browser? are there any 
 implementations in the browser that multiplex?


[grpc-io] gRPC-Core Release 1.15.0

2018-09-13 Thread 'Srini Polavarapu' via grpc.io
This is the 1.15.0 (glider) release announcement for gRPC-Core and the 
wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. Latest 
release notes are here.

Core
   
   - Document SSL portability and performance considerations. See 
   https://github.com/grpc/grpc/blob/master/doc/ssl-performance.md .
   - Simplify call arena size growth. (#16396 
   )
   - Make gRPC buildable with AIX and Solaris (no official support). (#15926 
   )
   - PF: Check connectivity state before watching. (#16306 
   )
   - Added system roots feature to load roots from OS trust store. (#16083 
   )
   - Fix c-ares compilation under windows (but doesn't yet enable windows 
   DNS queries), and then enables address sorting on Windows. (#16163 
   )
   - Fix re-resolution in pick first. (#16076 
   )
   - Allow error strings in final_info to propagate to filters on call 
   destruction. (#16104 )
   - Add resolver executor . (#16010 
   )
   - Data race fix for lockfree_event. (#16053 
   )
   - Channelz: Expose new Core API. (#16022 
   )

C++
   
   - cmake: disable assembly optimizations only when necessary. (#16415 
   )
   - C++ sync server: Return status RESOURCE_EXHAUSTED if no thread quota 
   available. (#16356 )
   - Use correct target name for gflags-config.cmake. (#16343 
   )
   - Make should generate pkg-config file for gpr as well. (#15295 
   )
   - Restrict the number of threads in C++ sync server. (#16217 
   )
   - Allow reset of connection backoff. (#16225 
   )

C#
   
   - Add experimental support for Xamarin.Android and Xamarin.iOS, added 
   Helloworld example for Xamarin. See 
   https://github.com/grpc/grpc/tree/master/src/csharp/experimental.
   - Add experimental support for Unity Android and iOS. See 
   https://github.com/grpc/grpc/tree/master/src/csharp/experimental.
   - Add server reflection tutorial. See 
   https://github.com/grpc/grpc/blob/master/doc/csharp/server_reflection.md.
   - Avoid deadlock while cancelling a call. (#16440 
   )
   - Subchannel sharing for secure channels now works as expected. (#16438 
   )
   - Allow dot in metadata keys. (#16444 
   )
   - Avoid shutdown crash on iOS. (#16308 
   )
   - Add script for creating a C# package for Unity. (#16208 
   )
   - Add Xamarin example. (#16194 )
   - Cleanup and update C# examples. (#16144 
   )
   - Grpc.Core: add support for x86 android emulator. (#16121 
   )
   - Xamarin iOS: Add libgrpc_csharp_ext.a for iOS into Grpc.Core nuget. (
   #16109 )
   - Xamarin support improvements . (#16099 
   )
   - Mark native callbacks with MonoPInvokeCallback. (#16094 
   )
   - Xamarin.Android: add support. (#15969 
   )

Objective-C
   
   - Make BoringSSL symbols private to gRPC in Obj-C so there is no 
   conflict when linking with OpenSSL. (#16358 
   )
   - Use environment variable to enable CFStream. (#16261 
   )
   - Surface error_string to ObjC users. (#16271 
   )
   - Fix GRPCCall refcounting issue. (#16213 
   )

Python
   
   - Added support for client-side fork on Linux and Mac by setting the 
   environment variable GRPC_ENABLE_FORK_SUPPORT=1. Applications may fork 
   with active RPCs, as long as no user threads are currently invoking gRPC 
   library methods. In-progress RPCs continue in the parent process, and the 
   child process may use gRPC by creating new channels. (#16264)
   - Improve PyPy compatibility. (#16364)
   - Fixed a segmentation fault caused by channel.close() when used with 
   connectivity-state subscriptions.
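The fork-support rule above (the parent keeps its in-progress RPCs; the child must create fresh channels) can be sketched as a per-process channel cache. Note that `make_fork_safe_channel_getter` and its `create_channel` argument are hypothetical helpers, not part of the gRPC API; the real requirement is simply setting GRPC_ENABLE_FORK_SUPPORT=1 and never reusing a parent channel in the child.

```python
import os

def make_fork_safe_channel_getter(create_channel):
    """Return a channel accessor that never reuses a channel across fork().

    create_channel: zero-arg callable producing a new channel object,
    e.g. a hypothetical lambda: grpc.insecure_channel("localhost:50051").
    """
    channels = {}  # pid -> channel

    def get_channel():
        pid = os.getpid()
        # After fork(), the child sees a new pid, so it builds a fresh
        # channel instead of inheriting the parent's.
        if pid not in channels:
            channels[pid] = create_channel()
        return channels[pid]

    return get_channel
```

Within one process the same channel is returned on every call; only a forked child triggers a new one.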

Re: [grpc-io] gRFC A16 Option for setting socket option TCP_USER_TIMEOUT

2018-08-23 Thread 'Srini Polavarapu' via grpc.io
In my opinion, gRPC should not set an artificial limit on the minimum value of 
TCP_USER_TIMEOUT. It is a well-known option that has been available in Linux for a long 
time. It should be a pass-through value for gRPC, as gRPC does not modify the 
kernel behavior w.r.t. this setting. There are applications (e.g. in 
graphics design) where huge amounts of data need to be transferred on a 
lossless fabric and sub-second network error detection is crucial. There 
are setups where retransmissions are extremely rare and treated as errors. 
Setting an arbitrary minimum value of 10 seconds doesn't seem right. 
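For reference, keepalive-related settings are passed to gRPC as channel arguments. The sketch below builds such an options list in Python; the argument names are the standard gRPC Core keepalive channel args, while the note that the keepalive timeout also feeds TCP_USER_TIMEOUT reflects the proposal under discussion, so treat that mapping as an assumption about the final design, not settled behavior.

```python
def keepalive_options(time_ms=10_000, timeout_ms=20_000):
    """Build channel options enabling HTTP/2 keepalive.

    Under the proposal discussed in this thread, on Linux kernels >= 2.6.37
    the keepalive timeout value would also be applied as the socket's
    TCP_USER_TIMEOUT. The values here are illustrative, not recommendations.
    """
    return [
        ("grpc.keepalive_time_ms", time_ms),        # interval between pings
        ("grpc.keepalive_timeout_ms", timeout_ms),  # wait for ping ack
        ("grpc.keepalive_permit_without_calls", 1), # ping even when idle
    ]
```

The list would be passed as, e.g., `grpc.insecure_channel(target, options=keepalive_options())`.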

On Thursday, August 23, 2018 at 10:53:16 AM UTC-7, yas...@google.com wrote:
>
> Also, 
> https://github.com/grpc/proposal/blob/master/A8-client-side-keepalive.md 
> specifies 
> that KEEPALIVE_TIME is restricted to 10 seconds, but doesn't seem to impose 
> a similar restriction on KEEPALIVE_TIMEOUT
>
> On Thursday, August 23, 2018 at 10:21:08 AM UTC-7, yas...@google.com 
> wrote:
>>
>> I like the idea of reusing the channel option KEEPALIVE_TIMEOUT for this, 
>> but I am hesitant for exactly the reason that you pointed out. It would 
>> give meaning to KEEPALIVE_TIMEOUT even if keepalive is disabled by setting 
>> KEEPALIVE_TIME to infinite. Also, given the fact that TCP_USER_TIMEOUT is 
>> not supported on all platforms, it would mean that KEEPALIVE_TIMEOUT 
>> would behave differently on different systems. On the other hand, if we 
>> isolate this as a separate parameter for only those platforms that support 
>> it, it allows us to explicitly say that it is only valid for linux kernel 
>> versions 2.6.37 and later.
>>
>> TCP_USER_TIMEOUT should not have any effect on retransmits, other than 
>> shutting down the connection (which of course might prevent a retransmit 
>> from taking place). I am currently of the opinion that if an application 
>> decides to change the timeout value from the default of 20 seconds, it is 
>> doing so knowingly and owns the responsibility of connections being dropped 
>> because of that.
>>
>> On Thursday, August 23, 2018 at 8:45:15 AM UTC-7, Eric Anderson wrote:
>>>
>>> Also, this stuff is pretty complex for users already. Adding *yet 
>>> another* configuration parameter just worsens that. I'd much rather 
>>> they just set one set of parameters and we make the most use of them as we 
>>> can on each platform.
>>>
>>> On Thu, Aug 23, 2018 at 8:43 AM Eric Anderson  wrote:
>>>
 I'd prefer we reused KEEPALIVE_TIMEOUT for this. This would change the 
 semantics slightly, as right now the value does nothing when KEEPALIVE_TIME 
 is infinite (the default). However, it makes a lot of sense to use the same 
 value for both entries because they have mostly-shared fate. The only 
 difference is that keepalive goes through the remote application whereas 
 TCP_USER_TIMEOUT can be triggered directly by the kernel. The kernel will 
 delay ACKs to combine them or to attach them to outgoing data. So when 
 sending a keepalive, I'd expect the application to influence how soon data 
 is ACK'ed, so they would frequently be transmitted in the same packet.

 Also, KEEPALIVE_TIMEOUT is limited to no lower than 10 seconds. That is 
 a very appropriate limit for TCP_USER_TIMEOUT as well, as application 
 authors will commonly think "oh, a second looks good!" or "Oh, 100ms is 
 plenty!". But that ignores retransmits and puts applications in a very 
 dangerous position that can cause network collapse when the network slows 
 down, even with datacenter networks.

 On Wed, Aug 22, 2018 at 1:23 PM yashkt via grpc.io <
 grp...@googlegroups.com> wrote:

> This is the discussion thread for the proposal at 
> https://github.com/grpc/proposal/pull/95
>
> The proposal is to provide an option to set the socket 
> TCP_USER_TIMEOUT for platforms running on Linux kernels 2.6.37 and later. 
>


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.

[grpc-io] Re: Support additional different platforms

2018-08-18 Thread 'Srini Polavarapu' via grpc.io
Added an assignee to the PR. This is low priority so please bear with the 
delay.

On Thursday, August 9, 2018 at 12:27:20 AM UTC-7, Jasper Siepkes wrote:
>
> Hi all!
>
> There is a PR for adding Solaris support in gRPC ( 
> https://github.com/grpc/grpc/pull/15926 ). Is there a chance for such 
> support to be merged? I fully understand the maintainers don't want to be 
> burdened with a platform they have no interest in. However we could add a 
> big fat warning that the platform itself is unsupported.
>
> Kind regards,
>
> Jasper
>



[grpc-io] Re: Fatal: grpc_call_start_batch returned 8

2018-08-18 Thread 'Srini Polavarapu' via grpc.io
There is some flakiness in bazel tests. Please file an issue on GitHub and 
provide detailed steps to reproduce this. 

Thanks.

On Friday, August 17, 2018 at 3:16:13 PM UTC-7, Abhishek Parmar wrote:
>
> I have a weird problem with one of my unit tests that is using streaming 
> gRPC. The test crashes about 10/1000 times, but only when run under bazel 
> (i.e. bazel test my_test). This happens with or without bazel sandboxing.
>
> When run on its own it works fine thousands of times.
>
> The most frequent test failure is
>
> E0817 19:54:09.499559477  18 server_cc.cc:629]   Fatal: 
> grpc_call_start_batch returned 8
> E0817 19:54:09.499590418  18 server_cc.cc:630]   ops[0]: 
> SEND_INITIAL_METADATA(nil)
> E0817 19:54:09.499595057  18 server_cc.cc:630]   ops[1]: 
> SEND_MESSAGE ptr=0x2330f00
>
> Sometimes I also see:
> E0817 19:54:21.345913273  18 proto_buffer_writer.h:65]   assertion 
> failed: !byte_buffer->Valid()
>
> Does this ring a bell for anyone? I did not find any similar issues 
> reported so I thought I would ask here before I dig any deeper.
>
> I am using grpc v1.14.1, though the failure happens in a similar place 
> even in 1.4.2 (from which I am upgrading our codebase).
>
> Thanks in advance.
> Abhishek
>



[grpc-io] Re: gRPC java multiple bi-directional streams share same channel is faster than 1 channel per streaming

2018-08-18 Thread 'Srini Polavarapu' via grpc.io
Could you provide some stats on your observation and how you are measuring 
this? Two streams sharing a connection, vs. separate connections, could be 
faster for these reasons:
- One less socket to service: fewer system calls, context switches, cache 
misses, etc.
- Better batching of data from different streams on a single connection, 
resulting in better connection utilization and a larger average packet size 
on the wire.

On Friday, August 17, 2018 at 3:30:17 PM UTC-7, eleano...@gmail.com wrote:
>
> Hi Carl, 
>
> Thanks for the very detailed explanation! My question is why I observed 
> that using a separate TCP connection per stream was SLOWER!
>
> If a single TCP connection for multiple streams is faster (regardless of 
> the reason), will the connection get saturated? e.g. too many streams 
> sending on the same TCP connection.
>
>
> On Friday, August 17, 2018 at 3:25:54 PM UTC-7, Carl Mastrangelo wrote:
>>
>> I may have misinterpreted your question; are you asking why gRPC prefers 
>> to use a single connection, or why you observed using a separate TCP 
>> connection per stream was faster?
>>
>> If the first, the reason is that the number of TCP connections may be 
>> limited. For example, making gRPC requests from the browser may limit 
>> how many connections can exist. Also, a proxy between the client and 
>> server may limit the number of connections. Connection setup and teardown 
>> is slow due to the TCP 3-way handshake, so gRPC (really HTTP/2) prefers 
>> to reuse a connection.
>>
>> If the second, then I am not sure. If you are benchmarking with Java, I 
>> strongly recommend using the JMH benchmarking framework. It's difficult to 
>> set up, but it provides the most accurate, believable benchmark results.
>>
>> On Friday, August 17, 2018 at 2:09:20 PM UTC-7, eleano...@gmail.com 
>> wrote:
>>>
>>> Hi Carl, 
>>>
>>> Thanks for the explanation; however, that still does not explain why 
>>> using a single TCP connection for multiple StreamObservers is faster 
>>> than using one TCP connection per stream. 
>>>
>>> On Friday, August 17, 2018 at 12:45:32 PM UTC-7, Carl Mastrangelo wrote:

 gRPC does connection management for you.  If you don't have any active 
 RPCs, it will not actively create connections for you.  

 You can force gRPC to create a connection eagerly by calling 
 ManagedChannel.getState(true), which requests that the channel enter the ready 
 state. 

 Do note that in Java, class loading is done lazily, so you may be 
 measuring connection time plus classload time if you only measure on the 
 first connection.

 On Friday, August 17, 2018 at 9:17:16 AM UTC-7, eleano...@gmail.com 
 wrote:
>
> Hi, 
>
> I am doing some experiment with gRPC java to determine the right gRPC 
> call type to use. 
>
> here is my finding:
>
> creating 4 sets of StreamObservers (1 for the client to send requests, 1 
> for the server to send responses), sending on the same channel is slightly 
> faster than sending on 1 channel per stream.
> I have already eliminated the time of creating the initial TCP connection 
> by making an initial call to let the connection be established, then 
> starting the timer. 
>
> I just wonder why this is the case?
>
> Thanks!
>
>



[grpc-io] Re: Establishing multiple grpc subchannels for a single resolved host

2018-08-17 Thread 'Srini Polavarapu' via grpc.io
It looks like Python does not have an API to set wait_for_ready :-(

On Friday, August 17, 2018 at 5:17:43 PM UTC-7, Srini Polavarapu wrote:
>
> Hi Alysha,
>
> How did you confirm that the client is going into backoff and that it is 
> indeed receiving a RST when nginx goes away? Have you looked at the logs 
> gRPC generates when this happens? One possibility is that nginx doesn't 
> send a RST and the client doesn't know the connection is broken until a 
> TCP timeout occurs. Using keepalive will help in this case.
>
> You can try using wait_for_ready=false so 
> the call fails immediately and you can retry.
>
> A recent PR allows you to reset the backoff period. 
> https://github.com/grpc/grpc/pull/16225. It is experimental and doesn't 
> have a Python or Ruby API, so it can't be of immediate help.
>
> On Friday, August 17, 2018 at 12:58:12 PM UTC-7, alysha@shopify.com 
> wrote:
>>
>> Hey Carl,
>>
> This is with L7 nginx balancing; the reason we moved to nginx from L4 
> balancers was so we could do per-call balancing (instead of per-connection 
> with L4).
>>
>> >  In an ideal world, nginx would send a GOAWAY frame to both the client 
>> and the server, and allow all the RPCs to complete before tearing down the 
>> connection.
>>
>>  I agree a GOAWAY would be better but it seems like nginx doesn't do that 
>> (at least yet), they just RST the connection :(
>>
>> > The client knows how to reschedule an unstarted RPC onto a different 
>> connection, without returning an UNAVAILABLE.  
>>
>> Even when we were using L4 it seemed like a GOAWAY from the Go server 
>> would put the Core clients in a backoff state instead of retrying 
>> immediately. The only solution that worked was a round-robin over multiple 
>> connections and a slow-enough rolling restart so the connections could 
>> re-establish before the next one died.
>>
>> > When you say multiple connections to a single IP, does that mean 
>> multiple nginx instances listening on different ports?
>>
>> No, it's a pool of ~20 ingress nginx instances with an L4 load balancer, 
>> so traffic looks like client -> L4 LB -> nginx L7 -> backend GRPC pod. The 
>> problem is the L4 LB in front of nginx has a single public IP.
>>
>> > I'm most familiar with Java, which can actually do what you want.  The 
>> normal way is to create a custom NameResolver that returns multiple 
>> addresses for a single name, which a RoundRobin load balancer will use
>>
>> Yeah I considered writing something similar in Core but I was worried it 
>> wouldn't be adopted upstream because of the move to external LBs? It's very 
>> tough (impossible?) to add new resolvers to Ruby or Python without 
>> rebuilding the whole extension, and we're pretty worried about maintaining 
>> a fork of the C++ implementation. It's nice to hear the approach has some 
>> merits, I might experiment with it.
>>
>> Thanks,
>> Alysha
>>
>> On Friday, August 17, 2018 at 3:42:31 PM UTC-4, Carl Mastrangelo wrote:
>>>
>>> Hi Alysha,
>>>
>>> Do you know if nginx is balancing at L4 or L7? In an ideal world, 
>>> nginx would send a GOAWAY frame to both the client and the server, and 
>>> allow all the RPCs to complete before tearing down the connection.   The 
>>> client knows how to reschedule an unstarted RPC onto a different 
>>> connection, without returning an UNAVAILABLE.  
>>>
>>> When you say multiple connections to a single IP, does that mean 
>>> multiple nginx instances listening on different ports?
>>>
>>> I'm most familiar with Java, which can actually do what you want.  The 
>>> normal way is to create a custom NameResolver that returns multiple 
>>> addresses for a single name, which a RoundRobin load balancer will use.  
>>> It sounds like you aren't using Java, but since the implementations are all 
>>> similar there may be a way to do so.  
>>>
>>> On Friday, August 17, 2018 at 8:46:49 AM UTC-7, alysha@shopify.com 
>>> wrote:

 Hi grpc people!

 We have a setup where we're running a grpc service (written in Go) on 
 GKE, and we're accepting traffic from outside the cluster through nginx 
 ingresses. Our clients are all using Core GRPC libraries (mostly Ruby) to 
 make calls to the nginx ingress, which load-balances per-call to our 
 backend pods.

 The problem we have with this setup is that whenever the nginx 
 ingresses reload they drop all client connections, which results in spikes 
 of Unavailable errors from our grpc clients. There are many nginx 
 ingresses 
 but they all have a single IP, the incoming TCP connections are routed 
 through a Google Cloud L4 load balancer. Whenever an nginx client closes 
 a TCP connection the GRPC subchannel treats the backend as unavailable, 
 even though there are many more nginx pods that may be available 
 immediately to serve traffic, and it goes into backoff 

[grpc-io] Re: Establishing multiple grpc subchannels for a single resolved host

2018-08-17 Thread 'Srini Polavarapu' via grpc.io
Hi Alysha,

How did you confirm that the client is going into backoff and that it is 
indeed receiving a RST when nginx goes away? Have you looked at the logs 
gRPC generates when this happens? One possibility is that nginx doesn't 
send a RST and the client doesn't know the connection is broken until a 
TCP timeout occurs. Using keepalive will help in this case.

You can try using wait_for_ready=false so 
the call fails immediately and you can retry.

A recent PR allows you to reset the backoff period. 
https://github.com/grpc/grpc/pull/16225. It is experimental and doesn't 
have a Python or Ruby API, so it can't be of immediate help.
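Until a reset-backoff or wait_for_ready API is available in these languages, one client-side workaround is to retry calls that fail with a transient status. A minimal, gRPC-agnostic sketch follows; in a real client the `is_transient` predicate might check for UNAVAILABLE (e.g. `e.code() == grpc.StatusCode.UNAVAILABLE`, an assumption about your stub), and the helper name is made up.

```python
import time

def call_with_retry(rpc, is_transient, max_attempts=4, base_delay=0.1):
    """Retry a unary call on transient failures with exponential backoff.

    rpc: zero-arg callable performing the call.
    is_transient: decides whether a raised exception is worth retrying.
    """
    for attempt in range(max_attempts):
        try:
            return rpc()
        except Exception as e:
            # Re-raise immediately for permanent errors or on the last try.
            if not is_transient(e) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

This does not fix the underlying backoff behavior, but it masks short nginx reload windows from callers.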

On Friday, August 17, 2018 at 12:58:12 PM UTC-7, alysha@shopify.com 
wrote:
>
> Hey Carl,
>
> This is with L7 nginx balancing; the reason we moved to nginx from L4 
> balancers was so we could do per-call balancing (instead of per-connection 
> with L4).
>
> >  In an ideal world, nginx would send a GOAWAY frame to both the client 
> and the server, and allow all the RPCs to complete before tearing down the 
> connection.
>
>  I agree a GOAWAY would be better but it seems like nginx doesn't do that 
> (at least yet), they just RST the connection :(
>
> > The client knows how to reschedule an unstarted RPC onto a different 
> connection, without returning an UNAVAILABLE.  
>
> Even when we were using L4 it seemed like a GOAWAY from the Go server 
> would put the Core clients in a backoff state instead of retrying 
> immediately. The only solution that worked was a round-robin over multiple 
> connections and a slow-enough rolling restart so the connections could 
> re-establish before the next one died.
>
> > When you say multiple connections to a single IP, does that mean 
> multiple nginx instances listening on different ports?
>
> No, it's a pool of ~20 ingress nginx instances with an L4 load balancer, 
> so traffic looks like client -> L4 LB -> nginx L7 -> backend GRPC pod. The 
> problem is the L4 LB in front of nginx has a single public IP.
>
> > I'm most familiar with Java, which can actually do what you want.  The 
> normal way is to create a custom NameResolver that returns multiple 
> addresses for a single name, which a RoundRobin load balancer will use
>
> Yeah I considered writing something similar in Core but I was worried it 
> wouldn't be adopted upstream because of the move to external LBs? It's very 
> tough (impossible?) to add new resolvers to Ruby or Python without 
> rebuilding the whole extension, and we're pretty worried about maintaining 
> a fork of the C++ implementation. It's nice to hear the approach has some 
> merits, I might experiment with it.
>
> Thanks,
> Alysha
>
> On Friday, August 17, 2018 at 3:42:31 PM UTC-4, Carl Mastrangelo wrote:
>>
>> Hi Alysha,
>>
>> Do you know if nginx is balancing at L4 or L7? In an ideal world, 
>> nginx would send a GOAWAY frame to both the client and the server, and 
>> allow all the RPCs to complete before tearing down the connection.   The 
>> client knows how to reschedule an unstarted RPC onto a different 
>> connection, without returning an UNAVAILABLE.  
>>
>> When you say multiple connections to a single IP, does that mean multiple 
>> nginx instances listening on different ports?
>>
>> I'm most familiar with Java, which can actually do what you want.  The 
>> normal way is to create a custom NameResolver that returns multiple 
>> addresses for a single name, which a RoundRobin load balancer will use.  
>> It sounds like you aren't using Java, but since the implementations are all 
>> similar there may be a way to do so.  
>>
>> On Friday, August 17, 2018 at 8:46:49 AM UTC-7, alysha@shopify.com 
>> wrote:
>>>
>>> Hi grpc people!
>>>
>>> We have a setup where we're running a grpc service (written in Go) on 
>>> GKE, and we're accepting traffic from outside the cluster through nginx 
>>> ingresses. Our clients are all using Core GRPC libraries (mostly Ruby) to 
>>> make calls to the nginx ingress, which load-balances per-call to our 
>>> backend pods.
>>>
>>> The problem we have with this setup is that whenever the nginx ingresses 
>>> reload they drop all client connections, which results in spikes of 
>>> Unavailable errors from our grpc clients. There are many nginx ingresses 
>>> but they all have a single IP, the incoming TCP connections are routed 
>>> through a Google Cloud L4 load balancer. Whenever an nginx client closes 
>>> a TCP connection the GRPC subchannel treats the backend as unavailable, 
>>> even though there are many more nginx pods that may be available 
>>> immediately to serve traffic, and it goes into backoff logic. My 
>>> understanding is that with multiple subchannels even if one nginx ingress 
>>> is restarted the others can continue to serve requests and we shouldn't see 
>>> Unavailable errors.
>>>
>>> My question is: what is the best way to make GRPC Core 

Re: [grpc-io] Re: SSL/TLS handshake NPN vs ALPN

2018-08-15 Thread 'Srini Polavarapu' via grpc.io
The fix is made in gRPC Core, which gRPC-Python wraps. If you are building 
from source from an older version then yes, you will have to patch the file. 
If you are doing pip install, you could either wait for the next release or 
try the nightly builds from https://packages.grpc.io/ 

On Thursday, August 9, 2018 at 12:30:32 PM UTC-7, Gustavo Cayres wrote:
>
> Hi!
>
> I'm also running into this problem while using a gRPC-Python *client* and 
> a gRPC-Go *server*.
> I'm not sure if this will sound dumb but, to make use of this fix, would I 
> have to change the value of the macro and compile the gRPC-Python locally?
>
> On Tuesday, July 17, 2018 at 1:20:44 PM UTC-3, jian...@google.com wrote:
>>
>> This PR would fix the check peer error if both client and server use 
>> NPN rather than ALPN.
>> https://github.com/grpc/grpc/pull/16007
>>
>> On Thursday, July 12, 2018 at 8:06:05 AM UTC-7, grpc_client wrote:
>>>
>>> Thanks, I will try to check the ssl library we have.
>>>
>>> However, shouldn't the gRPC client work with an NPN response if it sends 
>>> the NPN request by itself?
>>>
>>> On Thursday, July 12, 2018 at 12:53:58 AM UTC-4, Jiangtao Li wrote:

 + Nicolas

 It looks like your libssl version is before 1.0.2, thus NPN is used. In 
 gRPC, ALPN (rather than NPN) will be used if it is available. Try 
 uninstalling the system libssl using the following command, then try again. 
 $ sudo apt-get remove libssl-dev

 Thanks,
 Jiangtao 
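One quick way to check whether the locally linked OpenSSL is new enough for ALPN, independent of gRPC, is Python's ssl module (ssl.HAS_ALPN requires Python 3.5+; ALPN itself requires OpenSSL 1.0.2+):

```python
import ssl

# Report the OpenSSL build Python is linked against and whether it
# advertises ALPN (and, on older builds, NPN) support.
print("OpenSSL:", ssl.OPENSSL_VERSION)
print("ALPN supported:", ssl.HAS_ALPN)
print("NPN supported:", getattr(ssl, "HAS_NPN", False))
```

Note this checks Python's libssl, which may differ from the one your C++ client was built against, but it is a fast sanity check on the system library.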

>>>
>>>  
>>>
 On Wed, Jul 11, 2018 at 8:38 PM grpc_client wrote:

> Thanks for the reply!
>
> The client is in C++ and the server is in Go. I am not sure about the 
> SSL version for both. The handshake itself is successful though!
>
> The client sends NPN, the server responds with NPN - no problems. 
> However during the processing of the response in *security_connector.*cc 
> the gRPC code complains about receiving NPN and not ALPN.
>
> I believe this is a bug due to bad handling of the 
> *TSI_OPENSSL_ALPN_SUPPORT* macro. Wondering if anyone else ran into 
> it and if there is a fix to it.
>
> Thanks!
>
> On Wednesday, July 11, 2018 at 7:35:58 PM UTC-4, jian...@google.com 
> wrote:
>>
>> Could you give us more details on 
>> - what is your client language, which version of ssl client uses.
>> - what is your server language, which version of ssl server uses.
>> It looks like that your ssl is before 1.0.2 and there is no ALPN.
>>
>> On Monday, July 9, 2018 at 2:43:17 PM UTC-7, grpc_client wrote:
>>>
>>> Hi, got a quick question which has bothered me for the past couple 
>>> of days.
>>>
>>> I have a C++ gRPC client (which uses the 1.2.5 gRPC library). The 
>>> SSL/TLS handshake fails with the following error: "*Cannot check 
>>> peer: missing selected ALPN property*"
>>>
>>> Digging a little deeper I found that the client in fact sends an NPN 
>>> (next_protocol_negotiation) ssl extension and receives the same NPN 
>>> extension from the server - which I find to be the correct behavior as 
>>> far 
>>> as SSL is concerned. However it seems that the gRPC code expects an 
>>> ALPN 
>>> extension instead.
>>>
>>> Am I doing something wrong? I have tested the server with the 
>>> openssl tool, with both NPN and ALPN options and both handshakes were 
>>> successful
>>>
>>> Thanks!
>>>
>> -- 
> You received this message because you are subscribed to a topic in the 
> Google Groups "grpc.io" group.
> To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/grpc-io/x25rc8lJK4k/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to 
> grpc-io+u...@googlegroups.com.
> To post to this group, send email to grp...@googlegroups.com.
> Visit this group at https://groups.google.com/group/grpc-io.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/grpc-io/75d914ed-5050-45d6-92a5-e470502720ed%40googlegroups.com
>  
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>




Re: [grpc-io] Why does not grpc use a lot of cpu?

2018-08-06 Thread 'Srini Polavarapu' via grpc.io
If your workload is CPU intensive then you should pre-fork your 
subprocesses and have each one start its own gRPC server. gRPC-Python 
does not support forking after the server is created. If your workload is 
not CPU intensive (i.e., subprocesses/threads block on I/O frequently) then 
using ThreadPoolExecutor should scale fine.
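The pre-fork pattern can be sketched as follows. `prefork_server_options` is a hypothetical helper; the idea is to start one worker process per core (e.g. via multiprocessing.Process) before creating any gRPC objects, and have each worker build its own `grpc.server(...)` bound to the same port. The `grpc.so_reuseport` argument, which lets multiple processes bind the same port so the kernel spreads connections, is an assumption worth verifying for your grpcio version.

```python
import os

def prefork_server_options(workers=None):
    """Compute a worker count and server options for the pre-fork pattern.

    Each of the `workers` processes would create its own gRPC server with
    these options and bind the same port via SO_REUSEPORT.
    """
    workers = workers or os.cpu_count() or 1
    options = [
        ("grpc.so_reuseport", 1),  # allow all workers to bind the same port
    ]
    return workers, options
```

Each worker then runs its own server loop, sidestepping the GIL by using one interpreter per core.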

On Wednesday, July 25, 2018 at 10:43:34 PM UTC-7, venkat...@gmail.com wrote:
>
> Does gRPC Python support multicore use case now ? 
> We are using t2.xlarge instance types for our python grpc server , it is 
> also running on kubernetes .
> this is our resource request 
>  "requests": {
>   "cpu": "1800m",
>   "memory": "3584Mi"
>  }
> we would like to know can we utilize the requested cpu to the fullest .
> here the grpcio version we use
>
> grpcio==1.13.0
> grpcio-tools==1.11.0
>
>
> On Thursday, September 28, 2017 at 7:28:33 AM UTC+8, Nathaniel Manista 
> wrote:
>>
>> On Wed, Sep 27, 2017 at 8:18 AM, Gyuseong jo  wrote:
>>
>>> [gRPC Python] seems to use only single core.
>>>
>>
>> How many cores do you think it should use? How familiar are you with 
>> Python's Global Interpreter Lock (GIL)? How likely do you 
>> think it is that the single-core use that you're seeing is due to Python's 
>> GIL? What about the way your code is written suggests that it should be using more 
>> than one core? If you take gRPC "out of the experiment" and just exercise 
>> your service-side application code alone in a single Python interpreter, do 
>> you see it take advantage of multiple cores?
>>
>> We've got some work planned for the future to better support multicore 
>> Python use cases, but for now gRPC Python is GIL-limited in most scenarios.
>> -Nathaniel
>>
>



Re: [grpc-io] Re: grpc streaming large file

2018-04-11 Thread 'Srini Polavarapu' via grpc.io
You could set the request parameters as metadata on the file-upload RPC. 
The server receives the metadata before the streamed data arrives.
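A sketch of that shape in Python: the chunk generator below is generic, and the commented-out call shows how request parameters could ride as call metadata. The stub, method name, and metadata keys are hypothetical.

```python
def upload_chunks(data, chunk_size=64 * 1024):
    """Yield byte chunks of `data` for a client-streaming upload RPC."""
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

# Hypothetical call: parameters travel as metadata, file bytes as the stream.
# response = stub.UploadFile(
#     upload_chunks(file_bytes),
#     metadata=(("file-name", "image.raw"), ("smooth", "true")),
# )
```

Metadata values are strings, so structured parameters must be serialized (or sent in a first request message instead, as discussed below in the thread).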

On Wednesday, April 11, 2018 at 10:58:20 AM UTC-7, rSam wrote:
>
> I recently had to implement a similar solution to send files. I considered 
> these options:
> a) Your option, rpc call that sends back chunks
> b) RPC call that replies with a port number where the server will listen 
> for the file (or in your case, some other response to the rpc call, like a 
> file identifier that will be requested in other way)
> c) Do not use rpc calls, and use regular sockets for file transfers, 
> handling other types of communication via rpc
>
> After researching 'a)', and implementing 'b)', I ended up going back to 
> 'c)'. My server has two ports open, one for RPC calls, and one for FTP 
> calls. They are separate threads that know how to talk to each other, but 
> files are not served through RPC anymore.
>
>
>
>
>
> On Wed, Apr 11, 2018 at 9:51 AM, 'Eric Gribkoff' via grpc.io <
> grp...@googlegroups.com > wrote:
>
>>
>> Either approach could be appropriate: dividing the large message into 
>> chunks or sending it all in one message (note: gRPC has a default max 
>> message size that varies by language but can be configured). Which one 
>> performs best for your use case will depend on a number of other factors. 
>> There was some earlier discussion around this in 
>> https://groups.google.com/forum/?utm_medium=email_source=footer#!msg/grpc-io/MbDTqNXhv7o/cvPjrhwCAgAJ
>> .
>>
>> Thanks,
>>
>> Eric
>>
>>
>> On Tuesday, April 10, 2018 at 2:39:51 PM UTC-7, Weidong Lian wrote:
>>>
>>> Hello grpcs,
>>>
>>> I have the following task sent from client to server.
>>>
>>> service applis {
>>>   rpc GenerateVoxelMesh(VoxelMeshRequest) returns (VoxelMeshReply) {}
>>> }
>>>
>>> message VoxelMeshRequest {
>>>   any image_input = 1;
>>>   bool smooth = 4;
>>>   int32 iterations = 5;   
>>>   double mu = 6;  
>>>   double lambda = 7;   
>>>   double scale = 8;  
>>>   repeated int32 part_ids = 9; 
>>> }
>>>
>>> message VoxelMeshReply {
>>>   any nas_output = 1; // text file
>>> }
>>>
>>> The image_input and nas_output are binary files that can be fairly 
>>> large sometimes. I would guess `any` is not a recommended type. 
>>> It is preferred to use streamed chunks of bytes to send and receive the 
>>> image and nas files. However, if we stream file chunks, we cannot send 
>>> the other request parameters in one call. We would have to make multiple 
>>> calls and make the server side a state machine, which increases the 
>>> complexity.
>>>
>>> I am just wondering if there is a more elegant design, or what the 
>>> idiomatic way of doing this in gRPC is? 
>>>
>>> the possible design like below.
>>>
>>> service applis {
>>>   rpc GenerateVoxelMesh(stream VoxelMeshRequest) returns (stream 
>>> VoxelMeshReply) {}
>>> }
>>>
>>> message VoxelMeshRequest {
>>> oneof test_oneof {
>>>FileChunk image_input = 1;
>>>VoxelMeshParameters mesh_params = 2;
>>> }
>>> }
>>>
>>> message FileChunk {
>>>  bytes chunk = 1;
>>> }
>>>
>>> message VoxelMeshParameters {
>>>   bool smooth = 4;
>>>   int32 iterations = 5;   
>>>   double mu = 6;  
>>>   double lambda = 7;   
>>>   double scale = 8;  
>>>   repeat int32 part_ids = 9; 
>>> }
>>>
>>> message VoxelMeshReply {
>>>   FileChunk nas_output = 1; // text file
>>> }
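
The oneof design above can be exercised without generated code. A hedged sketch in Python, using plain dicts as stand-ins for the generated VoxelMeshRequest/FileChunk classes; the function name, chunk size, and payload are illustrative only.

```python
def voxel_mesh_requests(params, data, chunk_size=64 * 1024):
    """Yield one mesh_params message, then image chunks, mimicking the
    oneof-based stream VoxelMeshRequest design above."""
    yield {"mesh_params": params}
    for offset in range(0, len(data), chunk_size):
        yield {"image_input": {"chunk": data[offset:offset + chunk_size]}}

# Simulated call: stream a 150 KiB payload in 64 KiB chunks.
payload = b"\x01" * (150 * 1024)
stream = list(voxel_mesh_requests({"smooth": True, "iterations": 3}, payload))

# The server reads the parameters from the first message, then
# reassembles the file from the remaining chunk messages.
reassembled = b"".join(m["image_input"]["chunk"] for m in stream[1:])
```

With a real stub, the generator would be passed directly to the client-streaming call, so no server-side state machine beyond "params first, then chunks" is needed.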
>>>
>>> Any suggestions would be appreciated.
>>> Thanks in advance,
>>> Weidong
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/5be2146f-19d2-4b1a-885b-f6270534cc20%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: grpc python streaming response order

2018-03-28 Thread 'Srini Polavarapu' via grpc.io
If your goal is to signal an invalid argument, why not use
context.set_code(grpc.StatusCode.INVALID_ARGUMENT) instead of abort?
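
A sketch of that suggestion in a response-streaming handler. To keep it runnable without grpcio, INVALID_ARGUMENT stands in for grpc.StatusCode.INVALID_ARGUMENT and FakeContext is a minimal stand-in for the grpc.ServicerContext the framework would pass in; the validation rule and method body are hypothetical.

```python
# Stand-in for grpc.StatusCode.INVALID_ARGUMENT so this sketch runs
# without grpcio; real code would import grpc and use the enum.
INVALID_ARGUMENT = "INVALID_ARGUMENT"

def retrieve(request_iterator, context):
    """Response-streaming handler: on a bad request, set the status code
    and return normally instead of calling context.abort() mid-stream."""
    for request in request_iterator:
        if not request:  # hypothetical validation rule
            context.set_code(INVALID_ARGUMENT)
            context.set_details("empty request")
            return  # ends the stream after any responses already yielded
        yield request.upper()  # hypothetical response

class FakeContext:
    """Minimal stand-in for grpc.ServicerContext, illustration only."""
    def __init__(self):
        self.code = None
        self.details = None

    def set_code(self, code):
        self.code = code

    def set_details(self, details):
        self.details = details

ctx = FakeContext()
responses = list(retrieve(iter(["ok", ""]), ctx))
```

Returning after set_code ends the RPC with the chosen status while leaving the already-yielded responses in order, whereas abort raises and terminates the call immediately.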

On Friday, March 23, 2018 at 4:48:33 AM UTC-7, rvsh...@gmail.com wrote:
>
> I was trying out Python server streaming, and it is unclear to me whether 
> there is a guarantee that the client will receive all messages the server 
> sent. I have a test setup here 
> https://github.com/rvshah825/grpc-python/tree/5f306d820458b539187a6c7fa80f7d3e7d2bed87
> (client.py, server.py).
>
> The setup is that the client opens a streaming request to the server, and 
> the server returns a value and then aborts the call. It seems that if I 
> wait a second on the client before looking for replies on the stream, I 
> never see the initial value, and instead only get the abort.
>
> The reason I am opening this in the forum rather than as a bug on the 
> tracker is that it is unclear to me what the expected behavior is. 
> Naively, I would assume that consuming the stream response in the client 
> by calling `next` repeatedly will give me the results the server sent, in 
> order. I would consider the abort to be a value, so in this test I assume 
> that the stream of values from the server is [Response, Abort]. What is a 
> little ambiguous is that, since the server is sending replies without 
> waiting for requests from the client, maybe I cannot expect this?
>
>
>
> The platform I am testing on, if relevant, is:
> ```
> (grpc110) vagrant@vagrant:/vagrant$ python
> Python 3.6.4 (default, Jan 28 2018, 17:52:01)
> (grpc110) vagrant@vagrant:/vagrant$ lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:Ubuntu 16.04.3 LTS
> Release:16.04
> Codename:   xenial
> (grpc110) vagrant@vagrant:/vagrant$ pip list
> DEPRECATION: The default format will switch to columns in the future. You 
> can use --format=(legacy|columns) (or define a format=(legacy|columns) in 
> your pip.conf under the [list] section) to disable this warning.
> grpcio (1.10.0)
> grpcio-tools (1.10.0)
> pip (9.0.3)
> protobuf (3.5.2.post1)
> setuptools (39.0.1)
> six (1.11.0)
> wheel (0.30.0)
>
> ```
>



[grpc-io] Re: gRPC calls from a Web App (in browser)?

2018-03-28 Thread 'Srini Polavarapu' via grpc.io
Yes. See https://github.com/grpc/grpc-web.


On Saturday, March 24, 2018 at 11:20:08 AM UTC-7, amer...@gmail.com wrote:
>
> Hi. 
>
> Is there any way to call a gRPC back-end service from a web browser 
> application (say using Dart or JavaScript)?
>
> Thanks.
>



[grpc-io] Re: Handshake in TLS

2018-03-14 Thread 'Srini Polavarapu' via grpc.io
Please specify the language you are using and whether it is the flexible or 
standard App Engine environment.

On Monday, March 12, 2018 at 10:21:03 AM UTC-7, Baojun Xu wrote:
>
> Hi,
>
> We are calling our gRPC endpoint with TLS from App Engine. I am wondering 
> how often a TLS handshake happens when we call from App Engine, given 
> that our client stub is provided as a singleton. In particular, we don't 
> know how App Engine will handle the client stub.
>
> Best,
> Baojun
>



[grpc-io] Re: C fork() server->embedded Python->gRPC-python deadlock on method call

2018-02-13 Thread 'Srini Polavarapu' via grpc.io
Looks similar to https://github.com/grpc/grpc/issues/14258 . Have you tried 
setting GRPC_ENABLE_FORK_SUPPORT=FALSE as described in 
https://github.com/grpc/grpc/issues/14056 ?
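
If you try that workaround, note that the variable must be in the environment before the gRPC runtime loads. A minimal sketch (the variable name and value come from the linked issues):

```python
import os

# Must be set before the grpc C extension is loaded for the first time,
# so do it ahead of `import grpc` (or export it in the parent process).
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "FALSE"

# import grpc  # safe to import once the variable is in place
```

Since the Python here is embedded in a forking C server, exporting GRPC_ENABLE_FORK_SUPPORT=FALSE in the C process's environment before it embeds Python is equivalent.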


On Monday, February 12, 2018 at 10:50:05 PM UTC-8, ascani...@gmail.com 
wrote:
>
>
> (gdb) py-bt
> #4 Waiting for a lock (e.g. GIL)
> #11 Frame 0x30654e0, for file
>     /usr/local/lib/python2.7/site-packages/grpc/_channel.py, line 475, in
>     _blocking (self=<_UnaryUnaryMultiCallable(...) at remote 0x307cbd0>,
>     _method='/Service/Retrieve', request=<... at remote 0x3080230>,
>     timeout=None, metadata=None, credentials=None,
>     state=<_RPCState(code=None, due=set([0, 1, 2, 4, 5, 6]), callbacks=[],
>     trailing_metadata=None, cancelled=False, initial_metadata=None,
>     response=None, condition=<_Condition(lock=<_RLock(...)>, ...)>, ...)>)
> [remainder of the frame-local dump truncated in the original]
>
>
>



[grpc-io] Re: gRFC L22: Change name of directory `include/grpc++` to `include/grpcxx`

2018-01-25 Thread 'Srini Polavarapu' via grpc.io
Is the reason for choosing grpcxx (vs. grpc_cpp or grpc_plusplus) that it is 
in line with the #ifndef GRPCXX_AAA_BBB_H usage in the header files?

On Thursday, January 25, 2018 at 4:44:05 PM UTC-8, Muxi Yan wrote:
>
> grpc.io members,
>
> Please review gRFC proposal L22, which proposes changing the header 
> directory of the gRPC C++ library from `include/grpc++` to 
> `include/grpcxx` for compatibility reasons. Let me know if you have any 
> questions, comments, or concerns.
>
