[grpc-io] Re: CallCredentials Without ChannelCredentials

2021-02-22 Thread 'Menghan Li' via grpc.io
If you don't need transport credentials, create the channel with 
`grpc.WithInsecure()`.

And make sure your `TokenAuth` returns false in 
`RequireTransportSecurity()`. Otherwise Dial will fail.
(See doc at 
https://pkg.go.dev/google.golang.org/grpc@v1.35.0/credentials#PerRPCCredentials)

Please give it a try and let me know if you have other questions.

Thanks,
Menghan

On Friday, February 19, 2021 at 6:16:47 AM UTC-8 Daniele T. wrote:

> Dear gRPC community,
>
> I would like to report an issue I am experiencing with the gRPC credential 
> mechanism.
>
> As far as I understand, there are two types of credentials:
>
> - Channel credentials (TLS, basically)
> - Call credentials (per-call header management)
>
> These two mechanisms are supposed to be orthogonal (i.e. not dependent on 
> each other).
>
> In my domain, there is a Scala-based application that acts as a gRPC 
> server.
>
> My goal is to implement many clients in many different languages.
>
> The server implements an authorization mechanism (realized by an 
> interceptor) that essentially checks a JWT token coming from a request 
> header.
>
> Since the server will be deployed inside a private network and a proxy 
> server will be used to expose the gRPC services, it's been decided that 
> channel security will be the responsibility of this latter component, so the 
> gRPC server itself must use plaintext communication.
>
> Consequently, my goal is to implement CallCredentials and not 
> ChannelCredentials.
>
> For my Java and Scala clients I was able to achieve that goal.
>
> In fact, the server is defined as follows:
>
> ```
> NettyServerBuilder
>   .forAddress(new InetSocketAddress(InetAddresses.forString(interface), port))
> ```
>
> And the clients leverage a managed channel like this:
>
> ```
> ManagedChannelBuilder.forAddress(host, port).usePlaintext().build
> ```
>
> together with an implementation of the abstract class CallCredentials, which 
> adds a JWT token to each request.
>
> Everything is working fine.
>
> In Go, however, I'm encountering the following issues.
>
> On the client side I implemented the interface 
> grpc/credentials.PerRPCCredentials with a TokenAuth structure in order 
> to insert the token into the request header:
>
> ```
> channel, _ := grpc.Dial(address,
> 	grpc.WithPerRPCCredentials(TokenAuth{
> 		token: "my.token",
> 	}))
> ```
>
> At this point I get a client error message, since the credentials must be 
> made explicit:
>
> grpc: no transport security set (use grpc.WithInsecure() explicitly or set 
> credentials)
>
> But if I set the credentials as follows
>
> ```
> channel, channelErr := grpc.Dial(address,
> 	grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")),
> 	grpc.WithPerRPCCredentials(TokenAuth{
> 		token: "my.token",
> 	}))
> ```
>
> the server returns the following error message since no server-side TLS is 
> set up:
>
> rpc error: code = Unavailable desc = connection error: desc = "transport: 
> authentication handshake failed: tls: first record does not look like a TLS 
> handshake"
>
> To recap, my question is essentially: what is the best practice in the Go 
> ecosystem for using call credentials, and is there a way to set up a call 
> without transport credentials, as I was able to do in the Java ecosystem?
>
> Thanks in advance
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c31db24c-69e6-4ee6-9084-4f99ae111ed6n%40googlegroups.com.


Re: [grpc-io] Connection management and load balancing

2021-02-22 Thread 'Eric Anderson' via grpc.io
On Thu, Feb 18, 2021 at 7:06 PM Vitaly  wrote:

> 1. Connection management on the client side - do something to reset the
> channel (like [enterIdle](
> https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#enterIdle)
> in grpc-java). Downside - it seems that this feature has been developed for
> android and I can't find similar functionality in grpc-go.
>

Go doesn't go into IDLE at all today. But even so, this isn't an approach
we'd encourage. enterIdle() is really for re-choosing which network to use;
using it for this case would be a hack.

> 2. Connection management on the server side - drop connections periodically
> on the server. Downside - this approach looks less graceful than the client
> side one and may impact request latency and result in request failures on
> the client side.
>

L4 proxy is *exactly* the use case for server-side connection age (as you
should have seen in gRFC A9).
The impact on request latency is the connection handshake, which is no
worse than if you were using HTTP/1. The shutdown should avoid races
on-the-wire, which should prevent most errors and some latency. There are
some races client-side that could cause *very rare* failures; it should be
well below the normal noise level of failures.

We have seen issues with GOAWAYs introducing disappointing latency, but in
large part because of cold caches in the *backend*.

> 3. Use request based grpc-aware L7 LB, this way client would connect to the
> LB, which would fan out requests to the servers. Downside - I've been told
> by our infra guys that it is hard to implement in our setup due to the way
> we use TLS and manage certificates.
> 4. Expose our servers outside and use grpc-lb or client side load
> balancing. Downside - it seems less secure and would make it harder to
> protect against DDoS attacks if we go this route. I think this downside
> makes this approach unviable.
>

Option 3 is the most common solution for serious load balancing across
trust domains (like public internet vs data center). Option 4 depends on
how much you trust your clients.

> 1. Which approach is generally preferable?
>

For a "public" service, the "normal" preference would be (3) L7 proxy
(highest), (2) L4 proxy + MAX_CONNECTION_AGE, (1) manual code on
client-side hard-coded with special magical numbers. (4) gRPC-LB/xDS could
actually go anywhere, depending on how you felt about your client and your
LB needs; it's more about how *you* feel about (4). (4) is the highest
performance and lowest latency solution, although it is rarely used for
public services that receive traffic from the Internet.

> 2. Are there other options to consider?
>

You could go with option (2), but expose two L4 proxy IP addresses to your
clients and have the clients use round-robin. Since MAX_CONNECTION_AGE uses
jitter, the connections are unlikely to both go down at the same time and
so it'd hide the connection establishment latency.

> 3. Is it possible to influence grpc channel state in grpc-go, which would
> trigger resolver and balancer to establish a new connection similar to what
> enterIdle does in java?
>

You'd have to shut down the ClientConn and replace it.
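A hypothetical shape for that replacement (this is not a grpc-go API; `conn` below is a stand-in for a `*grpc.ClientConn`, and real code would dial the new connection with `grpc.Dial` and delay closing the old one until in-flight RPCs drain):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// conn stands in for a *grpc.ClientConn in this sketch.
type conn struct{ target string }

// Close stands in for (*grpc.ClientConn).Close().
func (c *conn) Close() {}

// swapper hands out the current connection and lets callers replace it.
type swapper struct {
	cur atomic.Pointer[conn]
}

// get returns the connection that new RPCs should use.
func (s *swapper) get() *conn { return s.cur.Load() }

// replace atomically installs a freshly dialed connection and closes the
// old one; callers that already loaded the old pointer keep using it.
func (s *swapper) replace(nc *conn) {
	if old := s.cur.Swap(nc); old != nil {
		old.Close()
	}
}

func main() {
	var s swapper
	s.replace(&conn{target: "lb.example.com:443"}) // initial dial
	s.replace(&conn{target: "lb.example.com:443"}) // periodic reset
	fmt.Println(s.get().target)                    // prints "lb.example.com:443"
}
```

In production you would also need to decide when it is safe to close the old connection, since closing a ClientConn fails any RPCs still running on it.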

> 4. Is there a way to implement server side connection management cleanly
> without impacting client-side severely?
>

I'd suggest giving option (2) a try and informing us if you have poor
experiences. Option (2) is actually pretty common, even when using L7
proxies, as you may need to *load balance the proxies*.





Re: [grpc-io] Can we skip Validation of IP or Hostname info within the Cert during the grpc SSL secure connection??

2021-02-22 Thread Sachin Bharadwaj S
Hi Yang,

Have a CN in the certificate and then use SetSslTargetNameOverride().
For example, if the CN is "test":
args.SetSslTargetNameOverride("test");

Regards,
Sachin

On Wed, Feb 10, 2021 at 8:10 AM yang ma  wrote:

> *- Abstract*
>
> Using the C++ interface, if I set up a server using SslServerCredentials
> and just give the grpc::ServerBuilder instance an IP to create the listening
> port.
> Port.
>
> The code of *server-side* is shown below:
>
>
> // Ssl-Cert info of server side encapsulation
> grpc::SslServerCredentialsOptions::PemKeyCertPair pkcp = {
>     serverKey.c_str(), serverCert.c_str() };
> grpc::SslServerCredentialsOptions
>     ssl_opts(GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY);
> ssl_opts.pem_root_certs = clientCert;
> ssl_opts.pem_key_cert_pairs.push_back(pkcp);
> std::shared_ptr<grpc::ServerCredentials> creds;
> creds = grpc::SslServerCredentials(ssl_opts);
> // Create server listening port
> std::string server_address("127.0.0.1:50051");
> ServerBuilder builder;
> builder.AddListeningPort(server_address, creds);
>
> And the code of the *client-side* is shown below:
>
> // Ssl-Cert info of client side encapsulation
> grpc::SslCredentialsOptions ssl_opts;
> ssl_opts.pem_root_certs = servercert;
> ssl_opts.pem_private_key = clientkey;
> ssl_opts.pem_cert_chain = clientcert;
> // Client side IP and params setup
> std::string hostname{"127.0.0.1:50051"};
> std::shared_ptr<grpc::ChannelCredentials> creds =
>     grpc::SslCredentials(ssl_opts);
> grpc::ChannelArguments args;
> auto channel = grpc::CreateCustomChannel(hostname, creds, args);
>
> My issue is that the gRPC connection between server and client using an
> IP works fine without the SSL secure channel.
>
> But if I add the SSL credential info as above, the error below occurs.
>
>
> *- Error Found*
>
>
> *client_side:*
>
> (base) user@user-machine:~/grpc/examples/cpp/helloworld$ GRPC_VERBOSITY=DEBUG
> ./greeter_client
>
>
> D0207 16:02:57.197850779 16548 dns_resolver_ares.cc:504] Using ares dns
> resolver
>
> D0207 16:02:57.204809585 16548 security_handshaker.cc:184] Security
> handshake failed: {"created":"@1612684977.204796431","description":"Peer
> name 127.0.0.1 is not in peer
> certificate","file":"~/grpc/src/core/lib/security/security_connector/ssl/ssl_security_connector.cc","file_line":57}
>
> I0207 16:02:57.204886270 16548 subchannel.cc:1033] Connect failed:
> {"created":"@1612684977.204796431","description":"Peer name 127.0.0.1 is
> not in peer
> certificate","file":"~/grpc/src/core/lib/security/security_connector/ssl/ssl_security_connector.cc","file_line":57}
>
> I0207 16:02:57.204919892 16548 subchannel.cc:972] Subchannel
> 0x55f0ee4cb360: Retry in 993 milliseconds
>
> 14: failed to connect to all addresses
>
> Greeter received: RPC failed
>
>
> *server_side:*
>
> (base) user@user-machine:~/grpc/examples/cpp/helloworld$
> GRPC_VERBOSITY=DEBUG ./greeter_server
>
>
> D0207 16:02:43.985391400 16542 dns_resolver_ares.cc:504] Using ares dns
> resolver
>
> I0207 16:02:43.985475962 16542 server_builder.cc:332] Synchronous server.
> Num CQs: 1, Min pollers: 1, Max Pollers: 2, CQ timeout (msec): 1
>
> Server listening on 127.0.0.1:50051
>
> E0207 16:02:57.200528351 16546 ssl_transport_security.cc:1723] No match
> found for server name: 127.0.0.1.
>
>
> *client_self_signed_cert_info:*
>
> openssl x509 -in
> ~/grpc/examples/cpp/helloworld/ssl_key1/client_self_signed_crt.pem -text
> -noout
>
>
> Certificate:
>
> Data:
>
> Version: 1 (0x0)
>
> Serial Number:
>
> ...
>
> Signature Algorithm: sha256WithRSAEncryption
>
> Issuer: C = CN, ST = FuJian, L = XiaMen, O = YaXon, OU = gRPC, CN =
> 127.0.0.1
>
> Validity
>
> Not Before: Feb 7 07:13:41 2021 GMT
>
> Not After : Feb 5 07:13:41 2031 GMT
>
> Subject: C = CN, ST = FuJian, L = XiaMen, O = YaXon, OU = gRPC, CN =
> 127.0.0.1
>
> Subject Public Key Info:
>
> Public Key Algorithm: rsaEncryption
>
> RSA Public-Key: (2048 bit)
>
> Modulus:
>
> ...
>
> Exponent: 65537 (0x10001)
>
> Signature Algorithm: sha256WithRSAEncryption
>
> ...
>
> (server_self_signed_cert is the same as above)
>
>
> *ca_cert_info:*
>
> openssl x509 -in ca.crt -text -noout
>
> Certificate:
>
> Data:
>
> Version: 3 (0x2)
>
> Serial Number:
>
> ...
>
> Signature Algorithm: sha256WithRSAEncryption
>
> Issuer: C = CN, ST = FuJian, L = XiaMen, O = YaXon, OU = gRPC, CN =
> 127.0.0.1
>
> Validity
>
> Not Before: Feb 7 07:13:41 2021 GMT
>
> Not After : Feb 5 07:13:41 2031 GMT
>
> Subject: C = CN, ST = FuJian, L = XiaMen, O = YaXon, OU = gRPC, CN =
> 127.0.0.1
>
> Subject Public Key Info:
>
> Public Key Algorithm: rsaEncryption
>
> RSA Public-Key: (2048 bit)
>
> Modulus:
>
> ...
>
> Exponent: 65537 (0x10001)
>
> X509v3 extensions:
>
> X509v3 Subject Key Identifier:
>
> ...
>
> X509v3 Authority Key Identifier:
>
> ..
>
>
> X509v3 Basic Constraints: critical
>
> CA:TRUE
>
> Signature Algorithm: sha256WithRSAEncryption
>
> …
>
>
> *- Question && Requirement*
>
> *Can we skip Validation of IP or Hostname info within the Cert?*
>
> After investigation and analysis of your source code and 

[grpc-io] Re: Connection management and load balancing

2021-02-22 Thread 'Srini Polavarapu' via grpc.io
Hi Vitaly,

Please see this post if you are planning to use gRPCLB. gRPC has moved away 
from the gRPCLB protocol; instead, gRPC is adopting the xDS protocol. A number 
of xDS features, including round-robin LB, are already supported in gRPC. This 
project might be useful to you, but I think it is blocked on this issue. This 
project might be useful too.

On Friday, February 19, 2021 at 3:47:22 PM UTC-8 vitaly@gmail.com wrote:

> Thanks Srini,
>
> I haven't tested option 2 yet. I would expect, though, that since the client 
> is unaware of what is happening, we would see some request failures/latency 
> spikes until a new connection is established. That's why I would consider it 
> mostly for disaster prevention rather than for general connection balancing.
> I'm actually now more interested in exploring option 4 as it looks like we 
> can achieve safe setup if we keep proxy in front of servers and expose a 
> separate proxy port for each server.
> Can someone recommend a good open-source grpclb implementation? I've found 
> bsm/grpclb, which looks reasonable, but I wasn't sure if there is anything 
> else available.
>
> On Friday, February 19, 2021 at 12:50:17 PM UTC-8 Srini Polavarapu wrote:
>
>> Hi,
>>
>> Option 3 is ideal, but since you don't have that (or option 4) 
>> available, option 2 is worth exploring. Are the concerns with option 2 
>> based on experiments you have done, or is it just a hunch? This comment 
>> has some relevant info that you could use.
>>
>> On Thursday, February 18, 2021 at 7:06:37 PM UTC-8 vitaly@gmail.com 
>> wrote:
>>
>>>
>>> Hey folks,
>>>
>>> I'm trying to solve a problem of even load (or at least connection) 
>>> distribution between  grpc clients and our backend servers.
>>>
>>> First of all let me describe our setup:
>>> We are using network load balancing (L4) in front of our grpc servers.
>>> Clients will see one endpoint (LB) and connect to it. This means that 
>>> standard client-side load balancing features like round robin wouldn't 
>>> work, as there will only be one sub-channel for client-server communication.
>>>
>>> One issue with this approach can be demonstrated by the following 
>>> example:
>>> Let's say we have 2 servers running and 20 clients connect to them. At 
>>> the beginning, since we go through the network load balancer, connections 
>>> will be distributed evenly (or close to that), so we'll roughly have 50% of 
>>> connections to each server. Now let's assume these servers reboot one after 
>>> another, like in a deployment. What would happen is that server that comes 
>>> up first would get all 20 worker connections and server that comes up later 
>>> would have zero. This situation won't change unless client or server would 
>>> drop a connection periodically or more clients request connections.
>>>
>>> I've considered a few options for solving this:
>>> 1. Connection management on the client side - do something to reset the 
>>> channel (like [enterIdle](
>>> https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#enterIdle)
>>>  
>>> in grpc-java). Downside - it seems that this feature has been developed for 
>>> android and I can't find similar functionality in grpc-go.
>>> 2. Connection management on the server side - drop connections 
>>> periodically on the server. Downside - this approach looks less graceful 
>>> than the client side one and may impact request latency and result in 
>>> request failures on the client side.
>>> 3. Use request based grpc-aware L7 LB, this way client would connect to 
>>> the LB, which would fan out requests to the servers. Downside - I've been 
>>> told by our infra guys that it is hard to implement in our setup due to the 
>>> way we use TLS and manage certificates.
>>> 4. Expose our servers outside and use grpc-lb or client side load 
>>> balancing. Downside - it seems less secure and would make it harder to 
>>> protect against DDoS attacks if we go this route. I think this downside 
>>> makes this approach unviable.
>>>
>>> My bias is towards going with option 3 and doing request based load 
>>> balancing because it allows much more fine grained control based on load, 
>>> but since our infra can not support it at the moment, I might be forced to 
>>> use option 1 or 2 in the short to mid term. Option 2 I like the least, as 
>>> it might result in latency spikes and errors on the client side.
>>>
>>> My questions are:
>>> 1. Which approach is generally preferable? 
>>> 2. Are there other options to consider?
>>> 3. Is it possible to influence grpc channel state in grpc-go, which 
>>> would trigger re

[grpc-io] Re: [grpc-web] Looking for Envoy example of load balancing

2021-02-22 Thread Rob Cecil
This was, in large part, how I found the approach I am using:

https://github.com/envoyproxy/envoy/issues/4897

On Monday, February 22, 2021 at 11:41:28 AM UTC-5 Rob Cecil wrote:

> I am aware of maglev as a potential alternative lb_policy, and also that 
> there are problems if the cluster configuration is changed while there are 
> outstanding connections through Envoy.
>
> On Monday, February 22, 2021 at 11:40:36 AM UTC-5 Rob Cecil wrote:
>
>> Just some follow up - in case anyone else is looking for solutions.
>>
>> There are no obvious usage scenarios documented, but I found a solution 
>> using an lb_policy of "ring_hash", where the client provides a header that 
>> is unique to the client. Just generate a unique, stable identifier for the 
>> lifetime of the client (browser) and inject it as a header (here 
>> "x-session-hash", but it could be anything) into every request.
>>
>> I'm using nanoid to generate unique strings in Javascript. I simply 
>> generate one and store in local storage.
>>
>> Here's the relevant .yaml for an example configuration that defines a 
>> two-host upstream cluster:
>>
>> admin:
>>   access_log_path: /tmp/admin_access.log
>>   address:
>> socket_address: { address: 0.0.0.0, port_value: 9901 }
>>
>> static_resources:
>>   listeners:
>>   - name: listener_0
>> address:
>>   socket_address: { address: 0.0.0.0, port_value: 8080 }
>> filter_chains:
>> - filters:
>>   - name: envoy.filters.network.http_connection_manager
>> typed_config:
>>   "@type": 
>> type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
>>   codec_type: auto
>>   stat_prefix: ingress_http
>>   route_config:
>> name: local_route
>> virtual_hosts:
>> - name: local_service
>>   domains: ["*"]
>>   routes:
>>   - match: { prefix: "/" }
>> route:
>>   cluster: controlweb_backendservice
>>   hash_policy:
>> - header:
>> header_name: x-session-hash
>>   max_stream_duration:
>> grpc_timeout_header_max: 0s
>>   cors:
>> allow_origin_string_match:
>> - prefix: "*"
>> allow_methods: GET, PUT, DELETE, POST, OPTIONS
>>
>> allow_headers: 
>> keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,access-token,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout,x-session-hash
>> max_age: "1728000"
>> expose_headers: access-token,grpc-status,grpc-message
>>   http_filters:
>>   - name: envoy.filters.http.grpc_web
>>   - name: envoy.filters.http.cors
>>   - name: envoy.filters.http.router
>>   clusters:
>>   - name: controlweb_backendservice
>> connect_timeout: 0.25s
>> type: strict_dns
>> http2_protocol_options: {}
>> lb_policy: ring_hash
>> 
>> load_assignment:
>>   cluster_name: cluster_0
>>   endpoints:
>> - lb_endpoints:
>> - endpoint:
>> address:
>>   socket_address:
>> address: 172.16.0.219
>> port_value: 50251
>>   load_balancing_weight: 10
>> - endpoint:
>> address:
>>   socket_address:
>> address: 172.16.0.132
>> port_value: 50251
>>   load_balancing_weight: 1
>>
>> On Sunday, February 14, 2021 at 12:43:42 PM UTC-5 Rob Cecil wrote:
>>
>>> Looking for an example of an Envoy configuration that implements session 
>>> affinity (stickiness) to load balance a cluster of backend servers.  Thanks!
>>>
>>> I'm open to using the source IP or something in the header, but probably 
>>> not cookie.
>>>
>>> Thanks
>>>
>>







[grpc-io] Re: Server Stream RPC and server-side Interceptors

2021-02-22 Thread 'Jan Tattermusch' via grpc.io
The behavior you're describing is odd. The ResponseHeadersAsync metadata is 
something the client should be able to receive as soon as the server sends it 
(without needing to wait for the entire call to finish or for any responses 
from the server). The way to force sending the response headers on the server 
is to invoke serverCallContext.WriteResponseHeadersAsync(); if you don't force 
sending the response headers explicitly, they will be sent along with the 
first response sent by the server.


On Wednesday, February 17, 2021 at 6:46:13 PM UTC+1 fbr...@beckman.com 
wrote:

> When making a streaming RPC call, the client returns immediately, before 
> the server-side interceptor is executed. The interceptor is doing a 
> security check and would throw an RpcException prior to calling the 
> continuation. What event occurs that could be hooked or detected to allow 
> the client to know it has passed the initial security check? The 
> ResponseHeadersAsync task does not appear to complete until the stream is 
> closed. Thank you.
