[grpc-io] Re: Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL

2018-10-17 Thread skuchuk
Thanks.
I have recreated the certificates and the handshake now passes successfully.


On Wednesday, October 17, 2018 at 5:13:38 PM UTC+3, sku...@gmail.com wrote:
>
> Hi,
> I am trying to setup a gRPC client and server example on WSL with SSL 
> (server authentication only)
>
> I created the following files (following this tutorial: 
> https://jsherz.com/grpc/node/nodejs/mutual/authentication/ssl/2017/10/27/grpc-node-with-mutual-auth.html
> )
>
> *my_root_cert.crt*
> -BEGIN CERTIFICATE-
> MIIEgDCCAmigAwIBAgIQBSsnVXC24hhmdgVV6NlFXzANBgkqhkiG9w0BAQsFADA3
> MRcwFQYDVQQKEw5FbnJpY2htZW50IEluYzEcMBoGA1UEAxMTRW5yaWNobWVudCBz
> ZXJ2aWNlczAeFw0xODEwMTcwODE2MjVaFw0yMDA0MTcwODEwMTFaMCUxIzAhBgNV
> BAMTGkxULTgyMDRQVC5jZWxsZWJyaXRlLmxvY2FsMIIBIjANBgkqhkiG9w0BAQEF
> AAOCAQ8AMIIBCgKCAQEAuk+HpXl6WE7oYm+AfgRqPWDc4MWCErax7LmFXXQXuh9x
> a6Rv7fa/Vu7v31mQhdrFIcQu8DW/4q9jkGTYp4mUsmA7TapWhWDtN1GCr+gHeUYN
> oFwXP3pki9BWWCR4lrCNeInSpDzTn71eymyfItUcWYHWcm4uM/hQ03/KpXtDzdHr
> IQPDH6QmNFi8ulfyv6Urr/DOC9QHazgYnShHPJMEnUXv05vP0lAT30qR/9yaTcke
> XI+332G+38iivLNp1ESWh+u+uMm1Yf/cz/Ai1rCPdTct/br1bl2LWm1vz6vI176W
> 93oHCOOcAW+/Hf/11F/KvtlVBoZ0Tl6e7d++tDnG9wIDAQABo4GZMIGWMA4GA1Ud
> DwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0O
> BBYEFP/stlJSMQ0Pf1I8RFZ9jMNFqbmlMB8GA1UdIwQYMBaAFMKTZ5z5ZU9uSurP
> Lqi1Yfnfi4gfMCUGA1UdEQQeMByCGkxULTgyMDRQVC5jZWxsZWJyaXRlLmxvY2Fs
> MA0GCSqGSIb3DQEBCwUAA4ICAQAEdZ5+RvPfg46DypZx3pctlWa4r2yFln8gzwyW
> Xq6VaK29jkFNlbchOXkFrhtOWIskZLmmNhLOCWHDgvleclt96kHxjr4tAC8S6rRb
> sjTSnhkIFOQYSGvBDTTuvNb371zl5kXlnCFntvpOh4PxTmzlyb1TdnZXUYWSDuDl
> eL5KBeFCoZhzsohZB+LTsOeRfYR86koMSpZKZcg5NxQfjdni7WMPti956jITOKm/
> FVh57HWMZWe587gtvK9Ntm29j4uiX6skpgprHgHwzBnfYIiyCWRneu2IZ7oJjeU1
> s+IsskYpNLx/9tyV3PHcbcslvxDsV8SntW6Ds5kIc/qtgBqv2cmAc4fECTEJdJLP
> 7aMBhq3nKTEQoogy0VgNUKrQG66y0x467epiHtMO6doxCEt0wcvH/Z4ou4Vm9MtL
> dXpJ4a60Vqpd1Da3WuyNFP0YeINeDjgREJHEIkdwpbm86RkxgZtQM2C7lsB3A4rg
> H2ql7nvx3YXQOqcdWk+OwB6f70nvEm8Ph1U/qeLPchB4YnzQ670nDRjY4boKaZ1g
> hZKdD/J6j9Aua7F2NhCvzFlgKEALZbhPzzy+XwYZWf+oF2OB+rVA462g6ULWplkd
> 70Nb+hecRqp8y4D1qn1bcZftfsAxhv73Myb+fwUBnhWNKTFpW5HSTZYY0qx5zOf1
> rlGphw==
> -END CERTIFICATE-
>
> *my_private.key (private_key)*
> -BEGIN RSA PRIVATE KEY-
> MIIJKQIBAAKCAgEAzcgKxU9Xss9Lup0atVdCrRAn/W/mAyVpSZrlRWdO9/Rk4GuN
> MehNNaCUrux+UQ8kUJn8S9+PBHW1SSG/IkRazfnk9Y9ThlIQiU3PNVbY9cHwXwf1
> kTtMe7jo58B1vY4MM4Llu311WQ74ru4voElyupZh7m8wlbEIhqNZAuZ2733wgntv
> kX0EXVb1wKdnQDCX+aDwti6KEIyI04dMdlpJ+cwkJnXTErwCdePF26lx8Lw3SNji
> WLiiQJewBJQ5qDmeOL/5dDXu4cf/6kp/wPrQaKUEAtw8gK90QJJR9trO2GiWalhI
> b+oHA+eLsOultqk3ZlQ0l84QKtwUkDnPpR4PK5yI+ykBZXSuFX0eyM0vOIf65I3e
> UxIfKGm3f4Xh0gn2hRCFvVQ/wNUdsTi+itn9JipzzPK/OtI7+pkLi/mEdb6uD+50
> 9Y/icnVope3KNsYqAfg0KNiv5l5gBzEISvRbwm4IEQ/4QjyBAPac19LoI1scECv4
> imyD/R3/7bxdbsTPJwg+wdyevGB1SU0D2DtopM8qR62lqzcJLaeZXIq1U5TYqwFu
> CsEjs8ZKiTe+2NiFpCFvPtZha4ulUt2sdk8h/d07VZW92i1EnkeKyRTVo0TLkU0m
> v46/bFH5VkoruAJFsuNPucdK35s6yPFgH2/Xtql7VL3ZRNGapjsukf6uHPUCAwEA
> AQKCAgEAnKKhODE9wvixXxnIw7HpKcx7dBkhztFCRGmoDN0nKewYgQ68yflWE/To
> WAHh4JeS/9tGRQalWTKzzDfowg+fwtttYVE4taxvs+PLToGN4fs+mUd4r5Sgkihc
> +FLyDFg8h1Uiw0Uq9qBDwPvCutJNhyOC5bgzFi5MHBfoYCHG9GM7mEaW1PqBQP85
> Tuzd1elnNPdBYpsoMpKWb9Sz6f6uAntWJQRYpxD/GndHGv3uodzShBu6pufbcSlF
> LScagCdjfTT7j26iJ7BR5yfP+LexvYWl+Ptk/lsPNTtrMmi5O9bYb5hFgxJzRpCQ
> Lxof6FsDtVtxMQAEJGujJ2kp2jh4OE630b/yrHdhfXiBCnvnaZZatr8x5QwTS7d8
> 6AF2/CGmbnKe5CfJ4ry99EQrIhUNk19l+De2lk5hDHldm+8A5zJVKzPEcj7wU/sC
> jXDS3orDcECBr2bqWp0pLHPy+SQcerPsnpD+1pxsPPuJhOdpLRTNk99umhojiWdw
> i17EjR9qKE1aFSfBCu7DloVD0bF9+nDLmVT2P9oZAchRI8Qd/o93K+FUTKpbybyQ
> D7YVb3CDnshtCV2DfyeVibVnoJOdEuY72/KF5qbphBpH/NSL5RgJ90n3Xo+x8qyd
> 2GjapwdOYRSWuJRaqlD4pPeWRs/A5NXSJi9cfGoRL5aW+T18feECggEBANV3xf9a
> kFsoms10Sg+OSU15mhcvPkLkqnMiiqjGGWtjGT9H5PuFjTvBY36f0rlWADLvSPyz
> oNuV3JaMhAFpjYUQzFXOtbm/CejjzQdn8ZLW6WtMA4e8buy/w9Xek0XgZXsvNAg1
> U2OWKXH0qUN0GtVc2smo6dy+uy6L7LkNc+QDWopBzDdZf8r/k1dRI7iU5zSo/Bo6
> f04d3AYf5QGTmBosJYkXppzk/TRc7/O7jjr5Ta0zF3lE9sdSsdrCCNFb5jgJafuu
> 8Is5li49jbQ2IxXgPVvHqVW4RebcV6IcXavmNnUUYENDr7bLqAWdsKhu7kF0L3f7
> FyHJrMHvjzwbqskCggEBAPbINeMx6uTpDrktf/O+ecsmBB9Y+9k6hh5mxdq/aZDd
> rYeiZ2hSm7haZQEXPa04S0Z0CgqWw/ucCgOdUUdprzWiomGKcKdBn/cro4mauoFQ
> DXs9BBhQBWRbNNe9jIR9g6aOW3wsoS+4+qwU/98fxD0g5jHznclge0c7ny8LygYF
> T/dhAv/XM79zX9Vdr88H69ELsGRzC28bOECYwU8kxFL109CSoNjxljj0NlGSARUC
> 2ZzIQ2lMxhRzy1a7U/7KA/vYw7sY+vbLQOYPZ0WqxvIwbltJ2URSBrylCel9ehKa
> /hIDrIMSgnBx/hHWE0IaGqkNlgLJYWMJTD2QxYPGis0CggEAcRwj9+he8U6UqCTk
> UVXNlZXHhl1sGjnb72HwIvnE4lgCOru3o2birTUNqTy6haYCOPr9q5jqtS+1ULho
> Ae+SI14BR75eIGwPri12qGP1Zx8lU8tVW4kHJb9+30Yutynt29XpNig7ZVtd3poL
> TkipJ0EqVQyBzovp1wIhjvSH4du9D+FJelKcGk5OHkhKKzYLRKX930/7wMKloUEp
> MSqpv8SApyG3EQ9s82ADbRyGgs0y0YFvAL0AHiG9R/LkhTqyxCKI2+mYX81FvH61
> JTZCZQcKvCURnvAjae57KNTq9XjohiUj1MB6zNsgzsj9oGIXMOuFc4fCfA7G0YRE
> W081sQKCAQAnTQkv7nIvFGKQ4QsggTQaQyqi52PsW2KiktFtndAtDvCkyhtXxNgh
> ytuNCet7m5x5Ut+Kgioh9t6tZq9cBRuvGgBsMkTwjgXwshVwQ6DyGRKcjsIJMS06
> pz/KH9ix/N8rdj5hjyX4WKgrIYkCOqfg6E1gpSB6wo+/b2JRdrosrUnn5p44qkgG
> dFRNwYbPHL7UYt0rkhq/DgGuX+VhOkS9xYJ/E+rjw

Re: [grpc-io] gRFC L42 Adding wait-for-ready semantics

2018-10-17 Thread 'Mehrdad Afshari' via grpc.io
I strongly concur with Eric. As we also discussed offline, I am against the
per-channel wait_for_ready value for two primary reasons:

1. We should be conservative when adding APIs, as we can easily add APIs but
cannot remove them. If this turns out to be a massive user concern, we can
resolve it later.
2. There is additional complexity in defining the semantics when per-call and
per-channel values are present (or absent), which ties closely to Eric’s
concern about non-obvious behavior.

On Wed, Oct 17, 2018 at 6:00 PM 'Lidi Zheng' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Got it, thanks.
> Do you know why the Python version doesn't have a config class for a subset
> of calls?
> Should this feature be implemented in that way?
> It seems that would be inconsistent with the existing `timeout` variable.
>
> On Wed, Oct 17, 2018 at 5:53 PM Eric Anderson  wrote:
>
>> On Wed, Oct 17, 2018 at 5:49 PM Lidi Zheng  wrote:
>>
>>> Both designs are valid to me; it can be convenient when users want an
>>> easy way to apply this logic to all of their RPC calls, especially for
>>> users who don't implement fallback logic for RPC calls.
>>>
>>
>> It's dangerous because users "pay" for the complexity even if they don't
>> use the feature: now they (as do we, when we help them debug) have to
>> determine what the setting is.
>>
>> It has the potential to replace the existing `grpc.channel_ready_future`
>>> function.
>>>
>>
>> Definitely. +1. But that is true in either case I think.
>>
>>
>>> And it can make the logic of the Python API more similar to `client_context`
>>> in C++ or `CallOptions` in Java and Golang.
>>>
>>
>> Except in C++ and Java those settings *don't* apply to the entire
>> Channel. They may apply to a subset of calls, but the user has good control
>> over what.
>>
>>
>>>
>>> On Wed, Oct 17, 2018 at 5:20 PM Eric Anderson  wrote:
>>>
 On Wed, Oct 17, 2018 at 4:52 PM lidiz via grpc.io <
 grpc-io@googlegroups.com> wrote:

> * (Suggesting) Add an optional `wait_for_ready` variable to `Channel` 
> class initialization method. Default `None`, accept `bool`.
>
>
 Please don't. wait_for_ready changes the semantics of the call enough
 that you don't ever want to wonder what the current value is. You need the
 wait_for_ready configuration *very* close to the code doing the RPC.



Re: [grpc-io] gRFC L42 Adding wait-for-ready semantics

2018-10-17 Thread 'Lidi Zheng' via grpc.io
Got it, thanks.
Do you know why the Python version doesn't have a config class for a subset
of calls?
Should this feature be implemented in that way?
It seems that would be inconsistent with the existing `timeout` variable.

On Wed, Oct 17, 2018 at 5:53 PM Eric Anderson  wrote:

> On Wed, Oct 17, 2018 at 5:49 PM Lidi Zheng  wrote:
>
>> Both designs are valid to me; it can be convenient when users want an
>> easy way to apply this logic to all of their RPC calls, especially for
>> users who don't implement fallback logic for RPC calls.
>>
>
> It's dangerous because users "pay" for the complexity even if they don't
> use the feature: now they (as do we, when we help them debug) have to
> determine what the setting is.
>
> It has the potential to replace the existing `grpc.channel_ready_future`
>> function.
>>
>
> Definitely. +1. But that is true in either case I think.
>
>
>> And it can make the logic of the Python API more similar to `client_context`
>> in C++ or `CallOptions` in Java and Golang.
>>
>
> Except in C++ and Java those settings *don't* apply to the entire
> Channel. They may apply to a subset of calls, but the user has good control
> over what.
>
>
>>
>> On Wed, Oct 17, 2018 at 5:20 PM Eric Anderson  wrote:
>>
>>> On Wed, Oct 17, 2018 at 4:52 PM lidiz via grpc.io <
>>> grpc-io@googlegroups.com> wrote:
>>>
 * (Suggesting) Add an optional `wait_for_ready` variable to `Channel` 
 class initialization method. Default `None`, accept `bool`.


>>> Please don't. wait_for_ready changes the semantics of the call enough
>>> that you don't ever want to wonder what the current value is. You need the
>>> wait_for_ready configuration *very* close to the code doing the RPC.
>>>
>>



[grpc-io] Re: fork() support in Python!

2018-10-17 Thread 'Srini Polavarapu' via grpc.io
Hi Yonatan,

Your understanding is correct. The design ensures that RPCs in the parent
process are not affected by the FD shutdown in the child's post-fork handlers.
The child process will recreate its own connection when a Python object (with a
gRPC stub inside) inherited from the parent needs to be used. This should
handle the case you described in your post. That was one of the use cases we
had in mind when deciding how to solve the fork issue. Please give it a try
and let us know if you see any issues.

-Srini 

On Wednesday, October 17, 2018 at 5:26:16 PM UTC-7, Yonatan Zunger wrote:
>
>
> Wow -- I just saw the notes for 16264, and that 1.15 now supports fork() in
> Python. This is huge and great news!
>
> I just want to make sure I understand how this change works, and in 
> particular what the consequences of the shutdown of the core-level gRPC 
> resources in the child's post-fork handler means. The use case which is 
> (IMO) most important is where you create some kind of Python object which 
> has a gRPC stub inside it (e.g., a client object meant to talk to servers), 
> then fork() (often through use of the core Python multiprocessing library), 
> and use that object from within the child process as well. (This is 
> important because the multiprocessing library is the only built-in 
> parallelization mechanism that doesn't suffer from serialization due to the 
> GIL) The overhead cost, IIUC, would be essentially a restart of the core 
> resources, which is roughly equivalent to the cost of closing and reopening 
> the channel, but *not* of having to reboot all the wrapping objects, like 
> bigtable clients or whatever. (Which are notoriously slow to start up 
> because they also want to look up all sorts of metadata from the server 
> when they boot)
>
> I could imagine several gotchas here: for example, that the cancellation 
> of in-flight RPC's by the child during the reboot would also affect RPC's 
> in flight due to other threads, meaning that the client object has to be 
> entirely idle during the fork process.
>
> Am I understanding the new change correctly? What are the intended use 
> cases that it's meant to unlock?
>
> Yonatan
>



[grpc-io] fork() support in Python!

2018-10-17 Thread Yonatan Zunger
Wow -- I just saw the notes for 16264, and that 1.15 now supports fork() in
Python. This is huge and great news!

I just want to make sure I understand how this change works, and in
particular what the consequences of the shutdown of the core-level gRPC
resources in the child's post-fork handler means. The use case which is
(IMO) most important is where you create some kind of Python object which
has a gRPC stub inside it (e.g., a client object meant to talk to servers),
then fork() (often through use of the core Python multiprocessing library),
and use that object from within the child process as well. (This is
important because the multiprocessing library is the only built-in
parallelization mechanism that doesn't suffer from serialization due to the
GIL) The overhead cost, IIUC, would be essentially a restart of the core
resources, which is roughly equivalent to the cost of closing and reopening
the channel, but *not* of having to reboot all the wrapping objects, like
bigtable clients or whatever. (Which are notoriously slow to start up
because they also want to look up all sorts of metadata from the server
when they boot)

I could imagine several gotchas here: for example, that the cancellation of
in-flight RPC's by the child during the reboot would also affect RPC's in
flight due to other threads, meaning that the client object has to be
entirely idle during the fork process.

Am I understanding the new change correctly? What are the intended use
cases that it's meant to unlock?

Yonatan



Re: [grpc-io] gRFC L42 Adding wait-for-ready semantics

2018-10-17 Thread 'Eric Anderson' via grpc.io
On Wed, Oct 17, 2018 at 4:52 PM lidiz via grpc.io 
wrote:

> * (Suggesting) Add an optional `wait_for_ready` variable to `Channel` class 
> initialization method. Default `None`, accept `bool`.
>
>
Please don't. wait_for_ready changes the semantics of the call enough that
you don't ever want to wonder what the current value is. You need the
wait_for_ready configuration *very* close to the code doing the RPC.
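For readers comparing with languages that already have the per-call knob: in
grpc-go the setting is a CallOption (WaitForReady, formerly FailFast(false)),
which keeps the decision right next to the RPC as Eric describes. A minimal,
hedged Go sketch; the address is an assumption, and the bundled health service
is used only so the example is self-contained:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Assumed address; any server exposing the standard health service works.
        conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // The wait-for-ready decision sits on the call, not the channel. If the
        // channel is in TRANSIENT_FAILURE, this RPC queues until READY or the
        // deadline expires instead of failing fast.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx,
            &healthpb.HealthCheckRequest{}, grpc.WaitForReady(true))
        if err != nil {
            log.Fatalf("health check: %v", err)
        }
        log.Printf("status: %v", resp.Status)
    }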



[grpc-io] gRFC L42 Adding wait-for-ready semantics

2018-10-17 Thread lidiz via grpc.io
This thread is for the proposal at 
https://github.com/grpc/proposal/pull/112.
Full gRFC at 
https://github.com/lidizheng/proposal/blob/master/L42-metadata-flags.md.

The purpose of this gRFC is to enable the gRPC Python client to use the 
“Wait For Ready” mechanism provided by C-Core, which under the hood uses the 
initial metadata flags. This mechanism can later be used to support future 
metadata flags.

Definition of Wait For Ready semantics

> If an RPC is issued but the channel is in TRANSIENT_FAILURE or SHUTDOWN 
> states, the RPC is unable to be transmitted promptly. By default, gRPC 
> implementations SHOULD fail such RPCs immediately. This is known as "fail 
> fast," but the usage of the term is historical. RPCs SHOULD NOT fail as a 
> result of the channel being in other states (CONNECTING, READY, or IDLE).
> 
> gRPC implementations MAY provide a per-RPC option to not fail RPCs as a 
> result of the channel being in TRANSIENT_FAILURE state. Instead, the 
> implementation queues the RPCs until the channel is READY. This is known as 
> "wait for ready." The RPCs SHOULD still fail before READY if there are 
> unrelated reasons, such as the channel is SHUTDOWN or the RPC's deadline is 
> reached.
> 
> From https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md 


Proposal

* Add an optional `wait_for_ready` variable to the `MultiCallable` classes' 
initialization methods. Default `None`; accepts `bool`.
* A per-RPC `wait_for_ready` value can override the one set at a higher level.
* Import the initial metadata flag constants from `grpc_types.h` into `grpc.pxi`.
* (Suggested) Add an optional `wait_for_ready` variable to the `Channel` class 
initialization method. Default `None`; accepts `bool`.




[grpc-io] Re: Load Balance secure connection

2018-10-17 Thread 'Carl Mastrangelo' via grpc.io
Expanding on this answer further


> The data passes through the said proxy.


Ok, so that makes sense.   So basically:

   + --> B1
   |
C ---> P --> B2
   |
   + --> B3

If this diagram doesn't get broken: the issue you have is that the proxy
can't have access to the certs, but each of the backends may have different
certs, which may be self-signed.

Also, assuming there isn't SSL today, you can make traffic to the proxy
use either port 80 or 443 to distinguish between plaintext and secure. The
client will need to know the CA cert that signed the TLS certs for B1, B2,
and B3. Each backend has its own key and its own cert. When the client
connects, the server uses its own cert, trusted by the client via the CA
cert.
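To make the Go server side concrete (the original question mentions a Go
server), here is a minimal sketch of what one backend, say B1, would do; the
file names and port are assumptions, with each backend's cert signed by the
shared CA:

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials"
    )

    func main() {
        // Hypothetical file names: b1.crt/b1.key are this backend's own
        // CA-signed cert and private key. The client only needs the CA cert.
        creds, err := credentials.NewServerTLSFromFile("b1.crt", "b1.key")
        if err != nil {
            log.Fatalf("load key pair: %v", err)
        }
        lis, err := net.Listen("tcp", ":443")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        srv := grpc.NewServer(grpc.Creds(creds))
        // ...register services here...
        if err := srv.Serve(lis); err != nil {
            log.Fatalf("serve: %v", err)
        }
    }

On the client side the only trust material is the CA cert (for example via
credentials.NewClientTLSFromFile). Note that with an L4 proxy the client
verifies the name it dialed, so the backend certs typically need the proxy's
public hostname in their SANs, or the client must override the expected name.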



On Tue, Oct 16, 2018 at 4:47 PM Carl Mastrangelo  wrote:

> There are a few options.  The key words to look for are "L7" loadbalancing
> and "L4" loadbalancing.  For L7, your entry point to the load balancer,
> typically some kind of reverse proxy, decodes the TLS and then forwards the
> traffic to the correct backend.  Your client sends traffic to the proxy
> which then decides which of the available backends is least loaded.   For
> L4, there is still a reverse proxy, but it does not decode TLS.  Instead,
> it forwards all the encrypted data to a backend IP address, again deciding
> where to send based on load (or even using round robin). The benefit
> of L7 load balancing is that it can make smarter decisions about where to
> send traffic, but has a downside that it's slightly slower.  L4 is nice
> because it does not need the TLS certs (as the hardware may not be
> trusted), but can't decide which backend to route requests to.
>
> In both cases, the client always sends traffic to the same place, which is
> in charge of routing to the next hop.  Also in both cases, the LB proxy
> needs to know all the backends available to send traffic to, and a way of
> telling if they are healthy.   Depending on how big your architecture is,
> even these two approaches are not enough, but let's not get too complicated
> too quickly.
>
> In gRPC LB, the approach is quite different from the above two.  Instead, a
> dedicated load balancing service (i.e. gRPCLB) is contacted by the client
> at startup and asks for addresses to connect to.  The gRPCLB service can
> send a list of backend IP addresses to use, as well as relative weights for
> how much traffic each BE should take.   This is probably the most scalable
> approach, because it avoids the intermediate proxy altogether.   However,
> there is no premade gRPCLB server available and you would have to implement
> the protocol yourself.
>
> HTH,
> Carl
>
> On Tuesday, October 16, 2018 at 9:57:34 AM UTC-7, mauricio...@lacity.org
> wrote:
>>
>> We're setting up a mobile application (objective-c) that communicates
>> back to the server (go) using gRPC.  We intend to place those servers
>> behind a Netscaler load balancer.  We now have a requirement to encrypt the
>> messages going through.  How would we configure the client/server/load
>> balancer to accept and forward on the messages with TLS back to the
>> individual servers?  We thought about attempting the 1st certificate, and
>> if that fails, try the subsequent ones.  That seems a very fragile
>> approach.  How does secure load balancing happen in gRPC world?
>>
>



[grpc-io] Re: Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL

2018-10-17 Thread 'Carl Mastrangelo' via grpc.io
Two immediate questions:

Did you swap the order of the root cert and my_cert? When I used openssl
verify, I had to swap the order. Normally I would call the CA cert the root;
I think the two certs need to be renamed to each other.

The authority your client connects to needs to match the authority in the
cert. You are trying to connect to "localhost", but the cert is for
"LT-8204PT.cellebrite.local"; they have to match. You can either add a DNS
entry in your hosts file to point LT-8204PT.cellebrite.local to 127.0.0.1, or
regenerate the cert for localhost.
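If regenerating the cert is inconvenient, most client APIs also let you keep
dialing localhost and override the name that is checked against the
certificate. A minimal Go sketch of that idea (the original example follows a
Node tutorial, so this is only illustrative; the port is an assumption, and it
presumes my_root_cert.crt really is the CA certificate):

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials"
    )

    func main() {
        // The second argument overrides the server name verified against the
        // cert (and the :authority header), so a cert issued for
        // "LT-8204PT.cellebrite.local" can be used while dialing localhost.
        creds, err := credentials.NewClientTLSFromFile("my_root_cert.crt",
            "LT-8204PT.cellebrite.local")
        if err != nil {
            log.Fatalf("load CA cert: %v", err)
        }
        conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(creds))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
    }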

On Wednesday, October 17, 2018 at 7:13:38 AM UTC-7, sku...@gmail.com wrote:
>
> Hi,
> I am trying to setup a gRPC client and server example on WSL with SSL 
> (server authentication only)
>
> I created the following files (following this tutorial: 
> https://jsherz.com/grpc/node/nodejs/mutual/authentication/ssl/2017/10/27/grpc-node-with-mutual-auth.html
> )
>
> *my_root_cert.crt*
> -BEGIN CERTIFICATE-
> MIIEgDCCAmigAwIBAgIQBSsnVXC24hhmdgVV6NlFXzANBgkqhkiG9w0BAQsFADA3
> MRcwFQYDVQQKEw5FbnJpY2htZW50IEluYzEcMBoGA1UEAxMTRW5yaWNobWVudCBz
> ZXJ2aWNlczAeFw0xODEwMTcwODE2MjVaFw0yMDA0MTcwODEwMTFaMCUxIzAhBgNV
> BAMTGkxULTgyMDRQVC5jZWxsZWJyaXRlLmxvY2FsMIIBIjANBgkqhkiG9w0BAQEF
> AAOCAQ8AMIIBCgKCAQEAuk+HpXl6WE7oYm+AfgRqPWDc4MWCErax7LmFXXQXuh9x
> a6Rv7fa/Vu7v31mQhdrFIcQu8DW/4q9jkGTYp4mUsmA7TapWhWDtN1GCr+gHeUYN
> oFwXP3pki9BWWCR4lrCNeInSpDzTn71eymyfItUcWYHWcm4uM/hQ03/KpXtDzdHr
> IQPDH6QmNFi8ulfyv6Urr/DOC9QHazgYnShHPJMEnUXv05vP0lAT30qR/9yaTcke
> XI+332G+38iivLNp1ESWh+u+uMm1Yf/cz/Ai1rCPdTct/br1bl2LWm1vz6vI176W
> 93oHCOOcAW+/Hf/11F/KvtlVBoZ0Tl6e7d++tDnG9wIDAQABo4GZMIGWMA4GA1Ud
> DwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0O
> BBYEFP/stlJSMQ0Pf1I8RFZ9jMNFqbmlMB8GA1UdIwQYMBaAFMKTZ5z5ZU9uSurP
> Lqi1Yfnfi4gfMCUGA1UdEQQeMByCGkxULTgyMDRQVC5jZWxsZWJyaXRlLmxvY2Fs
> MA0GCSqGSIb3DQEBCwUAA4ICAQAEdZ5+RvPfg46DypZx3pctlWa4r2yFln8gzwyW
> Xq6VaK29jkFNlbchOXkFrhtOWIskZLmmNhLOCWHDgvleclt96kHxjr4tAC8S6rRb
> sjTSnhkIFOQYSGvBDTTuvNb371zl5kXlnCFntvpOh4PxTmzlyb1TdnZXUYWSDuDl
> eL5KBeFCoZhzsohZB+LTsOeRfYR86koMSpZKZcg5NxQfjdni7WMPti956jITOKm/
> FVh57HWMZWe587gtvK9Ntm29j4uiX6skpgprHgHwzBnfYIiyCWRneu2IZ7oJjeU1
> s+IsskYpNLx/9tyV3PHcbcslvxDsV8SntW6Ds5kIc/qtgBqv2cmAc4fECTEJdJLP
> 7aMBhq3nKTEQoogy0VgNUKrQG66y0x467epiHtMO6doxCEt0wcvH/Z4ou4Vm9MtL
> dXpJ4a60Vqpd1Da3WuyNFP0YeINeDjgREJHEIkdwpbm86RkxgZtQM2C7lsB3A4rg
> H2ql7nvx3YXQOqcdWk+OwB6f70nvEm8Ph1U/qeLPchB4YnzQ670nDRjY4boKaZ1g
> hZKdD/J6j9Aua7F2NhCvzFlgKEALZbhPzzy+XwYZWf+oF2OB+rVA462g6ULWplkd
> 70Nb+hecRqp8y4D1qn1bcZftfsAxhv73Myb+fwUBnhWNKTFpW5HSTZYY0qx5zOf1
> rlGphw==
> -END CERTIFICATE-
>
> *my_private.key (private_key)*
> -BEGIN RSA PRIVATE KEY-
> MIIJKQIBAAKCAgEAzcgKxU9Xss9Lup0atVdCrRAn/W/mAyVpSZrlRWdO9/Rk4GuN
> MehNNaCUrux+UQ8kUJn8S9+PBHW1SSG/IkRazfnk9Y9ThlIQiU3PNVbY9cHwXwf1
> kTtMe7jo58B1vY4MM4Llu311WQ74ru4voElyupZh7m8wlbEIhqNZAuZ2733wgntv
> kX0EXVb1wKdnQDCX+aDwti6KEIyI04dMdlpJ+cwkJnXTErwCdePF26lx8Lw3SNji
> WLiiQJewBJQ5qDmeOL/5dDXu4cf/6kp/wPrQaKUEAtw8gK90QJJR9trO2GiWalhI
> b+oHA+eLsOultqk3ZlQ0l84QKtwUkDnPpR4PK5yI+ykBZXSuFX0eyM0vOIf65I3e
> UxIfKGm3f4Xh0gn2hRCFvVQ/wNUdsTi+itn9JipzzPK/OtI7+pkLi/mEdb6uD+50
> 9Y/icnVope3KNsYqAfg0KNiv5l5gBzEISvRbwm4IEQ/4QjyBAPac19LoI1scECv4
> imyD/R3/7bxdbsTPJwg+wdyevGB1SU0D2DtopM8qR62lqzcJLaeZXIq1U5TYqwFu
> CsEjs8ZKiTe+2NiFpCFvPtZha4ulUt2sdk8h/d07VZW92i1EnkeKyRTVo0TLkU0m
> v46/bFH5VkoruAJFsuNPucdK35s6yPFgH2/Xtql7VL3ZRNGapjsukf6uHPUCAwEA
> AQKCAgEAnKKhODE9wvixXxnIw7HpKcx7dBkhztFCRGmoDN0nKewYgQ68yflWE/To
> WAHh4JeS/9tGRQalWTKzzDfowg+fwtttYVE4taxvs+PLToGN4fs+mUd4r5Sgkihc
> +FLyDFg8h1Uiw0Uq9qBDwPvCutJNhyOC5bgzFi5MHBfoYCHG9GM7mEaW1PqBQP85
> Tuzd1elnNPdBYpsoMpKWb9Sz6f6uAntWJQRYpxD/GndHGv3uodzShBu6pufbcSlF
> LScagCdjfTT7j26iJ7BR5yfP+LexvYWl+Ptk/lsPNTtrMmi5O9bYb5hFgxJzRpCQ
> Lxof6FsDtVtxMQAEJGujJ2kp2jh4OE630b/yrHdhfXiBCnvnaZZatr8x5QwTS7d8
> 6AF2/CGmbnKe5CfJ4ry99EQrIhUNk19l+De2lk5hDHldm+8A5zJVKzPEcj7wU/sC
> jXDS3orDcECBr2bqWp0pLHPy+SQcerPsnpD+1pxsPPuJhOdpLRTNk99umhojiWdw
> i17EjR9qKE1aFSfBCu7DloVD0bF9+nDLmVT2P9oZAchRI8Qd/o93K+FUTKpbybyQ
> D7YVb3CDnshtCV2DfyeVibVnoJOdEuY72/KF5qbphBpH/NSL5RgJ90n3Xo+x8qyd
> 2GjapwdOYRSWuJRaqlD4pPeWRs/A5NXSJi9cfGoRL5aW+T18feECggEBANV3xf9a
> kFsoms10Sg+OSU15mhcvPkLkqnMiiqjGGWtjGT9H5PuFjTvBY36f0rlWADLvSPyz
> oNuV3JaMhAFpjYUQzFXOtbm/CejjzQdn8ZLW6WtMA4e8buy/w9Xek0XgZXsvNAg1
> U2OWKXH0qUN0GtVc2smo6dy+uy6L7LkNc+QDWopBzDdZf8r/k1dRI7iU5zSo/Bo6
> f04d3AYf5QGTmBosJYkXppzk/TRc7/O7jjr5Ta0zF3lE9sdSsdrCCNFb5jgJafuu
> 8Is5li49jbQ2IxXgPVvHqVW4RebcV6IcXavmNnUUYENDr7bLqAWdsKhu7kF0L3f7
> FyHJrMHvjzwbqskCggEBAPbINeMx6uTpDrktf/O+ecsmBB9Y+9k6hh5mxdq/aZDd
> rYeiZ2hSm7haZQEXPa04S0Z0CgqWw/ucCgOdUUdprzWiomGKcKdBn/cro4mauoFQ
> DXs9BBhQBWRbNNe9jIR9g6aOW3wsoS+4+qwU/98fxD0g5jHznclge0c7ny8LygYF
> T/dhAv/XM79zX9Vdr88H69ELsGRzC28bOECYwU8kxFL109CSoNjxljj0NlGSARUC
> 2ZzIQ2lMxhRzy1a7U/7KA/vYw7sY+vbLQOYPZ0WqxvIwbltJ2URSBrylCel9ehKa
> /hIDrIMSgnBx/hHWE0IaGqkNlgLJYWMJTD2QxYPGis0CggEAcRwj9+he8U6UqCTk
> UVXNlZXHhl1sGjnb72HwIvnE4lgCOru3o2birTUNqTy6haYCOPr9q5jqtS+

Re: [grpc-io] grpc for database driver

2018-10-17 Thread 'Eric Anderson' via grpc.io
On Oct 17, 2018, at 2:06 PM, robert engels  wrote:

> Ok, so my original statement about being forced to use the ‘streaming rpc’
> was the correct way. I thought you said that was the case, but then you
> offered up what seemed like other solutions that would allow me to use
> individual rpcs…
>

You said you were forced to use streaming for some specific reasons. I tried
to correct a specific misunderstanding, namely the claim that "a new
connection (tcp) is made for each request", which is incorrect. You also asked
about connection lifecycle notifications, but it is unclear what for. I think
part of the problem is that it's not really clear to me what semantics you're
looking for in a "stable connection." It's in quotes and can be interpreted a
few different ways.

It sounds like you may be wanting to use streaming since it provides a
lifetime. For example, if you were wanting to implement a transaction, a
stream could be a quite nice approach. The stream provides a "context" for
all the messages while also providing a lifetime for the transaction; if
the RPC is cancelled the transaction could be aborted.

I'm sorry, but I wasn't trying to say you should or shouldn't use streaming.
I was only trying to explain the constraints that may cause you to choose
streaming.

But then why have the ‘idle time’ - if the connections are terminated when
> there are not more rpcs ? I am not sure why the server process can’t be
> notified when a connection is “terminated” due to idle timeout (or any
> other reason - like IO errors) - so it can clean up cached resources -
> doesn’t make a lot of sense to me?


On Wed, Oct 17, 2018 at 12:07 PM robert engels 
wrote:

> Sorry, meaning, that I need to add my own ‘idle timeout’ on the server
> application level to basically accomplish the same thing that the rpc
> connection is already doing…
>

If you will be using a single stream for all the data, then yes, you may
want to close it after an idle period. You may even want to close it after
a certain lifetime (this is useful for re-distributing load across multiple
backends, for instance).

I don't know what you are trying to do well enough to understand what you
are wanting to do with the connection notification. I will say that for
gRPC, we make no association between a "client" and a "connection." A
single client can have more than one connection to the same backend (this
can happen normally due to GOAWAY, but also possibly with client-side load
balancing) and a single connection to a backend can have multiple clients
(in the case of an L7/HTTP proxy).



Re: [grpc-io] grpc for database driver

2018-10-17 Thread robert engels
Sorry, I mean that I need to add my own ‘idle timeout’ at the server
application level to accomplish basically the same thing that the rpc
connection is already doing…

> On Oct 17, 2018, at 2:06 PM, robert engels  wrote:
> 
> Ok, so my original statement about being forced to use the ‘streaming rpc’ 
> was the correct way. I thought you said that was the case, but then you 
> offered up what seemed like other solutions that would allow me to use 
> individual rpcs…
> 
> But then why have the ‘idle time’ - if the connections are terminated when 
> there are not more rpcs ? I am not sure why the server process can’t be 
> notified when a connection is “terminated” due to idle timeout (or any other 
> reason - like IO errors) - so it can clean up cached resources - doesn’t make 
> a lot of sense to me?
> 
>> On Oct 17, 2018, at 2:01 PM, Eric Anderson > > wrote:
>> 
>> On Wed, Oct 17, 2018 at 11:54 AM robert engels > > wrote:
>> Yes, I see how I can re-use the ClientConn across db open requests, the 
>> problem I have is that even if I set a ‘MAX-IDLE’ (and add client code to 
>> ‘ping’ within this interval), I don’t see any method on the server side to 
>> detect when the connection was dropped - there does not seem to be a 
>> callback (or channel) in the Go api ?
>> 
>> There's no API to know when the connection is dropped. All the semantics are 
>> per-RPC. RPC's can be "cancelled." Normally the connection is only dropped 
>> if there are no RPCs. But if there is an I/O error or similar, then the RPCs 
>> will be treated as cancelled. You can be notified when the RPC is cancelled 
>> via Context.Done().
> 



Re: [grpc-io] grpc for database driver

2018-10-17 Thread robert engels
Ok, so my original statement about being forced to use the ‘streaming rpc’ was
correct. I thought you said that was the case, but then you offered up
what seemed like other solutions that would allow me to use individual rpcs…

But then why have the ‘idle time’, if connections are terminated when there
are no more rpcs? I am not sure why the server process can’t be notified when
a connection is “terminated” due to idle timeout (or any other reason, like IO
errors) so it can clean up cached resources; it doesn’t make a lot of sense to
me.

> On Oct 17, 2018, at 2:01 PM, Eric Anderson  wrote:
> 
> On Wed, Oct 17, 2018 at 11:54 AM robert engels  > wrote:
> Yes, I see how I can re-use the ClientConn across db open requests, the 
> problem I have is that even if I set a ‘MAX-IDLE’ (and add client code to 
> ‘ping’ within this interval), I don’t see any method on the server side to 
> detect when the connection was dropped - there does not seem to be a callback 
> (or channel) in the Go api ?
> 
> There's no API to know when the connection is dropped. All the semantics are 
> per-RPC. RPC's can be "cancelled." Normally the connection is only dropped if 
> there are no RPCs. But if there is an I/O error or similar, then the RPCs 
> will be treated as cancelled. You can be notified when the RPC is cancelled 
> via Context.Done().



Re: [grpc-io] grpc for database driver

2018-10-17 Thread 'Eric Anderson' via grpc.io
On Wed, Oct 17, 2018 at 11:54 AM robert engels 
wrote:

> Yes, I see how I can re-use the ClientConn across db open requests, the
> problem I have is that even if I set a ‘MAX-IDLE’ (and add client code to
> ‘ping’ within this interval), I don’t see any method on the server side to
> detect when the connection was dropped - there does not seem to be a
> callback (or channel) in the Go api ?
>

There's no API to know when the connection is dropped. All the semantics
are per-RPC. RPC's can be "cancelled." Normally the connection is only
dropped if there are no RPCs. But if there is an I/O error or similar, then
the RPCs will be treated as cancelled. You can be notified when the RPC is
cancelled via Context.Done().
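A small, runnable sketch of that notification pattern, using only the standard
context package. In a real server the context would be the one the handler
receives (or stream.Context() for a streaming RPC), and the release function
would drop whatever the server cached for that RPC:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // releaseOnDone watches the RPC's context and releases per-RPC (or
    // per-stream) server resources once it is cancelled, whether by the
    // client, by an I/O error, or by a deadline.
    func releaseOnDone(ctx context.Context, release func(reason error)) {
        go func() {
            <-ctx.Done()
            release(ctx.Err())
        }()
    }

    func main() {
        // Stand-in for a handler's context; here we simulate the client
        // cancelling after 100ms.
        ctx, cancel := context.WithCancel(context.Background())
        releaseOnDone(ctx, func(reason error) {
            fmt.Println("cleaning up cached resources:", reason)
        })

        time.Sleep(100 * time.Millisecond)
        cancel()
        time.Sleep(50 * time.Millisecond) // give the watcher a moment to run
    }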



Re: [grpc-io] grpc for database driver

2018-10-17 Thread robert engels
Thanks for your help.

Yes, I see how I can re-use the ClientConn across db open requests. The problem
I have is that even if I set a ‘MAX-IDLE’ (and add client code to ‘ping’ within
this interval), I don’t see any method on the server side to detect when the
connection was dropped; there does not seem to be a callback (or channel) in
the Go api.

> On Oct 17, 2018, at 12:01 PM, 'Eric Anderson' via grpc.io 
>  wrote:
> 
> On Wed, Oct 17, 2018 at 8:58 AM  > wrote:
> When reviewing the API docs, it was the only way I could implement a 
> "connection", otherwise for each rpc call, a new connection (tcp) is made for 
> each request
> 
> No. A stream is the only way to guarantee multiple requests go to the same 
> server, but normal RPCs can/do reuse connections. Since gRPC is using HTTP/2, 
> as long as you re-use the ClientConn gRPC is able to issue lots of RPCs on 
> the same connection, even concurrently.
> 
> ClientConn may create more than one connection, but this is for things like 
> load balancing, where you need distribute traffic to multiple backends. But 
> by default, ClientConn will have a single connection it will use for all 
> RPCs, and automatically replace that connection if something is wrong with it.
> 
> and there is no "lifecycle" events on the server side to determine when a 
> connection is dead, etc.
> 
> Yes, this is on purpose. The vast majority of use cases should have no need 
> for this level of API. We do provide mechanisms 
>  to 
> detect broken connections and release idle connections, however. (That is 
> implemented in Go)
> 
> Is it also a possibility that if I use a TLS connection, then the grpc 
> connection will be stable across rpc calls ?
> 
> Yes. If a call fails, that shouldn't impact the TLS connection. Simply 
> continue using the same ClientConn and gRPC should manage the connection for 
> you.
> 
> You can see the code at github.com/robaho/keydbr 
> 
> 
> I would not suggest closing the ClientConn on error. Even if there is a 
> failure with the connection and RPCs fail, the ClientConn will properly 
> reconnect. It will also do things like exponential backoff on repeated 
> connection failures to avoid cascading failures.
> 
> I suggest sharing a single ClientConn per `addr` (in your app) as much as 
> possible.
> 


[grpc-io] Re: How to change DSCP value of gRPC packets

2018-10-17 Thread sarahchen via grpc.io
Thank you, Srini.

On Tuesday, October 16, 2018 at 12:39:41 PM UTC-7, Srini Polavarapu wrote:
>
> Hi,
>
> There is no API in gRPC Python to set DSCP value. Please open an issue 
>  to track this although this 
> would be low priority for us.
>
> Thanks.
>
> On Monday, October 15, 2018 at 1:52:25 PM UTC-7, sara...@arista.com wrote:
>>
>> Hello,
>>
>> I am writing a Python gRPC client. How can I set DSCP value of the IP 
>> packets generated by the gRPC client?
>>
>> Thanks,
>> Sarah
>>
>



Re: [grpc-io] grpc for database driver

2018-10-17 Thread 'Eric Anderson' via grpc.io
On Wed, Oct 17, 2018 at 8:58 AM  wrote:

> When reviewing the API docs, it was the only way I could implement a
> "connection", otherwise for each rpc call, a new connection (tcp) is made
> for each request
>

No. A stream is the only way to guarantee multiple requests go to the same
server, but normal RPCs can/do reuse connections. Since gRPC is using
HTTP/2, as long as you re-use the ClientConn gRPC is able to issue lots of
RPCs on the same connection, even concurrently.

ClientConn may create more than one connection, but this is for things like
load balancing, where you need to distribute traffic to multiple backends. But
by default, ClientConn will have a single connection it will use for all
RPCs, and automatically replace that connection if something is wrong with
it.

and there is no "lifecycle" events on the server side to determine when a
> connection is dead, etc.
>

Yes, this is on purpose. The vast majority of use cases should have no need
for this level of API. We do provide mechanisms to detect broken connections
and release idle connections, however. (That is implemented in Go.)
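For reference, a minimal Go sketch of the server-side knobs behind that
keepalive/idleness mechanism; the durations are illustrative assumptions, not
recommendations:

    package main

    import (
        "log"
        "net"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    func main() {
        srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
            MaxConnectionIdle: 5 * time.Minute,  // close connections with no active RPCs
            Time:              2 * time.Minute,  // ping clients to detect broken links
            Timeout:           20 * time.Second, // drop the connection if the ping gets no ack
        }))

        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        if err := srv.Serve(lis); err != nil {
            log.Fatalf("serve: %v", err)
        }
    }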

Is it also a possibility that if I use a TLS connection, then the grpc
> connection will be stable across rpc calls ?
>

Yes. If a call fails, that shouldn't impact the TLS connection. Simply
continue using the same ClientConn and gRPC should manage the connection
for you.

You can see the code at github.com/robaho/keydbr
>

I would not suggest closing the ClientConn on error. Even if there is a
failure with the connection and RPCs fail, the ClientConn will properly
reconnect. It will also do things like exponential backoff on repeated
connection failures to avoid cascading failures.

I suggest sharing a single ClientConn per `addr` (in your app) as much as
possible.
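A minimal sketch of that per-`addr` sharing; the insecure credentials and the
address in main are assumptions kept only to make the example self-contained:

    package main

    import (
        "sync"

        "google.golang.org/grpc"
    )

    // connPool shares one *grpc.ClientConn per address; every db open for the
    // same addr reuses the same underlying connection.
    type connPool struct {
        mu    sync.Mutex
        conns map[string]*grpc.ClientConn
    }

    func newConnPool() *connPool {
        return &connPool{conns: make(map[string]*grpc.ClientConn)}
    }

    func (p *connPool) get(addr string) (*grpc.ClientConn, error) {
        p.mu.Lock()
        defer p.mu.Unlock()
        if cc, ok := p.conns[addr]; ok {
            return cc, nil
        }
        cc, err := grpc.Dial(addr, grpc.WithInsecure())
        if err != nil {
            return nil, err
        }
        p.conns[addr] = cc
        return cc, nil
    }

    func main() {
        pool := newConnPool()
        _, _ = pool.get("localhost:50051") // reused by every subsequent open for this addr
    }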



[grpc-io] grpc for database driver

2018-10-17 Thread rengels
Hello,

I wrote a database driver using Go and grpc. I ended up using the "streaming" 
rpc because that seemed to be the only way to have a "stable connection".

When reviewing the API docs, it was the only way I could implement a 
"connection"; otherwise a new connection (tcp) is made for each rpc call - at 
least as far as I could tell - and there are no "lifecycle" events on the 
server side to determine when a connection is dead, etc.

Am I reading the documentation correctly and implementing this correctly?

Is it also a possibility that if I use a TLS connection, then the grpc 
connection will be stable across rpc calls?

Looking for direction here (it works now, but something doesn't feel right).

You can see the code at github.com/robaho/keydbr

Thanks.
Robert



[grpc-io] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL

2018-10-17 Thread skuchuk
Hi,
I am trying to set up a gRPC client and server example on WSL with SSL 
(server authentication only).

I created the following files (following this 
tutorial: 
https://jsherz.com/grpc/node/nodejs/mutual/authentication/ssl/2017/10/27/grpc-node-with-mutual-auth.html)

*my_root_cert.crt*
-BEGIN CERTIFICATE-
MIIEgDCCAmigAwIBAgIQBSsnVXC24hhmdgVV6NlFXzANBgkqhkiG9w0BAQsFADA3
MRcwFQYDVQQKEw5FbnJpY2htZW50IEluYzEcMBoGA1UEAxMTRW5yaWNobWVudCBz
ZXJ2aWNlczAeFw0xODEwMTcwODE2MjVaFw0yMDA0MTcwODEwMTFaMCUxIzAhBgNV
BAMTGkxULTgyMDRQVC5jZWxsZWJyaXRlLmxvY2FsMIIBIjANBgkqhkiG9w0BAQEF
AAOCAQ8AMIIBCgKCAQEAuk+HpXl6WE7oYm+AfgRqPWDc4MWCErax7LmFXXQXuh9x
a6Rv7fa/Vu7v31mQhdrFIcQu8DW/4q9jkGTYp4mUsmA7TapWhWDtN1GCr+gHeUYN
oFwXP3pki9BWWCR4lrCNeInSpDzTn71eymyfItUcWYHWcm4uM/hQ03/KpXtDzdHr
IQPDH6QmNFi8ulfyv6Urr/DOC9QHazgYnShHPJMEnUXv05vP0lAT30qR/9yaTcke
XI+332G+38iivLNp1ESWh+u+uMm1Yf/cz/Ai1rCPdTct/br1bl2LWm1vz6vI176W
93oHCOOcAW+/Hf/11F/KvtlVBoZ0Tl6e7d++tDnG9wIDAQABo4GZMIGWMA4GA1Ud
DwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0O
BBYEFP/stlJSMQ0Pf1I8RFZ9jMNFqbmlMB8GA1UdIwQYMBaAFMKTZ5z5ZU9uSurP
Lqi1Yfnfi4gfMCUGA1UdEQQeMByCGkxULTgyMDRQVC5jZWxsZWJyaXRlLmxvY2Fs
MA0GCSqGSIb3DQEBCwUAA4ICAQAEdZ5+RvPfg46DypZx3pctlWa4r2yFln8gzwyW
Xq6VaK29jkFNlbchOXkFrhtOWIskZLmmNhLOCWHDgvleclt96kHxjr4tAC8S6rRb
sjTSnhkIFOQYSGvBDTTuvNb371zl5kXlnCFntvpOh4PxTmzlyb1TdnZXUYWSDuDl
eL5KBeFCoZhzsohZB+LTsOeRfYR86koMSpZKZcg5NxQfjdni7WMPti956jITOKm/
FVh57HWMZWe587gtvK9Ntm29j4uiX6skpgprHgHwzBnfYIiyCWRneu2IZ7oJjeU1
s+IsskYpNLx/9tyV3PHcbcslvxDsV8SntW6Ds5kIc/qtgBqv2cmAc4fECTEJdJLP
7aMBhq3nKTEQoogy0VgNUKrQG66y0x467epiHtMO6doxCEt0wcvH/Z4ou4Vm9MtL
dXpJ4a60Vqpd1Da3WuyNFP0YeINeDjgREJHEIkdwpbm86RkxgZtQM2C7lsB3A4rg
H2ql7nvx3YXQOqcdWk+OwB6f70nvEm8Ph1U/qeLPchB4YnzQ670nDRjY4boKaZ1g
hZKdD/J6j9Aua7F2NhCvzFlgKEALZbhPzzy+XwYZWf+oF2OB+rVA462g6ULWplkd
70Nb+hecRqp8y4D1qn1bcZftfsAxhv73Myb+fwUBnhWNKTFpW5HSTZYY0qx5zOf1
rlGphw==
-END CERTIFICATE-

*my_private.key (private_key)*
-BEGIN RSA PRIVATE KEY-
MIIJKQIBAAKCAgEAzcgKxU9Xss9Lup0atVdCrRAn/W/mAyVpSZrlRWdO9/Rk4GuN
MehNNaCUrux+UQ8kUJn8S9+PBHW1SSG/IkRazfnk9Y9ThlIQiU3PNVbY9cHwXwf1
kTtMe7jo58B1vY4MM4Llu311WQ74ru4voElyupZh7m8wlbEIhqNZAuZ2733wgntv
kX0EXVb1wKdnQDCX+aDwti6KEIyI04dMdlpJ+cwkJnXTErwCdePF26lx8Lw3SNji
WLiiQJewBJQ5qDmeOL/5dDXu4cf/6kp/wPrQaKUEAtw8gK90QJJR9trO2GiWalhI
b+oHA+eLsOultqk3ZlQ0l84QKtwUkDnPpR4PK5yI+ykBZXSuFX0eyM0vOIf65I3e
UxIfKGm3f4Xh0gn2hRCFvVQ/wNUdsTi+itn9JipzzPK/OtI7+pkLi/mEdb6uD+50
9Y/icnVope3KNsYqAfg0KNiv5l5gBzEISvRbwm4IEQ/4QjyBAPac19LoI1scECv4
imyD/R3/7bxdbsTPJwg+wdyevGB1SU0D2DtopM8qR62lqzcJLaeZXIq1U5TYqwFu
CsEjs8ZKiTe+2NiFpCFvPtZha4ulUt2sdk8h/d07VZW92i1EnkeKyRTVo0TLkU0m
v46/bFH5VkoruAJFsuNPucdK35s6yPFgH2/Xtql7VL3ZRNGapjsukf6uHPUCAwEA
AQKCAgEAnKKhODE9wvixXxnIw7HpKcx7dBkhztFCRGmoDN0nKewYgQ68yflWE/To
WAHh4JeS/9tGRQalWTKzzDfowg+fwtttYVE4taxvs+PLToGN4fs+mUd4r5Sgkihc
+FLyDFg8h1Uiw0Uq9qBDwPvCutJNhyOC5bgzFi5MHBfoYCHG9GM7mEaW1PqBQP85
Tuzd1elnNPdBYpsoMpKWb9Sz6f6uAntWJQRYpxD/GndHGv3uodzShBu6pufbcSlF
LScagCdjfTT7j26iJ7BR5yfP+LexvYWl+Ptk/lsPNTtrMmi5O9bYb5hFgxJzRpCQ
Lxof6FsDtVtxMQAEJGujJ2kp2jh4OE630b/yrHdhfXiBCnvnaZZatr8x5QwTS7d8
6AF2/CGmbnKe5CfJ4ry99EQrIhUNk19l+De2lk5hDHldm+8A5zJVKzPEcj7wU/sC
jXDS3orDcECBr2bqWp0pLHPy+SQcerPsnpD+1pxsPPuJhOdpLRTNk99umhojiWdw
i17EjR9qKE1aFSfBCu7DloVD0bF9+nDLmVT2P9oZAchRI8Qd/o93K+FUTKpbybyQ
D7YVb3CDnshtCV2DfyeVibVnoJOdEuY72/KF5qbphBpH/NSL5RgJ90n3Xo+x8qyd
2GjapwdOYRSWuJRaqlD4pPeWRs/A5NXSJi9cfGoRL5aW+T18feECggEBANV3xf9a
kFsoms10Sg+OSU15mhcvPkLkqnMiiqjGGWtjGT9H5PuFjTvBY36f0rlWADLvSPyz
oNuV3JaMhAFpjYUQzFXOtbm/CejjzQdn8ZLW6WtMA4e8buy/w9Xek0XgZXsvNAg1
U2OWKXH0qUN0GtVc2smo6dy+uy6L7LkNc+QDWopBzDdZf8r/k1dRI7iU5zSo/Bo6
f04d3AYf5QGTmBosJYkXppzk/TRc7/O7jjr5Ta0zF3lE9sdSsdrCCNFb5jgJafuu
8Is5li49jbQ2IxXgPVvHqVW4RebcV6IcXavmNnUUYENDr7bLqAWdsKhu7kF0L3f7
FyHJrMHvjzwbqskCggEBAPbINeMx6uTpDrktf/O+ecsmBB9Y+9k6hh5mxdq/aZDd
rYeiZ2hSm7haZQEXPa04S0Z0CgqWw/ucCgOdUUdprzWiomGKcKdBn/cro4mauoFQ
DXs9BBhQBWRbNNe9jIR9g6aOW3wsoS+4+qwU/98fxD0g5jHznclge0c7ny8LygYF
T/dhAv/XM79zX9Vdr88H69ELsGRzC28bOECYwU8kxFL109CSoNjxljj0NlGSARUC
2ZzIQ2lMxhRzy1a7U/7KA/vYw7sY+vbLQOYPZ0WqxvIwbltJ2URSBrylCel9ehKa
/hIDrIMSgnBx/hHWE0IaGqkNlgLJYWMJTD2QxYPGis0CggEAcRwj9+he8U6UqCTk
UVXNlZXHhl1sGjnb72HwIvnE4lgCOru3o2birTUNqTy6haYCOPr9q5jqtS+1ULho
Ae+SI14BR75eIGwPri12qGP1Zx8lU8tVW4kHJb9+30Yutynt29XpNig7ZVtd3poL
TkipJ0EqVQyBzovp1wIhjvSH4du9D+FJelKcGk5OHkhKKzYLRKX930/7wMKloUEp
MSqpv8SApyG3EQ9s82ADbRyGgs0y0YFvAL0AHiG9R/LkhTqyxCKI2+mYX81FvH61
JTZCZQcKvCURnvAjae57KNTq9XjohiUj1MB6zNsgzsj9oGIXMOuFc4fCfA7G0YRE
W081sQKCAQAnTQkv7nIvFGKQ4QsggTQaQyqi52PsW2KiktFtndAtDvCkyhtXxNgh
ytuNCet7m5x5Ut+Kgioh9t6tZq9cBRuvGgBsMkTwjgXwshVwQ6DyGRKcjsIJMS06
pz/KH9ix/N8rdj5hjyX4WKgrIYkCOqfg6E1gpSB6wo+/b2JRdrosrUnn5p44qkgG
dFRNwYbPHL7UYt0rkhq/DgGuX+VhOkS9xYJ/E+rjwc2fsly4Lt1XQEXxrv71VRGy
jiJS5LBiwj9SK1o4gKjvBr2GJevXb3QRe98HUMJ2G+4QuuPSOHZpYh+WNNmTYi49
xBmnM4WLoGagh5ZdST7mK8Plhhm+e679AoIBAQCFZY8hDyEemsHqnzPlkuvG+bBD
RD5QJ9epemDYm78SzDXZ2L1y+luNZVE0XlqXyEXe6z5qcZfa+o7BOLZIXH7qASXf
pGewHjcfPAzWpgYCNCkADbDtLWAhFg3fotGvRYj5n0cqVAmGqZBGsva4mEHA7jn5
/ra+FPZgKB1UapLrQ9ZxYPNZ9kD3UavTr

[grpc-io] Re: [go-nuts] go-grpc question

2018-10-17 Thread Robert Engels
Hi, my understanding, and it seems to be correct in testing, is that grpc always 
makes a new connection (via tcp) due to load balancing. The only way I’ve been 
able to get grpc to use a single connection is to use the “streaming” mode.

What am I doing wrong then?

> On Oct 17, 2018, at 8:24 AM, Josh Humphries  wrote:
> 
> +grpc-io@googlegroups.com
> moving golang-n...@googlegroups.com to BCC
> 
> In general, connections are not cheap, but stubs are. Actual implementations 
> for some languages differ, but Go complies with this.
> 
> What that means is that, generally speaking, you should not try creating the 
> *grpc.ClientConn for each request. Instead create it once and cache it. You 
> can create the stub just once and cache it (they are safe to use concurrently 
> form multiple goroutines). But that is not necessary; you could also create 
> the stub for each request, using the cached connection.
> 
> In practice, creating a new connection for each request will have overhead in 
> terms of allocations, creating and tearing down goroutines, and also in terms 
> of latency, to establish a new network connection every time. So it is 
> advisable to cache and re-use them. However, if you are not using TLS, it may 
> be acceptable to create a new connection per request (since the network 
> connection latency is often low, at least if the client and server are in the 
> same region/cloud provider). If you are using TLS, however, creating a 
> connection per request is a bit of an atrocity: you are not only adding the 
> extra latency of a TLS handshake to every request (typically 10s of 
> milliseconds IIRC), but you are also inducing a potentially huge amount of 
> load on the server, by making it perform many more digital signatures (one of 
> the handshake steps) than if the clients cached and re-used connections.
> 
> Historically, the only reason it might be useful to create a new connection 
> per request in Go was if you were using a layer-4(TCP) load balancer. In that 
> case, the standard DNS resolver would resolve to a single IP address (that of 
> the load balancer) and then only maintain a single connection. This would 
> result in very poor load balancing since 100% of that client's requests would 
> all route to the same backend. This would also happen when using standard 
> Kubernetes services (when using gRPC for server-to-serve communication), as 
> kubedns resolves a service name into a single virtual IP. I'm not sure if the 
> current state of the world regarding TCP load balancers and the grpc-go 
> project, but if it's still an issue and you run services in Kubernetes, you 
> can use a 3rd party resolver: https://github.com/sercand/kuberesolver.
> 
> 
> Josh Humphries
> jh...@bluegosling.com
> 
> 
>> On Wed, Oct 17, 2018 at 2:13 AM  wrote:
>> Hello,
>> 
>> I intend to use grpc between two fixed endpoints (client and server) where 
>> the client receives multiple requests (the client serves as a proxy) which 
>> in turn sends a grpc request to the server. I wanted to know if the 
>> following would be considered good practice:
>> 
>> a) For every request that comes in at the client, do the following in the 
>> http handler:
>>a) conn := grpc.Dial(...)// establish a grpc connection
>>b) client := NewClient(conn)// instantiate a new client
>>c) client.Something(..) // invoke the grpc method on the 
>> client
>> 
>> i.e Establish a new connection and client in handling every request
>> 
>> b) Establish a single grpc connection between client and server at init() 
>> time and then inside the handler, instantiate a new client and invoke the 
>> grpc method
>>a) client := NewClient(conn)// instantiate a new client
>>b) client.Something(..) // invoke the grpc method on the 
>> client 
>> 
>> c) Establish a connection and instantiate a client at init() and then in 
>> every handler, just invoke the grpc method.
>>a) client.Something(..)
>> 
>> The emphasis here is on performance as I expect the client to process a 
>> large volume of requests coming in. I do know that grpc underneath creates 
>> streams but at the end of the day a single
>> logical grpc connection runs on a single TCP connection (multiplexing the 
>> streams) on it and having just one connection for all clients might not cut 
>> it. Thoughts and ideas appreciated !
>> 
>> Thanks,
>> Nakul
>> 
>> 

[grpc-io] Re: [go-nuts] go-grpc question

2018-10-17 Thread Josh Humphries
*+grpc-io@googlegroups.com *

*moving golang-n...@googlegroups.com  to BCC*

In general, connections are not cheap, but stubs are. Actual
implementations for some languages differ, but Go complies with this.

What that means is that, generally speaking, you should not try creating
the *grpc.ClientConn for each request. Instead create it once and cache it.
You *can* create the stub just once and cache it (they are safe to use
concurrently from multiple goroutines). But that is not necessary; you
could also create the stub for each request, using the cached connection.

In practice, creating a new connection for each request will have overhead
in terms of allocations and of creating and tearing down goroutines, and also
in terms of latency, since a new network connection must be established every
time. So it is advisable to cache and re-use connections. However, if you are
not using TLS,
it *may be* acceptable to create a new connection per request (since the
network connection latency is often low, at least if the client and server
are in the same region/cloud provider). If you are using TLS, however,
creating a connection per request is a bit of an atrocity: you are not only
adding the extra latency of a TLS handshake to every request (typically 10s
of milliseconds IIRC), but you are also inducing a potentially huge amount
of load on the server, by making it perform many more digital signatures
(one of the handshake steps) than if the clients cached and re-used
connections.
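
For instance, if TLS is in use, only the dial options change; the connection
is still created once and cached. A rough sketch (server.crt and the target
address are placeholders):

package main

import (
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
)

// dialTLS creates the single long-lived connection with TLS transport
// credentials. The TLS handshake (and the signature work it causes on the
// server) happens once here rather than on every request.
func dialTLS() (*grpc.ClientConn, error) {
    creds, err := credentials.NewClientTLSFromFile("server.crt", "")
    if err != nil {
        return nil, err
    }
    return grpc.Dial("server.example.com:50051",
        grpc.WithTransportCredentials(creds))
}

func main() {
    conn, err := dialTLS()
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
    // ... create stubs from conn and reuse it for all requests ...
}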

Historically, the only reason it might be useful to create a new connection
per request in Go was if you were using a layer-4 (TCP) load balancer. In
that case, the standard DNS resolver would resolve to a single IP address
(that of the load balancer) and then only maintain a single connection.
This would result in very poor load balancing, since 100% of that client's
requests would all route to the same backend. This would also happen when
using standard Kubernetes services (when using gRPC for server-to-server
communication), as kubedns resolves a service name into a single virtual
IP. I'm not sure of the current state of the world regarding TCP load
balancers and the grpc-go project, but if it's still an issue and you run
services in Kubernetes, you can use a 3rd-party resolver:
https://github.com/sercand/kuberesolver.
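
A rough sketch of how that might be wired up, assuming the RegisterInCluster
helper and the kubernetes:/// target scheme described in the kuberesolver
README (service, namespace, and port names are placeholders):

package main

import (
    "log"

    "github.com/sercand/kuberesolver"
    "google.golang.org/grpc"
)

func main() {
    // Register the "kubernetes" resolver scheme so grpc resolves the
    // service's endpoints (pod IPs) instead of the single cluster IP.
    kuberesolver.RegisterInCluster()

    // round_robin spreads requests across the resolved backends.
    conn, err := grpc.Dial(
        "kubernetes:///my-service.my-namespace:grpc",
        grpc.WithInsecure(),
        grpc.WithBalancerName("round_robin"),
    )
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
    // ... create stubs from conn as usual ...
}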


*Josh Humphries*
jh...@bluegosling.com


On Wed, Oct 17, 2018 at 2:13 AM  wrote:

> Hello,
>
> I intend to use grpc between two fixed endpoints (client and server) where
> the client receives multiple requests (the client serves as a proxy) which
> in turn sends a grpc request to the server. I wanted to know if the
> following would be considered good practice:
>
> a) For every request that comes in at the client, do the following in the
> http handler:
>a) conn := grpc.Dial(...)// establish a grpc connection
>b) client := NewClient(conn)// instantiate a new client
>c) client.Something(..) // invoke the grpc method on
> the client
>
> i.e Establish a new connection and client in handling every request
>
> b) Establish a single grpc connection between client and server at init()
> time and then inside the handler, instantiate a new client and invoke the
> grpc method
>a) client := NewClient(conn)// instantiate a new client
>b) client.Something(..) // invoke the grpc method on
> the client
>
> c) Establish a connection and instantiate a client at init() and then in
> every handler, just invoke the grpc method.
>a) client.Something(..)
>
> The emphasis here is on performance, as I expect the client to process
> a large volume of requests coming in. I do know that grpc underneath
> creates streams, but at the end of the day a single logical grpc connection
> runs on a single TCP connection (multiplexing the streams), and having just
> one connection for all clients might not cut it. Thoughts and ideas
> appreciated!
>
> Thanks,
> Nakul
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to golang-nuts+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAO78j%2BKKAwnDk3FLGe7U3E%3Dpk6p1JVhuAFJERdocGhf8tzFq_g%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Async C++ service with multiple methods

2018-10-17 Thread Stephan Menzel
On Tuesday, 16 October 2018 00:40:58 UTC+2, Christopher Warrington 
- MSFT wrote:
>
>
> By imposing these restrictions atop the gRPC++ library, we were able to
> simplify implementation of async services [8]:
>

class GreeterServiceImpl final : public Greeter::Service
> {
> public:
> using Greeter::Service::Service;
>
> private:
> void SayHello(bond::ext::grpc::unary_call<HelloRequest, HelloReply> call) override
> {
> HelloRequest request = call.request().Deserialize();
>
> HelloReply reply;
> reply.message = "hello " + request.name;
>
> call.Finish(reply);
> }
> };
>
>
Wow, that's pretty neat. It looks almost as tidy as the Sync Service did. 
Thanks for posting.
The only issue I have with it is that the async approach forcing me down the 
one-class-per-call route was one of the few things I actually liked about it. 
With a service of, say, 50 methods, a class containing all the impls can grow 
tremendously, even if each call is very much separated from the others. A few 
years back we had some static analysis run over the code that showed glowing 
red complexity dots over those files, probably due to size, because they 
weren't that complex. I hope this gets better with the one-class-per-call way.

Cheers,
Stephan

PS: Amazed Microsoft uses gRPC. And open sources the results. The world we 
live in.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4143c4ac-b0cf-4b8e-acb8-bcfabdddba14%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [grpc-io] Async C++ service with multiple methods

2018-10-17 Thread Stephan Menzel
Hello Nathan,

On Monday, 15 October 2018 17:54:48 UTC+2, Nathan Prat wrote:
>
> How can you use CRTP? I tried it this way, but after `cq_->Next` you can't 
> static_cast to a templated class. Or am I missing something?
>

I now have a mixture of runtime polymorphism (a base class with a virtual 
method) and CRTP.

Basically, one super simple call root class:



class RpcCallBase {

public:
    RpcCallBase() {}
    virtual ~RpcCallBase() {}

    virtual void proceed() noexcept {
        MOOSE_ASSERT_MSG(true, "RPC implementation does not overload proceed()");
    }
};

This is what I cast the void pointer to, so that virtual dispatch gets me to 
the right type. Then, on top of this, the actual CRTP base. Like this:

// First a macro I use later
#define GRPC_NATIVE_NAME_REQUEST( name )                                      \
    void native_name_request() {                                              \
        m_service->Request##name(&m_ctx, &m_request, &m_responder, m_cq,      \
                                 m_cq, this);                                  \
    }


template< class DerivedType, class RequestType, class ResponseType >
class MyServiceCall : public RpcCallBase {

public:

    typedef MyServiceCall<DerivedType, RequestType, ResponseType> base_type;

    MyServiceCall() {
        // constructor with service specific stuff such as parent object and so on
        proceed();  // like in the example
    }

    void proceed() noexcept override {
        // Much like the example, except:
        if (m_status == CREATE) {
            m_status = PROCESS;

            // this is what the macro injects in order to fake the right
            // type in here. See below.
            static_cast<DerivedType*>(this)->native_name_request();

        } else if (m_status == PROCESS) {
            // new object of CRTP derived type
            new base_type(m_service, m_cq, m_parent);

            // CRTP to the actual work, overloaded by derived class
            grpc::Status retstat = static_cast<DerivedType*>(this)->work();
        }
        // rest of the stuff pretty much like the example except template types
    }
};

and then, each call can be implemented nicely:

// FooRequest / FooReply are placeholder request/response types for this call
class FooMethodCall : public MyServiceCall<FooMethodCall, FooRequest, FooReply> {

public:
    GRPC_NATIVE_NAME_REQUEST( FooMethod )   // I know, not perfect but it does the trick

    FooMethodCall(MyService::AsyncService *n_service,
                  ServerCompletionQueue *n_cq, MyServiceImpl *n_parent)
        : base_type(n_service, n_cq, n_parent) {
    }

    grpc::Status work() {
        // do the actual work and return a status object
        return grpc::Status::OK;
    }
};


Finally, in the async loop it looks like this:

void MyServiceImpl::HandleRpcs() {

    // Spawn a new call instance per method to serve new clients.
    // The queue takes ownership.
    new FooMethodCall(&m_service_instance, m_cq.get(), this);
    new BarMethodCall(&m_service_instance, m_cq.get(), this);

    void* tag;  // uniquely identifies a request.
    bool ok;

    while (true) {

        if (!m_cq->Next(&tag, &ok)) {
            BOOST_LOG_SEV(logger(), normal) << "service shutting down";
            break;
        }

        RpcCallBase *call = static_cast<RpcCallBase*>(tag);

        if (!ok) {
            // This seems to be the case while the queue is draining of events
            // during shutdown, so I'm gonna delete them.
            delete call;
            continue;
        }

        // hand over to the call object to do the rest
        call->proceed();
    }
}

And it works. This way I don't need any further type guessing or casting 
beyond the RpcCallBase pointer.

HTH,
Stephan

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/922f4158-87db-4d2e-a0eb-3266aee45137%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[grpc-io] Re: gRPCLB in gRPC Java 1.14

2018-10-17 Thread 'blazej.mro...@gmail.com' via grpc.io

On Tuesday, 31 July 2018 20:30:41 UTC+2, Carl Mastrangelo wrote:

> There will be upcoming documentation on the exact way to configure this, 
> but this is being announced here for interested parties to try it out and 
> answer any questions.
>
(...) and there currently isn't a load reporter.  I am considering showing 
> how to make a simple one, but there isn't an off the shelf version for use 
> yet.
>

Hi, I have a question: could you estimate (more or less) when those things are 
likely to happen? I mean more documentation on grpclb and ideas about the load 
reporter.
I'm not asking for a specific date, only whether it is likely to be within, 
say, the next 1-2 months, or more like a year?

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To post to this group, send email to grpc-io@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4f705eb8-5778-4cf2-805d-4cb09c890d8e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.