Hi, when I execute the commands to build and locally install gRPC, Protocol
Buffers, and Abseil:
$ cd grpc
$ mkdir -p cmake/build
$ pushd cmake/build
$ cmake -DgRPC_INSTALL=ON \
    -DgRPC_BUILD_TESTS=OFF \
    -DCMAKE_INSTALL_PREFIX=$MY_INSTALL_DIR \
    ../..
I got the following error:
-- The C compiler i
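For reference, $MY_INSTALL_DIR must be exported before invoking cmake
(otherwise the prefix expands to empty), and the quickstart then finishes
with a build and install; a sketch, with $HOME/.local as an example prefix:

$ export MY_INSTALL_DIR=$HOME/.local
$ make -j 4
$ make install
$ popd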
ResourceQuota right now tracks some, but as you discovered certainly not
all, of the memory used by gRPC.
The current behavior has been sufficient at Google to prevent some mishaps,
but it's certainly not bulletproof.
We'll be looking into improving this in the future.
On Wed, Jul 28, 2021, 6:
I am writing an async gRPC server, and I want to control the total memory
it may use. It looks like grpc::ResourceQuota is useful, but from checking
the places where it is called, it seems that the memory quota is only
checked when accepting a new connection.
What if a client keeps sending API calls
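For reference, a minimal sketch of attaching a quota to a C++ server via
ServerBuilder::SetResourceQuota (the quota name, cap, thread limit, and
port are all illustrative):

#include <grpcpp/grpcpp.h>

int main() {
  // Bound gRPC's tracked allocations; per the reply above, this covers
  // some, but not all, of gRPC's memory use.
  grpc::ResourceQuota quota("server_quota");   // name is illustrative
  quota.Resize(256 * 1024 * 1024);             // cap in bytes; figure is illustrative
  quota.SetMaxThreads(64);                     // also bounds sync-server thread count

  grpc::ServerBuilder builder;
  builder.SetResourceQuota(quota);
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  // builder.RegisterService(&service);        // register your async service here
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
  return 0;
}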
On Wed, Jul 28, 2021 at 1:33 AM Inian Vasanth wrote:
>
> Hi,
>
> I have written Unary and Stream server side interceptors to handle client
> requests. Our use case (at the server) is to authorise the client by looking
> at the Subject Alternate Name (Domain Name) as part of the x509 cert
> pre
In general the client authorization will be per RPC, based on call metadata.
You can also use the TLS session data (such as SAN values from the client
cert), but the check will still be per RPC, given the server-side
interceptor architecture. You might be able to optimize by caching the
authori
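To make the per-RPC flavor of this concrete, here is a sketch of reading
SAN values from the call's auth context inside a handler or interceptor
(the allowed domain is made up; the property name is the one gRPC defines
as GRPC_X509_SAN_PROPERTY_NAME):

#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>

// Per-RPC check: read the client cert's SAN values from the AuthContext
// attached to this call and compare against an allow-list.
grpc::Status CheckPeerSan(grpc::ServerContext* ctx) {
  std::shared_ptr<const grpc::AuthContext> auth = ctx->auth_context();
  for (const grpc::string_ref& san :
       auth->FindPropertyValues("x509_subject_alternative_name")) {
    if (std::string(san.data(), san.size()) == "client.example.com") {  // illustrative
      return grpc::Status::OK;
    }
  }
  return grpc::Status(grpc::StatusCode::PERMISSION_DENIED, "SAN not authorized");
}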
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md specifies
that client messages involve headers and data only, and no trailers. Thus,
there is no concept of trailers sent from the client side. However, the
reason the two examples are different is that one is at the RPC layer API, and
I don't believe we have this logic about the state change to IDLE after a
certain time of not sending RPCs on a connection. I'm forwarding this to my
teammate, who has been here much longer and can answer this question better.
On Saturday, July 24, 2021 at 3:32:54 AM UTC-4 amits...@gmail.com wrote:
>
Hi Lixin. Good questions! I can offer a high-level summary.
> I'm wondering what's the threading model behind the completion queue?
This is a bit of an oversimplification, but the C++ API's `CompletionQueue`
borrows threads from the application. Work is done when applications make a
blocking ca
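In code, that borrowing is the familiar drain loop; CallData and Proceed
below are placeholder names for whatever per-call state machine the
application uses:

#include <grpcpp/grpcpp.h>

// Placeholder per-call state machine; the names are made up.
struct CallData {
  void Proceed(bool ok);
};

// The application thread "lends" itself to gRPC by blocking in Next();
// completions are delivered on whichever thread is parked here.
void DrainQueue(grpc::CompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    static_cast<CallData*>(tag)->Proceed(ok);
  }
}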
Nice, this looks solid to me :).
On Friday, July 23, 2021 at 4:42:12 AM UTC-4 amits...@gmail.com wrote:
>
>
> > On 11 Jul 2021, at 10:37 am, Amit Saha wrote:
> >
> > Hi all,
> >
> > I am implementing a reconnection logic in my client for a bidi RPC
> method.
> >
> > This is similar to what
>
Must be a temporary issue. I see both commits fine.
On Wednesday, July 28, 2021 at 9:04:54 AM UTC-7 mark...@gmail.com wrote:
> The HEAD of the grpc repository is pointing at a missing protocol buffers
> commit:
>
> https://github.com/google/protobuf/tree/436bd7880e458532901c58f4d9d1ea23fa7edd52
Hi,
AFAIK, Istio doesn't yet support gRPC clients. I believe they have it on
the roadmap but please confirm with the Istio community.
On Sunday, July 25, 2021 at 2:11:49 AM UTC-7 tiz...@gmail.com wrote:
> Hi
> I'm looking for examples of using istio as an xds server. I want my grpc
> client to
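For what it's worth, the client side (independent of which xDS server
answers) looks roughly like this; the target name is made up, and gRPC
expects a bootstrap file pointed to by the GRPC_XDS_BOOTSTRAP environment
variable:

#include <grpcpp/grpcpp.h>

int main() {
  // The "xds:///" scheme tells gRPC to resolve the target through the
  // xDS server named in the bootstrap file (GRPC_XDS_BOOTSTRAP env var).
  auto channel = grpc::CreateChannel("xds:///my-service",  // target is illustrative
                                     grpc::InsecureChannelCredentials());
  // ... create stubs on `channel` as usual ...
  return 0;
}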
Adding the SO link for this question:
https://stackoverflow.com/questions/68491834/handle-client-side-cancellation-in-grpc-python-asyncio
In case new discussions happen there.
On Thursday, July 22, 2021 at 3:41:11 PM UTC-7 Brunston wrote:
> ...Turns out you *can* catch the asyncio.CancelledErro
The HEAD of the grpc repository is pointing at a missing protocol buffers
commit:
https://github.com/google/protobuf/tree/436bd7880e458532901c58f4d9d1ea23fa7edd52
If I go back to the last release where protocol buffers was updated, that
commit is also missing:
https://github.com/google/protobuf/tree/d7e943