[grpc-io] Compile and Install gRPC locally error

2021-07-28 Thread 欧阳昊
Hi, when I run the commands to build and locally install gRPC, Protocol 
Buffers, and Abseil:

$ cd grpc
$ mkdir -p cmake/build
$ pushd cmake/build
$ cmake -DgRPC_INSTALL=ON \
    -DgRPC_BUILD_TESTS=OFF \
    -DCMAKE_INSTALL_PREFIX=$MY_INSTALL_DIR \
    ../..

I got the following error:

-- The C compiler identification is GNU 9.3.1
-- The CXX compiler identification is GNU 9.3.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/rh/devtoolset-9/root/usr/bin/cc - 
skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/rh/devtoolset-9/root/usr/bin/c++ - 
skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
CMake Error at cmake/cares.cmake:25 (add_subdirectory):
  The source directory

/home/yh/tools/grpc/third_party/cares/cares

  does not contain a CMakeLists.txt file.
Call Stack (most recent call first):
  CMakeLists.txt:254 (include)


-- 
-- 3.17.3.0
-- Performing Test protobuf_HAVE_BUILTIN_ATOMICS
-- Performing Test protobuf_HAVE_BUILTIN_ATOMICS - Success
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
CMake Warning at cmake/ssl.cmake:55 (message):
  gRPC_SSL_PROVIDER is "module" but BORINGSSL_ROOT_DIR is wrong
Call Stack (most recent call first):
  CMakeLists.txt:257 (include)


CMake Deprecation Warning at third_party/zlib/CMakeLists.txt:1 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of off64_t
-- Check size of off64_t - done
-- Looking for fseeko
-- Looking for fseeko - found
-- Looking for unistd.h
-- Looking for unistd.h - found
-- Configuring incomplete, errors occurred!
See also "/home/yh/tools/grpc/cmake/build/CMakeFiles/CMakeOutput.log".
See also "/home/yh/tools/grpc/cmake/build/CMakeFiles/CMakeError.log".

Any suggestions would be welcome, thanks



Re: [grpc-io] Memory usage and grpc::ResourceQuota

2021-07-28 Thread 'Craig Tiller' via grpc.io
ResourceQuota right now will track some, but as you discovered certainly 
not all, of the memory used by gRPC.

The current behavior has been sufficient at Google to prevent some mishaps,
but it's certainly not bulletproof.

We'll be looking into improving this in the future.

On Wed, Jul 28, 2021, 6:17 PM 'Alex Zuo' via grpc.io <
grpc-io@googlegroups.com> wrote:

> I am writing an async grpc server, and I want to control the total memory
> it may use. It looks like grpc::ResourceQuota is useful, but from checking
> the places where it is called, it seems that the memory quota is only
> checked when accepting a new connection.
>
> What if a client keeps sending API calls through the same connection? The
> async gRPC server won't be able to handle them quickly. Where will they be
> buffered? When I say the server cannot handle them quickly, I mean
> *RequestSayHello* is not even called in the following Helloworld sample
> code.
>
> void Proceed() {
>   if (status_ == CREATE) {
>     // Make this instance progress to the PROCESS state.
>     status_ = PROCESS;
>
>     // As part of the initial CREATE state, we *request* that the system
>     // start processing SayHello requests. In this request, "this" acts as
>     // the tag uniquely identifying the request (so that different CallData
>     // instances can serve different requests concurrently), in this case
>     // the memory address of this CallData instance.
>     std::cout << "ready to accept one request" << std::endl;
>     service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
>                               this);
>



[grpc-io] Memory usage and grpc::ResourceQuota

2021-07-28 Thread 'Alex Zuo' via grpc.io
I am writing an async grpc server, and I want to control the total memory 
it may use. It looks like grpc::ResourceQuota is useful, but from checking 
the places where it is called, it seems that the memory quota is only 
checked when accepting a new connection.

What if a client keeps sending API calls through the same connection? The 
async gRPC server won't be able to handle them quickly. Where will they be 
buffered? When I say the server cannot handle them quickly, I mean 
*RequestSayHello* is not even called in the following Helloworld sample 
code.

void Proceed() {
  if (status_ == CREATE) {
    // Make this instance progress to the PROCESS state.
    status_ = PROCESS;

    // As part of the initial CREATE state, we *request* that the system
    // start processing SayHello requests. In this request, "this" acts as
    // the tag uniquely identifying the request (so that different CallData
    // instances can serve different requests concurrently), in this case
    // the memory address of this CallData instance.
    std::cout << "ready to accept one request" << std::endl;
    service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                              this);



Re: [grpc-io] Unary and Stream Interceptors - Per session authorization

2021-07-28 Thread Amit Saha
On Wed, Jul 28, 2021 at 1:33 AM Inian Vasanth  wrote:
>
> Hi,
>
> I have written Unary and Stream server-side interceptors to handle client 
> requests. Our use case (at the server) is to authorise the client by looking 
> at the Subject Alternative Name (domain name) in the X.509 cert presented 
> to the server, matching it against an accepted list. The problem is, this 
> validation happens on every client<->server interaction over a particular 
> gRPC session. We want to limit it to only once per session, during the 
> initial handshake.
>
> Is it possible to tune the interceptors to do this only on the initial 
> session handshake?

Did you mean that for streaming communication, you only want the
authnz to happen when the RPC is first established, and not on subsequent
message exchanges?





[grpc-io] Re: Unary and Stream Interceptors - Per session authorization

2021-07-28 Thread 'sanjay...@google.com' via grpc.io
In general, client authorization will be per-RPC, based on call metadata. 
You can also use the TLS session data (such as SAN values from the client 
cert), but the check will still run per RPC, given the server-side 
interceptor architecture. You might be able to optimize by caching the 
authorization decisions in the interceptor: if you are only going to use 
peer-cert SAN entries for those decisions, the interceptor can serve 
subsequent calls from the cache on a hit (see the sketch below). The 
details depend on the gRPC language you are using, but it's not clear 
which language that is.
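
For illustration, here is what that caching could look like. This is a 
minimal sketch assuming Go (the thread doesn't say which language); the 
sanAuthorizer type and its accepted list are made up for the example:

package authzcache

import (
	"context"
	"sync"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/peer"
	"google.golang.org/grpc/status"
)

// sanAuthorizer (hypothetical) authorizes peers by certificate SAN and
// caches each decision so the accepted-list check runs once per SAN.
type sanAuthorizer struct {
	allowed map[string]bool // accepted SANs, fixed at construction
	mu      sync.RWMutex
	cache   map[string]bool // SAN -> cached decision
}

func (a *sanAuthorizer) authorize(ctx context.Context) error {
	p, ok := peer.FromContext(ctx)
	if !ok {
		return status.Error(codes.Unauthenticated, "no peer info")
	}
	tlsInfo, ok := p.AuthInfo.(credentials.TLSInfo)
	if !ok || len(tlsInfo.State.PeerCertificates) == 0 {
		return status.Error(codes.Unauthenticated, "no client certificate")
	}
	for _, san := range tlsInfo.State.PeerCertificates[0].DNSNames {
		a.mu.RLock()
		decision, hit := a.cache[san]
		a.mu.RUnlock()
		if !hit {
			decision = a.allowed[san] // the real check, done once per SAN
			a.mu.Lock()
			a.cache[san] = decision
			a.mu.Unlock()
		}
		if decision {
			return nil
		}
	}
	return status.Error(codes.PermissionDenied, "SAN not in accepted list")
}

// UnaryInterceptor still runs on every RPC, per the architecture described
// above, but after the first RPC from a given cert it only pays for a map
// lookup.
func (a *sanAuthorizer) UnaryInterceptor(ctx context.Context, req interface{},
	info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	if err := a.authorize(ctx); err != nil {
		return nil, err
	}
	return handler(ctx, req)
}

Wiring it up would be grpc.NewServer(grpc.UnaryInterceptor(a.UnaryInterceptor)); 
a streaming variant via grpc.StreamInterceptor follows the same shape.
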
On Tuesday, July 27, 2021 at 8:33:24 AM UTC-7 Inian Vasanth wrote:

> Hi,
>
> I have written Unary and Stream server-side interceptors to handle client 
> requests. Our use case (at the server) is to authorise the client by 
> looking at the Subject Alternative Name (domain name) in the X.509 cert 
> presented to the server, matching it against an accepted list. The 
> problem is, this validation happens on every client<->server interaction 
> over a particular gRPC session. We want to limit it to only once per 
> session, during the initial handshake.
>
> Is it possible to tune the interceptors to do this only on the initial 
> session handshake?
>



[grpc-io] Re: Metadata and Header/Trailer in Go applications

2021-07-28 Thread 'Zach Reyes' via grpc.io
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md specifies 
that client messages involve headers and data only, and no trailers. Thus, 
there is no concept of trailers sent from the client side. However, the 
reason the two examples are different is that one is at the RPC-layer API 
and the other is at the ServerStream-layer API.
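
To make the asymmetry concrete, here is a minimal grpc-go sketch of both 
directions, assuming the stock helloworld stubs; the metadata keys are 
made up:

package mdexample

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"

	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

// Client side: outgoing metadata rides on the context (headers only, per
// the protocol doc above); response header/trailer metadata is read back
// through the Header and Trailer call options.
func call(client pb.GreeterClient) {
	ctx := metadata.AppendToOutgoingContext(context.Background(),
		"client-id", "demo") // made-up key/value

	var header, trailer metadata.MD
	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "gRPC"},
		grpc.Header(&header), grpc.Trailer(&trailer))
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply.GetMessage(), header, trailer)
}

// Server side: read the client's metadata from the incoming context and
// send header/trailer metadata back.
func sayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		log.Println("client metadata:", md)
	}
	grpc.SendHeader(ctx, metadata.Pairs("server-header", "h"))
	grpc.SetTrailer(ctx, metadata.Pairs("server-trailer", "t"))
	return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}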

On Thursday, June 17, 2021 at 3:59:41 AM UTC-4 amits...@gmail.com wrote:

> Hi all - I will use meta-data to mean generic information that is
> usually sent and received besides the request/response.
>
> So far, this is what I think is the case
> (referring to
> https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md
> ):
>
> - For clients to send any kind of meta-data, use context + metadata to
> create a new context.
> - For servers to send back any kind of meta-data, use headers and
> trailers, which the client can access using the Header and Trailer call
> options.
>
> Is there any reason why this is so - why couldn't we use the Header
> (and Trailers) to send the meta-data to the server as well instead of
> creating a new context?
>
> Thanks,
> Amit.
>



[grpc-io] Re: Go - IDLE_TIMEOUT support

2021-07-28 Thread 'Zach Reyes' via grpc.io
I don't believe we have this logic of moving a connection to IDLE after it 
has gone a certain amount of time without RPCs. I'm forwarding this to my 
teammate, who has been here much longer and can answer this question better.
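
In the meantime, one way to check empirically which states a grpc-go 
ClientConn reports is to log its transitions. A minimal sketch, assuming 
an already-dialed *grpc.ClientConn (note GetState and WaitForStateChange 
are marked experimental):

package connstate

import (
	"context"
	"log"

	"google.golang.org/grpc"
)

// watchState logs every connectivity-state transition of conn until ctx
// is done, e.g. to see whether the channel ever reports IDLE after a
// period of inactivity.
func watchState(ctx context.Context, conn *grpc.ClientConn) {
	state := conn.GetState()
	log.Println("state:", state)
	// WaitForStateChange blocks until the state differs from `state`,
	// or returns false when ctx expires.
	for conn.WaitForStateChange(ctx, state) {
		state = conn.GetState()
		log.Println("state:", state)
	}
}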

On Saturday, July 24, 2021 at 3:32:54 AM UTC-4 amits...@gmail.com wrote:

> The connectivity semantics documentation [1] states that a connection
> will move from READY to IDLE if no active requests have been made for
> IDLE_TIMEOUT duration.
> However, in Go, i don't see such a configuration, is this state used
> in Go grpc implementation or is this being implemented? [2]
>
> Thanks,
> Amit
>
> [1] 
> https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md
> [2] https://github.com/grpc/grpc-go/pull/4613
>



[grpc-io] Re: What's the Threading Model behind Completion Queue?

2021-07-28 Thread 'AJ Heller' via grpc.io
Hi Lixin. Good questions! I can offer a high-level summary.

> I'm wondering what's the threading model behind the completion queue?

This is a bit of an oversimplification, but the C++ API's `CompletionQueue` 
borrows threads from the application. Work is done when applications make a 
blocking call to `CompletionQueue::Next`. See the API docs here 
https://grpc.github.io/grpc/cpp/classgrpc_1_1_completion_queue.html#a86d9810ced694e50f7987ac90b9f8c1a.

> Who produces items to the completion queue?

Applications do, for the most part. Some of this is covered in the C++ 
Asynchronous-API tutorial: https://grpc.io/docs/languages/cpp/async/. 

> What is between the completion queue and the network?

Quite a few things - the majority of gRPC sits between them. At a high 
level, there's the transport layer, handling things like the HTTP/2 and 
Cronet transports. Then there are filters that both filter and augment 
calls, adding things like max_age filtering and load balancing for client 
channels. The bottom-most layer is called iomgr, providing things like 
network connectivity and timers.

On Thursday, July 22, 2021 at 11:20:02 PM UTC-7 Lixin Wei wrote:

> I'm wondering what's the threading model behind the completion queue?
>
> Who produces items to the completion queue? What is between the completion 
> queue and the network?
>



[grpc-io] Re: Go - Reconnection in bidi streaming RPC communication

2021-07-28 Thread 'Zach Reyes' via grpc.io
Nice, this looks solid to me :).

On Friday, July 23, 2021 at 4:42:12 AM UTC-4 amits...@gmail.com wrote:

>
>
> > On 11 Jul 2021, at 10:37 am, Amit Saha  wrote:
> > 
> > Hi all,
> > 
> > I am implementing a reconnection logic in my client for a bidi RPC 
> method.
> > 
> > This is similar to what
> > 
> https://stackoverflow.com/questions/66353603/correct-way-to-perform-a-reconnect-with-grpc-client
> > seeks to do. The summary version is:
> > 
> > If Recv() returns an error other than io.EOF, reconnect/recreate the
> > streaming connection.
> > 
> > However, this logic doesn't quite seem straightforward with the Send()
> > method (I think). Unlike the Recv() method, the Send() method's error
> > in this scenario is an io.EOF,
> > not a "transport is closing" error. Thus, it's tricky to assume that
> > the error means we should create a new stream.
>
> I think this may work:
>
> If we get an io.EOF error from the Send() method, and we call the 
> RecvMsg() method, we can make use of the error value returned by this 
> method to deduce an abnormal termination:
>
> err := stream.Send(&r)
> if err == io.EOF {
>     var m responseType
>     err := stream.RecvMsg(&m)
>     if err != nil {
>         // Implement stream recreation logic
>     }
>     ..
> }
>
>
>
>
>
> > 
> > What are the community's thoughts on this?
> > 
> > Thanks,
> > Amit.
>
>
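
Pulling the thread together, here is a condensed sketch of the reconnect 
loop. Everything in it is hypothetical scaffolding: pb.EchoClient, its bidi 
Chat method, and the message types are made up, and a fixed one-second 
sleep stands in for a real backoff policy:

package reconnect

import (
	"context"
	"io"
	"log"
	"time"

	pb "example.com/echo" // hypothetical generated stubs
)

// runWithReconnect recreates the stream whenever it breaks, as discussed
// above, and returns after one clean round trip.
func runWithReconnect(ctx context.Context, client pb.EchoClient) error {
	for {
		stream, err := client.Chat(ctx) // create (or recreate) the stream
		if err != nil {
			return err // channel-level failure; let the caller decide
		}
		if err := pump(stream); err == nil {
			return nil
		}
		log.Println("stream broke, recreating")
		time.Sleep(time.Second) // stand-in for a real backoff policy
	}
}

// pump does one send/receive round trip and surfaces the real error.
func pump(stream pb.Echo_ChatClient) error {
	if err := stream.Send(&pb.EchoRequest{Message: "ping"}); err == io.EOF {
		// Send reports a broken stream as io.EOF; per the thread, call
		// RecvMsg to obtain the actual status before reconnecting.
		var m pb.EchoResponse
		return stream.RecvMsg(&m)
	} else if err != nil {
		return err
	}
	_, err := stream.Recv()
	return err
}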



[grpc-io] Re: Missing protocol buffers submodule commit

2021-07-28 Thread 'Srini Polavarapu' via grpc.io
Must be a temporary issue. I see both commits fine.

On Wednesday, July 28, 2021 at 9:04:54 AM UTC-7 mark...@gmail.com wrote:

> The head of the grpc repository is pointing at a missing protocol buffers 
> commit:
>
>
> https://github.com/google/protobuf/tree/436bd7880e458532901c58f4d9d1ea23fa7edd52
>
> If I go back to the last release where protocol buffers was updated, that 
> commit is also missing:
>
>
> https://github.com/google/protobuf/tree/d7e943b8d2bc444a8c770644e73d090b486f8b37
>
> Is there something happening with the protocol buffers repository?
>
> Mark
>



[grpc-io] Re: grpc proxyless using istio

2021-07-28 Thread 'Srini Polavarapu' via grpc.io
Hi,

AFAIK, Istio doesn't yet support proxyless gRPC clients. I believe they 
have it on the roadmap, but please confirm with the Istio community.
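
For the gRPC side of the wiring, here is a minimal proxyless grpc-go client 
sketch. The assumptions: whatever xDS server you end up with is described 
by the bootstrap file named in GRPC_XDS_BOOTSTRAP, and "my-service" is a 
made-up target that must match a listener resource that server serves:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	_ "google.golang.org/grpc/xds" // registers the xds:/// resolver

	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

func main() {
	// The xds resolver fetches routing configuration from the management
	// server configured in the file pointed to by $GRPC_XDS_BOOTSTRAP.
	conn, err := grpc.Dial("xds:///my-service",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	reply, err := pb.NewGreeterClient(conn).SayHello(
		context.Background(), &pb.HelloRequest{Name: "xds"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(reply.GetMessage())
}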

On Sunday, July 25, 2021 at 2:11:49 AM UTC-7 tiz...@gmail.com wrote:

> Hi
> I'm looking for examples of using Istio as an xDS server. I want my gRPC
> client to get its routing configuration from that server. Does anyone know 
> of such an example?
>
> Thanks!
>



[grpc-io] Re: In python asyncio grpc server, handling a client-triggered cancellation of an RPC on the server-side

2021-07-28 Thread 'Lidi Zheng' via grpc.io
Adding the SO link for this question: 
https://stackoverflow.com/questions/68491834/handle-client-side-cancellation-in-grpc-python-asyncio

In case new discussions happen there.

On Thursday, July 22, 2021 at 3:41:11 PM UTC-7 Brunston wrote:

> ...Turns out you *can* catch the asyncio.CancelledError raised by grpc 
> core. That should work for my current use case. Any other suggestions that 
> can shed light on the shared context would still be helpful!
> Brunston
>
> On Thursday, July 22, 2021 at 2:51:03 PM UTC-7 Brunston wrote:
>
>> Hi all! I have a question about a Python asyncio gRPC server:
>>
>> How can I perform some server-side action (e.g., cleanup) based on a 
>> cancellation of an RPC from the client?
>>
>> In my microservice, I have an asyncio gRPC server whose main RPCs are 
>> bidirectional streams.
>>
>> On the client side (which is also using asyncio), when I cancel 
>> something, it raises an asyncio.CancelledError which is caught and not 
>> reraised by the grpc core:
>>
>>
>> https://github.com/grpc/grpc/blob/master/src/python/grpcio/grpc/_cython/_cygrpc/aio/server.pyx.pxi#L679
>> except asyncio.CancelledError:
>>     _LOGGER.debug('RPC cancelled for servicer method [%s]',
>>                   _decode(rpc_state.method()))
>>
>> So I cannot rely on catching the asyncio.CancelledError in my own code, 
>> because it's caught beforehand and not reraised.
>>
>> The shared context is supposed to contain information about whether the 
>> RPC was cancelled on the client side: the client calls .cancel() on the 
>> RPC call, and the server can see whether it was cancelled by calling 
>> .cancelled():
>>
>> https://grpc.github.io/grpc/python/grpc_asyncio.html#shared-context
>>
>> abstract cancel()
>>     Cancels the RPC. Idempotent and has no effect if the RPC has already
>>     terminated. Returns: a bool indicating whether the cancellation was
>>     performed. Return type: bool
>>
>> abstract cancelled()
>>     Returns True if the RPC is cancelled. The RPC is cancelled when the
>>     cancellation was requested with cancel(). Returns: a bool indicating
>>     whether the RPC is cancelled. Return type: bool
>>
>> However, this shared context is not attached to the context variable 
>> given to the RPC on the server side by the gRPC generated code. (I cannot 
>> run context.cancelled() or context.add_done_callback; they're not 
>> present)
>>
>> Thoughts? Much appreciated!
>>
>> -Brunston
>>
>



[grpc-io] Missing protocol buffers submodule commit

2021-07-28 Thread Mark Fine
The head of the grpc repository is pointing at a missing protocol buffers
commit:

https://github.com/google/protobuf/tree/436bd7880e458532901c58f4d9d1ea23fa7edd52

If I go back to the last release where protocol buffers was updated, that
commit is also missing:

https://github.com/google/protobuf/tree/d7e943b8d2bc444a8c770644e73d090b486f8b37

Is there something happening with the protocol buffers repository?

Mark
