[grpc-io] Re: [gRPC-C++] Using channelZ with OpenTelemetry metrics

2024-05-15 Thread 'yas...@google.com' via grpc.io
Hey Ben, what kind of metrics are you looking for?

We already have gRPC metrics exposed via OpenTelemetry. Please take a look 
at -
https://github.com/grpc/proposal/blob/master/A66-otel-stats.md
https://github.com/grpc/proposal/blob/master/A78-grpc-metrics-wrr-pf-xds.md
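
Wiring up the OpenTelemetry plugin looks roughly like this (a minimal sketch; 
check grpcpp/ext/otel_plugin.h for the exact API -- `meter_provider` here is 
assumed to be a MeterProvider you've already configured with the 
OpenTelemetry C++ SDK):

#include <grpcpp/ext/otel_plugin.h>

// Registers the gRPC OpenTelemetry plugin globally, so all channels and
// servers in the process record the A66 metrics to `meter_provider`.
grpc::OpenTelemetryPluginBuilder otel_builder;
otel_builder.SetMeterProvider(std::move(meter_provider));
absl::Status status = otel_builder.BuildAndRegisterGlobal();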

On Wednesday, May 15, 2024 at 9:53:09 AM UTC-7 Ben Harkins wrote:

> Hi all. I'm currently tasked with improving general telemetry support in a 
> C++ library, where one of the primary goals is to expose some of gRPC's 
> metrics to the top-level application via OpenTelemetry.
>
> One proposed idea was to expose channelZ's metrics to OpenTelemetry, 
> however, I'm not familiar enough with how channelZ operates to know if this 
> is fundamentally sound. Does it seem feasible to access information from 
> the channelZ service through OpenTelemetry instruments? I get the feeling 
> that this may be an unusual use-case based on the research I've done so far.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/071d8e9d-794b-42b5-a67b-f04ec7e1d972n%40googlegroups.com.


[grpc-io] L116: C++-Core: Loosen behavior of GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA

2024-04-25 Thread 'yas...@google.com' via grpc.io
Hey all, 

https://github.com/grpc/proposal/pull/429 is a proposal to modify the 
behavior of GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA to throttle pings to a 
frequency of one per minute instead of completely blocking pings when too 
many pings have been sent without data/header frames being sent.
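
For reference, a minimal sketch of how the arg is set on a client channel 
today (values illustrative; 0 currently means "no limit"):

grpc::ChannelArguments args;
// With the proposal, exceeding the limit would throttle pings to one per
// minute instead of blocking them entirely.
args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);
auto channel = grpc::CreateCustomChannel(
    "localhost:50051", grpc::InsecureChannelCredentials(), args);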

Comments are welcome!

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/70c872b8-985a-4280-a8fd-f145ead969fbn%40googlegroups.com.


[grpc-io] Re: [gRPC-C++] grpc-c++ client seems not to index custom headers to http2 dynamic-table?

2024-04-09 Thread 'yas...@google.com' via grpc.io
gRPC does support HPACK. By grpc-v1100, do you mean gRPC 1.1? That seems 
like a really old version. Please use the latest release instead.
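
For context, custom headers are sent as metadata from the C++ client like 
this (key/value illustrative; metadata keys must be lowercase):

grpc::ClientContext context;
context.AddMetadata("custom-metadata", "fixed-value");
// ... then issue the RPC with this context; whether the header gets an
// HPACK dynamic-table index is decided inside the transport.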

On Friday, March 22, 2024 at 8:53:07 AM UTC-7 A-SaltedFish wrote:

> I want to add a custom header in a request with a C++ client using 
> "Custom-Metadata", and the header-value pair is fixed.
> According to HTTP/2, the header should be cached by client and server, and 
> only an HPACK-indexed header represented by an index number should be 
> transferred. But when I check the tcpdump data, I found the header was not 
> indexed at all.
> The version I used: grpc-v1100.
> I checked the source code, and I found that grpc does not support custom 
> headers being indexed.
> I wonder why gRPC C++ does not implement this, and is there any project to 
> add support for indexing custom headers?
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/954c31dd-eedd-4018-b0b1-c4d1d31a1202n%40googlegroups.com.


[grpc-io] Re: gRPC build with gcc on macOS fails because of boringssl-with-bazel

2024-03-22 Thread 'yas...@google.com' via grpc.io
Do you have `openssl` installed?

If that's the case, you don't need boringssl, you can simply specify 
`-DgRPC_SSL_PROVIDER="package"` to use the installed openssl version.

See https://github.com/grpc/grpc/blob/master/cmake/ssl.cmake 
and https://cmake.org/cmake/help/v3.6/module/FindOpenSSL.html for more 
information.
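
As a sketch, with a Homebrew-installed openssl the configure step would look 
something like this (paths illustrative):

cmake -B release -DgRPC_SSL_PROVIDER=package \
      -DOPENSSL_ROOT_DIR=/opt/homebrew/opt/openssl@3 \
      <your other flags> -S ../

OPENSSL_ROOT_DIR is the standard FindOpenSSL hint, in case CMake doesn't 
find the Homebrew copy on its own.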

On Monday, January 22, 2024 at 10:48:53 AM UTC-8 Dan Cohen wrote:

> Hello,
>
> I need to build gRPC with gcc on mac. When trying to do so, 
> boringssl-with-basel compilation fails with:
>
>
>
>
> [ 12%] Building ASM object 
> third_party/boringssl-with-bazel/CMakeFiles/crypto.dir/apple-aarch64/crypto/chacha/chacha-armv8-apple.S.o
> clang: error: unsupported option '--noexecstack'
> make[2]: *** 
> [third_party/boringssl-with-bazel/CMakeFiles/crypto.dir/apple-aarch64/crypto/chacha/chacha-armv8-apple.S.o] Error 1
> make[1]: *** [third_party/boringssl-with-bazel/CMakeFiles/crypto.dir/all] Error 2
>
> This is what my cmake configuration of gRPC looks like
> (running from grpc/cmake/):
> cmake -B release -DCMAKE_EXE_LINKER_FLAGS="-ld_classic" 
> -DCMAKE_BUILD_TYPE=Release 
> -DgRPC_INSTALL=ON -DgRPC_ABSL_PROVIDER=package \
> -DgRPC_PROTOBUF_PROVIDER=package -DCMAKE_CXX_STANDARD=17 
> -DCMAKE_CXX_COMPILER=/opt/homebrew/bin/g++-13 
> \
> -DCMAKE_C_COMPILER=/opt/homebrew/bin/gcc-13 
> -DCMAKE_INSTALL_PREFIX=../../install -DCMAKE_PREFIX_PATH=../../install -S 
> ../
>
> I've tried several things, including cloning boringssl-with-bazel into a 
> separate directory and building it locally (not as part of grpc), but this 
> also fails for different reasons.
>
> Is there a standard way to build gRPC on macOS with gcc? Or to disable the 
> offending package from the boringssl build?
>
> Thanks,
> Dan
>
> Versions
> -
> cmake: 3.28.0
> macOS: Sonoma 14.2.1 (23C71)
> Xcode: 15.1.0.0.1.1700200546
> gcc: gcc-13 (Homebrew GCC 13.2.0) 13.2.0
> clang: Apple clang version 15.0.0 (clang-1500.0.40.1)
> grpc: v1.60.0
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/2da8a6ce-8231-4d7b-8202-92376dee3eban%40googlegroups.com.


[grpc-io] Re: Client receives out-of-order stream

2024-03-18 Thread 'yas...@google.com' via grpc.io
This might be something to raise with maintainers 
of https://github.com/scalapb/zio-grpc. I'm not sure if they monitor this 
forum.

On Tuesday, March 12, 2024 at 6:54:04 PM UTC-7 Xiaokun Luan wrote:

> Thanks for your reply, here is a minimal example: 
> https://drive.google.com/file/d/1Eew2sOhjSt2tCBEupE1glo6PALYkB0t1/view
>
> Running this example on my machine gives me something like:
>
> Expected: [0, 1, 2, ...]
> Got:   [0, 1, 2, ...]
> Diff:   [x, y, ...]
>
> On Wednesday, March 13, 2024 at 01:11:52 UTC+8, the following was written:
>
>> Are you sending this sequence of data on the same stream? Using a 
>> bidirectional or server-streaming RPC, for example?
>>
>> This should indeed not be happening. Without knowing more, I would guess 
>> that the data is being accidentally written to the stream in a bad order 
>> when this happens. Otherwise, a reproduction may help.
>>
>> On Friday, March 1, 2024 at 5:55:59 AM UTC-8 Xiaokun Luan wrote:
>>
>>> Hi all, I have a server implemented in Scala using zio-grpc, and a 
>>> client in Python.
>>>
>>> I found that sometimes the stream received by the client is out of 
>>> order. For example, the sequence of data sent by the server is [1, 2, 3, 4, 
>>> 5], but those received by the client are [1, 2, 4, 3, 5]. Though I'm new to 
>>> grpc, I don't think this is an expected behavior.
>>>
> According to my testing results, this happens rarely, and usually only 
> one or two pairs of adjacent items are swapped. I have checked and made 
> sure that the sequence is only scrambled after sending, so the error should 
> not be on the server side.
>
> I'm still working on a minimal example, and I can't help wondering: has 
> anyone had a similar situation? Is there anything that could go wrong? Or 
> maybe I didn't do it the right way? Any help or advice would be appreciated.
>>>
>>> Below is some relevant information:
>>> OS: Ubuntu 22.04
>>> Python version: 3.10.13
>>> grpcio version: 1.51.1  (couldn't find 1.50.1)
>>> grpcio-tools version: 1.51.1
>>> grpc version: 4.25.3
>>> Scala version: 2.13.13
>>> grpc-netty version: 1.50.1
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a60d8f6d-60a0-4c27-9ff5-f3eb8ea4fecbn%40googlegroups.com.


[grpc-io] Re: Can CVE-2023-33953 be solved by limiting the length of the HTTP header on an earlier version?

2024-03-18 Thread 'yas...@google.com' via grpc.io
I agree with Craig's recommendation to try and upgrade. There have been 
tons of bug fixes and feature upgrades since grpc 1.0.0.

On Monday, March 11, 2024 at 8:48:41 AM UTC-7 Harminder Singh wrote:

> I need a solution for the question asked in this ticket:
> https://github.com/grpc/grpc/issues/34251
>
> Any help is much appreciated.
> Regards
> Harminder
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1f58f1be-477d-493d-b272-29d52b2cd018n%40googlegroups.com.


[grpc-io] Re: Need basic help with unexplained steps building on Windows

2024-03-18 Thread 'yas...@google.com' via grpc.io
Running `cmake --help` should give you a list of the available generators.

If you have, for example, Visual Studio 2022 installed, you should see a 
line like "Visual Studio 17 2022" or something like that. That should work 
as well. Very likely, it will work even without overriding the default 
generator (`-G` option).

You can also use bazel as a build system if you prefer.
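
So the usual flow is something like this (generator name illustrative -- use 
whatever `cmake --help` lists for your installation):

cmake --help                          # the end of the output lists generators
cmake .. -G "Visual Studio 17 2022"   # or omit -G to use the default
cmake --build . --config Release

If the build keeps picking an old Windows SDK, I believe the Visual Studio 
generators also honor -DCMAKE_SYSTEM_VERSION=10.0.22000.0 to request a 
specific SDK version, though I haven't verified that on your exact setup.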

On Thursday, March 7, 2024 at 10:10:08 AM UTC-8 Kevin Mendel wrote:

> Hi. 
>
> (Sorry for not getting back right away. Work took some turns.)
>
> BUILDING.md is where I am starting from. 
> This is what I have done so far, and then I am blocked by build failures.
>
> md .build
> cd .build
> cmake .. -G "Visual Studio 16 2019"
> cmake --build . --config Release
>
> The build fails. There are *168 *errors all basically the same, but from 
> different files:
>
> 19>###HIDING###\grpc\third_party\boringssl-with-bazel\src\crypto\asn1\../internal.h(136,1):
>  
> fatal error C1083: Cannot open include file: 'stdalign.h': No such file or 
> directory
>
> I have sleuthed this to the point where I feel sure I am building with the 
> wrong Windows SDK.
>
> 19>C:\Program Files (x86)\Windows 
> Kits\10\Include\10.0.17763.0\ucrt\corecrt_memory.h(76,5): warning C5105: 
> macro expansion producing 'defined' has undefined behavior
>
> 10.0.17763.0 doesn't have stdalign.h.
> However, I have 10.0.22000.0 installed.
> That version does have stdalign.h.
> And various things I have read on the web indicate that I need to use 
> 10.0.22000.0 or later.
>
> So I am supposing that if I can induce grpc to build with Windows SDK 
> 10.0.22000.0 I will solve this problem.
> *But how???*
>
> Thanks as always,
> Kevin
>
> On Wednesday, March 6, 2024 at 4:34:57 AM UTC-5 Tony Newell wrote:
>
>> Instructions for building on Windows are here: 
>> https://github.com/grpc/grpc/blob/v1.62.0/BUILDING.md
>>
>> How are you intending to use gRPC on Windows? If with C++, then I think 
>> you need to do the build as instructed above; if with other languages (e.g. 
>> Python, Java or C#) then there are prebuilt packages that you can use.
>>
>> On Wednesday 6 March 2024 at 00:06:41 UTC Kevin Mendel wrote:
>>
>>> Hi. 
>>> I'm new to gRPC. Trying to build gRPC on Windows to evaluate for a 
>>> product.
>>>
>>> I've been writing software for 30 years, but no one has experience with 
>>> everything. 
>>> And so much of BUILDING.md is going right over my head -- I've never 
>>> used CMAKE in my life. Also, I swear many of the instructions are for Linux 
>>> and I guess I am supposed to infer what to do on Windows, except I can't. 
>>>
>>> So if anyone is around for noob-level questions, I would greatly 
>>> appreciate. 
>>>
>>> Just to explain my experience so far. 
>>>
>>> First thing I tried is the vcpkg approach. But this failed so badly I 
>>> decided it wasn't prudent to go that way. 
>>>
>>> I got farther by downloading the repo and using CMAKE. 
>>> But all the builds are blowing up with fatal errors that stdalign.h 
>>> cannot be found. 
>>> It's building against WinSDK 10.0.17763.0 -- which I know is wrong. 
>>> I have 10.0.22000.0 installed. 
>>> But I have tried a dozen different things and I cannot get the build to 
>>> use 10.0.22000.0. 
>>>
>>> It seems to be building something though.
>>> I'd like to use the product of the build if I could -- maybe it's 
>>> sufficient. 
>>> But the build artifacts I want to use is another thing that's 
>>> unexplained. 
>>> I am supposed to "install" grpc. 
>>> But that's also unexplained.
>>>
>>> So here I am, fairly severely stuck. 
>>>
>>> If you ask me what I want to do, it is "build code to use grpc". YOU 
>>> tell ME how to get there, I don't have a preference. 
>>>
>>> Thanks in advance.
>>> Kevin 
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/dba581bf-9940-413e-adcc-7f3373f0787en%40googlegroups.com.


[grpc-io] gRFC A79: Non-per-call Metrics Architecture

2024-02-23 Thread 'yas...@google.com' via grpc.io
https://github.com/grpc/proposal/pull/421 
 is a gRFC for describing a 
cross-language architecture for collecting non-per-call metrics.

Feedback welcome.
- Yash

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/82404319-a9ba-48ce-8929-8e9ac446f699n%40googlegroups.com.


[grpc-io] Re: Facing issue to use protobuf in dot net core 6

2024-01-02 Thread 'yas...@google.com' via grpc.io
Are you using https://github.com/protobuf-net/protobuf-net? If so, you 
might need to raise an issue there.

On Wednesday, December 27, 2023 at 10:35:11 PM UTC-8 Utpal Dutta wrote:

> Hello,
> I am using protobuf with .NET Core 6. I am creating CRUD operations 
> through gRPC communication. In my get response I have some nullable 
> properties (string, datetime).
> Now I am facing an issue when a nullable datetime property is returned: the 
> datetime is returned as a JSON object, and the same happens for nullable 
> strings, whereas I want the response as a plain datetime and string value. 
> Can anyone please help to solve this issue?
>
> My proto file looks like below:
>
> syntax = "proto3";
>
> import "google/protobuf/timestamp.proto";
> import "google/protobuf/wrappers.proto";
>
> message GetSiteResponse {
>   int32 id = 1;
>   string name = 2;
>   google.protobuf.StringValue description = 3;
>   bool deleted = 4;
>   int32 created_user = 5;
>   google.protobuf.Timestamp created_date = 6;
>   google.protobuf.Int32Value modified_user = 7;
>   google.protobuf.Timestamp modified_date = 8;
> }
>
> ---
>
> My response is coming in the below format:
>
> {
>   "id": 1,
>   "name": "name20",
>   "description": { "value": "desc20" },
>   "deleted": false,
>   "created_user": 1,
>   "created_date": { "seconds": "1702460720", "nanos": 414000000 },
>   "modified_user": { "value": 1 },
>   "modified_date": null
> }
>
>
> ---
>
> But I want my response in the below format:
>
> {
>   "id": 1,
>   "name": "name20",
>   "description": "desc20",
>   "deleted": false,
>   "created_user": 1,
>   "created_date": "2023-12-13T09:45:20.414Z",
>   "modified_user": 1,
>   "modified_date": null
> }
>
> ---
>
> Thanks in advance
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/addd2eba-bd49-4e70-9ab3-157a69ec3d5cn%40googlegroups.com.


[grpc-io] Re: Looking for organizational/process best practices

2024-01-02 Thread 'yas...@google.com' via grpc.io
This sounds like a topic suited for https://groups.google.com/g/protobuf

On Wednesday, December 27, 2023 at 6:20:20 PM UTC-8 Frederic Marand (FGM) 
wrote:

> Hello. 
>
> After teaching a course on protobuf and gRPC in Go, I’ve had requests for 
> best organizational practices for the use of protobufs (and gRPC) at some 
> degree of scale, and this does not appear to be something that is covered 
> in the protobuf.dev and grpc.io sites, as opposed to the technical best 
> practices.
>
> Things like: 
> - how do you split your protobufs in packages/directories ? 
> - what kind of common fields or custom options (e.g. validators) should 
> one add ? 
> - How do you store your .proto files: isolated repo ? all-projects 
> monorepo ? 
> - And how should you commit your generated code per language ? One repo 
> per language, language directories in the isolated protobuf repos, vendored 
> in each project, or just generated on the fly ? 
> - Should you always include max items count for responses containing 
> repeated items ?
> - When do you switch paging from an id group to a timestamp, or a Bloom 
> filter ?
>
> Basically, all the questions a team is asking themselves to put these 
> technologies in practice once they know how they technically work but are 
> still green on actual production use. 
>
> Any pointers to resources welcome !
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f356c762-7aea-4d64-9715-eaf8767f6b2bn%40googlegroups.com.


[grpc-io] Re: Async C++ server w/multi-threading approach

2023-10-26 Thread 'yas...@google.com' via grpc.io
> C++ gRPC team is working towards or maybe putting more effort in 
perfecting/optimizing the callback API approach?
Yes

> 1.- By using the callback API approach, will we be able to serve 
different users concurrently the same way we do with our current 
implementation?
Yes
> 2.- Will we need to implement a threading logic like the one we have, or 
is not needed?
Not needed with the C++ callback API.
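
For reference, a unary handler in the callback API looks roughly like this 
(the "Echo" proto names are illustrative); gRPC invokes the reactor on its 
own internal threads, so the application doesn't create a thread pool or 
poll completion queues:

class EchoServiceImpl final : public Echo::CallbackService {
  grpc::ServerUnaryReactor* SayEcho(grpc::CallbackServerContext* context,
                                    const EchoRequest* request,
                                    EchoResponse* response) override {
    response->set_message(request->message());
    auto* reactor = context->DefaultReactor();
    reactor->Finish(grpc::Status::OK);
    return reactor;
  }
};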

On Wednesday, October 25, 2023 at 6:21:28 PM UTC-7 Pedro Alfonso wrote:

> Hi Yas,
>
> First of all, thanks for coming back to us.
> That's a really important comment, and please correct us if we are wrong: 
> our understanding is that the C++ gRPC team is working towards, or maybe 
> putting more effort into, perfecting/optimizing the callback API approach? 
> And by the way, we also agree that it's easier to use.
>
> Kindly help us with these additional questions:
>
> 1.- By using the callback API approach, will we be able to serve different 
> users concurrently the same way we do with our current implementation?
> 2.- Will we need to implement a threading logic like the one we have, or 
> is not needed?
>
> Thanks in advance.
>
> Regards,
>
> Pedro
> On Wednesday, October 25, 2023 at 1:17:23 PM UTC-5 yas...@google.com 
> wrote:
>
>> We have been recommending using the C++ callback API instead of the 
>> completion queue based API since it's easier to use. All performance 
>> optimizations that we are working on are targeting the callback API.
>>
>> On Thursday, October 19, 2023 at 8:03:42 AM UTC-7 Pedro Alfonso wrote:
>>
>>> Hello,
>>>
>>> First let me explain what we have in our C++ gRPC Async server codebase:
>>>
>>> - We have 2 unary based response RPCs.
>>> - And we have 2 stream based response RPCs which will cover over 95% of 
>>> the client's API consumption, meaning they are really important to our 
>>> streaming based implementation.
>>>

[grpc-io] Re: Async C++ server w/multi-threading approach

2023-10-25 Thread 'yas...@google.com' via grpc.io
We have been recommending using the C++ callback API instead of the 
completion queue based API since it's easier to use. All performance 
optimizations that we are working on are targeting the callback API.

On Thursday, October 19, 2023 at 8:03:42 AM UTC-7 Pedro Alfonso wrote:

> Hello,
>
> First let me explain what we have in our C++ gRPC Async server codebase:
>
> - We have 2 unary based response RPCs.
> - And we have 2 stream based response RPCs which will cover over 95% of 
> the client's API consumption, meaning they are really important to our 
> streaming based implementation.
>
> From the 2 stream based response RPCs, below one is the most critical to 
> us:
>
> // Inner class StreamAssetNodes
> class StreamAssetNodes : public RequestBase {
> public:
> StreamAssetNodes( AsyncAssetStreamerManager& owner ) : RequestBase( owner 
> ), ownerClass( owner ) {
> owner_.grpc().service_.RequestStreamAssetNodes(
> &context_, &stream_, cq(), cq(), in_handle_.tag( Handle::Operation::
> CONNECT, [this, &owner]( bool ok, Handle::Operation /* op */ ) {
> LOG_DEBUG << "\n" + me( *this )
> << "\n- Processing a new connect from " << context_.peer() << "\n"
> << endl;
> cout << "\n" + me( *this )
> << "\n- Processing a new connect from " << context_.peer() << "\n"
> << endl;
>
> if ( !ok ) [[unlikely]] {
> LOG_DEBUG << "The CONNECT-operation failed." << endl;
> cout << "The CONNECT-operation failed." << endl;
> return;
> }
>
> // Creates a new instance so the service can handle requests from a new 
> client
> owner_.createNew( owner );
> // Reads request's parameters
> readNodeIds();
> } ) );
> }
>
> private:
> // Objects and variables
> AsyncAssetStreamerManager& ownerClass;
> ::Illuscio::AssetNodeIds request_;
> ::Illuscio::AssetNodeComponent reply_;
> ::grpc::ServerContext context_;
> ::grpc::ServerAsyncReaderWriter< ::Illuscio::AssetNodeComponent,
> ::Illuscio::AssetNodeIds > stream_ { &context_ };
>
> vector<string> nodeids_vector;
> // Contains mapping for all the nodes of a set of assets
> json assetsNodeMapping;
> // Contains mapping for all the nodes of a particular asset
> json assetNodeMapping;
> ifstream nodeFile;
> // Handle for messages coming in
> Handle in_handle_ { *this };
> // Handle for messages going out
> Handle out_handle_ { *this };
>
> int fileNumber = 0;
> const int chunk_size = 16 * 1024;
> char buffer[16 * 1024];
>
> // Methods
>
> void readNodeIds() {
> // Reads RPC request parameters
> stream_.Read( &request_, in_handle_.tag( Handle::Operation::READ, [this]( 
> bool ok, Handle::Operation op ) {
> if ( !ok ) [[unlikely]] { return; }
>
> // Assigns the request to the nodeids vector
> nodeids_vector.assign( request_.nodeids().begin(), request_.nodeids().end() 
> );
> request_.clear_nodeids();
>
> if ( !nodeids_vector.empty() ) {
> ownerClass.assetNodeMapping = ownerClass.assetsNodeMapping[request_.uuid()
> ];
> if ( ownerClass.assetNodeMapping.empty() ) {
> stream_.Finish( grpc::Status( grpc::StatusCode::NOT_FOUND, "Asset's UUID 
> not found in server..." ),
> in_handle_.tag( Handle::Operation::FINISH, [this]( bool ok, Handle::
> Operation /* op */ ) {
> if ( !ok ) [[unlikely]] {
> LOG_DEBUG << "The FINISH request-operation failed." << endl;
> cout << "The FINISH request-operation failed." << endl;
> }
>
> LOG_DEBUG << "Asset's UUID not found in server: " << request_.uuid() << 
> endl;
> cout << "Asset's UUID not found in server: " << request_.uuid() << endl;
> } ) );
> return;
> }
>
> writeNodeFile( nodeids_vector.front() );
> } else {
> stream_.Finish( grpc::Status( grpc::StatusCode::DATA_LOSS, "Asset's node 
> ids empty. Without node ids node streaming can't start..." ),
> in_handle_.tag( Handle::Operation::FINISH, [this]( bool ok, Handle::
> Operation /* op */ ) {
> if ( !ok ) [[unlikely]] {
> LOG_DEBUG << "The FINISH request-operation failed.";
> cout << "The FINISH request-operation failed.";
> }
>
> LOG_DEBUG << "Asset's node ids coming empty on the request. Without node 
> ids node streaming can't start..." << endl;
> cout << "Asset's node ids coming empty on the request. Without node ids 
> node streaming can't start..." << endl;
> } ) );
> }
> } ) );
> }
>
> void writeNodeFile( const string& nodeId ) {
> // Opens the file which contains the requested node
> nodeFile.open( string( ownerClass.assetNodeMapping[nodeId] ), ios::binary 
> );
>
> if ( !nodeFile.is_open() ) {
> LOG_DEBUG << "Asset's node file open operation failed for node:" << nodeId 
> << endl;
> cout << "Asset's node file open operation failed for node:" << nodeId << 
> endl;
> }
>
> splitFileAndWriteChunks();
> }
>
> void splitFileAndWriteChunks() {
> setReplyWithBuffer();
>
> stream_.Write( reply_, out_handle_.tag( Handle::Operation::WRITE, [this]( 
> bool ok, Handle::Operation op ) {
> if ( !nodeFile.eof() ) {

[grpc-io] gRFC A66: OpenTelemetry Metrics

2023-07-21 Thread 'yas...@google.com' via grpc.io
https://github.com/grpc/proposal/pull/380 is a gRFC for adding 
OpenTelemetry support to gRPC.

Feedback welcome.
- Yash

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f1bf80ee-db80-4943-9a89-be70800c04c7n%40googlegroups.com.


[grpc-io] Re: Live camera streaming using grpc python

2023-06-07 Thread 'yas...@google.com' via grpc.io
Are you tied to gRPC Python or could you also experiment with another 
language?

On Saturday, June 3, 2023 at 12:16:42 AM UTC-7 Sanket Kumar Mali wrote:

> my proto file
>
> syntax = "proto3";
>
> package camera_stream;
>
> // Camera frame message
> message Frame {
>   bytes frame = 1;
>   int64 timestamp = 2;
> }
>
> // Camera stream service definition
> service CameraStream {
>   // Method to connect and start receiving camera frames
>   rpc CameraStream(Empty) returns (stream Frame) {}
> }
> // Empty message
> message Empty {}
>
> On Saturday, 3 June 2023 at 12:32:28 UTC+5:30 Sanket Kumar Mali wrote:
>
>> my server code
>>
>> camera_buffer = queue.Queue(maxsize=20)
>>
>> # Define the gRPC server class
>> class CameraStreamServicer(camera_stream_pb2_grpc.CameraStreamServicer):
>> def __init__(self):
>> self.clients = []
>>
>> def CameraStream(self, request, context):
>> global camera_buffer
>> # Add the connected client to the list
>> self.clients.append(context)
>> try:
>> while True:
>> print("size: ",camera_buffer.qsize())
>> frame = camera_buffer.get(timeout=1)  # Get a frame 
>> from the buffer
>>
>> # Continuously send frames to the client
>> for client in self.clients:
>> try:
>> response = camera_stream_pb2.Frame()
>> response.frame = frame
>> response.timestamp = int(time.time())
>> yield response
>> except grpc.RpcError:
>> # Handle any errors or disconnections
>> self.clients.remove(context)
>> print("Client disconnected")
>> except Exception as e:
>> print("unknown error: ", e)
>>
>>
>> in a separate thread I am getting frames from the camera and populating the 
>> buffer
>>
>>
>> On Monday, 22 May 2023 at 12:16:57 UTC+5:30 torpido wrote:
>>
>>> What happens if you run the same process in parallel, and serve in each 
>>> one a different client?
>>> just to make sure that there is no issue with the bandwidth in the 
>>> server.
>>>
>>> I would also set debug logs for gRPC to get more info
>>>
>>> Can you share the RPC and server code you are using? Seems like it 
>>> should be a *server-streaming RPC*.
>>> On Saturday, May 13, 2023 at 16:21:41 UTC+3, Sanket Kumar Mali wrote:
>>>
 Hi,
 I am trying to implement a live camera streaming setup using grpc 
 python. I was able to stream camera frames (1280x720) at 30 fps to a single 
 client. But whenever I try to consume the stream from multiple clients, it 
 seems the frame rate is getting divided (e.g. if I connect two clients, the 
 frame rate becomes 15 fps).
 I am looking for direction on where I am going wrong. Appreciate any 
 clue on the right way to achieve multi-client streaming.

 Thanks

>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b66d7ceb-3b1f-4d4a-87a9-eb48c5db2447n%40googlegroups.com.


[grpc-io] Re: C++ Client Async write after read

2023-05-31 Thread 'yas...@google.com' via grpc.io
Sorry for the late response. This fell through the cracks.

It's fine to have a read and a write active at the same time. It's only 
problematic to have multiple reads or multiple writes active at the same time.
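
In other words, something like this is fine on a bidi stream (types and tags 
illustrative):

static int read_tag, write_tag;           // distinct addresses used as tags
stream->Read(&incoming_msg, &read_tag);   // one outstanding read...
stream->Write(outgoing_msg, &write_tag);  // ...plus one outstanding write: OK
// cq.Next() surfaces &read_tag and &write_tag independently; only after
// &write_tag comes back may you start the next Write (same rule for Read).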

On Sunday, March 13, 2022 at 2:06:36 PM UTC-7 Trending Now wrote:

> Any update please ?
>
> Le samedi 12 mars 2022 à 11:55:28 UTC+1, Trending Now a écrit :
>
>> Hello
>>
>> Any update please.
>> sorry, it's blocking for me :(
>>
>> Thank you very much !
>>
>> Le vendredi 11 mars 2022 à 19:17:04 UTC+1, Trending Now a écrit :
>>
>>> Hello,
>>>
>>> I'm coding a bidirectional rpc using grpc. I'm using the asynchronous 
>>> API.
>>>
>>> The idea is to write the msg to the grpc::ClientAsyncReaderWriter< W, R > 
>>> stream and then call Read in a while loop until getting a false status.
>>>
>>> If I write to the stream, the program will simply crash. The reason is 
>>> the asynchronous API allows only "1 outstanding asynchronous write on the 
>>> same side of the same stream without waiting for the completion queue 
>>> notification".
>>>
>>> Is there a way to force/prioritize the write operation after making a 
>>> read operation?
>>>
>>> Thank you very much
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1aef2cfe-6729-4129-b0bb-440d7e4c05d5n%40googlegroups.com.


[grpc-io] Re: Server Threadpool Exhausted

2023-05-31 Thread 'yas...@google.com' via grpc.io
Hi, when wanting such granular control over the threading model, it's 
better to use the async model. Currently, the CQ based async API is the 
only API that can serve this purpose. I would've wanted to recommend using 
the callback API along with the ability to set your own `EventEngine`, but 
we don't have that built out yet.
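
The shape of it, as a sketch: with the CQ-based async API the application 
owns the threads, so a fixed pool of 10 threads draining the server's 
completion queue caps the server at 10 threads ("HandlerBase::Proceed" here 
is an illustrative application-side interface, not a gRPC API):

std::vector<std::thread> pool;
for (int i = 0; i < 10; ++i) {
  pool.emplace_back([&] {
    void* tag;
    bool ok;
    while (cq->Next(&tag, &ok)) {
      static_cast<HandlerBase*>(tag)->Proceed(ok);  // dispatch the event
    }
  });
}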

On Sunday, May 1, 2022 at 9:33:49 PM UTC-7 Roshan Chaudhari wrote:

> More context:
> I am using C++ sync server. Currently when I have multiple concurrent  
> clients, number of threads used by the server increases linearly and it 
> seems each client is served with separate thread. 
>
> streaming RPC I am using, will be idle 90 percent of the time, so rarely 
> data will be sent across it. So server can have minimal number of threads 
> and multiple client requests can be served by say fixed number of threads. 
> And it is okay if there is some delay in serving the client.
>
> Is it possible to achieve this in sync server? Or async is the only option.
> On Friday, April 29, 2022 at 12:17:27 PM UTC+5:30 Roshan Chaudhari wrote:
>
>> i have gRPC sync server with one service and 1 RPC.
>>
>> I am not setting ResourceQuota on serverbuilder. If n clients wants to 
>> connect, there will be n request handler threads created by gRPC. I want to 
>> keep some limit on these threads. lets say 10. And if it costs some latency 
>> in serving client, it is okay.
>>
>> So I tried these settings:
>> grpc::ServerBuilder builder;
>> grpc::ResourceQuota rq;
>> rq.SetMaxThreads(10);
>> builder.SetResourceQuota(rq);
>> builder.SetSyncServerOption(
>> grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
>> builder.SetSyncServerOption(
>> grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 1);
>> builder.SetSyncServerOption(grpc::ServerBuilder::SyncServerOption::NUM_CQS, 
>> 1);
>>
>> From another process, I am firing up 800 clients in parallel. So I expect 
>> there will be 1 completion queue for each of them and 10 threads sharing 
>> it. However, on client side there is an error:
>>
>> "*Server Threadpool Exhausted*"
>>
>> and none of the client succeeds.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/47dab6b0-8b54-46a5-9283-d3e85248053dn%40googlegroups.com.


[grpc-io] Re: grpc-cpp: Interceptor release plan

2023-05-31 Thread 'yas...@google.com' via grpc.io
Hi, sorry for the late response. We've identified some improvements we want 
to make to the API, and hence the delay in stabilizing it. We'll be working 
on this soon though. Please stay posted.

On Friday, May 6, 2022 at 1:03:31 AM UTC-7 Luca Dev wrote:

> Dear Maintainer of grpc,
>
> Are there any plans to release the experimental Interceptor interface in 
> the short term (
> https://github.com/grpc/grpc/blob/1d94aa92d883c40abe8b064d79e682f27b432cd3/include/grpcpp/impl/codegen/interceptor.h)? 
> This was introduced about 4 years ago and looks very promising!
>
>
> Cheers
> Luca
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/98dd350a-800e-4ef0-b7de-c98d10ab4196n%40googlegroups.com.


[grpc-io] Re: Progress indicator for a grpc C++ async or callback service?

2023-05-31 Thread 'yas...@google.com' via grpc.io
The gRPC library does not provide any such mechanism built in, but you could 
imagine writing a gRPC service that has pause/resume functionality, where 
it stops serving requests or cancels incoming requests until resume is 
invoked.
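
As a sketch of that idea (all names illustrative, using the callback API):

class PausableServiceImpl final : public MyService::CallbackService {
 public:
  void Pause() { paused_ = true; }
  void Resume() { paused_ = false; }

  grpc::ServerUnaryReactor* DoWork(grpc::CallbackServerContext* ctx,
                                   const WorkRequest* request,
                                   WorkReply* reply) override {
    auto* reactor = ctx->DefaultReactor();
    if (paused_) {
      // Reject work while paused; clients can retry after resume.
      reactor->Finish(grpc::Status(grpc::StatusCode::UNAVAILABLE, "paused"));
    } else {
      // ... do the actual work, then:
      reactor->Finish(grpc::Status::OK);
    }
    return reactor;
  }

 private:
  std::atomic<bool> paused_{false};
};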

On Thursday, March 24, 2022 at 4:15:14 AM UTC-7 Iro Karyoti wrote:

> Can a grpc C++ client request for a pause/resume of an async or callback 
> service?

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/910f5c9f-0f59-4232-860e-d12bf0c89788n%40googlegroups.com.


[grpc-io] Re: how to check if server is up/down from a client

2023-05-31 Thread 'yas...@google.com' via grpc.io
Sorry for the late reply. 

From what I'm reading, health checking is exactly what you want. I don't 
understand why you don't want to use it: 
https://github.com/grpc/grpc/blob/master/doc/health-checking.md

About using the channel state - Just because a channel is not in the 
connected state, it does not necessarily mean that the server is down. 
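
For completeness, enabling the standard health service on a C++ server is a 
couple of lines (service name illustrative):

#include <grpcpp/health_check_service_interface.h>

grpc::EnableDefaultHealthCheckService(true);
grpc::ServerBuilder builder;
// ... register services and ports, then after BuildAndStart():
// server->GetHealthCheckService()->SetServingStatus("my.package.MyService",
//                                                   true);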

On Wednesday, December 8, 2021 at 2:13:25 PM UTC-8 Viktor Khristenko wrote:

> Hello,
>
> Setup:
> Client, server using callback unary api
>
> question:
> How do I check from the client side whether the server is up or down? What 
> I'm currently doing is to issue an rpc with a deadline set and 
> wait_for_ready as false; if the return code shows UNAVAILABLE, then the 
> server is not there, otherwise it needs a retry...
>
> It's not about the health check service that could be used here, but rather 
> about mechanisms to check either through the channel or the stub (issuing 
> an rpc). I was also trying to query the channel state, however it's not 
> quite clear what indicates an unavailable server (using 
> grpc_connectivity_state)...
>
> the use case is I have a client connected to N servers (1 channel per 
> server) and does some simple load balancing with priorities. this client is 
> actually another server 
>
> any help is greatly appreciated!
> thanks!
>
> VK
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e2c1fbef-0105-4a66-8c54-8f0103ab832en%40googlegroups.com.


[grpc-io] Re: relaying rpcs Calls in grpc C++

2023-05-31 Thread 'yas...@google.com' via grpc.io
https://github.com/grpc/grpc/blob/2892b24eabbb22b2344aba9c3ba84e529017b684/include/grpcpp/generic/generic_stub.h#L114
The generic APIs are what you are looking for. 

I don't have an exact example for you, but you could use this as a 
reference for the Generic APIs 
- 
https://github.com/grpc/grpc/blob/2892b24eabbb22b2344aba9c3ba84e529017b684/test/cpp/end2end/client_callback_end2end_test.cc#L270
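
A rough sketch of the client half of such a proxy (method name and tag 
illustrative):

#include <grpcpp/generic/generic_stub.h>

grpc::GenericStub generic_stub(channel);   // channel to the backend
grpc::ClientContext ctx;
grpc::CompletionQueue cq;
std::unique_ptr<grpc::GenericClientAsyncReaderWriter> call =
    generic_stub.PrepareCall(&ctx, "/pkg.Service/Method", &cq);
call->StartCall(reinterpret_cast<void*>(1));
// ... then shuttle grpc::ByteBuffer messages between the server side and
// this call without ever parsing them.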

On Thursday, March 2, 2023 at 2:30:37 AM UTC-8 Anil Kumar wrote:

> Can someone please reply ?
>
> On Tuesday, February 21, 2023 at 4:52:19 PM UTC+5:30 Anil Kumar wrote:
>
>> My question is very similar to 
>> https://groups.google.com/g/grpc-io/c/Yruej18KJ_M/m/oGp5vYocCgAJ
>>
>> I want to implement a service agnostic gRPC proxy in C++, which forwards 
>> the request from the Client to the Server and forwards the response back 
>> from the server to the client ?
>>
>> Is there a generic manner to do so ?
>>
>> I see example mentioned in here 
>> 
>>
>>
>> How do I achieve the same in C++ ? Unable to find the C++ equivalent APIs 
>> for 
>>
>> ServerCallHandler, ServerCall.Listener, ClientCallListener
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4bb1f673-10c9-4156-9118-68303b97d1f2n%40googlegroups.com.


[grpc-io] Re: maximum concurrent streams in cpp

2023-05-31 Thread 'yas...@google.com' via grpc.io
I don't think that you are running into a limit from max concurrent 
streams. If you haven't explicitly set a limit of 15, you are not getting 
limited by that arg.

What are the symptoms that you are seeing? If it is simply a case of only 
15 RPCs being served concurrently, I suspect that the issue that you are 
running into is that your threads are blocked and hence not able to 
serve/poll other RPCs.

On Monday, May 15, 2023 at 2:17:18 AM UTC-7 karthik karra wrote:

> I also tried using different channels for each client; nothing worked.
>
> On Monday, May 15, 2023 at 2:45:14 PM UTC+5:30 karthik karra wrote:
>
>> tried this call but of no use
>> server_builder.AddChannelArgument(GRPC_ARG_MAX_CONCURRENT_STREAMS,30)
>>
>> On Monday, May 15, 2023 at 2:04:04 PM UTC+5:30 karthik karra wrote:
>>
>>> Hi All, 
>>>
>>> currently i am getting max of 15 streams to server from 2 clients. 
>>> how to set the max concurrent streams ?
>>> do i need to create a new channel or should i increase the max 
>>> concurrent streams ?
>>>
>>> Any suggestions would be helpful.
>>>
>>> Thanks,
>>> Karthik
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/500c8c7b-aa89-4738-9473-94e052785b00n%40googlegroups.com.


[grpc-io] Re: C++: AsyncWrite constraint on completion queue

2023-05-31 Thread 'yas...@google.com' via grpc.io
You would find some examples here 
- https://github.com/grpc/grpc/tree/master/examples/cpp

The documentation would be best found in the headers - 
https://github.com/grpc/grpc/blob/master/include/grpcpp/support/client_callback.h
https://github.com/grpc/grpc/blob/master/include/grpcpp/support/server_callback.h

Also, https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md 
for some additional reading

On Thursday, May 18, 2023 at 12:15:47 PM UTC-7 Ashutosh Maheshwari wrote:

> Hello Yash,
>
> Can you please point me to the documentation of the Callback API?
>
> Regards
> Ashutosh
>
>
> On Wednesday, May 17, 2023 at 6:54:25 AM UTC+5:30 yas...@google.com wrote:
>
> I'll preface this by saying - Use the C++ callback API. Instead of trying 
> to understand the Async CQ-based API, the callback API should be the choice 
> and is our current recommendation.
>
> >  Only one write is permissible per stream. So we cannot write another 
> tag on a stream until we receive a response tag from the completion queue 
> for the previous write.
>
> This is correct.
>
> I'll end this by again saying - Use the C++ callback API.
>
> > Recently,  I came across an issue where the gRPC client became a zombie 
> process as its parent Python application was aborted. In this condition, 
> the previous Write done on the stream connected with the client did not get 
> ack, probably,  and I did not receive the Write tag back in the completion 
> queue for that Write. My program kept waiting for the write tag and other 
> messages continued to queue up as the previous Write did not finish its 
> life cycle and hence I could not free the resources also for that tag.
>
> This can be easily avoided by configuring keepalive. Refer -
> 1) https://github.com/grpc/grpc/blob/master/doc/keepalive.md
> 2) https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md
> 3) 
> https://github.com/grpc/proposal/blob/master/A8-client-side-keepalive.md
>
> That also answers your question on what happens if for some reason, a 
> client stops reading. Keepalive would handle it.
>
> > My question is, if a write tag for a previous write does not surface on 
> the completion queue, shall we wait for it indefinitely? What should be the 
> strategy to handle this scenario?
> Depends highly on your API/service. If for some reason, the RPC is taking 
> much longer than you want and you are suspecting that the client is being 
> problematic (i.e. responding to http keepalives but not making progress on 
> RPCs), you could always just end the RPC.
>
> On Wednesday, May 10, 2023 at 12:17:46 AM UTC-7 Ashutosh Maheshwari wrote:
>
> Hello,
>
> My question is, if a write tag for a previous write does not surface on 
> the completion queue, shall we wait for it indefinitely? What should be the 
> strategy to handle this scenario?
>
> Regards
> Ashutosh
> On Wednesday, April 26, 2023 at 11:11:57 PM UTC+5:30 apo...@google.com 
> wrote:
>
> First, it's important to clarify what it means to wait for a "Write" tag 
> to complete on a completion queue:
>
> When async "Write" is initially attempted, the message can be fully or 
> partially buffered within gRPC. The corresponding tag will surface on the 
> completion queue that the Write is associated with essentially after gRPC 
> is done buffering the message, i.e. after it's written out relevant bytes 
> to the wire.
>
> This is unrelated to whether or not a "response" has been received from 
> the peer, on the same stream.
>
> So, the highlighted comment means that you can only have one async write 
> "pending" per RPC, at any given time. I.e. in order to start a new write on 
> a streaming RPC, one must wait for the previous write on that same stream 
> to "complete" (i.e. for its tag to be surfaced).
>
> Multiple pending writes on different RPCs of the same completion queue are 
> fine.
> On Saturday, April 22, 2023 at 12:58:57 PM UTC-7 Ashutosh Maheshwari wrote:
>
> Hello gRPC Team,
>
> I have taken an extract from 
> *“include/grpcpp/impl/codegen/async_stream.h”*
>
>  *“*
>
>   /// Request the writing of \a msg with identifying tag \a tag.
>
>   ///
>
>   /// Only one write may be outstanding at any given time. This means that
>
>   /// after calling Write, one must wait to receive \a tag from the 
> completion
>
>   /// queue BEFORE calling Write again.
>
>   /// This is thread-safe with respect to \a AsyncReaderInterface::Read
>
>   ///
>
>   /// gRPC doesn't take ownership or a reference to \a msg, so it is safe 
> to
>
>   /// to deallocate once Write returns.
>
>   ///
>
>   /// \param[in] msg The message to be written.
>
>   /// \param[in] tag The tag identifying the operation.
>
>   virtual void Write(const W& msg, void* tag) = 0;
>
> “
>
>  After reading the highlighted part,  I can make the following two 
> inferences:
>
>1. Only one write is permissible per stream. So we cannot write 
>another tag on a stream until we receive a response tag from the 
> completion 
>queue 

[grpc-io] Re: C++: AsyncWrite constraint on completion queue

2023-05-16 Thread 'yas...@google.com' via grpc.io
I'll preface this by saying - Use the C++ callback API. Instead of trying 
to understand the Async CQ-based API, the callback API should be the choice 
and is our current recommendation.

>  Only one write is permissible per stream. So we cannot write another tag 
on a stream until we receive a response tag from the completion queue for 
the previous write.

This is correct.

I'll end this by again saying - Use the C++ callback API.

> Recently,  I came across an issue where the gRPC client became a zombie 
process as its parent Python application was aborted. In this condition, 
the previous Write done on the stream connected with the client did not get 
ack, probably,  and I did not receive the Write tag back in the completion 
queue for that Write. My program kept waiting for the write tag and other 
messages continued to queue up as the previous Write did not finish its 
life cycle and hence I could not free the resources also for that tag.

This can be easily avoided by configuring keepalive. Refer -
1) https://github.com/grpc/grpc/blob/master/doc/keepalive.md
2) https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md
3) https://github.com/grpc/proposal/blob/master/A8-client-side-keepalive.md

That also answers your question on what happens if for some reason, a 
client stops reading. Keepalive would handle it.

> My question is, if a write tag for a previous write does not surface on 
the completion queue, shall we wait for it indefinitely? What should be the 
strategy to handle this scenario?
Depends highly on your API/service. If for some reason, the RPC is taking 
much longer than you want and you are suspecting that the client is being 
problematic (i.e. responding to http keepalives but not making progress on 
RPCs), you could always just end the RPC.

On Wednesday, May 10, 2023 at 12:17:46 AM UTC-7 Ashutosh Maheshwari wrote:

> Hello,
>
> My question is, if a write tag for a previous write does not surface on 
> the completion queue, shall we wait for it indefinitely? What should be the 
> strategy to handle this scenario?
>
> Regards
> Ashutosh
> On Wednesday, April 26, 2023 at 11:11:57 PM UTC+5:30 apo...@google.com 
> wrote:
>
>> First, it's important to clarify what it means to wait for a "Write" tag 
>> to complete on a completion queue:
>>
>> When async "Write" is initially attempted, the message can be fully or 
>> partially buffered within gRPC. The corresponding tag will surface on the 
>> completion queue that the Write is associated with essentially after gRPC 
>> is done buffering the message, i.e. after it's written out relevant bytes 
>> to the wire.
>>
>> This is unrelated to whether or not a "response" has been received from 
>> the peer, on the same stream.
>>
>> So, the highlighted comment means that you can only have one async write 
>> "pending" per RPC, at any given time. I.e. in order to start a new write on 
>> a streaming RPC, one must wait for the previous write on that same stream 
>> to "complete" (i.e. for its tag to be surfaced).
>>
>> Multiple pending writes on different RPCs of the same completion queue 
>> are fine.
>> On Saturday, April 22, 2023 at 12:58:57 PM UTC-7 Ashutosh Maheshwari 
>> wrote:
>>
>>> Hello gRPC Team,
>>>
>>> I have taken an extract from 
>>> *“include/grpcpp/impl/codegen/async_stream.h”*
>>>
>>>  *“*
>>>
>>>   /// Request the writing of \a msg with identifying tag \a tag.
>>>
>>>   ///
>>>
>>>   /// Only one write may be outstanding at any given time. This means 
>>> that
>>>
>>>   /// after calling Write, one must wait to receive \a tag from the 
>>> completion
>>>
>>>   /// queue BEFORE calling Write again.
>>>
>>>   /// This is thread-safe with respect to \a AsyncReaderInterface::Read
>>>
>>>   ///
>>>
>>>   /// gRPC doesn't take ownership or a reference to \a msg, so it is 
>>> safe to
>>>
>>>   /// to deallocate once Write returns.
>>>
>>>   ///
>>>
>>>   /// \param[in] msg The message to be written.
>>>
>>>   /// \param[in] tag The tag identifying the operation.
>>>
>>>   virtual void Write(const W& msg, void* tag) = 0;
>>>
>>> “
>>>
>>>  After reading the highlighted part,  I can make the following two 
>>> inferences:
>>>
>>>1. Only one write is permissible per stream. So we cannot write 
>>>another tag on a stream until we receive a response tag from the 
>>> completion 
>>>queue for the previous write. 
>>>2. Only one write is permissible on the completion queue with no 
>>>dependency on available streams. When multiple clients connect to the 
>>> grpc 
>>>server, then we will have multiple streams present. Now in such a 
>>> scenario, 
>>>only one client can be responded to at a time due to the 
>>> above-highlighted 
>>>limitation. 
>>>
>>>  Can you please help us in understanding which one of our above 
>>> inferences is true?
>>>
>>> Recently,  I came across an issue where the gRPC client became a zombie 
>>> process as its parent Python application was aborted. In this condition, 

Re: [grpc-io] GRPC Get Server Address

2023-05-01 Thread 'yas...@google.com' via grpc.io
Wow, I forgot to respond to this. Apologies!

`ServerContext` does give you `peer()`, which I believe is filled with the 
appropriate address. For security-related purposes, though, we do not 
recommend that this be used.

You probably want to use a proper authentication mechanism like the ones 
documented at https://grpc.io/docs/guides/auth/ and then use 
`grpc::AuthContext()` to get the context per request.
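
As a sketch inside a handler (request/response types illustrative):

grpc::Status MyMethod(grpc::ServerContext* context, const Req* request,
                      Resp* response) {
  // peer() is the transport-level address string, e.g. "unix:..." or
  // "ipv4:...:port"; fine for routing, not a security boundary by itself.
  const std::string peer = context->peer();
  // auth_context() exposes the authenticated identity once one of the
  // mechanisms from the auth guide is configured.
  std::shared_ptr<const grpc::AuthContext> auth = context->auth_context();
  return grpc::Status::OK;
}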

On Wednesday, April 13, 2022 at 5:50:13 PM UTC-7 Mingyu Lu wrote:

> Can anyone give any advice ?
>
> On Monday, April 11, 2022 at 8:54:32 AM UTC+8 Mingyu Lu wrote:
>
>> C/C++
>>
>> On Friday, April 8, 2022 at 11:54:06 PM UTC+8 Eric Anderson wrote:
>>
>>> What programming language are you using?
>>>
>>> On Fri, Apr 8, 2022 at 7:55 AM Mingyu Lu  wrote:
>>>
 Hi,
 I am using GRPC with a UNIX domain socket. I'd like to have one server
 listen on two sockets, A and B.
 If requests come from A, I'd like to do one thing, but if from B, I want
 to do something different.
 For security reasons, I can't trust what comes in the request.
 I suppose if I can get the server address when requests come in, the
 problem is solved.
 Does anyone have any idea? Thanks.

 -- 
 You received this message because you are subscribed to the Google 
 Groups "grpc.io" group.
 To unsubscribe from this group and stop receiving emails from it, send 
 an email to grpc-io+u...@googlegroups.com.
 To view this discussion on the web visit 
 https://groups.google.com/d/msgid/grpc-io/b4c59951-59c3-4b2c-9773-9a5ba6f589f3n%40googlegroups.com.

>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/df9d0687-3aa3-4271-a711-36b5a51e00dfn%40googlegroups.com.


[grpc-io] Re: Doubt about "Next" API.

2023-05-01 Thread 'yas...@google.com' via grpc.io
Hi,

Wow, I forgot about replying to this. Apologies!
You are right in your assumptions. By default, "is going to go to the wire" 
just means that data has been accepted by the gRPC stack and that gRPC will 
try to write the data to the socket (or whatever transport mechanism is 
being used). From gRPC's HTTP/2 layer perspective, it means that HTTP/2 
flow control succeeded on the message and that it will be sent to the TCP 
layer for further processing.

Also, please note that gRPC C++ now recommends the use of the new callback 
API, which is much easier to use.

On Saturday, April 16, 2022 at 11:13:27 PM UTC-7 karthik karra wrote:

> Hi,
>
> *Context:*
>
> For the Async design, we use the "Next" API with a completion queue, and 
> this is blocking in nature until an event happens.
>
> In the description of the "Next" API (
> https://grpc.github.io/grpc/cpp/classgrpc_1_1_completion_queue.html#a86d9810ced694e50f7987ac90b9f8c1a),
> it's mentioned that *if "ok" is true, then it means that data is going to 
> go to the wire*.
>
> So the moment ok is returned, either with true or false, the "Next" API 
> gets unblocked.
>
> *My Understanding:*
>
> Underneath GRPC we have many layers, with GRPC being the topmost layer:
> GRPC <-> HTTP/2 <-> Regular Network Stack (TCP <-> IP <-> Ethernet <-> 
> Physical Wire)
> (*Please correct me if any of these assumptions are wrong*)
>
> *Doubt*: 
>
> When the description of the "Next" API says "...data going to wire" and 
> returns either true or false for the "ok" variable, *what exactly does the 
> wire mean?*
> *Is it after the GRPC layer, the HTTP/2 layer, the physical wire itself, or 
> something else?*
>
> Any insights would be helpful.
>
> Thanks
>
>
>
>  
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1c9dfa4b-1334-48ef-91c7-f2ab67675315n%40googlegroups.com.


[grpc-io] Re: multiple async clients for route_guide_callback_server example

2023-03-29 Thread 'yas...@google.com' via grpc.io
Could I ask you to create a tracking issue for this on github please? Also 
if you've got a working solution, contributions to source are welcome :)

On Wednesday, March 22, 2023 at 2:52:22 AM UTC-7 Dmitry Gorelov wrote:

> Please check the following code, it fixes the crash of the 
> route_guide_callback_server!
>
>  #include <algorithm>
>  #include <chrono>
>  #include <iostream>
>  #include <memory>
>  #include <string>
>
>  #include <absl/memory/memory.h>
>  #include <absl/synchronization/mutex.h>
>  
>  #include "helper.h"
>  
>  #include <grpc/grpc.h>
>  #include <grpcpp/security/server_credentials.h>
>  #include <grpcpp/server.h>
>  #include <grpcpp/server_builder.h>
>  #include <grpcpp/server_context.h>
>  #ifdef BAZEL_BUILD
>  #include "examples/protos/route_guide.grpc.pb.h"
>  #else
>  #include "route_guide.grpc.pb.h"
>  #endif
>  
>  using grpc::CallbackServerContext;
>  using grpc::Server;
>  using grpc::ServerBuilder;
>  using grpc::Status;
>  using routeguide::Feature;
>  using routeguide::Point;
>  using routeguide::Rectangle;
>  using routeguide::RouteGuide;
>  using routeguide::RouteNote;
>  using routeguide::RouteSummary;
>  using std::chrono::system_clock;
>
>  
>  class RouteGuideImpl final : public RouteGuide::CallbackService {
>   public:
>explicit RouteGuideImpl(const std::string& db) {
>  routeguide::ParseDb(db, &feature_list_);
>}
>  
>grpc::ServerBidiReactor<RouteNote, RouteNote>* RouteChat(
>CallbackServerContext* context) override {
>  class Chatter : public grpc::ServerBidiReactor<RouteNote, RouteNote> {
>   public:
>Chatter(absl::Mutex* mu, std::vector<RouteNote>* 
> received_notes)
>: mu_(mu), received_notes_(received_notes) {
>  StartRead(&note_);
>}
>  
>void OnDone() override { delete this; }
>void OnReadDone(bool ok) override 
>{
>  if (ok) 
>  {
>// Unlike the other example in this directory that's 
> not using
>// the reactor pattern, we can't grab a local lock to 
> secure the
>// access to the notes vector, because the reactor will 
> most likely
>// make us jump threads, so we'll have to use a 
> different locking
>// strategy. We'll grab the lock locally to build a 
> copy of the
>// list of nodes we're going to send, then we'll grab 
> the lock
>// again to append the received note to the existing 
> vector.
>mu_->Lock();
>std::copy_if(received_notes_->begin(), 
> received_notes_->end(),
> std::back_inserter(to_send_notes_),
> [this](const RouteNote& note) {
>   return note.location().latitude() ==
>  note_.location().latitude() &&
>  note.location().longitude() ==
>  note_.location().longitude();
> });  
>notes_iterator_ = to_send_notes_.begin();
>mu_->Unlock();
>NextWrite();
>  } else {
>//std::cout << "some client finished" << std::endl;
>Finish(Status::OK);
>  }
>}
>void OnWriteDone(bool ok) override 
>{ 
>  if (ok)
>  {
>NextWrite(); 
>  }
>  else
>  {
>std::cout << "some client finished write" << std::endl;
>Finish(Status::OK);
>  }
>}
>  
>   private:
>void NextWrite() 
>{
>  mu_->Lock();
>
>  if (notes_iterator_ != to_send_notes_.end()) {
>StartWrite(&*notes_iterator_);
>notes_iterator_++;
>  } else {  
>received_notes_->push_back(note_);  
>StartRead(&note_);
>  }
>  mu_->Unlock();
>
>}
>RouteNote note_;
>absl::Mutex* mu_;
>std::vector<RouteNote>* received_notes_;
>std::vector<RouteNote> to_send_notes_;
>std::vector<RouteNote>::iterator notes_iterator_;
>  };
>  return new Chatter(&mu_, &received_notes_);
>}
>  
>   private:
>std::vector<Feature> feature_list_;
>absl::Mutex mu_;
>std::vector<RouteNote> received_notes_ ABSL_GUARDED_BY(mu_);
>  };
>  
>  void 

[grpc-io] Re: /usr/local/include/google/protobuf/repeated_field.h:145:1: error: invalid application of 'sizeof' to incomplete type 'google::protobuf::__uint128_t'

2023-03-29 Thread 'yas...@google.com' via grpc.io
This might be an environment issue. Can you talk more about where and how 
you are building your application?

On Wednesday, March 22, 2023 at 11:51:48 PM UTC-7 xiaoliang jiao wrote:

> I use gRPC 1.53.0-pre1 and protobuf 3.21.12.0. 
> I can create *.pb.h,  *.pb.cc, *.grpc.pb.h, *.grpc.pb.cc files from 
> *.proto file.
> But when i link grpc, it tells me about: this is the problem link 
> .  how can i solve it, thanks 
> for your help and time!
>
>
> In file included from 
> /usr/local/include/google/protobuf/implicit_weak_message.h:39:0,
> from /usr/local/include/google/protobuf/generated_message_util.h:54,
> from /catkin_ws/build/nav_interfaces/dynamic_localmap.pb.h:26,
> from 
> /catkin_ws/src/Navigation_ARM/Navigation/MappingModule/src/ros/DynamicMapManager/include/grpc_proto_utils.h:5,
> from 
> /catkin_ws/src/Navigation_ARM/Navigation/MappingModule/src/ros/DynamicMapManager/include/DynamicMapManager.h:63,
> from 
> /catkin_ws/src/Navigation_ARM/Navigation/MappingModule/src/ros/DynamicMapManager/src/DynamicMapManager_node.cpp:1:
> /usr/local/include/google/protobuf/repeated_field.h: At global scope:
> /usr/local/include/google/protobuf/repeated_field.h:145:1: error: invalid 
> application of 'sizeof' to incomplete type 'google::protobuf::__uint128_t'
> PROTO_MEMSWAP_DEF_SIZE(__uint128_t, (1u << 31))
> ^
> /usr/local/include/google/protobuf/repeated_field.h:145:1: error: template 
> argument 1 is invalid
> PROTO_MEMSWAP_DEF_SIZE(__uint128_t, (1u << 31))
> ^
> /usr/local/include/google/protobuf/repeated_field.h: In function 'int 
> google::protobuf::internal::memswap(char*, char*)':
> /usr/local/include/google/protobuf/repeated_field.h:145:1: error: invalid 
> application of 'sizeof' to incomplete type 'google::protobuf::__uint128_t'
> PROTO_MEMSWAP_DEF_SIZE(__uint128_t, (1u << 31))
> ^
> /usr/local/include/google/protobuf/repeated_field.h:145:1: error: invalid 
> application of 'sizeof' to incomplete type 'google::protobuf::__uint128_t'
> PROTO_MEMSWAP_DEF_SIZE(__uint128_t, (1u << 31))
> ^
> /usr/local/include/google/protobuf/repeated_field.h:145:1: error: invalid 
> application of 'sizeof' to incomplete type 'google::protobuf::__uint128_t'
> PROTO_MEMSWAP_DEF_SIZE(__uint128_t, (1u << 31))
> ^
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7386a443-2954-41a8-9f78-bc92a49978dbn%40googlegroups.com.


[grpc-io] Re: Regarding grpc logging [C++]

2023-03-29 Thread 'yas...@google.com' via grpc.io
There are ways to redirect stderr process-wide, but that's probably not 
what you are looking for. We don't have a C++ API for overriding the 
default log mechanism. We do have a gRPC Core API to set the logging 
function though 
- 
https://github.com/grpc/grpc/blob/2cd1501ca5ec0cf7db9fd63dd07508b54eaf8d4d/include/grpc/support/log.h#L85
Note that this API is not considered stable and is subject to change.
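
If you want to experiment with it anyway, a minimal sketch (the log file path 
is illustrative, and again, this Core API may change between versions):

    #include <cstdio>
    #include <grpc/support/log.h>

    static FILE* g_grpc_log_file = nullptr;

    static void FileLogger(gpr_log_func_args* args) {
      // args carries the file, line, severity and formatted message.
      std::fprintf(g_grpc_log_file, "%s:%d: %s\n", args->file, args->line,
                   args->message);
      std::fflush(g_grpc_log_file);
    }

    int main() {
      g_grpc_log_file = std::fopen("/var/log/my_app_grpc.log", "a");
      gpr_set_log_function(FileLogger);
      // ... build and run the gRPC server/client as usual ...
    }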

On Saturday, March 25, 2023 at 9:33:16 AM UTC-7 vinay Nayak wrote:

> Hi, have recently started working on grpc v1.52.1 using C++ on linux 
> platform.
> Found out that gpr_log function is being used for logging mechanism.
> Was looking for redirecting gpr_logs to any given file by user so that can 
> be analyzed later if any need arises. Is it supported in the above grpc 
> version for C++ on linux.
>
> When looked at code, it is printing all grpc logs to std_err. 
> Also, found out that for php there is an option to mention the log file 
> using variable "grpc.log_filename"
>
> Can any one confirm, does "logs redirecting to file" is supported for c++ 
> language also ? If so, how to enable it ?
>
>  
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/2154760f-76d3-4366-993f-5c7dc8c7aa6en%40googlegroups.com.


[grpc-io] Re: Per ListeningPort network namespace (c++)

2023-03-29 Thread 'yas...@google.com' via grpc.io
gRPC C++ doesn't have a way of doing that. It looks like the way processes 
switch network namespaces is by using `ip netns exec` but that's not what 
we want.

On Tuesday, March 28, 2023 at 4:11:10 AM UTC-7 Dylan Walsh wrote:

> Hey all,
>
> I was wondering if a single c++ gRPC server has the ability to bind to 
> different addresses within different network namespaces:
>
> e.g localhost:4566 in network namespace *default*
>localhost:4567 in network namespace *red*
>
> Currently, I can switch namespace (setns) before calling BuildAndStart() 
> for a ServerBuilder object. But this causes all configured listening ports 
> to bind within the same namespace. 
>
> I was curious if there is any existing functionality for allowing calls to 
> AddListeningPort(..) to specify the network namespace to use (or some other 
> way of achieving this).
>
> Kind Regards,
> Dylan
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f278a4f4-c411-4009-9bc8-343395d3deean%40googlegroups.com.


[grpc-io] Re: building grpc C++ on AIX

2023-03-29 Thread 'yas...@google.com' via grpc.io
Unfortunately, we don't have CI testing on AIX. Can you try using Bazel 
instead of CMake?

On Tuesday, March 28, 2023 at 8:23:33 PM UTC-7 勿羽 wrote:

> Hi Amandeep,
>
> I'm also trying to compile gRPC on AIX7.2. There are a lot of errors that 
> I can't figure out. Did you compile successfully? 
> On Thursday, November 8, 2018 at 17:08:56 UTC+8, the original poster wrote:
>
> I am trying to compile gRPC on AIX. With some workarounds, I was able to 
> reach the very last step and this is where I get this error:
>
> [MAKE] Generating cache.mk
> [HOSTLD] Linking /home/amandeep/grpc-1.15.1/bins/opt/grpc_cpp_plugin
> collect2: fatal error: 
> /home/amandeep/grpc-1.15.1/libs/opt/libgrpc_plugin_support.a: not a COFF 
> file
> compilation terminated.
> make: *** [Makefile:17926: 
> /home/amandeep/grpc-1.15.1/bins/opt/grpc_cpp_plugin] Error 1
>
> How could this be fixed?
>
> Note: As per https://github.com/grpc/grpc/pull/15926, we should be able 
> to successfully build gRPC on AIX.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4d5934d2-d8fc-4aff-a1d5-71cd5de493cfn%40googlegroups.com.


[grpc-io] gRFC L104: C++ OpenCensus Plugin Public APIs

2022-10-12 Thread 'yas...@google.com' via grpc.io
Please see https://github.com/grpc/proposal/pull/334 for the proposal. The 
proposal is to move the APIs that were intended for public usage to 
`include/grpcpp/opencensus.h` from the current 
`src/cpp/ext/filters/census/...`

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9831434c-6aea-40cb-98eb-16e16424d9a3n%40googlegroups.com.


[grpc-io] Re: remote grpc server and local client

2022-07-20 Thread 'yas...@google.com' via grpc.io
Yes, you would just use that `<ip>:<port>` string as the target when 
creating the channel.
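
For example (the address is illustrative, and over the public internet you 
would want secure rather than insecure credentials):

    auto channel = grpc::CreateChannel("203.0.113.10:50051",
                                       grpc::InsecureChannelCredentials());
    auto stub = Greeter::NewStub(channel);  // Greeter is a placeholder service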

On Monday, July 11, 2022 at 2:19:30 PM UTC-7 srishtik...@gmail.com wrote:

> Hi,
> Is it possible to connect to remote Grpc Server from client using IP 
> address.Can anybody suggest how to do that?
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/2779373a-8b5e-43cd-b6e4-b64c0cc76b55n%40googlegroups.com.


[grpc-io] Re: Grpc with C++ server and C# Client

2022-07-20 Thread 'yas...@google.com' via grpc.io
You need to replace `RELEASE_TAG_HERE` with an actual release tag. 
`v1.48.0` for example.
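
For example, to build against the v1.48.0 release (remember to also pull the 
submodules):

    git clone -b v1.48.0 https://github.com/grpc/grpc
    cd grpc
    git submodule update --init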

On Friday, July 8, 2022 at 3:47:54 PM UTC-7 srishtik...@gmail.com wrote:

>
> I am trying to do a POC where i am trying to make my Site controller 
> studio on local system written in C# interact with the Server written in 
> C++ hosted on remote desktop by using Grpc Protocol. I tried following the 
> below blog for C# client : 
> https://docs.microsoft.com/en-us/aspnet/core/grpc/basics?view=aspnetcore-6.0
>
> The server is written in C++ and client is written in C#. The operating 
> system i am using is Windows.
>
> For C++,I followed the following blog: 
> https://medium.com/@andrewvetovitz/grpc-c-introduction-45a66ca9461f
>
> But I got one error after executing command
>
> git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc
>
> The error was RELEASE_TAG_HERE not found in origin.
>
> I have not established the C++ Server yet .So, I am stuck to proceed 
> further with C++ and C# interaction. A help in this would be great.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b3b75a0e-9fe5-4b56-9587-0c78ba3a92den%40googlegroups.com.


[grpc-io] Re: Observed an error trace from grpc core while invoking rpc on client side

2022-07-20 Thread 'yas...@google.com' via grpc.io
Note that the epollex poller has been deleted.

On Tuesday, July 19, 2022 at 10:25:05 AM UTC-7 balajisr...@gmail.com wrote:

> I have seen the below error trace during rpc gets invoked in the client 
> side.
>
> E0108 23:57:39.627493254 13286 ev_epollex_linux.cc:512] Error shutting 
> down fd 43. errno: 9
>
> Adding the backtrace taken from gdb below for reference.
> (gdb) bt #0 0x76a21770 in shutdown () from 
> /lib/x86_64-linux-gnu/libc.so.6 #1 0x75c3a085 in fd_shutdown 
> (fd=0x7fff8006d8c0, why=0x7fff640008e0) at 
> src/core/lib/iomgr/ev_epollex_linux.cc:491 #2 0x75ce4db5 in 
> fd_node_shutdown_locked (reason=reason@entry=0x75dce61d 
> "grpc_ares_ev_driver_shutdown", fdn=, fdn=) 
> at 
> src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_ev_driver.cc:127
>  
> #3 0x75ce5763 in fd_node_shutdown_locked (fdn=0x7fff80059190, 
> fdn=0x7fff80059190, reason=0x75dce61d "grpc_ares_ev_driver_shutdown") 
> at 
> src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_ev_driver.cc:124
>  
> #4 grpc_ares_ev_driver_shutdown_locked (ev_driver=<optimized out>) at 
> src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_ev_driver.cc:184
>  
> #5 0x75ce57ee in on_timeout_locked (arg=0x7fff8005b5f0, error=0x0) 
> at 
> src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_ev_driver.cc:214
>  
> #6 0x75c3584a in grpc_combiner_continue_exec_ctx () at 
> src/core/lib/iomgr/combiner.cc:268 #7 0x75c3f357 in 
> grpc_core::ExecCtx::Flush (this=0x7fff799f2d70) at 
> src/core/lib/iomgr/exec_ctx.cc:151 #8 0x75c3aba4 in pollset_work 
> (pollset=0x7fff800592b0, worker_hdl=0x7fff799f2c20, 
> deadline=140735341239128) at src/core/lib/iomgr/ev_epollex_linux.cc:1151 #9 
> 0x75cb0741 in run_poller (arg=0x7fff80059a00, error=<optimized out>) at src/core/ext/filters/client_channel/backup_poller.cc:122 #10 
> 0x75c3f0ce in exec_ctx_run (closure=<optimized out>, error=0x0) at 
> src/core/lib/iomgr/exec_ctx.cc:40 #11 0x75c3f33c in 
> grpc_core::ExecCtx::Flush (this=0x7fff799f2d70) at 
> src/core/lib/iomgr/exec_ctx.cc:148 #12 0x75c4f134 in 
> run_some_timers () at src/core/lib/iomgr/timer_manager.cc:140 #13 
> timer_main_loop () at src/core/lib/iomgr/timer_manager.cc:246 #14 
> timer_thread (completed_thread_ptr=0x7fff78e0) at 
> src/core/lib/iomgr/timer_manager.cc:293 #15 0x75cef443 in 
> grpc_core::(anonymous 
> namespace)::ThreadInternalsPosix::ThreadInternalsPosix(char const*, void 
> (*)(void*), void*, bool*, grpc_core::Thread::Options 
> const&)::{lambda(void*)#1}::_FUN(void*) () at 
> src/core/lib/gprpp/thd_posix.cc:114 #16 0x777a7064 in start_thread 
> () from /lib/x86_64-linux-gnu/libpthread.so.0 #17 0x76a2062d in 
> clone () from /lib/x86_64-linux-gnu/libc.so.6 
>
>  Please let me know why this error trace is coming and how to resolve 
> this. thanks!!!
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3aebb080-19e0-4fa8-9e72-86dd19f7c891n%40googlegroups.com.


[grpc-io] gRPC-Core Release 1.48.0

2022-07-19 Thread 'yas...@google.com' via grpc.io
This is the 1.48.0 (garum, see 
https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md) release 
announcement for gRPC-Core and the wrapped languages C++, C#, Objective-C, 
Python, PHP and Ruby. The latest release notes are at 
https://github.com/grpc/grpc/releases/tag/v1.48.0.

This release contains refinements, improvements, and bug fixes, with 
highlights listed below.
Core
   
   - Upgrade Abseil to LTS 20220623.0. (#30155)
   - Call: Send cancel op down the stack even when no ops are sent. (#30004)
   - FreeBSD system roots implementation. (#29436)
   - xDS: Workaround to get gRPC clients working with istio. (#29841)

Python
   
   - Set Correct Platform Tag in Wheels on Mac OS with Python 3.10. (#29857)
   - [Aio] Ensure Core channel closes when deallocated. (#29797)
   - [Aio] Fix the wait_for_termination return value. (#29795)

Ruby
   
   - Make the gem build on TruffleRuby. (#27660)
   - Support for prebuilt Ruby binary on x64-mingw-ucrt platform. (#29684)
   - [Ruby] Add ruby_abi_version to exported symbols. (#28976)

Objective-C

First developer preview of XCFramework binary distribution via CocoaPods 
(#28749). This brings a significant speedup to local compile time and 
includes support for Apple Silicon builds.

   - The following binary pods are made available for ObjC V1 & V2 API
  - gRPC-XCFramework (source pod gRPC)
  - gRPC-ProtoRPC-XCFramework (source pod gRPC-ProtoRPC)
   - The following platforms and architectures are included
  - ios: armv7, arm64 for device. arm64, i386, x86_64 for simulator
  - macos: x86_64 (Intel), arm64 (Apple Silicon)
   

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f82a72f1-22bd-4a8c-9878-96692dc6b762n%40googlegroups.com.


[grpc-io] Re: ssl/tls grpc not available for c++?

2022-02-16 Thread 'yas...@google.com' via grpc.io
Note that you are using `GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE`. In that 
mode, the server does not request (nor require) client certificates.

If you want the server to require client certificates, you could use 
`GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY` instead of 
`GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE`.

Also, note that in your client code, you would need to set the private key 
or the cert chain.
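
A rough sketch of both halves under that mode, reusing the variables from 
your snippets (`ca_cert` here stands for the CA bundle the server should use 
to verify client certificates):

    // Server: require and verify client certificates (mutual TLS).
    grpc::SslServerCredentialsOptions sslOps(
        GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY);
    sslOps.pem_root_certs = ca_cert;
    sslOps.pem_key_cert_pairs.push_back({ server_key, server_cert });
    builder.AddListeningPort(server_address,
                             grpc::SslServerCredentials(sslOps));

    // Client: present its own key/cert chain in addition to the root CA.
    grpc::SslCredentialsOptions opts;
    opts.pem_root_certs = ca_cert;
    opts.pem_private_key = client_key;
    opts.pem_cert_chain = client_cert;
    auto channel = grpc::CreateCustomChannel(server, grpc::SslCredentials(opts),
                                             cargs);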

On Tuesday, February 15, 2022 at 7:56:56 PM UTC-8 吴烨烽 wrote:

> Here are two questions
>
> Q1.Why the client can communicate with the server?
>
> step1: the server configures SslServerCredentials (including server 
> certificate and private key) to listen to the port. step2: The client 
> configures InsecureChannelCredentials to create the channel
>
> Q2.The client can communicate with the server, but it is not TLS through 
> wireshark packet capture.
>
> step1: the server configures SslServerCredentials (including server 
> certificate and private key) to listen to the port. step2: Client 
> configures SslCredentials (including CA certificates) to create a channel.
>
> server codes:
> std::string server_address("0.0.0.0:30051");
> std::string key;
> std::string cert;
> read("E:\\DataCert\\server1.pem", cert);
> read("E:\\DataCert\\server1.key", key);
> grpc::SslServerCredentialsOptions::PemKeyCertPair keycert = { key, cert };
> grpc::SslServerCredentialsOptions
> sslOps(GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE);
> sslOps.pem_key_cert_pairs.push_back(keycert);
> std::shared_ptr<grpc::ServerCredentials> creds =
> grpc::SslServerCredentials(sslOps);
> ServerBuilder builder;
> builder.AddListeningPort(server_address, creds);
> GreeterServiceImpl service;
> builder.RegisterService(&service);
> std::unique_ptr<Server> server(builder.BuildAndStart());
> std::cout << "Server listening on " << server_address << std::endl;
> server->Wait();
>
> client codes:
> std::string cert;
> std::string key;
> std::string root;
> read("E:\\DataCert\\ca.pem", root);
> grpc::SslCredentialsOptions opts;
> opts.pem_root_certs = root;
> grpc::ChannelArguments cargs;
> cargs.SetSslTargetNameOverride("foo.test.google.fr");
> std::string server{ "192.168.20.182:30051" };
> std::unique_ptr<Greeter::Stub> stub_ =
> Greeter::NewStub(grpc::CreateCustomChannel(server,
> grpc::SslCredentials(opts), cargs));
> //std::unique_ptr<Greeter::Stub> stub_ =
> Greeter::NewStub(grpc::CreateChannel(server,
> grpc::InsecureChannelCredentials()));
> std::string user("world");
> HelloRequest request;
> request.set_name(user);
> HelloReply reply;
> ClientContext context;
> Status status = stub_->SayHello(&context, request, &reply);
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/cd744844-098f-4147-b7f1-7aba296ccf15n%40googlegroups.com.


[grpc-io] Re: C++ header list size violation

2022-02-16 Thread 'yas...@google.com' via grpc.io
The channel arg `GRPC_ARG_MAX_METADATA_SIZE` serves that purpose.
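
For example, on the server (the 16 KiB limit is purely illustrative):

    grpc::ServerBuilder builder;
    builder.AddChannelArgument(GRPC_ARG_MAX_METADATA_SIZE, 16 * 1024);

The same arg can be set on the client channel via 
grpc::ChannelArguments::SetInt.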

On Thursday, February 10, 2022 at 12:14:16 PM UTC-8 A M wrote:

> Hello
>
> I am getting this error on a C++ gRPC server. 
> "header list size to send violates the maximum size (4096 bytes) set by 
> server."
> I see this in the gRPC docs at least `
> https://pkg.go.dev/google.golang.org/grpc#WithMaxHeaderListSize` for the 
> golang client and `
> https://pkg.go.dev/google.golang.org/grpc#MaxHeaderListSize` for the 
> golang server.
>
> Is there any equivalent like ServerBuilder.MaxHeaderListSize() in C++?
>
> I appreciate any help you can provide.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/5e0d98f8-87a6-451a-bc3b-f73cc335643fn%40googlegroups.com.


[grpc-io] Re: gRPC Keepalive/idletimeout

2022-01-31 Thread 'yas...@google.com' via grpc.io
1. This would depend on the settings that you've configured for keepalive. 
Note that this doc is specific to gRPC Core and dependents. 
2. Again, for Core and dependents, the idle timeout is currently disabled 
by default. GRPC_ARG_CLIENT_IDLE_TIMEOUT_MS is the channel arg to modify 
this behavior.
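
A quick sketch of setting it on a C++ channel (the 30-minute value is only an 
illustration; `target` and `creds` are assumed to come from your setup):

    grpc::ChannelArguments args;
    args.SetInt(GRPC_ARG_CLIENT_IDLE_TIMEOUT_MS, 30 * 60 * 1000);
    auto channel = grpc::CreateCustomChannel(target, creds, args);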
On Monday, January 17, 2022 at 10:46:54 PM UTC-8 Roshan Chaudhari wrote:

> I am trying to understand how keepalive or idle connection works with 
> gRPC. I have bidirectional streaming RPC, where I create session and do 
> nothing so that there is no activity on the channel.
>
> 1. If there is no activity, GRPC_ARG_KEEPALIVE_TIME_MS signal will be 
> blocked (https://github.com/grpc/grpc/blob/master/doc/keepalive.md#faq) 
> and connection will be closed after this interval, however, it does not 
> terminate and I see keepalive ping is sent and received. why?
>
> 2. If we do not set any params, is there any timeout after which 
> connection will be automatically closed? If yes, how do I change this 
> behaviour, which param?

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/2f487145-429a-4c10-a64e-0aabb9264a18n%40googlegroups.com.


[grpc-io] Re: Incoming frame of size N exceeds local window size of 0

2021-12-01 Thread 'yas...@google.com' via grpc.io
It looks like there is a bug around handling of flow control windows either 
in the client or the server. Based on the error log, I would presume that 
the server's flow control implementation is buggy. To dig deeper, we would 
need to look at what flow control updates are being sent.
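
One way to gather that is to run both sides with the flow-control tracers 
enabled (see doc/environment_variables.md in the grpc repo; the binary name 
below is a placeholder):

    GRPC_VERBOSITY=DEBUG GRPC_TRACE=flowctl,http ./your_binary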

On Monday, November 22, 2021 at 10:47:08 PM UTC-8 fli...@gmail.com wrote:

> I have a gRPC service with a bidirectional streaming method.
>
>- Client: python grpcio 1.41.1.
>- Server: akka-grpc 2.1.0.
>
> The client is a slow consumer (the server could potentially perform at a 
> higher rate).
>
> Occasionally (with some random delay after method call), client logs 
> message like the following:
> * E1122 13:42:55.763763501 108048 flow_control.cc:240] Incoming frame of 
> size 317205 exceeds local window size of 0. The (un-acked, future) window 
> size would be 1708209 which is not exceeded. This would usually cause a 
> disconnection, but allowing it due tobroken HTTP2 implementations in the 
> wild. See (for example) https://github.com/netty/netty/issues/6520 
> . * 
>
> Sometimes this message is followed by exception:
>
> *Exception in thread Thread-2: *
>
> *Traceback (most recent call last): *
>
> *  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner *
>
> *self.run() *
> *  File "/usr/lib/python3.8/threading.py", line 870, in run*
>
> *self._target(*self._args, **self._kwargs) *
>
> *  File "[...]/client.py", line 107, in fetch *
>
> *for response in responses: *
>
> *  File "[...]/venv/lib/python3.8/site-packages/grpc/_channel.py", line 
> 426, in __next__ *
>
> *return self._next() *
>
> *  File "[...]/venv/lib/python3.8/site-packages/grpc/_channel.py", line 
> 826, in _next *
>
> *raise self *
> *grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC 
> that terminated with: status = StatusCode.UNKNOWN details = "Stream 
> removed" debug_error_string = 
> "{"created":"@1637649068.837642637","description":"Error received from peer 
> ipv4:***.***.***.***:","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"Stream
>  
> removed","grpc_status":2}" * 
>
> But sometimes overall call succeeds with no exception.
>
> Some research:
>
>- Disabling BDP by setting grpc.http2.bdp_probe = 0 seems to resolve 
>the problem, but I suppose it's just a side effect of overall throughput 
>decrease.
>- There is somewhat similar issue 
> on GitHub, but it looks 
>like it's about an *unary* call. In that case, server starts to use 
>increased initial window size immediately after receiving client's 
> SETTINGS 
>frame and before sending SETTINGS ack (if I understood right). In my case, 
>frame ordering looks correct.
>- Exploring captured network packets and client-side gRPC tracing logs 
>(GRPC_VERBOSITY=DEBUG, GRPC_TRACE=flowctl) doesn't give me any 
>insights.
>
> I'll greatly appreciate any ideas on how to resolve or diagnose the 
> problem.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/64540ae4-0574-4eaf-a4f4-dae5d07e69f8n%40googlegroups.com.


[grpc-io] Re: Cross Language support C++, .NET 4.8, .Net Core...

2021-12-01 Thread 'yas...@google.com' via grpc.io
I don't know of any issues doing this.

On Friday, November 19, 2021 at 3:40:58 AM UTC-8 thecpu...@gmail.com wrote:

> How feasible is it to have implementations where one end is in:
>
> * C++ (native)
> * C# running .NET 4.8 full framework
> * C# running .NET Core 5.0 or later
>
> And the other end running in one of the other two
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/329acb6f-05c3-4ea7-bbff-b204ab580e3en%40googlegroups.com.


[grpc-io] Re: Some clients can't connect to our server.

2021-12-01 Thread 'yas...@google.com' via grpc.io
That is probably the issue that you are running into. Please refer to 
https://github.com/grpc/grpc/blob/master/doc/environment_variables.md for 
the environment variables that gRPC uses. You can override this by setting 
the channel arg GRPC_ARG_ENABLE_HTTP_PROXY to 0.
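
For example, in C++ (assuming `target` and `creds` from your setup):

    grpc::ChannelArguments args;
    args.SetInt(GRPC_ARG_ENABLE_HTTP_PROXY, 0);  // ignore proxy settings
    auto channel = grpc::CreateCustomChannel(target, creds, args);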

On Wednesday, November 17, 2021 at 7:14:15 AM UTC-8 bengtgus...@gmail.com 
wrote:

> We were thinking that maybe the problem is that the gRPC library is trying 
> to use a http proxy set up by our customers' IT administrators on employee 
> PCs. To me it is not clear whether gRPC (C++ version, out of the box) would 
> try to use proxy settings in a PC if there are any.
>
> One customer tried a laptop at home which worked, brought it to the 
> office, and it couldn't connect anymore.
>
> tisdag 16 november 2021 kl. 13:58:47 UTC+1 skrev Bengt Gustafsson:
>
>> Hi. We have a gRPC based server on the internet and our own client 
>> software connecting to it at different customer sites. Quite often these 
>> clients can't connect due to probably some firewall issue in their intranet 
>> to internet connection. We changed to using port 443 to avoid firewalls 
>> blocking outgoing traffic on unknown ports but that doesn't help.
>>
>> I turned on logging using gpr_set_log_verbosity(GPR_LOG_SEVERITY_INFO); 
>> but I got nothing at all before I got the return value deadline_exceeded. I 
>> have increased the timeout to 10s so it's not just slow.
>>
>> I replaced the URL with the corresponding ip number and then I got could 
>> not connect to all addresses with no measurable delay.
>>
>> Any tips and tricks that could be useful?
>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9ffbaa74-23a7-4300-a6f6-fc2ad442d615n%40googlegroups.com.


[grpc-io] Re: grpcio_testing code samples

2021-12-01 Thread 'yas...@google.com' via grpc.io
Thank you for the feedback! Do the examples 
in https://github.com/grpc/grpc/tree/master/examples 
and https://github.com/grpc/grpc/tree/master/test help?

On Tuesday, November 16, 2021 at 6:32:13 AM UTC-8 M T wrote:

> Hi all,
>
> I'm just starting to get into grpc, and was wondering about tests. The 
> documentation of the testing package only lists what classes etc. are 
> available, but not really how to use them - would it be possible to add 
> small testing examples to the general code examples, to help people 
> starting out?
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a787b200-e683-4a48-9d90-f1da02430f41n%40googlegroups.com.


[grpc-io] Re: Server restart during client side streaming

2021-12-01 Thread 'yas...@google.com' via grpc.io
I can think of a few ways to achieve this (with keepalives configured, 
of course). One way to detect this would be to perform a `Read()` on the 
stream. If the channel dies, the stream would die too and the read would 
fail.
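
A rough sketch with the sync API (RecordRoute/Point/RouteSummary are the 
route-guide placeholder names; HaveMoreData/NextMessage are hypothetical 
application helpers):

    grpc::ClientContext context;
    RouteSummary summary;
    std::unique_ptr<grpc::ClientWriter<Point>> stream(
        stub->RecordRoute(&context, &summary));
    while (HaveMoreData()) {
      if (!stream->Write(NextMessage())) {
        // The stream is dead (e.g. the server restarted mid-stream).
        break;
      }
    }
    grpc::Status status = stream->Finish();
    if (!status.ok()) {
      // Create a new context and stream, and resume from an
      // application-level checkpoint.
    }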

On Tuesday, November 16, 2021 at 3:08:46 AM UTC-8 paulin...@gmail.com wrote:

> Hi all,
>
> During client side streaming, I'd like to know how the client could detect 
> that the server process has been abruptly restarted (e.g. due to some 
> critical failure), and how the client could recover from this.
>
> Apparently for server side streaming we can use IsCancelled() and 
> AsyncNotifyWhenDone() to detect such issues. But is there any similar 
> mechanism on the client side?
>
> This is what I have tested so far:
>
>
>1. *I started a synchronous client side streaming RPC. *The channel is 
>in a GRPC_CHANNEL_READY state
>2. *Killed the server process.* After the server dies, the channel is 
>briefly GRPC_CHANNEL_IDLE. Then the channel started alternating between 
>GRPC_CHANNEL_CONNECTING and GRPC_CHANNEL_TRANSIENT_FAILURE
>3. *Then the server is started again. *Тhe channel then goes in a 
>GRPC_CHANNEL_READY state. However, the server no longer receives any of 
> the 
>streamed messages. I suppose at this stage the client should start a new 
>RPC and abandon the old one? But how would the client know that it has to 
>do this? *Is there a reliable way for the client to know that it has 
>to re-establish the connection?*
>
> I attempted to detect the issue using the retval of WaitForConnected() and 
> Write() but both of these functions returned “true” while the server was 
> down.
>
> Also, I tried setting these configuration variables on both the server and 
> client, but it didn't seem to help:
>
> args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 1);
>
> args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5000);
>
> args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
>
> args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);
>
> Regards,
>
> Paulin
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/66a022e7-638d-4441-b532-1fa3589c441cn%40googlegroups.com.


[grpc-io] Re: Is this a valid way to multi thread RPC handlers in gRPC async server in C++?

2021-12-01 Thread 'yas...@google.com' via grpc.io
Completion queues are thread-safe.
CallData is an application-level construct. gRPC itself won't be accessing 
the internals of the struct. It just uses it as an opaque pointer. If the 
application level logic is such that CallData can be accessed from multiple 
threads, you would need to synchronize access to it.
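
A minimal sketch of that locking approach (CallData/Proceed are the names 
from the async example; the mutex is something you would add yourself):

    #include <mutex>
    #include <grpcpp/grpcpp.h>

    struct CallData {
      std::mutex mu;
      void Proceed(bool ok) { /* drive the per-RPC state machine */ }
    };

    void WorkerThread(grpc::ServerCompletionQueue* cq) {
      void* tag;
      bool ok;
      while (cq->Next(&tag, &ok)) {
        auto* call = static_cast<CallData*>(tag);
        std::lock_guard<std::mutex> lock(call->mu);  // serialize access
        call->Proceed(ok);
      }
    }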

Hope that helps!

On Sunday, November 14, 2021 at 9:59:34 PM UTC-8 Rajanarayana A wrote:

> Hello, 
>
> Extremely sorry if this is a basic question. 
>
> I'm trying to work on the basic async gRPC C++ code as mentioned in 
> https://github.com/grpc/grpc/tree/master/examples/cpp/helloworld
>
> Aim is to experiment multi threading of RPC handlers. Can someone please 
> let me know that whether the programming logic is correct? ( Please refer 
> to the stackoverflow link -   
> https://stackoverflow.com/questions/69944030/is-this-a-valid-way-to-multi-thread-rpc-handlers-in-grpc-async-server-in-c
>  
> )
>
> Few questions,
>
>- Is the completion queue thread-safe?
>- I see that there is a possibility of one instance of CallData being 
>accessed in multiple threads (returned as part of tag). Is CallData 
>thread-safe here or do we need to have a mutex for the same?
>
> Please note that this is an incomplete program.
> Appreciate any help on this.
>
> Thanks and Regards,
> Raj
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1b460abb-496d-4357-b6c4-413e45a4fc0dn%40googlegroups.com.


[grpc-io] Re: Why GRPC server running on EC2/AWS returns RST_STREAM error for a GET Call?

2021-12-01 Thread 'yas...@google.com' via grpc.io
I think more logs are needed here. From the logs that you did post, it 
seems like the connection was established but I don't see any details of 
the RPC itself. Your code doesn't show the RPC either.

On Thursday, November 11, 2021 at 5:21:39 PM UTC-8 kiranku...@gmail.com 
wrote:

> GRPC Client is written in GoLang runs over Ubuntu VM 16.04. This is 
> running on a bare metal server.
>
> GRPC Client in GoLang 
>
> ==
>
> var opts []grpc.DialOption 
>
> opts = append(opts, grpc.WithInsecure()) 
>
> opts = append(opts, grpc.WithBlock()) 
>
> serverAddr := serverIp + ":" + serverPort 
>
> conn, err := grpc.Dial(serverAddr, opts...) 
>
> if err != nil { 
>
>common.Logger().Errorf("fail to dial: %v", err) 
>
> return nil 
>
>  } 
>
> //defer conn.Close() 
>
>  client := xxx_pb.xxxClient(conn) 
>
> return client
>
>  
>
> My GRPC Servers runs in an EC2 instance (AWS) 
>
> GRPC Server in Python 
>
> 
>
> def serve(): 
>
>print("GRPC Server Starting") 
>
>server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
>
>
>xxx_grpc_pb2_grpc.add_xxxServicer_to_server( xxxFwServicer(), 
> server)   
>
>server.add_insecure_port('[::]:50051') 
>
>server.start() 
>
>server.wait_for_termination()  
>
>  
>
> if __name__ == '__main__':
>
>   serve() 
>
> The connection is successful between GRPC client/server. However, when the 
> client code does a GET using one of the predefined GRPC API, GRPC server 
> responds with the below error.
>
> *rpc error: code = Internal desc = stream terminated by RST_STREAM with 
> error code: NO_ERROR*
>
> When the same GRPC server is run on ubuntu VM on another bare metal server 
> instead of EC2 AWS instance, then I don't see this error. Everything works 
> fine.
>
> On AWS Front, I enabled the security groups and other permissions for port 
> 50051. I assume this is good enough as GRPC Client/Server connection is 
> established successfully. 
>
> GRPC Client and Server communicate over Public Network (Internet) here.
>
> From the process logs that I added, Server RPC API is not even getting 
> called. The log message present in the RPC API code on server side doesn't 
> even get printed.
>
> Enabled GRPC logs on the server and they are as below during Client GRPC 
> request. I can't make out much though.
>
> I 23:21:45.544276213   21932 timer_generic.cc:512] .. 
> shard[19]->queue_deadline_cap --> 13015
>
> I 23:21:45.544278865   21932 timer_generic.cc:577] .. 
> shard[19] popped 0
>
> I 23:21:45.544286851   21932 timer_generic.cc:632] .. result 
> --> 1, shard[19]->min_deadline 12015 --> 13016, now=12015
>
> I 23:21:45.544290422   21932 timer_generic.cc:741]   TIMER CHECK 
> END: r=1; next=13012
>
> I 23:21:45.544297936   21932 timer_manager.cc:188]   sleep for a 
> 997 milliseconds
>
> I 23:21:45.552848998   21933 
> completion_queue.cc:1078]   RETURN_EVENT[0x196d9f5321d0]: QUEUE_TIMEOUT
>
> I 23:21:45.552879498   21933 
> completion_queue.cc:969]grpc_completion_queue_next(cq=0x196d9f5321d0, 
> deadline=gpr_timespec { tv_sec: 1636672905, tv_nsec: 752877317, clock_type: 
> 1 }, reserved=(nil))
>
> I 23:21:45.754174520   21933 
> completion_queue.cc:1078]   RETURN_EVENT[0x196d9f5321d0]: QUEUE_TIMEOUT
>
> I 23:21:45.754204954   21933 
> completion_queue.cc:969]grpc_completion_queue_next(cq=0x196d9f5321d0, 
> deadline=gpr_timespec { tv_sec: 1636672905, tv_nsec: 954198396, clock_type: 
> 1 }, reserved=(nil))
>
> I 23:21:45.955498820   21933 
> completion_queue.cc:1078]   RETURN_EVENT[0x196d9f5321d0]: QUEUE_TIMEOUT
>
> I 23:21:45.955651038   21933 
> completion_queue.cc:969]grpc_completion_queue_next(cq=0x196d9f5321d0, 
> deadline=gpr_timespec { tv_sec: 1636672906, tv_nsec: 155648812, clock_type: 
> 1 }, reserved=(nil))
>
> I 23:21:46.156859185   21933 
> completion_queue.cc:1078]   RETURN_EVENT[0x196d9f5321d0]: QUEUE_TIMEOUT
>
> I 23:21:46.156955462   21933 
> completion_queue.cc:969]grpc_completion_queue_next(cq=0x196d9f5321d0, 
> deadline=gpr_timespec { tv_sec: 1636672906, tv_nsec: 356953290, clock_type: 
> 1 }, reserved=(nil))
>
> I 23:21:46.358278383   21933 
> completion_queue.cc:1078]   RETURN_EVENT[0x196d9f5321d0]: QUEUE_TIMEOUT
>
> I 23:21:46.358295443   21933 
> completion_queue.cc:969]grpc_completion_queue_next(cq=0x196d9f5321d0, 
> deadline=gpr_timespec { tv_sec: 1636672906, tv_nsec: 552873984, clock_type: 
> 1 }, reserved=(nil))
>
> I 23:21:46.541256973   21932 timer_manager.cc:204]   wait ended: 
> was_timed:1 kicked:0
>
> I 23:21:46.541274502   21932 timer_generic.cc:719]   TIMER CHECK 
> BEGIN: now=13012 next=9223372036854775807 tls_min=12015 glob_min=13012
>
> I 23:21:46.541279016   21932 timer_generic.cc:614] .. 
> shard[31]->min_deadline = 13012
>
> I 23:21:46.541282578   21932 timer_generic.cc:537] .. 
> 

[grpc-io] Re: gRPC client stream error semantics when server is shutdown but TCP connection remains

2021-12-01 Thread 'yas...@google.com' via grpc.io
Having a HTTP/2 proxy in between is muddying the waters for keepalive. I 
believe istio/envoy have settings for keepalives that you might be able to 
employ here. If that doesn't work for you either, you might want to 
consider a custom application level ping.

On Tuesday, November 9, 2021 at 6:28:41 PM UTC-8 C. Schneider wrote:

> Hi,
>
> For a chat service I have the client connect to a gRPC server running in 
> Istio (and using FlatBuffers).
>
> When the server is shutdown the TCP connection remains connected (to Istio 
> it appears) but the client doesn't detect the server went away, so 
> continues to send Keep Alives thinking the server should be sending data 
> eventually, but the server never will since its RPC call state was lost 
> when it was shutdown.
>
> What is the expected RPC stream semantics in the case where the server 
> goes away mid stream? Should the client be able to detect this and restart 
> the RPC stream?
>
> Thanks!
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7cc40071-2dc3-42f2-aebf-c591bfac570dn%40googlegroups.com.


[grpc-io] Re: gRPC wait server ready

2021-12-01 Thread 'yas...@google.com' via grpc.io
I think this API is pretty stable and should be 
promoted. https://github.com/grpc/grpc/pull/28247
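
For what it's worth, the C++ usage is just (the 5-second deadline is 
illustrative):

    grpc::ClientContext ctx;
    ctx.set_wait_for_ready(true);  // queue the RPC until the channel is READY
    ctx.set_deadline(std::chrono::system_clock::now() +
                     std::chrono::seconds(5));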

On Tuesday, November 9, 2021 at 1:06:03 PM UTC-8 Сергей Соболев wrote:

> Hi, everyone!
> I'm writing test (with gtest) with grpc similar to the following (in C++ 
> pseudocode):
>
> class Fixture {
>   void SetUp() { m_server = BuildAndStart(); }
>   void TearDown() { m_server->Shutdown(); }
> };
>
> TEST_F(Fixture, *Test1*) { ASSERT_EQ(clientStub->rpcCall(ctx), 
> grpc::Status::OK); }
> TEST_F(Fixture, Test2) { ASSERT_EQ(clientStub->rpcCall(ctx), 
> grpc::Status::OK); }
>
> When compiling (clang++11) with a thread sanitizer (grpc library compiling 
> with tsan too) sometimes I get an error UNAVALABLE (code 14) with logs:
> "Failed to pick subchannel", "Failed to connect to all addresses" and 
> observe the state of the channel is TRANSIENT_FAILURE.
> But only the first test always fails (*Test1*). Searching the internet I 
> found the following suitable solution 
> https://chromium.googlesource.com/external/github.com/grpc/grpc/+/HEAD/examples/python/wait_for_ready/
> In other words, with tsan I guess grpc library does not have enough time 
> to fully initialize (Test2 is always OK). After use 
> grpc::ClientContext::set_wait_for_ready(bool) method problem solved
>
> TEST_F(Fixture, Test1) { ASSERT_EQ(clientStub->rpcCall(ctx*WithWait*), 
> grpc::Status::OK); }
> TEST_F(Fixture, Test2) { ASSERT_EQ(clientStub->rpcCall(ctx*WithWait*), 
> grpc::Status::OK); }
>
> My questions are:
>
>1. grpc::ClientContext::set_wait_for_ready(bool) marked as 
>EXPERIMENTAL (C++ api) and may be removed in the future, but 
>grpc::ClientContext::set_fail_fast(bool) method is DEPRECATED. It's 
>recommended to use wait_for_ready(). Is it true that in the future 
>set_wait_for_ready should become a replacement set_fail_fast?
>2. Are there any other ways to solve this problem? I was also looking 
>at ChannelState API 
>
> https://chromium.googlesource.com/external/github.com/grpc/grpc/+/refs/heads/chromium-deps/2016-09-09/doc/connectivity-semantics-and-api.md
>  Which 
>way is more correct and reliable?
>
> I will also need to implement similar logic in C# code. In C# this is a 
> CallOptions.WithWaitForReady(bool) method, but it's EXPERIMENTAL too
>
> Wished to clear up a misunderstanding and choose the right solution.
>
> Thank you for your time.
> Highly appreciated.
> Sergey
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/77c530be-8fea-4eb4-8aac-5d937f568a09n%40googlegroups.com.


[grpc-io] Re: Flow Based Load Balancing

2021-12-01 Thread 'yas...@google.com' via grpc.io
I am not sure I completely understand the question but I'll give it a shot. 
On any new RPC (can either be unary or streaming), after load-balancing is 
done, gRPC ends up choosing a backend server for that RPC. If there isn't 
an existing transport connection to that backend, a connection is started 
and a stream is started on that connection. For the entire duration of the 
RPC, the stream remains on the same transport and hence the same backend.

On Sunday, November 7, 2021 at 9:35:26 PM UTC-8 noahc...@gmail.com wrote:

> I have a set up where I have 1 gRPC-client serving multiple backend 
> gRPC-servers. Now I want that the packets(flows) in each RPC call are 
> mapped to 1 particular instance of gRPC-server. The mapping could be based 
> on a 5 tuple hash(Source IP, port, Dest IP, port, Protocol)
> Is there some way to achieve this?  As I am of the idea that gRPC uses a 
> single long-lived session?

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/96eb919b-b23e-462a-bcdb-a57deafc0ecdn%40googlegroups.com.


[grpc-io] Re: Stream flow control on C++ sync API?

2021-09-23 Thread 'yas...@google.com' via grpc.io
What timeout are you referring to here? Some logs might be helpful here.

If by chance, you are referring to keepalives then, 
`GRPC_ARG_KEEPALIVE_TIME_MS` and `GRPC_ARG_KEEPALIVE_TIMEOUT_MS` are the 
corresponding channel args.
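
For reference, they are set like any other channel arg (the values below are 
only an illustration; `target` and `creds` are assumed):

    grpc::ChannelArguments args;
    args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 30000);     // ping every 30s
    args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);  // wait 10s for ack
    auto channel = grpc::CreateCustomChannel(target, creds, args);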

On Friday, August 6, 2021 at 12:26:40 AM UTC-7 tobias.krueger wrote:

> I have to apologize for me vague first post.
> I should have watched my logs more precisely - the buffering is not as big 
> as I thought.
> Every packet contains 3 integer values and 1 MByte (1024*1024) of bytes 
> (==blob)
>
> The timing looks as follows:
> Write(Chunk#0)  : ~ 1 msec : OK
> Write(Chunk#1)  : ~ 1 msec : OK
> Write(Chunk#2)  : ~ 1 msec : OK
> Write(Chunk#3)  : ~   900 msec : OK
> Write(Chunk#4)  : ~ 1 msec : OK
> Write(Chunk#5)  : ~20.000 msec : fails
> Write(Chunk#6)  : < 1 msec : fails
> Write(Chunk#7)  : < 1 msec : fails
> :
> :
> Write(Chunk#n)  : < 1 msec : fails
>
> Watching on Wireshark the sending starts as soon as the first chunk is 
> written. 
> The 6th call sending chunk #5 actually timeouts after 20 seconds, while 
> the first chunk is still transmitting.
> The transmission of the first chunk takes a total time of 70 seconds (seen 
> in Wireshark).
>
> Currently I see no way to slow down my sending client.
> From Sync API I see no way to recognize that I should slow down to avoid 
> the timeout on Chunk#5.
>
> Is there a knob where I can increase the 20 seconds timeout?
> Is there a way to get a more detailed error message - not only a bool 
> indicating false?
>
> Any other good ideas?
>
> Thanks
>
>
> On Wednesday, August 4, 2021 at 7:26:42 PM UTC+2 apo...@google.com wrote:
>
>> A few clarifications that might help to clarify here:
>>
>> > I am trying send several hundred packets (each ~1MB) over a stream from 
>> a client to a server.
>>
>> Do you know the exact number of packets, and the exact size of each 
>> packet?
>>
>> > Every call to  stream.Write(packet) returns immediately.
>>
>> Can we measure the amount of time it takes for stream.Write calls to 
>> complete in terms of microseconds or milliseconds?
>>
>>  > Watching with Wireshark I can see, that not even the first packet has 
>> left the sending computer. 
>>
>> This seems unlikely, as we should at least have seen the initial TCP 
>> handshake packets when the RPC was started. Are you sure that you're 
>> sniffing the correct network interface, and looking at the correct TCP 
>> connection?
>>
>> On Tuesday, August 3, 2021 at 8:49:38 AM UTC-7 tobias.krueger wrote:
>>
>>> Hi, 
>>> we have encountered some strange behaviors using the Write method of the 
>>> sync API.
>>>
>>> I have two effects that might be different, but perhaps some 
>>> understanding about the sync API might help.
>>> Both are related to slow / aborting connections.
>>>
>>>1. I am trying send several hundred packets (each ~1MB) over a 
>>>stream from a client to a server.
>>>The network is using a custom radio module with a very low bandwidth 
>>>like Wi-Fi direct.
>>>Every call to  stream.Write(packet) returns immediately.
>>>Even the stream.WriteLast returns within some seconds.
>>>Watching with Wireshark I can see, that not even the first packet 
>>>has left the sending computer. 
>>>That means all packets are buffered in my system.
>>>
>>>Is there any way using the sync API to control/limit the buffering?
>>>
>>>2. In the opposite we encounter sometimes a blocking of the server's 
>>>Write method when streaming very small packets to the client.
>>>This can be reproduced when I pull the client's network cable.
>>>Then the server's Write call is blocked for ~8 seconds
>>>
>>> On the on hand I can flood the sync API and get no backpressure at all, 
>>> on the other hand the Write call gets blocked for several seconds.
>>>
>>> Any ideas to get a better understanding of the behavior?
>>>
>>> Anything I can do without switching to the async API?
>>>
>>> Thanks
>>> Tobias
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/66c905a4-e7ed-4cea-85df-01c56a32642dn%40googlegroups.com.


[grpc-io] Re: Intermittent Unavailable/Unknown RpcException (C++/C#)

2021-09-23 Thread 'yas...@google.com' via grpc.io
For reference, https://github.com/grpc/grpc/issues/27292 is the related 
issue

On Thursday, August 26, 2021 at 10:09:41 AM UTC-7 Jacob B wrote:

> We are using gRPC (version  1.37.1) for our inter-process communication 
> between our C# process and C++ process. Both processes act as a server and 
> client with the other and run on the same machine over localhost using the 
> HTTP/2 transport. All of the calls are use blocking synchronous unary calls 
> and not bi-directional streaming. Some average(ish) stats:
>
> From C++->C#: 0-2 calls per second, 0-40 calls per minute
>
> From C#->C++: 0-5 calls per second, 0-200 calls per minute
>
> Intermittently, we were getting one of 3 issues
>
>- C# client call to C++ server comes back with an RpcException, 
>usually “HTTP2/Parse Error”, “Endpoint Read Failed”, or “Transport Closed” 
>- C++ client call to C# server comes back with Unavailable or Unknown 
>- C++ client WaitForConnected call to check the channel fails after 
>500ms 
>
>  
>
> The top most one is the most frequent and where we have the most 
> information about. Usually, what we’ll see is the Client receives the RPC 
> call and runs into an unknown frame type. Then the subchannel goes into 
> shutdown and everything usually re-connects fine. We also generally see an 
> embedded error like the following (note that we replaced all __FILE__ 
> instances to __FUNCTION__ in our gRPC source):
>
> win_read","file_line":307,"os_error":"The system detected an invalid 
> pointer address in attempting to use a pointer argument in a 
> call.\r\n","syscall":"WSARecv","wsa_error":10014}]},{"created":"@1622120588.49400","description":"frame
>  
> of size 262404 overflows local window of 
> 65535","file":"grpc_core::chttp2::TransportFlowControl::ValidateRecvData","file_line":213}]}
>
> What we’ve seen with the unknown frame type, is that it parses the 
> HEADERS, WINDOW_UPDATE, DATA, WINDOW_UPDATE and then gets a TCP: on_read 
> without a corresponding READ and then tries to parse again. It’s this parse 
> where it looks like the parser is at the wrong offset in the buffer, 
> because it gets the unknown frame type, incoming frame size and incoming 
> stream_id all map to the middle of the RPC call that it just parsed.
>
>  
>
> The above was what we were encountering prior to a change to create a new 
> channel for each rpc call. While we realize it is not great from a 
> performance standpoint, we have seen increased stability since making the 
> change. However, we still do occasionally get rpc exceptions. Now, the most 
> common is “Unknown”/”Stream Removed” rather than the ones listed above.
>
>
> Any ideas on what might be going wrong is appreciated.
>
>  
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/5d029abb-c274-4a4d-a10c-0e806bb32bd4n%40googlegroups.com.


Re: [grpc-io] Re: Would like to know more details about the binary log.

2021-09-23 Thread 'yas...@google.com' via grpc.io
That is correct. Binary logging is not yet supported in Core based 
languages.

On Monday, August 2, 2021 at 11:26:54 AM UTC-7 Eric Anderson wrote:

> I'll let someone else explain more of the state for C-based languages.
>
> As I understand it, binary logging isn't supported in the C-based 
> languages, which includes C#, C++, Python, and others. grpc-dotnet does 
> not support binary logging. grpc-go's implementation is in grpc/binarylog. 
> grpc-java's is at io.grpc.protobuf.services.BinaryLogs. 
>  
> Should be clear in both cases how to write to a different collector. Java 
> doesn't expose the concrete protobuf type to the sink, so filtering will be 
> harder than in Go. In general, filtering based on RPC request will be hard 
> because 1) the message is untyped and serialized, so you'll need some 
> custom logic to decode it and 2) the message may be truncated which can 
> trivially fail deserialization even if the field you want wasn't truncated.
>
> On Mon, Jul 19, 2021 at 10:02 AM Xiaofeng Han  wrote:
>
>> friendly ping, thanks.
>>
>> On Thursday, July 15, 2021 at 9:08:51 PM UTC-7 Xiaofeng Han wrote:
>>
>>> Hello grpc-io group,
>>>
>>> This is Xiaofeng from Roblox, I am very interested in using the grpc 
>>> binary logging to build a debugging tool for all the micro-services at 
>>> Roblox. So far I only find this grfc 
>>> 
>>>  online. 
>>> I have the following questions and would greatly appreciate it if someone 
>>> could help. 
>>>
>>> 1. Is the binary log fully implemented for all major programming 
>>> languages, like C#, C++, go, python, etc?
>>>
>>> 2. If yes, how can we customized it to support
>>> a. conditional logging, e..g, random sampling, or when the rpc request 
>>> contains certain values.
>>> b. redirect the logging to other collectors instead of a local file.
>>>
>>> Thanks,
>>> Xiaofeng
>>>
>>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to grpc-io+u...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/fc398e75-5d29-4a93-808d-9c50a7c145c2n%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f7b72e52-4fa6-4a91-abf4-362051e5585an%40googlegroups.com.


[grpc-io] Re: grpc c++: how to create async callback client

2021-09-13 Thread 'yas...@google.com' via grpc.io
Hi,

If using the callback API, you wouldn't need to create a separate thread 
for each RPC. The callbacks in the reactor would be automatically invoked 
by the threads managed by the gRPC library. That being said, if the reactor 
is going to be blocking for a substantial amount of time, you would want to 
perform that in a separate application thread, so that the library threads 
can focus on polling connections and executing callbacks.

Creating a different reactor object per RPC would be a very common way of 
doing things and should work fine.
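For a unary RPC, a minimal sketch looks like the following (using the 
Greeter service from the standard helloworld example; on older releases the 
accessor is `stub->experimental_async()` instead of `stub->async()`):

#include <iostream>

#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"

// The lambda below runs on a gRPC-managed thread once the RPC completes,
// so no per-RPC application thread is needed.
void SayHelloAsync(helloworld::Greeter::Stub* stub) {
  auto* context = new grpc::ClientContext();
  auto* request = new helloworld::HelloRequest();
  auto* reply = new helloworld::HelloReply();
  request->set_name("world");
  stub->async()->SayHello(context, request, reply,
                          [context, request, reply](grpc::Status status) {
                            if (status.ok()) {
                              std::cout << reply->message() << std::endl;
                            }
                            delete reply;
                            delete request;
                            delete context;
                          });
}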

On Tuesday, September 7, 2021 at 2:22:47 PM UTC-7 oleg@idt.net wrote:

> Hello, could you possibly help me with creating an async callback client? 
> Should I create a UnaryClientReactor per rpc, or should I run every rpc in 
> a separate thread?
> There is a cq-based async client example, but the callback client example 
> looks like a sync rpc with thread blocking.
>
> Thank you.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/edeef857-31e9-442a-996d-f4b078e6813fn%40googlegroups.com.


Re: [grpc-io] Re: gRFC A36: xDS-Enabled Servers

2021-09-09 Thread 'yas...@google.com' via grpc.io
The gRFC is being updated in https://github.com/grpc/proposal/pull/264 for 
the Core/C++ API to use a struct for the serving status update. Please take 
a look!
On Friday, June 11, 2021 at 1:18:39 PM UTC-7 Eric Anderson wrote:

> This gRFC is being updated to describe RouteConfiguration behavior as part 
> of https://github.com/grpc/proposal/pull/237
>
> On Fri, Apr 30, 2021 at 2:27 PM 'yas...@google.com' via grpc.io <
> grp...@googlegroups.com> wrote:
>
>> Concrete details for the Core/C++ API for creating xDS-enabled servers 
>> are up at https://github.com/grpc/proposal/pull/234. Please take a look.
>>
>> On Wednesday, January 20, 2021 at 2:41:17 PM UTC-8 Eric Anderson wrote:
>>
>>> Please review and comment. The gRFC is at 
>>> https://github.com/grpc/proposal/pull/214. The gRFC covers new APIs and 
>>> their plumbing to allow having xDS-enabled servers. The precise C++/wrapped 
>>> language APIs are not covered, but an idea of what they may look like is 
>>> covered. Java and Go have their concrete API designs presented.
>>>
>>> A36 itself doesn't provide any xDS features, but is the basis for them 
>>> to be built without applications needing to make code changes for each 
>>> added xDS feature. For an xDS feature, see the related gRFC A29: 
>>> xDS-Based Security for gRPC Clients and Servers 
>>> <https://github.com/grpc/proposal/pull/184>.
>>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to grpc-io+u...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/9387c3fc-2afa-4924-aab3-b41e9d355248n%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/grpc-io/9387c3fc-2afa-4924-aab3-b41e9d355248n%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/64ffe43c-c9e1-4811-bc19-cfdbbee90ac8n%40googlegroups.com.


[grpc-io] Re: LiveStream Android Camera to Server

2021-09-08 Thread 'yas...@google.com' via grpc.io
https://grpc.io/docs/platforms/android/java/quickstart/ 
I hope that helps!

On Tuesday, September 7, 2021 at 2:00:18 AM UTC-7 tsiau...@gmail.com wrote:

> Hi Team,
> I am very new to GRPC. 
> I need to implement an android java application for streaming my android 
> device camera to a server with the GRPC framework, so how do I quick start?
>
> Please share if any sample project link is available.  
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d945cd9b-fdc1-46f3-a782-7deeb3204b94n%40googlegroups.com.


[grpc-io] Re: Using 0.0.0.0 as server listen address

2021-09-08 Thread 'yas...@google.com' via grpc.io
For C++, the bound port is available in the `selected_port` parameter of 
the `AddListeningPort` API.
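A minimal sketch (asking for port 0 and reading back the assignment; 
`service` stands in for your service instance):

grpc::ServerBuilder builder;
int selected_port = 0;
builder.AddListeningPort("0.0.0.0:0", grpc::InsecureServerCredentials(),
                         &selected_port);
builder.RegisterService(&service);
std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
// After BuildAndStart() returns, selected_port holds the bound port.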

On Wednesday, September 1, 2021 at 10:39:11 AM UTC-7 Yuri Golobokov wrote:

> Hi,
>
> Which language are you using for gRPC server?
>
> On Friday, August 27, 2021 at 12:44:50 PM UTC-7 sumuk...@gmail.com wrote:
>
>> After creating a server and adding a listen port with "0.0.0.0:", is 
>> it possible to figure out which interface grpc picked to bind and query 
>> that ?
>>
>> This is similar to using port 0 and then querying the port number that 
>> was dynamically assigned, except for the IP address.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/bd690d25-4f5c-4221-ba70-d5ba1a4d6275n%40googlegroups.com.


[grpc-io] Re: gRPC executor threads and timer thread

2021-07-08 Thread 'yas...@google.com' via grpc.io

1. What is the purpose of the timer thread?
Throughout the gRPC stack, there are a bunch of deadlines and timeouts that 
need to be tracked. The way gRPC Core does this is through timers. It 
schedules a closure to be executed when that timer expires and this closure 
is run on the timer thread.

2. Should everything Just Work™ even if we call 
`grpc_timer_manager_set_threading(false)`
No, it won't. :)
On Tuesday, June 29, 2021 at 11:00:36 AM UTC-7 Jonathan Basseri wrote:

> The context, following from our previous thread, is that we want to add 
> grpc endpoints to an existing high-performance application. Our application 
> already has extensive control over the allocations and threading on the 
> system, so *we would prefer a single-threaded grpc server* that hands off 
> async requests to our own work queue.
>
> All of the above seems to be working in Alex's prototype, but we want to 
> make sure that stopping these threads is not going to cause problems down 
> the line.
>
> 1. What is the purpose of the timer thread?
> 2. Should everything Just Work™ even if we call 
> `grpc_timer_manager_set_threading(false)`
>
> Thanks,
> Jonathan
>
> On Monday, June 28, 2021 at 9:19:31 PM UTC-7 Alex Zuo wrote:
>
>> For executor threads, we can use Executor::SetThreadingAll(false) to shut 
>> down. If there is no thread, it still works according to the following code.
>>
>> void Executor::Enqueue(grpc_closure* closure, grpc_error_handle error,
>> bool is_short)
>> ... 
>> do {
>> retry_push = false;
>> size_t cur_thread_count =
>> static_cast<size_t>(gpr_atm_acq_load(&num_threads_));
>>
>> // If the number of threads is zero (i.e. either the executor is not threaded
>> // or already shutdown), then queue the closure on the exec context itself
>> if (cur_thread_count == 0) {
>> #ifndef NDEBUG
>> EXECUTOR_TRACE("(%s) schedule %p (created %s:%d) inline", name_, closure,
>> closure->file_created, closure->line_created);
>> #else
>> EXECUTOR_TRACE("(%s) schedule %p inline", name_, closure);
>> #endif
>> grpc_closure_list_append(grpc_core::ExecCtx::Get()->closure_list(),
>> closure, error);
>> return;
>> }
>>
>> For the timer thread, there is a function to shut it down. However I 
>> cannot tell what is the impact if there is no such a thread. I also don't 
>> know the timer is used.
>>
>> void grpc_timer_manager_set_threading(bool enabled);
>>
>> Anybody has any insight? 
>>
>> Thanks,
>> Alex
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4d9f8c2a-6725-4d45-9647-60571d44f266n%40googlegroups.com.


[grpc-io] Re: Using async grpc streaming APIs

2021-06-16 Thread 'yas...@google.com' via grpc.io
> Does that mean there shouldn't be any outstanding Writes / Reads when 
this function is called? Or just that there shouldn't be a call to "Finish" 
when a call to "Read" or "Write" is in progress (as in multiple threads 
competing with each other)? I didn't see a problem with an outstanding read 
when I tested (by issuing a read, closing server side, closing client side 
in parallel) this, but just wanna confirm what’s ideal.

This has to do with API usage. It is not safe to call `Finish()` when a 
`Read()`/`Write()` operation is in progress (for example, if it is an 
asynchronous operation and the tag has not yet been received from the 
completion queue.)

> Does this mean that a call to "WritesDone" before calling "Finish" is 
optional and I can just directly call "Finish" instead? If yes, in what 
circumstances would one find calling "WritesDone" before "Finish" useful?

A `WritesDone()` would result in a half-close from the client and I can 
imagine cases where a half-close from the client is useful for the server 
to know (cases where there are still more messages to be read).

> The documentation for CompletionQueue::Next says that for a client-side 
Read, a call to Next returns not-ok only when 
the "call is dead". What all does that exactly mean? What constitutes a 
"call"? I thought it could be not-ok if the server-side or client-side is 
already "Finish"-ed or if the underlying network was compromised.

A "call" here is an RPC or a stream from a HTTP2 perspective. When `Read()` 
fails on the client, it is a good signal that the RPC has ended one way or 
another, either through a proper status received from the server or some 
error (includes network errors). This status/error would be received when 
`Finish()` is called.

> Also, is it ok to invoke ClientAsyncStreamingInterface::Finish before the 
server has sent all its messages and before an AsyncReaderInterface::Read yields 
a not-ok tag? I see that the second point in the documentation for 
this "Finish" instructs exactly against such a usage, but I just wanted to 
confirm if it's an illegal usage and could result in run time errors 
(assert fails) from grpc code?

It is illegal usage of the API. I'm not sure if we have asserts in place 
for this though.

> Also, what's the accepted way to close the streams when it's not 
implicitly known that no more messages are to be received from the server? 

1. Invoking `WritesDone` is just a half-close and does not mean that the 
RPC is done. 
2. Waiting for `Read()` to fail is a sure-shot way of knowing that the RPC 
is done. 
3. It's not safe to invoke `Finish()` if we don't know that we are done 
reading. I believe the call would just get stuck, but I might be wrong 
about the observable behavior here.
If we just want to end the RPC without caring about the server's status, 
then the client can also cancel the RPC with a `TryCancel()` on the 
ClientContext.
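
To tie these together, here is a minimal sketch of a clean client-side 
shutdown for an async bidi stream, assuming no other operations are 
outstanding on the completion queue (the tag values are illustrative):

#include <grpcpp/grpcpp.h>

template <typename Req, typename Res>
grpc::Status ShutdownStream(grpc::ClientAsyncReaderWriter<Req, Res>* stream,
                            grpc::CompletionQueue* cq) {
  void* tag;
  bool ok;
  stream->WritesDone(reinterpret_cast<void*>(1));  // half-close
  cq->Next(&tag, &ok);                             // wait for the half-close
  Res msg;
  stream->Read(&msg, reinterpret_cast<void*>(2));
  while (cq->Next(&tag, &ok) && ok) {
    // A successful read; keep draining until Read() fails.
    stream->Read(&msg, reinterpret_cast<void*>(2));
  }
  // Read() failed: the RPC is over one way or another; Finish() is now safe.
  grpc::Status status;
  stream->Finish(&status, reinterpret_cast<void*>(3));
  cq->Next(&tag, &ok);
  return status;
}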



On Friday, June 4, 2021 at 8:42:02 PM UTC-7 Piyush Sharma wrote:

> *(Working with grpc async bidirectional streaming)*
>
>
>1. 
>
>For ClientAsyncStreamingInterface::Finish 
>and for ServerAsyncReaderWriterInterface::Finish, the 
>documentation says: 
>
>"Should not be used concurrently with other operations"
>
>Does that mean there shouldn't be any outstanding Writes / Reads when 
>this function is called? Or just that there shouldn't be a call to 
> "Finish" 
>when a call to "Read" or "Write" is in progress (as in multiple threads 
>competing with each other)? I didn't see a problem with an outstanding 
> read 
>when I tested (by issuing a read, closing server side, closing client side 
>in parallel) this, but just wanna confirm what’s ideal.
>
>2. 
>
>For ClientAsyncStreamingInterface::Finish the doc 
>says:
>
>"
>/// It is appropriate to call this method exactly once when both:
>/// * the client side has no more message to send
>/// (this can be declared implicitly by calling this method, or
>/// explicitly through an earlier call to the WritesDone method
>/// of the class in use, e.g. \a 
>ClientAsyncWriterInterface::WritesDone or
>/// \a ClientAsyncReaderWriterInterface::WritesDone).
>"
>
>Does this mean that a call to "WritesDone" before calling "Finish" is 
>optional and I can just directly call "Finish" instead? If yes, in what 
>circumstances would one find calling "WritesDone" before "Finish" useful?

[grpc-io] Re: Use of insecure C functions/API(s)

2021-06-16 Thread 'yas...@google.com' via grpc.io
There is no roadmap to remove them that I am aware of. As for whether these 
functions are handled safely everywhere, that seems like an alternate way 
of asking if gRPC has bugs related to this. What I CAN tell you is that the 
code is continuously tested, and I would imagine that if there was a bug 
related to the usage, it would get fixed when found.

On Wednesday, June 2, 2021 at 2:31:26 AM UTC-7 Nilesh Gajwani wrote:

> Hi, 
> We had a penetration testing done for our iOS app, which uses gRPC-Core 
> pod.
> We received comments specifying that unsafe C functions (example: memcpy, 
> malloc, etc) are being used in the binary.
> On searching in the project directory for a example function (memcpy), I 
> can see the gRPC-Core files using this function.
> Can you give an confirmation if these functions are handled safely 
> everywhere, or is the removal of these in the roadmap?
> Please check for the example API call, memcpy for now, I can provide list 
> of all functions if needed
> Thanks and regards,
> Nilesh
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c479856b-862e-4c7b-9851-d3e352097b0fn%40googlegroups.com.


[grpc-io] Re: What is the mechanism of Completion Queue?

2021-06-16 Thread 'yas...@google.com' via grpc.io
> However, HandleRpcs() only calls Proceed() once. It seems we always stay 
in the PROCESS state. Will it generate a memory leak?

No, the same CallData object is used multiple times as a completion queue 
tag, allowing the state to progress.

> What does it mean to have cq->Next(&tag, &ok) return the out param ok as 
false? 

A `false` value for `ok` signifies a failure to read a successful event, 
but the documentation already mentions that. Do you have a more specific 
question that you have in mind?
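
For reference, here is a sketch of the CallData state machine from the 
standard async helloworld example, showing how the same object is the tag 
for each stage of one RPC:

#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"

class CallData {
 public:
  CallData(helloworld::Greeter::AsyncService* service,
           grpc::ServerCompletionQueue* cq)
      : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE) {
    Proceed();
  }

  void Proceed() {
    if (status_ == CREATE) {
      status_ = PROCESS;
      // 'this' is the tag: it comes back out of the cq when a new RPC lands.
      service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_, this);
    } else if (status_ == PROCESS) {
      new CallData(service_, cq_);  // spawn a handler for the next client
      reply_.set_message("Hello " + request_.name());
      status_ = FINISH;
      responder_.Finish(reply_, grpc::Status::OK, this);  // same tag again
    } else {
      // FINISH: the tag surfaces one last time; the object frees itself.
      delete this;
    }
  }

 private:
  enum CallStatus { CREATE, PROCESS, FINISH };
  helloworld::Greeter::AsyncService* service_;
  grpc::ServerCompletionQueue* cq_;
  grpc::ServerContext ctx_;
  helloworld::HelloRequest request_;
  helloworld::HelloReply reply_;
  grpc::ServerAsyncResponseWriter<helloworld::HelloReply> responder_;
  CallStatus status_;
};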
On Sunday, May 30, 2021 at 12:44:25 PM UTC-7 Mohan Gyara wrote:

> What does it mean to have cq->Next(&tag, &ok) return the out param ok as 
> false? 
> I appreciate if someone answer this. 
>
> Regards,
> Mohan
>
> On Thursday, 5 November 2015 at 08:48:33 UTC+5:30 hardy wrote:
>
>> Is there anyone familiar with CompleteQueue could tell me how the 
>> mechanism is? The only piece of codes I can found is 
>>
>>   void HandleRpcs() {
>> // Spawn a new CallData instance to serve new clients.
>> new CallData(&service_, cq_.get());
>> void* tag;  // uniquely identifies a request.
>> bool ok;
>> while (true) {
>>   // Block waiting to read the next event from the completion queue.
>>   // The event is uniquely identified by its tag, which in this case
>>   // is the memory address of a CallData instance.
>>   cq_->Next(&tag, &ok);
>>   GPR_ASSERT(ok);
>>   static_cast<CallData*>(tag)->Proceed();
>> }
>>   }
>>
>>
>> I really would like to know how I could response to different kinds of 
>> request? And will the loop keep iterating over the CompleteQueue again and 
>> again?
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/04221c3d-b036-4f33-be64-24e56e065c0fn%40googlegroups.com.


[grpc-io] Re: gRFC A36: xDS-Enabled Servers

2021-04-30 Thread 'yas...@google.com' via grpc.io
Concrete details for the Core/C++ API for creating xDS-enabled servers are 
up at https://github.com/grpc/proposal/pull/234. Please take a look.

On Wednesday, January 20, 2021 at 2:41:17 PM UTC-8 Eric Anderson wrote:

> Please review and comment. The gRFC is at 
> https://github.com/grpc/proposal/pull/214. The gRFC covers new APIs and 
> their plumbing to allow having xDS-enabled servers. The precise C++/wrapped 
> language APIs are not covered, but an idea of what they may look like is 
> covered. Java and Go have their concrete API designs presented.
>
> A36 itself doesn't provide any xDS features, but is the basis for them to 
> be built without applications needing to make code changes for each added 
> xDS feature. For an xDS feature, see the related gRFC A29: xDS-Based 
> Security for gRPC Clients and Servers 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9387c3fc-2afa-4924-aab3-b41e9d355248n%40googlegroups.com.


[grpc-io] Re: C++ server stream replies not reaching client

2021-03-24 Thread 'yas...@google.com' via grpc.io
The deserialization happens at the surface layer instead of the transport 
layer, unless we suspect that HTTP/2 frames themselves were malformed. If 
we suspect the serialization/deserialization code, we can check if simply 
serializing the proto to bytes and back is causing issues. Protobuf has 
utility functions to do this. Alternatively, gRPC has utility functions 
here 
https://github.com/grpc/grpc/blob/master/include/grpcpp/impl/codegen/proto_utils.h

I am worried about memory corruption though, so that is certainly something 
to check.


On Wednesday, March 24, 2021 at 11:02:30 AM UTC-7 Bryan Schwerer wrote:

> Thanks for replying.
>
> I was able to get a tcpdump capture and run it through the wireshark 
> dissector.  It indicated that there were malformed protobuf fields in the 
> message.  I'm guessing the client threw the messages away.   I didn't see a 
> trace message indicating that.  Is there some sort of stat I can check?  
> Would it be possible that older versions didn't discard malformed messages?  
> I haven't loaded up an old version of our code, but I suspect it has always 
> been there.  The end of the message has counters and such that if they were 
> a bit off, no one would notice.
>
> I think we are corrupting the messages on the server side. I turned on 
> -fstack-protector-all and the problem went away.  If there's a possible way 
> to check the message before sending to Writer, that may give us more 
> information.  We don't use arenas.  The message itself is uint32's, bool's 
> and one string.  I assume protobufs makes a copy of the string and not the 
> pointer to the buffer.
>
> On Wednesday, March 24, 2021 at 1:35:29 PM UTC-4 yas...@google.com wrote:
>
>> This is pretty strange. It is possible that we are being blocked on flow 
>> control. I would check that we are making sure that the application layer 
>> is reading. If I am not mistaken, `perform_stream_op[s=0x7f0e16937290]:  
>> RECV_MESSAGE` is a log that is seen at the start of an operation meaning 
>> that the HTTP/2 layer hasn't yet been instructed to read a message, (or 
>> there is a previous read on the stream already that hasn't finished). Given 
>> that you are just updating the gRPC version from 1.20 to 1.36.1, I do not 
>> have an answer as to why you would see this without any application 
>> changes. 
>>
>> A few questions - 
>> Do the two streams use the same underlying channel/transport?
>> Are the clients and the server in the same process?
>> Is there anything special about the environment this is being run in?
>>
>> (One way to make sure that the read op is being propagated to the 
>> transport layer, is to check the logs with the "channel" tracer.)
>> On Friday, March 19, 2021 at 12:59:30 PM UTC-7 Bryan Schwerer wrote:
>>
>>> Hello,
>>>
>>> I'm in the long overdue process of updating gRPC from 1.20 to 1.36.1.  I 
>>> am running into an issue where the streaming replies from the server are 
>>> not reaching the client in about 50% of the instances.  This is binary, 
>>> either the streaming call works perfectly or it doesn't work at all.  After 
>>> debugging a bit, I turned on the http tracing and from what I can tell, the 
>>> http messages are received in the client thread, but where in the correct 
>>> case, perform_stream_op[s=0x7f0e16937290]:  RECV_MESSAGE is logged, but in 
>>> the broken case it isn't.  No error messages occur.
>>>
>>> I've tried various tracers, but haven't hit anything.  The code is 
>>> pretty much the same pattern as the example and there's no indication any 
>>> disconnect has occurred which would cause the call to terminate.  Using gdb 
>>> to look at the thread, it is still in epoll_wait.
>>>
>>> The process in which this runs calls 2 different synchronous server 
>>> streaming calls to the same server in separate threads.  It also is a gRPC 
>>> server.  Everything is run over the internal 'lo' interface.  Any ideas on 
>>> where to look to debug this?
>>>
>>> Thanks,
>>>
>>> Bryan
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/41e29b56-535e-47f4-a529-a23fface1b40n%40googlegroups.com.


[grpc-io] Re: Python: client side tls credentials reload

2021-03-24 Thread 'yas...@google.com' via grpc.io
I believe the old/current API only allows for tls credentials reloading on 
the server side. We do have a new API for TLS credentials that will allow 
for this. It has been developed in gRPC Core and is available for use in 
gRPC C++. gRPC Python should be able to expose it too.

On Tuesday, March 23, 2021 at 1:55:52 AM UTC-7 Abrar Shivani wrote:

> For grpc python implementation, is there a way to reload client tls 
> credentials without restarting it? 
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/cab24d5e-bd77-4258-b557-b8fdbd5ae6bfn%40googlegroups.com.


[grpc-io] Re: C++ server stream replies not reaching client

2021-03-24 Thread 'yas...@google.com' via grpc.io
This is pretty strange. It is possible that we are being blocked on flow 
control. I would check that we are making sure that the application layer 
is reading. If I am not mistaken, `perform_stream_op[s=0x7f0e16937290]:  
RECV_MESSAGE` is a log that is seen at the start of an operation meaning 
that the HTTP/2 layer hasn't yet been instructed to read a message, (or 
there is a previous read on the stream already that hasn't finished). Given 
that you are just updating the gRPC version from 1.20 to 1.36.1, I do not 
have an answer as to why you would see this without any application 
changes. 

A few questions - 
Do the two streams use the same underlying channel/transport?
Are the clients and the server in the same process?
Is there anything special about the environment this is being run in?

(One way to make sure that the read op is being propagated to the transport 
layer, is to check the logs with the "channel" tracer.)
On Friday, March 19, 2021 at 12:59:30 PM UTC-7 Bryan Schwerer wrote:

> Hello,
>
> I'm in the long overdue process of updating gRPC from 1.20 to 1.36.1.  I am 
> running into an issue where the streaming replies from the server are not 
> reaching the client in about 50% of the instances.  This is binary, either 
> the streaming call works perfectly or it doesn't work at all.  After 
> debugging a bit, I turned on the http tracing and from what I can tell, the 
> http messages are received in the client thread, but where in the correct 
> case, perform_stream_op[s=0x7f0e16937290]:  RECV_MESSAGE is logged, but in 
> the broken case it isn't.  No error messages occur.
>
> I've tried various tracers, but haven't hit anything.  The code is pretty 
> much the same pattern as the example and there's no indication any 
> disconnect has occurred which would cause the call to terminate.  Using gdb 
> to look at the thread, it is still in epoll_wait.
>
> The process in which this runs calls 2 different synchronous server 
> streaming calls to the same server in separate threads.  It also is a gRPC 
> server.  Everything is run over the internal 'lo' interface.  Any ideas on 
> where to look to debug this?
>
> Thanks,
>
> Bryan
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8fdb96e8-e33e-4202-b218-c93a0baaad67n%40googlegroups.com.


[grpc-io] Re: How to change Channel Arguments After Server start up

2021-03-24 Thread 'yas...@google.com' via grpc.io
No, channel arguments are fixed when the server is built and cannot be 
changed afterwards. I am curious as to what your use-case is though.
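
For reference, such arguments are set on the ServerBuilder before the 
server starts, e.g.:

grpc::ServerBuilder builder;
// 5 minutes; there is no API to change this on a running server.
builder.AddChannelArgument(GRPC_ARG_MAX_CONNECTION_IDLE_MS, 5 * 60 * 1000);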

On Wednesday, March 17, 2021 at 7:59:43 PM UTC-7 zhan...@gmail.com wrote:

> Hi,
> Wonder if  we can change the channel argument from grpc server side after 
> server start up ?
>
> For example, can we change "GRPC_ARG_MAX_CONNECTION_IDLE_MS" dynamically ?
>
> Thanks
>
> Liping
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c8edaafc-94ba-4eef-96fd-f4cd5cc23b49n%40googlegroups.com.


[grpc-io] Re: Does the latest grpc c++ version support vxworks?

2020-12-24 Thread 'yas...@google.com' via grpc.io
It is not officially supported (i.e. there is no continuous testing 
infrastructure set up for this environment) but please don't let that 
discourage you from trying it out.

On Friday, December 4, 2020 at 6:49:39 AM UTC-8 yuan xuan wrote:

> Hi, does the latest grpc c++ version support vxworks, or is there 
> a plan to support vxworks?

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a488c751-bdfd-46a1-82fb-f2ae5ef94792n%40googlegroups.com.


[grpc-io] Re: Using ServerBuilder, how do I tell that the port is in use? (C++)

2020-12-24 Thread 'yas...@google.com' via grpc.io
gRPC Core sets SO_REUSEPORT by default, and I believe that's why you are 
running into this. To disable it, you can set the channel argument 
GRPC_ARG_ALLOW_REUSEPORT to 0.
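
A minimal sketch based on your snippet (ssl_options as in your code):

grpc::ServerBuilder builder;
// Disable SO_REUSEPORT so a second bind to a busy port fails.
builder.AddChannelArgument(GRPC_ARG_ALLOW_REUSEPORT, 0);
int bound_port = 0;
builder.AddListeningPort("0.0.0.0:50051",
                         grpc::SslServerCredentials(ssl_options), &bound_port);
std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
if (bound_port == 0) { /* port already in use */ }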

On Thursday, December 3, 2020 at 4:16:02 PM UTC-8 hek...@gmail.com wrote:

> int bound_port = 0;
> std::string server_address("0.0.0.0:50051");
> builder.AddListeningPort(server_address, 
> grpc::SslServerCredentials(ssl_options), &bound_port);
> builder.RegisterService(&service);
> std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
> if (bound_port == 0) { cout << "port already in use!" << endl; }
>
> I don't know why above code is not work, the bound_port return always is 
> 50051, I can't get 0 as expected when port already in use.
>
> Thanks for your help in advance!
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/9a520082-e5f4-43a6-b204-a2aeb47c30a3n%40googlegroups.com.


[grpc-io] Re: How to use SetSocketMutator

2020-12-24 Thread 'yas...@google.com' via grpc.io
Hi,

The socket mutator API is usually used to add socket options to fds, not 
to substitute a custom endpoint altogether. Currently, gRPC C++ does not 
provide a public API to use a custom endpoint, so there is no 
straightforward way to do this without getting into the gRPC internals.

On Sunday, November 8, 2020 at 7:53:46 PM UTC-8 zhan...@gmail.com wrote:

> Hi, We need to implement an endpoint with a custom socket... with a quick 
> search, found "Socket_Mutator" seems a starting point, wonder anyone has 
> experience in this respect ?  how to use "SetSocketMutator" during runtime 
> to specify the custom endpoint?
>
> Thanks
> Liping
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a9dfbb65-148a-420f-b1d2-bdb61c604681n%40googlegroups.com.


[grpc-io] Re: Can sync grpc tell me how many requests are not handled?

2020-12-16 Thread 'yas...@google.com' via grpc.io
That is a really good question! If you set memory limits via the resource 
quota, there are cases where incoming requests and connections are rejected 
due to unavailable memory quota. Unfortunately, this is not exposed as any 
metric right now (if I recall correctly).
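
For reference, a sketch of how such limits get attached (the quota is 
enforced, but rejections are not surfaced as a metric):

grpc::ResourceQuota quota("server-quota");
quota.SetMaxThreads(32);         // cap on sync-server threads
quota.Resize(64 * 1024 * 1024);  // 64 MiB memory budget
grpc::ServerBuilder builder;
builder.SetResourceQuota(quota);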

On Tuesday, December 8, 2020 at 3:39:44 AM UTC-8 zhi qi wrote:

> In grpc 1.26.0(not necessary), c++ server(necessary).
>
> I just build a sync grpc server with ResourceQuota'MaxThreads set. If 
> clients requests too frequently, there will be some requests not be 
> executed.
>
> How can I get the number and the suspending time of these requests? Or is 
> there some metrics?
>
> Thank you all.
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/0e938ba5-d487-4ab8-bc0f-2d35bbe0a3ecn%40googlegroups.com.


Re: [grpc-io] flow control problem in GRPC C++ release v1.25.0

2020-12-16 Thread 'yas...@google.com' via grpc.io
That's some really good investigation! Kudos! 
The transport layer (chttp2) does not really care whether the application 
is using the async API or the sync API. What matters really is whether 
there is a thread that is polling the underlying fds. For the sync API, 
this generally happens through `Read()`/`Write()`/`Finish()` calls, but for 
the cq based async API, this happens when `Next()` is called on the 
associated cq. Given, that in the async case, it is the timer thread that 
ends up receiving the update and not any application thread, I would 
suspect that the application is not polling the cqs.

On Monday, December 14, 2020 at 9:05:31 PM UTC-8 hanfei1...@gmail.com wrote:

> update : after I use sync mode on the client side, this problem is fixed. 
> Does it mean a bug of a mixed use of sync and async mode?
>
>
> On Tue, Dec 15, 2020 at 3:11 AM 韩飞  wrote:
>
>> I use GRPC C++ lib in my distributed SQL engine. I use server stream that 
>> the client side sends a "connect" request, then the server side sends data 
>> packets by a *sync writer* and the client side receives data packets by an 
>> *async reader*.
>> I use a single stream to pass massive data and be sure that the reader 
>> fetches data packets promptly, but I find the stream is halted *every 
>> 4-5 seconds* sometimes.
>> To look into this problem, I open the flowctl and timer trace log. I 
>> find the server (sender) consumes the remote window very fast and the 
>> stream is moved to *stalled list.* the log is in client.log 
>> 
>>
>> We can find that in 12:39s stream 11 is added to stalled list and waited 
>> for a stream updt, in 12:43s , a timer thread receive it and reset the 
>> window size, then unblock this write call.
>>
>> On server side, the log is server.log 
>> , 
>> in 12:43 server begin to receive all of the data in a window at once and 
>> sent the updt packet to the client.
>>
>> Why does the updt request halt for four seconds? It seems the server 
>> processes too much data *at once*, but why?
>>
>> -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "grpc.io" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/grpc-io/hR2g4hFvp3M/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> grpc-io+u...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/07626768-cf49-4670-9b8c-62da6112f523n%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8067facd-ef47-4135-a174-eb3440f011b3n%40googlegroups.com.


[grpc-io] Re: Which stream does gRPC log to?

2020-10-01 Thread 'yas...@google.com' via grpc.io
This is how logging is implemented for Windows (it writes to stderr): 
https://github.com/grpc/grpc/blob/8c142a1d8c66a25598151bf154655da9288ffa5a/src/core/lib/gpr/log_windows.cc#L95

On Tuesday, September 29, 2020 at 10:35:35 AM UTC-7 paulin...@gmail.com 
wrote:

> Hi,
>
> I've been trying to save the gRPC output to a log file when using 
> GRPC_TRACE and GRPC_VERBOSITY, but I'm not sure how to pick it up. I use 
> C++ on Windows:
>
>- When I run my application from cmd, I see the gRPC output
>- When I start it from the Visual Studio debugger I don't see it in 
>the console output. I also don't see it in the Output tab in Visual Studio.
>
> So I was just wondering if someone could clarify to which output stream 
> the gRPC log goes please? It doesn't seem to be normal cout; is it a cerr 
> or a clog?
>
> Regards,
> Paulin
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/999c52d6-7c0a-4514-a1b6-7531babeb253n%40googlegroups.com.


[grpc-io] Re: How to build a clang memory sanitizer instrumented version of gRPC?

2020-10-01 Thread 'yas...@google.com' via grpc.io
Our current test infrastructure uses bazel for msan tests but earlier we 
were using Makefile with `CONFIG=msan` which is why I'm surprised that it 
did not work for you.

On Thursday, September 24, 2020 at 12:27:59 PM UTC-7 mandr...@gmail.com 
wrote:

> I would like to instrument some c++ code that uses grpc (clang 9, ubuntu 
> 18) with memory sanitizer, and to do this, all the libraries it uses must 
> be built with msan too.
>
> from the  memory sanitizer docs 
> 
>  :
>
> ```
> It is critical that you should build all the code in your program 
> (including libraries it uses, in particular, C++ standard library) with 
> MSan. See MemorySanitizerLibcxxHowTo for more details.  
> ```
>
> However, I have so far not succeeded in building msan instrumented grpc, 
> and would appreciate some help on how to do this. 
>
> I was able to build *address *sanitized grpc simply by setting *CONFIG=asan 
> *in the supplied Makefile, but this does not work with *CONFIG=msan (plus 
> adding instrumented libc++.so and libc++abi.so as described in docs)*
>
> cmake doesn't seem to have any options to turn on MSAN, and I'm not 
> familiar with bazel.
>
> Wondering how to build MSAN instrumented grpc? 
>
> thanks in advance!
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d49e5012-9a24-4b60-a6c5-3b13d0bdbc47n%40googlegroups.com.


[grpc-io] Re: C++ Interceptor for authorization

2020-09-30 Thread 'yas...@google.com' via grpc.io
We've had a similar request/question 
in https://github.com/grpc/grpc/issues/24017

On Tuesday, September 29, 2020 at 11:03:21 AM UTC-7 ayeg...@gmail.com wrote:

> I have a fairly simple use case - check the headers of an incoming RPC 
> call for a special string indicating authorization to use the service. 
> Currently I am experimenting based off of this example 
> https://github.com/grpc/grpc/blob/7bf82de9eda0aa8fecfe5edb33834f1b272be30b/test/cpp/end2end/server_interceptors_end2end_test.cc.
>  
> Specifically the `LoggerInterceptor` example. The happy path involves 
> calling `Proceed` on the `InterceptorBatchMethods`, but `Hijack` kills the 
> program.
>
> My main question is how do I `fail` the connection when the authorization 
> is not present?
>
> Sincerely,
> Aleks
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ca0cdd8f-ceae-4645-b816-a4a7c2840920n%40googlegroups.com.


[grpc-io] Re: GRPC_ARG_KEEPALIVE_TIME_MS vs GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS

2020-08-05 Thread 'yas...@google.com' via grpc.io
Hi,

You are right. It is definitely not user friendly for someone trying to set 
up keepalives to also have to set 
GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS. I believe the reason 
for this is mostly to deal with the fact that pings in gRPC Core can 
originate via any of 3 methods -
1) Keepalive
2) BDP
3) grpc_channel_ping()

I can think of a few possible ways to improve experience here -
1) Reduce the default value of 
GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS
2) Use the minimum of GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS 
and GRPC_ARG_KEEPALIVE_TIME_MS to set the minimum ping interval without 
data.
3) Make GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS defunct.
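
In the meantime, keepalive behaves as expected if both arguments are set 
together; a sketch with illustrative values:

grpc::ChannelArguments args;
args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 30 * 1000);
args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10 * 1000);
// Lower the throttle to match the keepalive period.
args.SetInt(GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS, 30 * 1000);
auto channel = grpc::CreateCustomChannel(
    "localhost:50051", grpc::InsecureChannelCredentials(), args);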

On Wednesday, August 5, 2020 at 8:58:27 AM UTC-7 
tobias@dentsplysirona.com wrote:

> Hi,
> can somebody explain the relation between these two time settings on a 
> client perspective?
>
> It seems to me, that GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS 
> is the lower limit for GRPC_ARG_KEEPALIVE_TIME_MS.
> But why does this GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS 
> exists at all, when I control the rate of the ping with the KEEP_ALIVE time?
>
> I need to detect the interruption of long living streaming channels, that 
> might not transport any data for minutes or even hours.
>
>
>
> Thanks
> Tobias
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c4d04d60-65c5-4ade-9d82-69e6fdf44847n%40googlegroups.com.


[grpc-io] Re: How to Pass a a list of data like array or vector (using repeated in protobuf file) from server to client using stream data.

2020-06-10 Thread 'yas...@google.com' via grpc.io
Hi, 

It seems like you are looking for help with usage of protobuf for repeated 
fields. 

protobuf has pretty good documentation for this - 
https://developers.google.com/protocol-buffers/docs/reference/cpp-generated#repeatedmessage
 
Hope that helps. Cheers!
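
As a quick sketch for your streaming case (the field name `responses` and 
the types are assumptions based on the proto below):

void SendDataToClient(grpc::ServerWriter<ParamResponse>* writer,
                      const std::vector<Response>& container) {
  ParamResponse batch;
  for (const Response& r : container) {
    *batch.add_responses() = r;  // add_<field>() appends to a repeated field
  }
  writer->Write(batch);
}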

On Sunday, May 31, 2020 at 9:19:50 AM UTC-7 san...@gmail.com wrote:

>
> How to  Pass a a vector  (using repeated in protobuf file) from server to 
> client using stream
>
> The gRPC server client interaction is stream.
>
>
> //--- STRUCTURE OF PROTO FILE ---
> message Response
> {
>   int32 param1 = 1;
>   int32 param2 = 2;
>   float param3 = 3;
> }
>
> message ParamResponse
> {
>   repeated Response responses = 1;
> }
>
> // RPC call from client to server
> rpc Read( ReadyCommand ) returns ( stream ParamResponse ){ }
>
>
>
> --
>
> In the C++ project -
>
> sendDataToClient( ..., ParamResponse DataResponse )
> {
>   typedef std::vector<...> Container;
> }
>
> Would like to know how can I pass the container.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/522f141c-1d76-4523-921b-5a07b326d956n%40googlegroups.com.


[grpc-io] Re: RST_STREAM with error code 2 and grpc_status 13

2020-06-10 Thread 'yas...@google.com' via grpc.io
Hi,

Thanks for the feedback, but this is not a gRPC setting. This probably 
needs to be reported to GKE folks.

On Sunday, May 31, 2020 at 3:17:35 AM UTC-7 Thomas Barnekow wrote:

> I found the solution. The load balancer was configured with a default 
> timeout of 30 seconds.
>
> Based on further testing, while the number of responses the client 
> received before receiving the RST_STREAM frame still varied, I saw that 
> this consistently occurred after approximately 30 seconds. Searching 
> explicitly for timeouts, I found a StackOverflow question (see 
> https://stackoverflow.com/questions/44601191/kubernetes-on-gce-ingress-timeout-configuration)
>  
> pointing me into the right direction.
>
> Google's documentation does not talk about that default timeout. At least 
> I did not find it.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a2260e02-4a44-4e7e-a0f2-ac5a4bb3524cn%40googlegroups.com.


[grpc-io] Re: C++ buffered writes in Async client

2020-06-10 Thread 'yas...@google.com' via grpc.io
Admittedly, set_buffer_hint() is not used much, and all it does is 
sometimes avoid immediate writes by the HTTP/2 transport in gRPC. Your 
usecase does not seem one where this behavior is going to be useful. 

To be clear, a write is considered to be "committed" when it passes flow 
control in gRPC's HTTP/2 layer (not TCP) and it will not always result in 
bytes being sent out to the socket immediately. 
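
For completeness, a sketch of how the hint is attached to a write (the 
stream and tag are whatever your application already uses):

template <typename W, typename R>
void BufferedWrite(grpc::ServerAsyncReaderWriter<W, R>* stream, const W& msg,
                   void* tag) {
  grpc::WriteOptions opts;
  opts.set_buffer_hint();         // hint: coalesce instead of flushing now
  stream->Write(msg, opts, tag);  // still only one outstanding write at a time
}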
On Thursday, May 28, 2020 at 4:53:20 PM UTC-7 igor.ber...@gmail.com wrote:

> What is the point of WriteOptions::set_buffer_hint() then? If only one 
> write can be outstanding anyway, then I cannot simply fill-up the socket 
> buffer for WriteOptions::set_buffer_hint()  to work, because the second 
> write can only happen after the first write has been committed and then 
> *acked *the event queue! This must definitely affect throughput as it is 
> not at-par to just keep pushing data to socket write buffer. *I do one 
> write, then wait for the event loop to tell me "ok your write is 
> committed, you can do another one"*. This is inherently slow by design 
> due to overcommunication for writing messages.
>
>
> On Friday, May 29, 2020 at 12:32:31 AM UTC+1, yas...@google.com wrote:
>>
>> Hi,
>>
>> From what I understand, it seems that you are getting bottlenecked by the 
>> network. You are right that gRPC allows only one outstanding write at any 
>> given time but that decision itself probably won't affect the throughput 
>> much. If the application does not buffer the messages, it would instead 
>> need to be buffered by gRPC and the end performance would remain about the 
>> same.
>>
>> On Monday, May 18, 2020 at 5:41:17 AM UTC-7 igor.ber...@gmail.com wrote:
>>
>>> Hi all,
>>>
>>> I've trying to implement a high-performance async server in C++ where 
>>> throughput matters, but also don't want to keep messages too long in a 
>>> buffer (say up to 200ms). 
>>> WriteOptions::set_buffer_hint() seems like a perfect candidate to enable 
>>> high throughput. However whenever a Write method is called on 
>>> ServerAsyncReaderWriter, then it just gets blocked, because it gets 
>>> buffered! But then ironically I cannot call another Write method, because 
>>> GRPC API demands another Write to be only called after another successful 
>>> Write has completed. How then is supposed to work if async buffered Write 
>>> does not commit the write right away, but then I cannot call Write method 
>>> in a batch as well? Am I getting this wrong somehow? I appreciate any help.
>>>
>>> void grpc_impl::ServerAsyncReaderWriter< W, R >::Write( const W & msg,
>>> ::grpc::WriteOptions options,
>>> void * tag ) inline override virtual
>>>
>>> Request the writing of *msg* using WriteOptions *options* with 
>>> identifying tag *tag*.
>>>
>>> *Only one write may be outstanding at any given time. This means that 
>>> after calling Write, one must wait to receive tag from the completion queue 
>>> BEFORE calling Write again*. WriteOptions *options* is used to set the 
>>> write options of this message
>>>
>>> Kind Regards,
>>>  Igor
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/33ebbacb-ab3c-4790-be13-b9740d9a6b0en%40googlegroups.com.


[grpc-io] Re: Recommendations on possible shared resources in AsyncClient

2020-06-10 Thread 'yas...@google.com' via grpc.io
Hi, 
I'll try to be as clear and concise as possible.

1) Yes, channel, stub and completion queues can be shared across multiple 
requests. Other classes such as ClientContext and reader/writer classes 
would be specific for each request.

2) I would expect the stub to need to remain alive mainly because it holds 
a ref to the shared channel object, but otherwise the stub is a thin 
wrapper, so you would be able to get around requiring the stub to remain 
alive if you can make sure that the channel object will remain alive 
through all the calls.

3) CompletionQueues are thread-safe and have their own synchronization 
mechanism, but there are no ordering guarantees on the requests themselves.

4) It is quite normal to have dedicated threads servicing completion 
queues. 
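
A sketch of point 1 (Greeter from the standard helloworld example is 
illustrative):

auto channel = grpc::CreateChannel("localhost:50051",
                                   grpc::InsecureChannelCredentials());
auto stub = helloworld::Greeter::NewStub(channel);  // thin; holds channel ref
grpc::CompletionQueue cq;  // thread-safe; can carry many outstanding tags
// Per-request state stays per-request:
grpc::ClientContext ctx;  // one per RPC, never reused
helloworld::HelloReply reply;
grpc::Status status;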

On Tuesday, May 26, 2020 at 7:20:49 PM UTC-7 bsti...@gmail.com wrote:

> I'm a little new to gRPC, and looking for some advice/recommendations on 
> whether it is useful (or not) to share various client objects, both from an 
> efficiency standpoint and a maintainability one (for C++).
>
>
> I'm considering a dedicated thread for each specific instance of a service 
> that a client application will require. In this "AsyncClient" thread, the 
> thread will own a Channel object and the CompletionQueue. Each "request" 
> will be derived from a common "AsyncRequest" base class that contains the 
> ClientContext, stub, Status etc. The derived class(es) will implement the 
> specifics related to each call (request, response and the operations on the 
> "stub" that are unique to the request).
>
> From an application standpoint, they construct the appropriate derived 
> object, "execute it" on the AsyncClient (which constructs a stub from it's 
> channel, assigns a tag for the request, and invokes the "run" method on the 
> AsyncRequest object (which does the Prepare, StartCall and Finish 
> operations). When the call completes, the AsyncClient "completes" the 
> request and forwards it to one higher level applications.
>
> This leads to a couple of questions:
> 1) Which client objects can be shared and which are unique/specific to the 
> call?  It seems channel and perhaps stub can be shared across multiple 
> requests, but can't find any documentation that confirms this is possible 
> (or not). CompletionQueue seems to be shareable across multiple outstanding 
> requests, as long as the tags can be uniquely correlated to the requests 
> (which is easy enough when tag is the address of the request).
>
> 2) What is the lifecycle of potentially shared objects?  For example, does 
> the channel and stub need to remain "live" throughout the call, or can the 
> stub be deleted/released as soon as the Prepare function is completed? 
> Given the Channel is shared, is it only needed in the construction of the 
> stub, and hence, not needed by the request afterwards (note: the 
> AsyncClient" will still "own" the Channel and make available for subsequent 
> requests.
>
> 3) What possible race-conditions are introduced? Can a underlying gRPC 
> thread return responses potentially faster than another thread can complete 
> the request (especially on the same physical server).
>
> 4) And the obvious one...is it worth it to have a common "AsyncClient" 
> process/handler or is it better for each application to re-construct 
> channels, Contexts, stubs (and async logic handling) for each request? 
> (Note: The goal is to have multiple outstanding requests at a time and not 
> have applications just block on CompletionQueue as in the gRPC Async C++ 
> examples.
>
> Any advice/insights much appreciated.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8b8c1ccb-7eca-4871-81c7-930f92c682f1n%40googlegroups.com.


[grpc-io] Re: C++ buffered writes in Async client

2020-05-28 Thread 'yas...@google.com' via grpc.io
Hi,

From what I understand, it seems that you are getting bottlenecked by the 
network. You are right that gRPC allows only one outstanding write at any 
given time, but that decision itself probably won't affect the throughput 
much. If the application does not buffer the messages, they would instead 
need to be buffered by gRPC and the end performance would remain about the 
same.

On Monday, May 18, 2020 at 5:41:17 AM UTC-7 igor.ber...@gmail.com wrote:

> Hi all,
>
> I've trying to implement a high-performance async server in C++ where 
> throughput matters, but also don't want to keep messages too long in a 
> buffer (say up to 200ms). 
> WriteOptions::set_buffer_hint() seems like a perfect candidate to enable 
> high throughput. However whenever a Write method is called on 
> ServerAsyncReaderWriter, then it just gets blocked, because it gets 
> buffered! But then ironically I cannot call another Write method, because 
> GRPC API demands another Write to be only called after another successful 
> Write has completed. How then is supposed to work if async buffered Write 
> does not commit the write right away, but then I cannot call Write method 
> in a batch as well? Am I getting this wrong somehow? I appreciate any help.
>
> void grpc_impl::ServerAsyncReaderWriter< W, R >::Write( const W & msg,
> ::grpc::WriteOptions options,
> void * tag ) inline override virtual
>
> Request the writing of *msg* using WriteOptions *options* with 
> identifying tag *tag*.
>
> *Only one write may be outstanding at any given time. This means that 
> after calling Write, one must wait to receive tag from the completion queue 
> BEFORE calling Write again*. WriteOptions *options* is used to set the 
> write options of this message
>
> Kind Regards,
>  Igor
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/a7f6e91f-872f-49e2-9726-30d399dfcd1dn%40googlegroups.com.


[grpc-io] Re: Bare server throughtput limitation?

2020-05-28 Thread 'yas...@google.com' via grpc.io
Hi,

Though there is no inherent limitation to the number of requests that a 
server can handle, the sync processing model has not been the target for 
performance optimizations. The Async model would be better suited for 
performance optimizations, and you might be able to increase your 
throughput.

On Tuesday, May 19, 2020 at 8:14:27 AM UTC-7 dyla...@gmail.com wrote:

> I am designing a neural network inference server and I have built my 
> server and client using a synchronous grpc model, with a unary RPC design. 
> For reference, the protobuf formats are based on the Nvidia Triton 
> Inference server formats https://github.com/NVIDIA/triton-inference-server. 
> My design expects a large batch of inputs (16384, for a total size of 1MB)  
> to be received by the server, the inference to be run, and then the result 
> to be returned to the client. I send these inputs in a repeated bytes field 
> in my protobuf. However, even if I make my server-side function simply 
> return an OK status (no actual processing), I find that the server can only 
> process ~1500-2000 batches of inputs per second (this is run with both 
> server and client on the same machine so network limitations should not be 
> relevant). However, I know that my inference processing can handle 
> throughputs closer to 1 batches/second.
>
> Is there an inherent limitation to the number of requests that a gRPC 
> server can handle per second? Is there a server setting or design change I 
> can make to increase this maximum throughput?
>
> I am happy to provide more information if it can help in understanding my 
> issue.
>
> Thanks for your help,
>
> -Dylan
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3ee4cb9c-c4ed-4f74-938a-2e61e401252dn%40googlegroups.com.


[grpc-io] Re: C++ client interceptor

2020-05-28 Thread 'yas...@google.com' via grpc.io
Hi,

Yes, we do want to stabilize the feature. We do not have a specific 
timeline for doing that at the moment though.

On Sunday, May 17, 2020 at 5:45:28 PM UTC-7 ibnada...@gmail.com wrote:

> Hi,
>
> gRPC appears to support C++ client interceptors; however, the API is still 
> under the experimental namespace (https://github.com/grpc/grpc/pull/16842).
>
> Is there any plan to stabilise/productionise the feature?
>
> It's useful but I don't want to use experimental features.
>
> Thanks.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/057ef4d3-3dd2-4fbf-80b7-69d43a697ad4n%40googlegroups.com.


[grpc-io] Re: Grpc connection semantics

2020-02-26 Thread 'yas...@google.com' via grpc.io
1) An HTTP/2 stream is what you are looking for. A stream can be terminated 
with a RST_STREAM frame. Please refer to 
https://http2.github.io/http2-spec/#RST_STREAM
2) The RPC would also be terminated. The application would be able to 
retry/restart the RPC if necessary.
3) Yes
4) The RPCs are terminated with the status showing the proper error 
message. 
On Wednesday, February 26, 2020 at 10:37:55 AM UTC-8 ravi@gmail.com 
wrote:

> As mentioned in the grpc documentation, a channel is backed by multiple 
> HTTP/2 requests. I was wondering about the behaviour/details in the 
> situations below:
> 1) Does a channel have something like a failed HTTP/2 call even if the 
> connection (tcp) is fine?
> 2) If so, what happens to long-running rpcs in such a case?
> 3) Is an async streaming rpc served by a single HTTP/2 request over a 
> single tcp connection?
> 4) As mentioned in the docs, a channel may retry to fix transient errors in 
> the tcp connection, so does this have any impact on long-running rpcs?
>
> Thanks
>



[grpc-io] Re: Grpc proxy with http2 ssl?

2020-02-26 Thread 'yas...@google.com' via grpc.io
Replying on Eric Anderson's behalf: Envoy would be a common one. 
caddy-server should work fine. I don't know much about mitmproxy.

On Wednesday, February 19, 2020 at 8:26:28 AM UTC-8 elh.m...@gmail.com 
wrote:

> hi.. is there a simple cross-platform grpc proxy i could use (wanting to 
> simulate network outages while doing a server stream)? 
>
> i thought about mitmproxy or caddy-server 
>



[grpc-io] Re: gRPC binding the socket to a particular interface or device (Something like SO_BINDTODEVICE)

2020-02-26 Thread 'yas...@google.com' via grpc.io
gRPC C++ does not have any API to allow the client to bind to a specific IP 
address at the moment.
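
For reference, the POSIX-level option being asked about looks like the 
sketch below; it is Linux-specific, requires CAP_NET_RAW, and gRPC currently 
provides no hook to apply it to the sockets a client channel creates:

    // What SO_BINDTODEVICE looks like on a plain POSIX socket (Linux-only,
    // requires CAP_NET_RAW). gRPC C++ does not expose a way to run this on
    // the sockets it creates for a channel.
    #include <cstring>
    #include <sys/socket.h>

    int BindToDevice(int fd, const char* ifname) {
      // The option value is the NUL-terminated interface name itself.
      return setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, ifname,
                        static_cast<socklen_t>(std::strlen(ifname) + 1));
    }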

On Wednesday, February 26, 2020 at 1:45:15 AM UTC-8 engr.ab...@gmail.com 
wrote:

> My application communicates with a gRPC server. Is there any way for a gRPC 
> client to bind to a particular interface or device? Or can I bind to a 
> particular IP? Will it work? Is there any way?
>
> The Linux POSIX socket API provides the SO_BINDTODEVICE option, but I am not 
> sure if gRPC has any application-layer method or function to achieve it. I 
> tried other forums but couldn't find anything useful.
>
> Is there any workaround to achieve the same thing?
>
> On Tuesday, February 25, 2020 at 11:29:45 AM UTC+5:30, Abhi Arora wrote:
>
>> I have a Linux Embedded Machine with gRPC cross-compiled for it. I am 
>> looking to create multiple instances of gRPC each bind to a particular 
>> interface or device. Linux POSIX socket provides SO_BINDTODEVICE option but 
>> I am not sure if gRPC has any application layer method or function to 
>> achieve it. I tried other forums but couldn't find something useful.
>>
>> Is there any workaround to achieve the same thing?
>>
>> Please help me.
>>
>>
>>



[grpc-io] Re: What is the purpose of `tcp_handle_write`?

2020-01-21 Thread 'yas...@google.com' via grpc.io
This is a relatively complex piece of code, and it would be hard to explain 
what each and every line does on a forum. Tracing the code would be the 
fastest way to understand it.

For the second question, tcp_write is used through the 
grpc_endpoint_vtable. Effectively, grpc_endpoint_write calls would 
translate to tcp_write.

For the third question, those functions ARE invoked through closures.
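
To make the second answer concrete, here is a simplified sketch of the 
dispatch shape; the real iomgr signatures also carry slice buffers and 
closures, so treat this as an illustration rather than the actual 
declarations:

    // Simplified sketch of the vtable dispatch that routes
    // grpc_endpoint_write() to tcp_write(): each concrete endpoint (the
    // POSIX TCP endpoint among them) installs its functions in a
    // grpc_endpoint_vtable, and the generic wrappers forward through it.
    struct grpc_endpoint;

    struct grpc_endpoint_vtable {
      void (*write)(grpc_endpoint* ep /*, slices, closure, ... */);
    };

    struct grpc_endpoint {
      const grpc_endpoint_vtable* vtable;
    };

    inline void grpc_endpoint_write(grpc_endpoint* ep) {
      ep->vtable->write(ep);  // For the POSIX TCP endpoint, this is tcp_write.
    }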

On Tuesday, August 13, 2019 at 2:05:45 AM UTC-7 xia rui wrote:

> Hello, everyone.
> I am trying to learn the code of `tcp_posix.cc`. There are some functions 
> with similar names. I am confused about the purposes/use cases of these 
> functions.
>
> I wrote a simple grpc app and use GRPC_TRACE=tcp and GRPC_VERBOSITY=ERROR 
> (to show only the breakpoints I am interested in).
>
> I try to hack the code from the client, so I only focus on write 
> operations.
>
> Things start from `tcp_write()`; after preparing the write buffer with:
>   tcp->outgoing_buffer = buf;
>   tcp->outgoing_byte_idx = 0;
> it goes to `tcp_flush()`.
>
> In `tcp_flush()`, the write operation is repeatedly invoked in a for 
> loop. It first pushes `tcp->outgoing_buffer` into `iov` (this may be a 
> simple pointer copy, not a memory copy). Then `tcp_send()` is invoked, 
> which actually sends the buffer over the socket. Finally, the `tcp_flush` 
> function computes `sending_length` and `sent_length`.
> *My first question is: what is the purpose of `sending_length`?*
> The `sent_length` is the number of bytes sent by the `tcp_send()` 
> function. There is a `trailing` variable, computed as `trailing = 
> sending_length - static_cast<size_t>(sent_length);`, and I don't know the 
> purpose of the following check on `trailing`:
>
> while (trailing > 0) {
>   size_t slice_length;
>
>   outgoing_slice_idx--;
>   slice_length = GRPC_SLICE_LENGTH(
>       tcp->outgoing_buffer->slices[outgoing_slice_idx]);
>   if (slice_length > trailing) {
>     tcp->outgoing_byte_idx = slice_length - trailing;
>     break;
>   } else {
>     trailing -= slice_length;
>   }
> }
>
> When `tcp_flush()` returns, the write operation is finished.
> *My second question is: Who invokes `tcp_write()`?*
> There is no explicit call to `tcp_write()` in `tcp_posix.cc`, and at the 
> end of `tcp_write()` there is a callback:
>
> if (!tcp_flush(tcp, &error)) {
>   TCP_REF(tcp, "write");
>   tcp->write_cb = cb;
>   if (GRPC_TRACE_FLAG_ENABLED(grpc_tcp_trace)) {
>     gpr_log(GPR_INFO, "write: delayed");
>   }
>   notify_on_write(tcp);
> } else {
>   if (GRPC_TRACE_FLAG_ENABLED(grpc_tcp_trace)) {
>     const char* str = grpc_error_string(error);
>     gpr_log(GPR_INFO, "write: %s", str);
>     gpr_log(GPR_ERROR, "in tcp_write(), maybe finish writing");
>   }
>   GRPC_CLOSURE_SCHED(cb, error);
> }
> The callback `cb` is an argument of `tcp_write()`, and I was wondering who 
> calls `tcp_write()` and what the callback `cb` is.
>
> There are some functions related to write operations that are not 
> invoked in my tracing log.
> *My third question is: what are the purposes of these uninvoked functions?*
>
> 1. static void notify_on_write(grpc_tcp* tcp)
> 2. static void tcp_handle_write(void* arg /* grpc_tcp */, grpc_error* error)
>
>
> Thank you for your time.
>
> Best wishes,
> Xia Rui
>
>
>
