[grpc-io] Re: gRPC 1.7.2 Assert issue

2024-02-05 Thread 'apo...@google.com' via grpc.io
1.7.2 is quite old; I suggest trying a newer version. If the issue is still 
reproducible, please file a bug report 
at https://github.com/grpc/grpc/issues.
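
For reference, the usual teardown order for a C-core server looks like the 
sketch below (`server`, `cq`, and `shutdown_tag` are illustrative names). An 
assert inside `grpc_cq_begin_op` with `shutdown_called == true` typically 
means `grpc_server_shutdown_and_notify` was invoked against a completion 
queue that had already been shut down, or was invoked a second time:

```cpp
// Sketch only: call grpc_server_shutdown_and_notify() exactly once, and shut
// the completion queue down only AFTER the shutdown tag has been delivered.
grpc_server_shutdown_and_notify(server, cq, shutdown_tag);

// Keep draining the queue until the shutdown tag arrives.
grpc_event ev;
do {
  ev = grpc_completion_queue_next(cq, gpr_inf_future(GPR_CLOCK_REALTIME),
                                  nullptr);
} while (!(ev.type == GRPC_OP_COMPLETE && ev.tag == shutdown_tag));

grpc_completion_queue_shutdown(cq);  // no new ops can begin after this
// Drain remaining events until GRPC_QUEUE_SHUTDOWN, then destroy.
while (grpc_completion_queue_next(cq, gpr_inf_future(GPR_CLOCK_REALTIME),
                                  nullptr)
           .type != GRPC_QUEUE_SHUTDOWN) {
}
grpc_completion_queue_destroy(cq);
grpc_server_destroy(server);
```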

On Thursday, January 25, 2024 at 10:26:01 AM UTC-8 Shivteja Ayyagari wrote:

> Hello, Kindly requesting your assistance !! 
>
> On Friday, January 19, 2024 at 6:44:15 AM UTC+5:30 Shivteja Ayyagari wrote:
>
>> Hello,
>>
>> I am hitting a random gRPC assert in which grpc_server_shutdown_and_notify 
>> crashes on:
>>
>> GPR_ASSERT(grpc_cq_begin_op(cq, tag));
>>
>> I debugged further and found that in cq_next_data, pending_events is 
>> zero, things queued is 12, and shutdown_called is true.
>>
>> We do this operation when the gRPC channel is IDLE. 
>>
>> How do I avoid this crash? The abort is hurting application stability.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7f0e0bac-f38c-4527-b40d-90655fc858bfn%40googlegroups.com.


[grpc-io] Re: Python - Stop error message print on port bind error

2024-02-05 Thread 'apo...@google.com' via grpc.io
gRPC C-core based libraries have some internal logging with different 
verbosity levels. It can be controlled with the GRPC_VERBOSITY environment 
variable, which is documented 
at https://github.com/grpc/grpc/blob/master/doc/environment_variables.md. 
Setting it to "NONE" should suppress this log.
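
For example (a minimal sketch; the key point is that the variable must be in 
the environment before the gRPC C-core initializes, i.e. before `import grpc` 
runs):

```python
import os

# GRPC_VERBOSITY is read when the gRPC C-core initializes, so set it before
# `import grpc` executes -- or export it in the shell that launches the
# process instead, e.g. `GRPC_VERBOSITY=NONE python server.py`.
os.environ["GRPC_VERBOSITY"] = "NONE"

# `import grpc` and server creation go here, after the variable is set.
```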

On Friday, January 26, 2024 at 6:16:13 PM UTC-8 Akhilesh Raju wrote:

> As expected, when I provide the keyword arg options=[("grpc.so_reuseport", 
> 0)] to grpc.aio.Server and try to create a channel, it fails if another 
> instance of that server is already running.
>
> However, there is this long message that gets printed out
>
> ```
> E0126 15:09:11.781314798  724891 chttp2_server.cc:1063]   
>  UNKNOWN:No address added out of total 1 resolved for '[::]:6163' 
> {created_time:"2024-01-26T15:09:11.780990901-08:00", 
> children:[UNKNOWN:Failed to add any wildcard listeners 
> {created_time:"2024-01-26T15:09:11.780979771-08:00", 
> children:[UNKNOWN:Address family not supported by protocol 
> {target_address:"[::]:6163", syscall:"socket", os_error:"Address family not 
> supported by protocol", errno:97, 
> created_time:"2024-01-26T15:09:11.780907766-08:00"}, UNKNOWN:Unable to 
> configure socket {created_time:"2024-01-26T15:09:11.78095302-08:00", fd:12, 
> children:[UNKNOWN:Address already in use {syscall:"bind", os_error:"Address 
> already in use", errno:98, 
> created_time:"2024-01-26T15:09:11.780945967-08:00"}]}]}]}
> ```
>
> Is there a way to stop this error from printing?
>
>



[grpc-io] Re: Getting SIGABRT when starting service with TLS

2024-02-05 Thread 'apo...@google.com' via grpc.io
Are there any logs leading up to this? If the SIGABRT is coming from 
gRPC, I'd expect to see a log of the source-code line that triggered the 
abort.

If the issue is still not obvious, a runnable repro might help.
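
One common cause of a startup abort with the provider-based TLS options (an 
educated guess on my part, not confirmed by anything in this thread): the 
options object has to be told which pieces of the certificate provider to 
watch before the server credentials are built. A hedged sketch of the posted 
snippet with those calls added (same variable names as the original post):

```cpp
// Sketch: assumes grpc++ with the experimental TLS API and the paths from
// the original post. Without the watch_*() calls below, the options never
// start watching the provider's credentials.
auto cert_provider =
    std::make_shared<grpc::experimental::FileWatcherCertificateProvider>(
        server_key_path, server_cert_path, ca_cert_path,
        /*refresh_interval_sec=*/10);
grpc::experimental::TlsServerCredentialsOptions tlsOpts(cert_provider);
tlsOpts.watch_identity_key_cert_pairs();  // serve the server cert/key pair
tlsOpts.watch_root_certs();               // needed to verify client certs
tlsOpts.set_cert_request_type(
    GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY);
auto tlsCreds = grpc::experimental::TlsServerCredentials(tlsOpts);
builder.AddListeningPort(uri, tlsCreds);
```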

On Thursday, February 1, 2024 at 5:23:45 PM UTC-8 Tim wrote:

> Hi, I'm getting a SIGABRT when starting a gRPC service using TLS.
>
> std::shared_ptr<grpc::experimental::CertificateProviderInterface> 
> cert_provider(new 
> grpc::experimental::FileWatcherCertificateProvider(server_key_path, 
> server_cert_path, ca_cert_path, 10));
> grpc::experimental::TlsServerCredentialsOptions tlsOpts(cert_provider); 
> tlsOpts.set_cert_request_type(
> GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY);
> std::shared_ptr<grpc::ServerCredentials> tlsCreds = grpc::experimental::
> TlsServerCredentials(tlsOpts);
> builder.AddListeningPort(uri, tlsCreds);
> builder.BuildAndStart();
>
> Is there some other precondition I'm missing?
>



[grpc-io] Re: Setting gRPC internal threads' affinity

2024-02-05 Thread 'AJ Heller' via grpc.io
Hi Dan,

If you're interested in CPU affinity for the entire server process on 
Linux, you can use `taskset` (https://linux.die.net/man/1/taskset). 
Otherwise, you'll likely want to patch `thd.cc` and use pthread's affinity 
APIs, though I don't recommend it.
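
To illustrate the pthread route (a self-contained, Linux-specific sketch, not 
gRPC code; a patched `thd.cc` could make the same call from each thread gRPC 
spawns internally):

```cpp
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single CPU using the pthread affinity API.
// Returns true on success. Linux-specific: pthread_setaffinity_np and the
// CPU_* macros are GNU extensions (g++ defines _GNU_SOURCE by default).
bool pin_current_thread_to_cpu(int cpu) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(cpu, &set);
  return pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set) == 0;
}
```

For whole-process pinning, `taskset` remains the simpler option; the function 
above only matters if you need per-thread control from inside the process.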

For more advanced use cases with the C/C++ library, you can also get full 
control over the threading model and all async behavior by implementing a 
custom EventEngine.

Cheers,
-aj
On Thursday, January 25, 2024 at 10:26:07 AM UTC-8 Dan Cohen wrote:

> Hello,
>
> I'm implementing an async gRPC server in C++.
> I need to control and limit the cores used by gRPC's internal threads (the 
> completion queue handler threads are already under my control), i.e. I need 
> to set those threads' affinity.
>
> Is there a way for me to do this without changing gRPC code? 
> If not, where in the code would you recommend starting if I do need to 
> change it? 
>
> Thanks,
> Dan
>
