[grpc-io] Re: Number of threads created in grpc internally

2023-07-12 Thread 'AJ Heller' via grpc.io
I think you'll find these threads answer your question:

https://groups.google.com/g/grpc-io/c/j1A0CY0YG-A/m/W0H6UrkHAwAJ
https://stackoverflow.com/a/76591101/10161

Best,
-aj
On Monday, July 10, 2023 at 5:04:00 AM UTC-7 Softgigant S wrote:

> Hello!
>
> May I ask how to set up or manipulate the number of threads created by gRPC 
> (the event engine?)
> I have a simple callback-based gRPC server-client API.
>
> I used /proc/<pid>/status to view the status of the gRPC client process.
> It showed 14 threads, the same as the number of CPU cores on my PC.
> Is there any parameter to limit the number of threads used by gRPC by 
> default?
>
> Thank you!
>



[grpc-io] Re: Package version numbers for protobuf and gRPC (for Python)

2023-07-12 Thread 'Richard Belleville' via grpc.io
Hi Jens,

The grpcio package itself is completely agnostic to protobuf. It only has 
byte-oriented interfaces. Protobuf integration only happens within the 
generated code (e.g. helloworld_pb2_grpc.py).

This generated code comes from running the grpcio-tools package, which *does* 
have a dependency on protobuf. The compatibility range with protobuf is 
defined by this package's dependency range on protobuf and can be seen by 
either looking at its setup.py file or using a dependency inspection tool 
such as pipdeptree:

(venv) rbellevi-macbookpro:tmp.m984j04o rbellevi$ python3 -m pipdeptree
grpcio-tools==1.56.0
├── grpcio [required: >=1.56.0, installed: 1.56.0]
├── protobuf [required: >=4.21.6,<5.0dev, installed: 4.23.4]
└── setuptools [required: Any, installed: 67.8.0]

In general, you can use the heuristic that if you do pip install 
grpcio-tools and generate your code, you'll have the right version of 
protobuf already installed.
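
As a small illustration (a sketch only; helloworld.proto stands in for your 
own proto file), the generation step can also be driven from Python against 
that same grpcio-tools install, so the protobuf pin it declares is the one 
you generated with:

from grpc_tools import protoc

# Equivalent to: python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. helloworld.proto
protoc.main([
    "grpc_tools.protoc",  # argv[0], the program name
    "-I.",
    "--python_out=.",
    "--grpc_python_out=.",
    "helloworld.proto",
])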

This isn't ideal since many people generate their code only once and then 
rebuild their application many times, potentially forgetting the version of 
protobuf that they originally used to generate their code. In practice, 
even these people are generally fine, only getting bitten when protobuf 
does a major version bump, which has happened once in the past 5 years.
On Sunday, July 9, 2023 at 6:37:06 PM UTC-7 Jens Troeger wrote:

> Hello,
>
> Following this question, I’m trying to find the documentation that defines 
> which versions of the grpcio package implement which version of the 
> Protocol Buffers language.
>
> And, in that context, how do the Google API common protos (and its 
> generated Python package) relate to the different Protobuf versions?
>
> Much thanks!
> Jens
>



[grpc-io] Re: How to avoid overriding while creating channels?

2023-07-12 Thread 'Richard Belleville' via grpc.io

This sounds more like a question for the developers of the Flower 
framework. gRPC itself absolutely supports multiple concurrent client 
channels to different server targets. If there is an issue with this 
functionality via Flower, the issue almost certainly lies in the layer on 
top of gRPC.
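
For what it's worth, here is a minimal sketch with plain grpcio (reusing the 
two addresses from your message; the stub is just the one from your generated 
code) showing that two channels to different targets coexist without one 
overriding the other, so any single-channel behavior comes from Flower's 
connection handling rather than from gRPC:

import grpc

# Two independent channels to two different servers; creating the second one
# does not affect the first.
channel_a = grpc.insecure_channel("localhost:8080")
channel_b = grpc.insecure_channel("localhost:5040")

# Each stub is bound to exactly one channel, so its RPCs always go to that
# channel's target.
# stub_a = FlowerServiceStub(channel_a)
# stub_b = FlowerServiceStub(channel_b)
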
On Friday, July 7, 2023 at 7:13:34 AM UTC-7 Saurav Pawar wrote:

> I am working with Flower, which is a federated learning framework. In its 
> [grpc connection](https://github.com/adap/flower/blob/main/src/py/flwr/client/grpc_client/connection.py#L91) 
> file they only create 1 channel, whereas I want 2-3 channels. But when 
> I created 1 more channel with server_address `localhost:5040`, the previous 
> channel with server address `localhost:8080` got overridden. How can 
> I avoid that and use both channels?
>
> ```
> # Copyright 2020 Adap GmbH. All Rights Reserved.
> #
> # Licensed under the Apache License, Version 2.0 (the "License");
> # you may not use this file except in compliance with the License.
> # You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> # ==============================================================================
> """Contextmanager for a gRPC streaming channel to the Flower server."""
>
>
> from contextlib import contextmanager
> from logging import DEBUG
> from pathlib import Path
> from queue import Queue
> from typing import Callable, Iterator, Optional, Tuple, Union
>
> from flwr.common import GRPC_MAX_MESSAGE_LENGTH
> from flwr.common.grpc import create_channel
> from flwr.common.logger import log
> from flwr.proto.transport_pb2 import ClientMessage, ServerMessage
> from flwr.proto.transport_pb2_grpc import FlowerServiceStub
>
> # The following flags can be uncommented for debugging. Other possible values:
> # https://github.com/grpc/grpc/blob/master/doc/environment_variables.md
> # import os
> # os.environ["GRPC_VERBOSITY"] = "debug"
> # os.environ["GRPC_TRACE"] = "tcp,http"
>
>
> def on_channel_state_change(channel_connectivity: str) -> None:
>     """Log channel connectivity."""
>     log(DEBUG, channel_connectivity)
>
>
> @contextmanager
> def grpc_connection(
>     server_address: str,
>     max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
>     root_certificates: Optional[Union[bytes, str]] = None,
> ) -> Iterator[Tuple[Callable[[], ServerMessage], Callable[[ClientMessage], None]]]:
>     """Establish a gRPC connection to a gRPC server.
>
>     Parameters
>     ----------
>     server_address : str
>         The IPv4 or IPv6 address of the server. If the Flower server runs
>         on the same machine on port 8080, then `server_address` would be
>         `"0.0.0.0:8080"` or `"[::]:8080"`.
>     max_message_length : int
>         The maximum length of gRPC messages that can be exchanged with the
>         Flower server. The default should be sufficient for most models.
>         Users who train very large models might need to increase this value.
>         Note that the Flower server needs to be started with the same value
>         (see `flwr.server.start_server`), otherwise it will not know about
>         the increased limit and block larger messages.
>         (default: 536_870_912, this equals 512MB)
>     root_certificates : Optional[bytes] (default: None)
>         The PEM-encoded root certificates as a byte string or a path string.
>         If provided, a secure connection using the certificates will be
>         established to an SSL-enabled Flower server.
>
>     Returns
>     -------
>     receive, send : Callable, Callable
>
>     Examples
>     --------
>     Establishing a SSL-enabled connection to the server:
>
>     >>> from pathlib import Path
>     >>> with grpc_connection(
>     >>>     server_address,
>     >>>     max_message_length=max_message_length,
>     >>>     root_certificates=Path("/crts/root.pem").read_bytes(),
>     >>> ) as conn:
>     >>>     receive, send = conn
>     >>>     server_message = receive()
>     >>>     # do something here
>     >>>     send(client_message)
>     """
>     if isinstance(root_certificates, str):
>         root_certificates = Path(root_certificates).read_bytes()
>
>     channel = create_channel(
>         server_address='localhost:8080',
>         root_certificates=root_certificates,
>         max_message_length=max_message_length,
>     )
>     channel.subscribe(on_channel_state_change)
>
>     queue: Queue[ClientMessage] = Queue(  # pylint: disable=unsubscriptable-object
>         maxsize=1
>     )
>     stub = FlowerServiceStub(channel)
>
>     server_message_iterator: Iterator[ServerMe

Re: [grpc-io] Flush dnsmasq cache on IP address failure

2023-07-12 Thread 'Richard Belleville' via grpc.io

Depending on which language you're using, you could use the custom name 
resolver interface to implement this behavior yourself.
On Wednesday, July 5, 2023 at 12:53:43 PM UTC-7 Gmail wrote:

> Thanks Frederic.
> I understand that. But I only want to do it when gRPC has a connection 
> failure. Is there an already existing mechanism to do that?
>
> On Jul 5, 2023, at 12:37 PM, Frédéric Martinsons  
> wrote:
>
> 
>
> I think this is totally unrelated to grpc but for what it worth, if you 
> control your dnsmasq, you can use --clear-on-reload option and send a 
> SIGHUP to dnsmasq process to reload the cache. 
>
> Le mer. 5 juil. 2023, 21:24, Ramanujam Jagannath  a 
> écrit :
>
>> Backgrounder - Our device connects to an AWS static IP. We use dnsmasq on 
>> device to provide lookup services for downstream devices. Currently we are 
>> planning to use a long DNS TTL on AWS to avoid too many DNS lookups from 
>> on-field devices. The on-field devices use a gRPC connection to maintain 
>> long-standing TCP connections. We do have multiple availability zones, so 
>> a DNS resolution does return 4 IP addresses.
>>
>> Problem - When an IP address fails (on AWS), the gRPC client will retry and 
>> re-resolve. But because we have dnsmasq on device, it will return a cached 
>> address - which is potentially faulty. 
>>
>> Solution - This can be resolved by flushing the dnsmasq cache on device. 
>> But is there a way to flush the dnsmasq cache on device on connection 
>> failure only? gRPC under the hood uses c-ares, which in our case goes to the 
>> dnsmasq proxy on device.
>>
>> Any solutions/thoughts? Someone must have encountered this problem before?
>>
>>
>



[grpc-io] Re: Python: grpc aio server parallelism multicore

2023-07-12 Thread 'Richard Belleville' via grpc.io

In general, the way to achieve performance in Python is to use a *single* 
thread, not multiple threads. This is because of the global interpreter 
lock. When a thread is accessing Python objects, no other thread in the 
process may access any objects: one global lock for all of Python. As a 
result, threads in Python buy *concurrency* but not *parallelism*. What's 
more, the cost of inter-thread synchronization results in thread-based 
concurrency being less performant than single-threaded concurrency. As a 
result, the assumption for asyncio is that you use a *single thread* except 
for compatibility/interop reasons.

This may change in the future depending on the fate of PEP 703, but the core 
CPython development team is currently hyper-focused on *single-threaded* 
performance, not multi-threaded performance. If and when the core Python 
development team changes their stance on this, we'll reassess, but for the 
moment, if you're looking to run gRPC Python performantly, you should do it 
with asyncio on a single thread.
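
If you do want to use every core, the usual pattern is one single-threaded 
asyncio server per process. Here is a rough sketch only, not a drop-in 
recipe: it assumes your platform supports SO_REUSEPORT and that you register 
your own generated servicer where indicated.

import asyncio
import multiprocessing

import grpc

async def serve() -> None:
    # One single-threaded asyncio server; grpc.so_reuseport lets every worker
    # process bind the same port (platform support varies).
    server = grpc.aio.server(options=(("grpc.so_reuseport", 1),))
    # Register your generated servicer here, e.g.:
    # helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:50051")
    await server.start()
    await server.wait_for_termination()

def worker() -> None:
    asyncio.run(serve())

if __name__ == "__main__":
    processes = [
        multiprocessing.Process(target=worker)
        for _ in range(multiprocessing.cpu_count())
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
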
On Tuesday, July 4, 2023 at 7:06:17 AM UTC-7 weide zhang wrote:

> Hi, 
>
> It seems from the documentation that the grpc aio server only uses one 
> thread to do async IO. Does that mean that on a multicore system, in order 
> to leverage all the cores while using async IO for maximum performance, I 
> have to spawn multiple threads or processes, each with its own aio server? 
>
> My question is really how to leverage all the cores on one server while 
> still using AIO to achieve maximum performance. 
>
> Thank you,
>
> Weide 
>



Re: [grpc-io] Grpc does not reconnect

2023-07-12 Thread 'Michael Lumish' via grpc.io
I assume you are using the grpc-js library. If not, you should be, because
the other implementation is deprecated.

The best way to determine what is happening is with gRPC trace logs. In
particular, please run the "Service A" process with the following
environment variables and share the output after you observe this failure
in that process: GRPC_VERBOSITY=DEBUG
GRPC_TRACE="index,channel,connectivity_state,dns_resolver,pick_first,round_robin,subchannel,transport,keepalive".

Also, I want to note that keepalives are only supported on the client, and
"grpc.max_connection_age_ms" and "grpc.max_connection_age_grace_ms" only
have an effect on the server. In addition, the "grpc.lb_policy_name" option
is unsupported, but that doesn't really matter because your
"grpc.service_config" option has the same effect.

On Mon, Jul 10, 2023 at 2:46 PM Jgm  wrote:

> I'm using the Node.js gRPC library, with the NestJS framework
>
> On Monday, July 10, 2023 at 3:09:17 PM UTC-6, Sanjay Pujare wrote:
>
>> Which gRPC language library are you using - on the client and server side?
>>
>> On Mon, Jul 10, 2023 at 1:53 PM Jgm  wrote:
>>
>>> Hello everyone,
>>>
>>> I am facing a problem with gRPC and microservices. When sending a
>>> request from "Service A" to "Service B," if the connection between them is
>>> lost, "Service A" sometimes fails to reconnect to "Service B" without any
>>> explicit error. Consequently, the messages remain stuck, and gRPC
>>> continues retrying. It appears that the issue might be related to resolving
>>> the DNS of "Service B," causing the request to get stuck. I also added a
>>> deadline option; sometimes it works and sometimes it doesn't.
>>>
>>> Is there a solution to this problem?
>>>
>>> Config file:
>>> return {
>>>   transport: Transport.GRPC,
>>>   options: {
>>>     maxReceiveMessageLength: 99 * 1024 * 1024,
>>>     maxSendMessageLength: 99 * 1024 * 1024,
>>>     url: url,
>>>     keepalive: {
>>>       'keepaliveTimeMs': 6,
>>>       'keepaliveTimeoutMs': 2,
>>>       'keepalivePermitWithoutCalls': 1,
>>>     },
>>>     channelOptions: {
>>>       "grpc.max_connection_age_ms": 3,
>>>       "grpc.max_connection_age_grace_ms": 1,
>>>       "grpc.lb_policy_name": "round_robin",
>>>       "grpc.service_config": JSON.stringify({ loadBalancingConfig: [{ round_robin: {} }] }),
>>>     },
>>>   },
>>> };
>>>
>>>
>





Re: [grpc-io] using native java 11 alpn with gprc-java?

2023-07-12 Thread 'Eric Anderson' via grpc.io
TL;DR: you can use gRPC fine without native dependencies, but you might see
lesser performance.

OpenJDK backported the "Java 9 ALPN API" to Java 8, so you don't need
netty-tcnative at all for ALPN. We still use it because it provides a large
performance benefit on Java 8, and it may provide some extra performance on
Java 11. AES-GCM was previously pretty poorly optimized. But if you are
running somewhere netty-tcnative isn't supported, things are okay,
especially on Java 11+.

The other part of netty-tcnative is "are you using a good/secure TLS
stack?" Earlier in gRPC's life Java was obviously pretty far behind the
wider TLS ecosystem. BoringSSL was simply a good idea, independent of
performance. That seems to have changed though, and IIRC a surprising
amount of stuff has even gotten backported to Java 8 (vs what happened with
Java 7).

So we would probably look into dropping netty-tcnative from
grpc-netty-shaded after we drop Java 8.

Beyond netty-tcnative, there's netty-native-epoll/kqueue. These provide
performance via things like edge-triggered epoll, and helpful things like
TCP_USER_TIMEOUT (which could be argued is for performance). gRPC works
without them, but they do provide benefits even on Java 21. We're using
them for unix domain sockets today, as well; we've not played with the Java
16 API yet.

On Tue, Jul 11, 2023 at 2:32 AM Elhanan Maayan  wrote:

> Hi... I saw that Java 9+ includes native ALPN support. Does this mean we
> no longer need to use outside native dependencies like BoringSSL or
> Conscrypt?
> I could never figure that out properly.
>
>





[grpc-io] Re: Number of threads created in grpc internally

2023-07-12 Thread Softgigant S
Thank you.
I found there the answer, which sounds like "there is no such parameter".

But may I leave this here: 
I know a very big company that cannot use any gRPC version later than v1.36, 
just because of this issue with the built-in event engine.
The communication blocks all other activities of the software because gRPC 
occupies all available CPU cores, and the problem is that there is no easy 
way to balance between them.
On one hand, there is the option to write a user-application event engine 
and plug it in to replace the built-in one, but Google confirms that this is 
not a trivial task... and I did not find any example of how to do it.
On the other hand, many people (including the one who asks the same question 
at the bottom of the thread you linked, and who also did not get an answer) 
ask for a simple parameter that limits the number of cores used by gRPC. A 
good reason for it might be the assumption that in many cases the 
communication does have great value, but even then the other activities also 
need CPU.



On Wednesday, July 12, 2023 at 8:17:30 PM UTC+3, AJ Heller wrote: 

> I think you'll find these threads answer your question:
>
> https://groups.google.com/g/grpc-io/c/j1A0CY0YG-A/m/W0H6UrkHAwAJ
> https://stackoverflow.com/a/76591101/10161
>
> Best,
> -aj
> On Monday, July 10, 2023 at 5:04:00 AM UTC-7 Softgigant S wrote:
>
>> Hello!
>>
>> May I ask, how to setup or manipulate number of threads created by grpc 
>> (event engine?)
>> I have simple callback-based grcp server-client API.
>>
>> I used proc/pid/status to view the status of process of grpc client.
>> It showed 14 threads, same as number of CPU cores on my PC.
>> Is there any parameter to limit the number of threads used by grpc by 
>> default?
>>
>> Thank you!
>>
>



[grpc-io] Re: Package version numbers for protobuf and gRPC (for Python)

2023-07-12 Thread Jens Troeger
Thank you, Richard, that’s helpful!

Funny enough, your response also relates to the discussion "Allow package 
references as version specifiers" over at the Python Discussion groups. 
What you suggest does make sense.

And in that vein, looking at Python’s Google API Commons, the setup contains

  dependencies = [
      "protobuf>=3.19.5,<5.0.0.dev0,!=3.20.0,!=3.20.1,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5",
  ]
  extras_require = {"grpc": ["grpcio >= 1.44.0, <2.0.0.dev0"]}

thus indicating which version of Protocol Buffers and gRPC/Tools the 
package supports.
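
And for completeness, a quick sanity-check sketch to confirm which runtime 
versions a given environment actually resolved:

import google.protobuf
import grpc

# Both runtimes expose a __version__ attribute.
print("grpcio  :", grpc.__version__)
print("protobuf:", google.protobuf.__version__)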

Cheers,
Jens

On Thursday, July 13, 2023 at 3:31:13 AM UTC+10 Richard Belleville wrote:

> Hi Jens,
>
> The grpcio package  itself is 
> completely agnostic to protobuf. It only has byte-oriented interfaces. 
> Protobuf integration only happens within the generated code (e.g. 
> helloworld_pb2_grpc.py 
> ).
>  
> This generated code comes from running the grpcio-tools package 
> , which *does *have a dependency 
> on protobuf. The compatibility range with protobuf is defined by this 
> package's dependency range on protobuf and can be seen by either looking at 
> its 
> setup.py file 
> 
>  
> or using a dependency inspection tool such as pipdeptree:
>
> (venv) rbellevi-macbookpro:tmp.m984j04o rbellevi$ python3 -m pipdeptree
> grpcio-tools==1.56.0
> ├── grpcio [required: >=1.56.0, installed: 1.56.0]
> ├── protobuf [required: >=4.21.6,<5.0dev, installed: 4.23.4]
> └── setuptools [required: Any, installed: 67.8.0]
>
> In general, you can use the heuristic that if you do pip install 
> grpcio-tools and generate your code, you'll have the right version of 
> protobuf already installed.
>
> This isn't ideal since many people generate their code only once and then 
> rebuild their application many times, potentially forgetting the version of 
> protobuf that they originally used to generate their code. In practice, 
> even these people are generally fine, only getting bitten when protobuf 
> does a major version bump, which has happened once in the past 5 years.
> On Sunday, July 9, 2023 at 6:37:06 PM UTC-7 Jens Troeger wrote:
>
>> Hello,
>>
>> Following this question 
>>  I’m trying to find 
>> the documentation that defines which versions of the grpcio 
>>  package implement which version of the 
>> Protocol Buffers language.
>>
>> And, in that context, how do the Google API common protos 
>>  (and its generated 
>> Python package ) 
>> relate to the different Protobuf versions?
>>
>> Much thanks!
>> Jens
>>
>
