[grpc-io] Re: GRPC C++: Pre-fork worker model

2024-07-12 Thread 'Richard Belleville' via grpc.io
Instead of using a parent process / child process model, you could consider 
cutting out the parent process entirely and using SO_REUSEPORT to handle 
multiplexing the single address / port combo to multiple worker processes. 
There's an example in Python that illustrates this. It should extrapolate 
pretty straightforwardly to C++.
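As a rough illustration (not taken from the example mentioned above), the kernel mechanism behind SO_REUSEPORT can be sketched with plain sockets; the host, port, and function name below are illustrative:

```python
import socket

# Minimal sketch of the SO_REUSEPORT mechanism: two independent sockets
# (one per worker process in the pre-fork model) bind the same address/port,
# and the kernel load-balances incoming connections between them.

def make_listener(host: str, port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEPORT must be set before bind() on every participating socket.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((host, port))
    s.listen()
    return s

first = make_listener("127.0.0.1", 0)      # pick any free port
port = first.getsockname()[1]
second = make_listener("127.0.0.1", port)  # binding the *same* port succeeds
                                           # only because both set SO_REUSEPORT

print(second.getsockname()[1] == port)     # → True: both listeners share the port
first.close()
second.close()
```

gRPC exposes the same mechanism through the C-core channel argument `grpc.so_reuseport`; in this model each worker process creates its own server and binds the shared port independently, so no parent-held listening socket needs to survive a fork.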

On Friday, July 12, 2024 at 10:12:22 AM UTC-7 Eugene Ostroukhov wrote:

> gRPC is not designed for this scenario. gRPC closes all file descriptors 
> before forking, so the connections would not be available in the child. 
> This is an explicit expectation and is covered by this test case: 
> https://github.com/grpc/grpc/blob/master/test/cpp/end2end/client_fork_test.cc
> On Monday, December 4, 2023 at 8:02:19 AM UTC-8 stefan pantos wrote:
>
>> Hi,
>>
>> Is it possible to create a gRPC server in C++ using a pre-fork worker 
>> model, and can someone point me to an example? I see there are some C 
>> prefork-related functions, but I cannot find an example of using them and 
>> cannot quite see how they would work either. I did try myself using 
>> AddInsecureChannelFromFd, but I didn't have much success; I'm not sure if 
>> it is me simply making a coding mistake or a complete misunderstanding of 
>> the use case.
>>
>> In case it isn't clear what I want to do, I'll outline the idea in more 
>> detail.
>> A parent process creates, binds and listens on the incoming port. Then 
>> the child processes are forked, and they do an accept on the now-shared 
>> file descriptor. Only one of the children will accept a given connection 
>> and will end up handling that connection, but any other incoming 
>> connections will be handled by one of the other child processes.
>>
>> This is a method outlined in UNIX Network Programming, and I think is a 
>> method used in Apache Httpd. 
>>
>> The reason I want to use this method is that I have a code base which is 
>> not thread-safe, and it would take a long time and a lot of trial and 
>> error to make it thread-safe and perform well enough. We have used this 
>> method for other protocols to good effect, but none of them are as good 
>> as gRPC in my opinion.
>>
>> Thanks for any help you can give me in advance.
>> Stefan Pantos
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/86cd9f65-8791-492c-802d-0493705d82c2n%40googlegroups.com.


[grpc-io] Re: plan about support python 3.13

2024-07-11 Thread 'Richard Belleville' via grpc.io
The team has plans to both support Python 3.13 and start our investigation 
into supporting no-gil mode this quarter. We believe that no-gil mode 
presents an opportunity to provide *significant* performance improvements 
in the gRPC Python library, so we're very excited for these plans to come 
together.

On Thursday, July 11, 2024 at 11:27:54 AM UTC-7 Yq Wang wrote:

> Hello,
> I notice that the prereleased Python 3.13 can remove the GIL when compiled 
> with --disable-gil. This makes it more convenient to develop 
> high-concurrency Python programs that share GPU resources within a single 
> process. The current latest grpc version fails to compile with Python 
> 3.13. So is there any plan or information about supporting Python 3.13 
> that you can disclose?



Re: [grpc-io] Seeing error while installing grpcio-tools ==1.18.0

2024-05-28 Thread 'Richard Belleville' via grpc.io
Ramana, this message implies a compilation issue. Search higher in the logs
for the string "error: ". It's likely that you simply need to install a
build dependency.

Thanks,
Richard Belleville

On Tue, May 28, 2024 at 10:06 AM Ramana Reddy  wrote:

> We are trying to run python3 -m pip install grpcio==1.18.0 on an
> Ubuntu 24.04 machine with Python 3.12, but we are seeing the following error:
>
>   self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
> File
> "/usr/lib/python3/dist-packages/setuptools/_distutils/unixccompiler.py",
> line 187, in _compile
>   raise CompileError(msg)
>   distutils.errors.CompileError: command
> '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
>
>   [end of output]
>
>   note: This error originates from a subprocess, and is likely not a
> problem with pip.
>   ERROR: Failed building wheel for grpcio
>   Running setup.py clean for grpcio
> Failed to build grpcio
> ERROR: Could not build wheels for grpcio, which is required to install
> pyproject.toml-based projects
>
> This is the case with installing grpcio == 1.48.1 as well.
>
> How can we install old versions (other than the default versions) on the
> system?
>
> We can build grpcio-tools and grpcio from the grpc source as well. In that
> process, grpcio-tools brings in a protobuf version from the cached wheel.
> How can I specify the protobuf version (this should be reflected by:
> python3 -m grpc_tools.protoc --version) that I installed on the system? We
> have been using the old protoc and protobuf-c for some time, want to keep
> using them, and are trying to find compatible grpcio and grpcio-tools
> versions.
>
> Thanks & Regards,
> Ramana
>



Re: [grpc-io] Re: Looking for organizational/process best practices

2024-01-02 Thread 'Richard Belleville' via grpc.io
+Terry Wilson 

On Tue, Jan 2, 2024 at 10:14 AM 'yas...@google.com' via grpc.io <
grpc-io@googlegroups.com> wrote:

> This sounds like a topic suited for https://groups.google.com/g/protobuf
>
> On Wednesday, December 27, 2023 at 6:20:20 PM UTC-8 Frederic Marand (FGM)
> wrote:
>
>> Hello.
>>
>> After teaching a course on protobuf and gRPC in Go, I’ve had requests for
>> best organizational practices for the use of protobufs (and gRPC) at some
>> degree of scale, and this does not appear to be something that is covered
>> in the protobuf.dev and grpc.io sites, as opposed to the technical best
>> practices.
>>
>> Things like:
>> - How do you split your protobufs into packages/directories?
>> - What kind of common fields or custom options (e.g. validators) should
>> one add?
>> - How do you store your .proto files: an isolated repo? An all-projects
>> monorepo?
>> - And how should you commit your generated code per language? One repo
>> per language, language directories in the isolated protobuf repos,
>> vendored in each project, or just generated on the fly?
>> - Should you always include a max item count for responses containing
>> repeated items?
>> - When do you switch paging from an id group to a timestamp, or a Bloom
>> filter?
>>
>> Basically, all the questions a team asks itself when putting these
>> technologies into practice once they know how they technically work but
>> are still green on actual production use.
>>
>> Any pointers to resources welcome!
>>



[grpc-io] Re: Python: grpc aio server parallelism multicore

2023-07-12 Thread 'Richard Belleville' via grpc.io

In general, the way to achieve performance in Python is to use a *single* 
thread, not multiple threads. This is because of the global interpreter 
lock (GIL). When a thread is accessing Python objects, no other thread in 
the process may access any objects: a single global lock for all of Python. 
As a result, threads in Python buy *concurrency* but not *parallelism*. 
What's more, the cost of inter-thread synchronization results in 
thread-based concurrency being less performant than single-thread 
concurrency. As a result, the assumption for asyncio is that you use a 
*single thread* except for compatibility/interop reasons.

This may change in the future depending on the fate of PEP 703, but the 
core CPython development team is currently hyper-focused on *single 
threaded* performance, not multi-threaded performance. If and when the core 
Python development team changes their stance on this, we'll reassess, but 
for the moment, if you're looking to performantly run gRPC Python, you 
should do it with asyncio on a single thread.
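To still use all cores, the usual pattern is one worker process per core, each running its own single-threaded asyncio loop. A minimal sketch (the `serve` body is a placeholder for starting a `grpc.aio` server on that loop; worker count and names are illustrative):

```python
import asyncio
import multiprocessing

# One process per core, each with its own single-threaded asyncio event
# loop. In a real deployment serve() would start a grpc.aio.server() bound
# to a shared SO_REUSEPORT port; here it just reports back so the sketch
# runs standalone.

async def serve(worker_id: int, results: multiprocessing.Queue) -> None:
    await asyncio.sleep(0)    # stand-in for serving RPCs on this loop
    results.put(worker_id)

def worker(worker_id: int, results: multiprocessing.Queue) -> None:
    # Each process owns exactly one event loop; nothing is shared across
    # processes except the result queue.
    asyncio.run(serve(worker_id, results))

def run_workers(num_workers: int) -> list:
    results = multiprocessing.Queue()
    procs = [
        multiprocessing.Process(target=worker, args=(i, results))
        for i in range(num_workers)
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(results.get() for _ in range(num_workers))

if __name__ == "__main__":
    print(run_workers(4))  # → [0, 1, 2, 3]
```

The key property is that each process keeps its own event loop and its own server object; gRPC state is never shared across a fork.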
On Tuesday, July 4, 2023 at 7:06:17 AM UTC-7 weide zhang wrote:

> Hi, 
>
> It seems from the documentation that the grpc aio server only uses one 
> thread to do async IO. Does that mean that on a multicore system, in 
> order to leverage all the cores and use async IO to achieve maximum 
> performance, I have to spawn multiple threads or processes, each with its 
> own aio server? 
>
> My question is really how to leverage all the cores on one server while 
> still using AIO to achieve maximum performance. 
>
> Thank you,
>
> Weide 
>



Re: [grpc-io] Flush dnsmasq cache on IP address failure

2023-07-12 Thread 'Richard Belleville' via grpc.io

Depending on which language you're using, you could use the custom name 
resolver interface to implement this behavior yourself.
On Wednesday, July 5, 2023 at 12:53:43 PM UTC-7 Gmail wrote:

> Thanks Frederic
> I understand that. But I only want to do it when grpc has a connection 
> failure. Is there an already existing mechanism to do that.?
>
> On Jul 5, 2023, at 12:37 PM, Frédéric Martinsons  
> wrote:
>
> 
>
> I think this is totally unrelated to grpc, but for what it's worth, if 
> you control your dnsmasq, you can use the --clear-on-reload option and 
> send a SIGHUP to the dnsmasq process to reload the cache. 
>
> Le mer. 5 juil. 2023, 21:24, Ramanujam Jagannath  a 
> écrit :
>
>> Backgrounder - Our device connects to an AWS static IP. We use dnsmasq 
>> on the device to provide lookup services for downstream devices. 
>> Currently we are planning to use a long DNS TTL on AWS to avoid too many 
>> DNS lookups from on-field devices. The on-field devices use a grpc 
>> connection to maintain long-standing tcp connections. We do have 
>> multiple availability zones, and so a DNS resolution does return 4 IP 
>> addresses.
>>
>> Problem - When an IP address fails (on AWS), the grpc client will retry 
>> and re-resolve. But because we have dnsmasq on the device, it will send 
>> a cached address - which is potentially faulty. 
>>
>> Solution - This can be resolved by flushing the dnsmasq cache on the 
>> device. But is there a way to flush the dnsmasq cache on the device on 
>> connection failure only? grpc under the hood uses c-ares, which in our 
>> case goes to the dnsmasq proxy on the device.
>>
>> Any solutions/thoughts? Someone must have encountered this problem before.
>>
>



[grpc-io] Re: How to avoid overriding while creating channels?

2023-07-12 Thread 'Richard Belleville' via grpc.io

This sounds more like a question for the developers of the flower 
framework. gRPC itself absolutely supports multiple concurrent client 
channels to different server targets. If there is an issue with this 
functionality via flower, the issue almost certainly lies in the layer on 
top of gRPC.
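As a sketch of the point (not Flower's actual code): channels to different targets are independent objects, and the "override" in the question is really just a Python variable being reassigned. Keeping one channel per target in a mapping lets both coexist. A stub class stands in for `grpc.Channel` so the sketch runs standalone; in real code `open_channel` would call `grpc.insecure_channel(target)` (or Flower's `create_channel` helper):

```python
class FakeChannel:
    """Stand-in for grpc.Channel; only records its target."""
    def __init__(self, target: str):
        self.target = target

def open_channel(target: str) -> FakeChannel:
    # Real code: return grpc.insecure_channel(target)
    return FakeChannel(target)

# One entry per target: creating the second channel does not replace
# the first, because each is stored under its own key.
channels = {}
for target in ("localhost:8080", "localhost:5040"):
    channels[target] = open_channel(target)

print(sorted(ch.target for ch in channels.values()))
# → ['localhost:5040', 'localhost:8080']
```

The same shape works with real gRPC channels: each stub is constructed from the channel for the target it should talk to, and the channels live side by side.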
On Friday, July 7, 2023 at 7:13:34 AM UTC-7 Saurav Pawar wrote:

> I am working with Flower, which is a federated learning framework. In its 
> grpc connection file (
> https://github.com/adap/flower/blob/main/src/py/flwr/client/grpc_client/connection.py#L91) 
> they are only creating 1 channel, whereas I want 2-3 channels. But when I 
> created 1 more channel with server address `localhost:5040`, the previous 
> channel with server address `localhost:8080` got overridden. How can I 
> avoid that and use both channels?
>
> ```
> # Copyright 2020 Adap GmbH. All Rights Reserved.
> #
> # Licensed under the Apache License, Version 2.0 (the "License");
> # you may not use this file except in compliance with the License.
> # You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> # 
> ==
> """Contextmanager for a gRPC streaming channel to the Flower server."""
>
>
> from contextlib import contextmanager
> from logging import DEBUG
> from pathlib import Path
> from queue import Queue
> from typing import Callable, Iterator, Optional, Tuple, Union
>
> from flwr.common import GRPC_MAX_MESSAGE_LENGTH
> from flwr.common.grpc import create_channel
> from flwr.common.logger import log
> from flwr.proto.transport_pb2 import ClientMessage, ServerMessage
> from flwr.proto.transport_pb2_grpc import FlowerServiceStub
>
> # The following flags can be uncommented for debugging. Other possible 
> values:
> # https://github.com/grpc/grpc/blob/master/doc/environment_variables.md
> # import os
> # os.environ["GRPC_VERBOSITY"] = "debug"
> # os.environ["GRPC_TRACE"] = "tcp,http"
>
>
> def on_channel_state_change(channel_connectivity: str) -> None:
> """Log channel connectivity."""
> log(DEBUG, channel_connectivity)
>
>
> @contextmanager
> def grpc_connection(
> server_address: str,
> max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
> root_certificates: Optional[Union[bytes, str]] = None,
> ) -> Iterator[Tuple[Callable[[], ServerMessage], Callable[[ClientMessage], 
> None]]]:
> """Establish a gRPC connection to a gRPC server.
>
> Parameters
> --
> server_address : str
> The IPv4 or IPv6 address of the server. If the Flower server runs 
> on the same
> machine on port 8080, then `server_address` would be `"
> 0.0.0.0:8080"` or
> `"[::]:8080"`.
> max_message_length : int
> The maximum length of gRPC messages that can be exchanged with the 
> Flower
> server. The default should be sufficient for most models. Users 
> who train
> very large models might need to increase this value. Note that the 
> Flower
> server needs to be started with the same value
> (see `flwr.server.start_server`), otherwise it will not know about 
> the
> increased limit and block larger messages.
> (default: 536_870_912, this equals 512MB)
> root_certificates : Optional[bytes] (default: None)
> The PEM-encoded root certificates as a byte string or a path 
> string.
> If provided, a secure connection using the certificates will be
> established to an SSL-enabled Flower server.
>
> Returns
> ---
> receive, send : Callable, Callable
>
> Examples
> 
> Establishing a SSL-enabled connection to the server:
>
> >>> from pathlib import Path
> >>> with grpc_connection(
> >>> server_address,
> >>> max_message_length=max_message_length,
> >>> root_certificates=Path("/crts/root.pem").read_bytes(),
> >>> ) as conn:
> >>> receive, send = conn
> >>> server_message = receive()
> >>> # do something here
> >>> send(client_message)
> """
> if isinstance(root_certificates, str):
> root_certificates = Path(root_certificates).read_bytes()
>
> channel = create_channel(
> server_address='localhost:8080',
> root_certificates=root_certificates,
> max_message_length=max_message_length,
> )
> channel.subscribe(on_channel_state_change)
>
> queue: Queue[ClientMessage] = Queue(  # pylint: 
> disable=unsubscriptable-object
> maxsize=1
> )
> stub = FlowerServiceStub(channel)
>
> server_message_iterator: 

[grpc-io] Re: Package version numbers for protobuf and gRPC (for Python)

2023-07-12 Thread 'Richard Belleville' via grpc.io
Hi Jens,

The grpcio package itself is completely agnostic to protobuf. It only has 
byte-oriented interfaces. Protobuf integration only happens within the 
generated code (e.g. helloworld_pb2_grpc.py). This generated code comes 
from running the grpcio-tools package, which *does* have a dependency on 
protobuf. The compatibility range with protobuf is defined by this 
package's dependency range on protobuf and can be seen either by looking 
at its setup.py file or by using a dependency inspection tool such as 
pipdeptree:

(venv) rbellevi-macbookpro:tmp.m984j04o rbellevi$ python3 -m pipdeptree
grpcio-tools==1.56.0
├── grpcio [required: >=1.56.0, installed: 1.56.0]
├── protobuf [required: >=4.21.6,<5.0dev, installed: 4.23.4]
└── setuptools [required: Any, installed: 67.8.0]

In general, you can use the heuristic that if you do pip install 
grpcio-tools and generate your code, you'll have the right version of 
protobuf already installed.

This isn't ideal since many people generate their code only once and then 
rebuild their application many times, potentially forgetting the version of 
protobuf that they originally used to generate their code. In practice, 
even these people are generally fine, only getting bitten when protobuf 
does a major version bump, which has happened once in the past 5 years.
On Sunday, July 9, 2023 at 6:37:06 PM UTC-7 Jens Troeger wrote:

> Hello,
>
> Following this question, I'm trying to find the documentation that 
> defines which versions of the grpcio package implement which version of 
> the Protocol Buffers language.
>
> And, in that context, how do the Google API common protos (and their 
> generated Python package) relate to the different Protobuf versions?
>
> Much thanks!
> Jens
>



[grpc-io] Re: Upgrade grpcio (+grpcio-tools) from 1.48.1 to 1.54.2 became the reason massive memory leak

2023-06-07 Thread 'Richard Belleville' via grpc.io
> If allowed, I can provide a link to my project on GitHub

Absolutely. Please share. There's not much to investigate given just the 
information so far.

> Trying other versions older than 1.48.1 - the same result - massive 
> memory leak.

Is this a typo? Do you mean versions *newer* than 1.48.1 show a memory 
leak? Or do you really mean "older"?

On Sunday, June 4, 2023 at 2:11:42 AM UTC-7 Jerry wrote:

> [image: grpcio.jpg]
> Some instances of my app: green and blue are a full rollback to the 
> previous version; for yellow, only grpcio (+grpcio-tools) was downgraded 
> to 1.48.1. Trying other versions older than 1.48.1 - the same result - 
> massive memory leak.
>
> If allowed, I can provide a link to my project on GitHub.
>
> Regards, Jerry.
>
>



[grpc-io] Re: Live camera streaming using grpc python

2023-06-07 Thread 'Richard Belleville' via grpc.io
Sanket, you've shared your servicer, but you haven't shared the code that's 
enqueueing data into your camera_buffer. If you have a single writer 
enqueueing objects into a global buffer at a constant rate, then when you 
have one active RPC, you'll be getting the expected result -- each frame is 
delivered to the client. But if you have two connected clients you'll have 
two readers on your queue and each frame will be received by only one of 
them. So you'll basically be sending even frames to one client and odd 
frames to another.

What you do instead will depend on the sort of behavior you want. If you 
are okay with potential loss of frames on a stream, then you can do 
something very simple. Keep a global variable with a single frame, along 
with a threading.Lock and a threading.Condition. The writer will signal all 
awaiting threads each time it changes the frame. Each stream will be 
waiting on the condition variable, read the frame when signalled, and send 
it on its stream.

If you can't tolerate the potential loss of frames, then you'll need a data 
structure a little more complicated. Each frame needs to be kept in memory 
until all readers have received it. Only then can it be purged. You'd keep 
frames in a global buffer. Each time you pull a frame from the buffer, 
increase a count on the frame. Whichever reader happens to read it last 
will observe that the count has reached the global number of readers and 
can purge it from memory. Then you'll need to factor in the fact that 
clients can come and go, so that the total number of readers may change at 
any time.
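The lossy "single latest frame plus a Condition" design can be sketched as follows (class and method names are illustrative, not part of gRPC):

```python
import threading

class FrameBroadcaster:
    """One writer publishes frames; every reader sees each published frame.
    A reader that falls behind skips to the newest frame (per-reader drop),
    but a frame is never split between readers as with a shared Queue."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._version = 0          # increases once per published frame

    def publish(self, frame) -> None:
        with self._cond:
            self._frame = frame
            self._version += 1
            self._cond.notify_all()    # wake every waiting stream

    def wait_for_frame(self, last_seen: int):
        """Block until a frame newer than last_seen exists; return (frame, version)."""
        with self._cond:
            self._cond.wait_for(lambda: self._version > last_seen)
            return self._frame, self._version

# Demo: two readers both receive the same published frame.
broadcaster = FrameBroadcaster()
received = []

def reader():
    frame, _version = broadcaster.wait_for_frame(0)
    received.append(frame)

threads = [threading.Thread(target=reader) for _ in range(2)]
for t in threads:
    t.start()
broadcaster.publish(b"frame-1")
for t in threads:
    t.join()
print(received == [b"frame-1", b"frame-1"])  # → True
```

In the servicer, each CameraStream handler would loop on `frame, version = broadcaster.wait_for_frame(version)` and `yield` the frame, so every connected client receives every published frame it is fast enough to consume.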

Regardless, the issue isn't really with gRPC. This is more about 
multithreading in Python.
On Wednesday, June 7, 2023 at 10:19:39 AM UTC-7 yas...@google.com wrote:

> Are you tied to gRPC Python or could you also experiment with another 
> language?
>
> On Saturday, June 3, 2023 at 12:16:42 AM UTC-7 Sanket Kumar Mali wrote:
>
>> my proto file
>>
>> syntax = "proto3";
>>
>> package camera_stream;
>>
>> // Camera frame message
>> message Frame {
>>   bytes frame = 1;
>>   int64 timestamp = 2;
>> }
>>
>> // Camera stream service definition
>> service CameraStream {
>>   // Method to connect and start receiving camera frames
>>   rpc CameraStream(Empty) returns (stream Frame) {}
>> }
>> // Empty message
>> message Empty {}
>>
>> On Saturday, 3 June 2023 at 12:32:28 UTC+5:30 Sanket Kumar Mali wrote:
>>
>>> my server code
>>>
>>> camera_buffer = queue.Queue(maxsize=20)
>>>
>>> # Define the gRPC server class
>>> class CameraStreamServicer(camera_stream_pb2_grpc.CameraStreamServicer):
>>> def __init__(self):
>>> self.clients = []
>>>
>>> def CameraStream(self, request, context):
>>> global camera_buffer
>>> # Add the connected client to the list
>>> self.clients.append(context)
>>> try:
>>> while True:
>>> print("size: ",camera_buffer.qsize())
>>> frame = camera_buffer.get(timeout=1)  # Get a frame 
>>> from the buffer
>>>
>>> # Continuously send frames to the client
>>> for client in self.clients:
>>> try:
>>> response = camera_stream_pb2.Frame()
>>> response.frame = frame
>>> response.timestamp = int(time.time())
>>> yield response
>>> except grpc.RpcError:
>>> # Handle any errors or disconnections
>>> self.clients.remove(context)
>>> print("Client disconnected")
>>> except Exception as e:
>>> print("unlnown error: ", e)
>>>
>>>
>>> In a separate thread I am getting frames from the camera and populating 
>>> the buffer.
>>>
>>>
>>> On Monday, 22 May 2023 at 12:16:57 UTC+5:30 torpido wrote:
>>>
 What happens if you run the same process in parallel, and serve a 
 different client in each one? Just to make sure that there is no issue 
 with the bandwidth on the server.

 I would also set debug logs for gRPC to get more info

 Can you share the RPC and server code you are using? It seems like it 
 should be a *server-streaming RPC*.
 On Saturday, 13 May 2023 at 16:21:41 UTC+3, Sanket Kumar Mali wrote:

> Hi,
> I am trying to implement a live camera streaming setup using grpc 
> python. I was able to stream camera frames (1280x720) at 30 fps to a 
> single client. But whenever I try to consume the stream from multiple 
> clients, it seems the frame rate is getting divided (e.g. if I connect 
> two clients, the frame rate becomes 15fps).
> I am looking for a direction on where I am going wrong. I appreciate any 
> clue on the right way to achieve multi-client streaming.
>
> Thanks
>



[grpc-io] Re: GRPC Serializer does call at the time of GRPC request calling why

2023-05-18 Thread 'Richard Belleville' via grpc.io
I'm still not 100% sure I understand, but I'll try to clarify my 
assumptions as I go.

> when we want to serialize the we sent it to the protoserrializer file

I'm assuming this means that you're using a custom response serializer via 
the corresponding keyword argument of the RpcMethodHandler class. 
"protoserializer" is not a file I am familiar with in either the gRPC or 
protobuf projects, so I'm guessing this is your own file and probably your 
custom serializer.

I'm not sure why you would be using a custom serializer if all you want to 
do is serialize protobufs though. That's the behavior of the default 
serializer.

> why does function call only when server get refresh it whenever I invoke 
the API request at that my serializer  does not call

I'm really not sure what you mean by "refresh" here. gRPC servers have no 
refresh method. If you are using your own RpcMethodHandler with a custom 
response serializer, then it will be invoked every time you return a 
response in your server handler.

> about the custom method Serializer

The capitalization of "Serializer" here is throwing me. There is no 
"Serializer method" in the gRPC Python API surface, nor do we have any 
methods that are capitalized, but individual proto files _do_ capitalize 
all of their methods. Is this a particular RPC method that you or another 
member of your organization has written into a proto file?
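To make "invoked every time you return a response" concrete, here is a framework-free sketch of the server's serialization step (the names are illustrative; in real gRPC Python the serializer would be passed as the `response_serializer` argument to `grpc.unary_unary_rpc_method_handler`):

```python
# The server calls the response serializer once per returned response,
# at response time -- not at startup.

calls = []

def my_response_serializer(message) -> bytes:
    calls.append(message)                  # record each invocation
    # Real code would call message.SerializeToString() on a protobuf message.
    return repr(message).encode("utf-8")

def handle_rpc(handler, serializer, request):
    """Mimics the server: run the handler, then serialize its response."""
    return serializer(handler(request))

def say_hello(request):
    return {"greeting": "hello " + request}

# Two RPCs -> the serializer runs twice, once per response.
wire_1 = handle_rpc(say_hello, my_response_serializer, "alice")
wire_2 = handle_rpc(say_hello, my_response_serializer, "bob")
print(len(calls))  # → 2
```

If a custom serializer only appears to run "on refresh", the likely explanation is that the handler using it is not the one actually registered for the RPC being invoked.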
On Thursday, May 18, 2023 at 10:58:22 AM UTC-7 Tripti Kothari wrote:

>
> HI  in GRPC when we want to serialize the we sent it to the 
> protoserrializer file in python right why does function call only when 
> server get refresh it whenever I invoke the API request at that my 
> serializer  does not call why it happen like this and I want to ask about 
> the custom method Serializer in grpc python  I hope my issue now is clear
> On Thursday, May 18, 2023 at 11:19:19 PM UTC+5:30 Richard Belleville wrote:
>
>> I'm not clear on what the issue is exactly. Can you please add some more 
>> detail here?
>>
>> On Wednesday, May 17, 2023 at 11:24:51 PM UTC-7 Tripti Kothari wrote:
>>
>>>
>>> HI we are using python for grpc as a server and client both
>>> On Thursday, May 18, 2023 at 1:24:33 AM UTC+5:30 Zach Reyes wrote:
>>>
 What language are you using? Can you please provide more information? I 
 don't know what pdb is referring to.

 On Monday, May 15, 2023 at 7:45:10 AM UTC-4 Joseph Anjilimoottil wrote:

> Hi Guys,
> I am working on the project and i want to do  nested serializer  and 
> in that i want  to do data filtering based on the request while API 
> calling 
> but the pdb does not hit but while saving it hit the pdb  in the 
> serialize 
> why.
>
> can you please give me answer on these questions
>
>



[grpc-io] Re: GRPC Serializer does call at the time of GRPC request calling why

2023-05-18 Thread 'Richard Belleville' via grpc.io
I'm not clear on what the issue is exactly. Can you please add some more 
detail here?

On Wednesday, May 17, 2023 at 11:24:51 PM UTC-7 Tripti Kothari wrote:

>
> HI we are using python for grpc as a server and client both
> On Thursday, May 18, 2023 at 1:24:33 AM UTC+5:30 Zach Reyes wrote:
>
>> What language are you using? Can you please provide more information? I 
>> don't know what pdb is referring to.
>>
>> On Monday, May 15, 2023 at 7:45:10 AM UTC-4 Joseph Anjilimoottil wrote:
>>
>>> Hi Guys,
>>> I am working on the project and i want to do  nested serializer  and in 
>>> that i want  to do data filtering based on the request while API calling 
>>> but the pdb does not hit but while saving it hit the pdb  in the serialize 
>>> why.
>>>
>>> can you please give me answer on these questions
>>>
>>>



[grpc-io] Re: protobuf-gradle-plugin + python

2023-05-03 Thread 'Richard Belleville' via grpc.io
The gRPC Python plugin can be built using this Bazel target.

On Wednesday, April 26, 2023 at 10:25:11 AM UTC-7 apo...@google.com wrote:

>
> https://grpc.io/docs/languages/python/basics/#generating-client-and-server-code
>  
> may be useful
>
> On Thursday, April 20, 2023 at 6:56:25 AM UTC-7 Ciprian Ieremeiov wrote:
>
>> Hello. 
>>
>> Is there a way to manually compile the python grpc generator? I need to 
>> provide the path to the grpc python plugin to generate python grpc 
>> client/server stubs. The gradle plugin already generates Java classes + 
>> client/server stubs and also python classes (but no client/server 
>> stubs). Is there a way to fix this issue?
>>
>>
>> Build python grpc? · Issue #52 · google/protobuf-gradle-plugin 
>> (github.com)
>>
>> Thank you
>>
>>



[grpc-io] Re: gRPC C++ Android 'Security handshake failed' issue

2023-02-22 Thread 'Richard Belleville' via grpc.io
> ipv4:xx.xxx.xxx.xxx:443

Did you censor the logs, or is this really your target address? If it is 
not censored, then this is clearly not a valid IPv4 address.

On Wednesday, February 22, 2023 at 5:44:52 AM UTC-8 Artem V wrote:

> Greetings,
>
> We are trying to port our gRPC C++ solution we use for Unreal Engine 5 to 
> Android. We use gRPC version 1.35 due to Unreal Engine restrictions. Our 
> loading process works fine on Windows but fails on Android with ‘Security 
> handshake failed’. I’d appreciate any help finding out why this happens 
> and how I can fix it. 
>
> This is what I can see in the gRPC logs. The “Bad address” error seems odd because 
> domain and target address are the same as in working Windows build. Please 
> let me know if you need more info.
>
> LogPlayLevel: UAT: 02-17 16:23:05.434 14296 14912 D UE : 
> [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\tcp_client_posix.cc::143::1: 
> CLIENT_CONNECT: ipv4:xx.xxx.xxx.xxx:443: on_writable: error="No Error" 
> LogPlayLevel: UAT: 02-17 16:23:05.434 14296 14912 D UE : 
> [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\timer_generic.cc::470::1: TIMER 
> 0x78fe0e7d90: CANCEL pending=true LogPlayLevel: UAT: 02-17 16:23:05.434 
> 14296 14889 D UE : [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\timer_generic.cc::719::1: TIMER 
> CHECK BEGIN: now=5010 next=9223372036854775807 tls_min=1410 glob_min=5013 
> LogPlayLevel: UAT: 02-17 16:23:05.434 14296 14889 D UE : 
> [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\timer_generic.cc::741::1: TIMER 
> CHECK END: r=1; next=5013 LogPlayLevel: UAT: 02-17 16:23:05.434 14296 14912 
> D UE : [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\tcp_client_posix.cc::107::1: 
> CLIENT_CONNECT: ipv4:xx.xxx.xxx.xxx:443: on_alarm: error="Cancelled" 
> LogPlayLevel: UAT: 02-17 16:23:05.434 14296 14889 D UE : 
> [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\timer_manager.cc::188::1: sleep 
> for a 3 milliseconds LogPlayLevel: UAT: 02-17 16:23:05.434 14296 14912 D UE 
> : [2023.02.17-12.23.05:434][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\channel\handshaker.cc::99::1: 
> handshake_manager 0x78fe5300a0: adding handshaker http_connect 
> [0x78fe540010] at index 0 LogPlayLevel: UAT: 02-17 16:23:05.435 14296 14912 
> D UE : [2023.02.17-12.23.05:435][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\tsi\ssl_transport_security.cc::226::1: 
> HANDSHAKE START - before SSL initialization - PINIT LogPlayLevel: UAT: 
> 02-17 16:23:05.435 14296 14912 D UE : 
> [2023.02.17-12.23.05:435][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\tsi\ssl_transport_security.cc::226::1: 
> LOOP - before SSL initialization - PINIT LogPlayLevel: UAT: 02-17 
> 16:23:05.435 14296 14912 D UE : [2023.02.17-12.23.05:435][279]LogProject: 
> GRPC 
> C:\Projects\project\grpc\src\core\tsi\ssl_transport_security.cc::226::1: 
> LOOP - SSLv3/TLS write client hello - TWCH LogPlayLevel: UAT: 02-17 
> 16:23:05.435 14296 14912 D UE : [2023.02.17-12.23.05:435][279]LogProject: 
> GRPC C:\Projects\project\grpc\src\core\lib\channel\handshaker.cc::99::1: 
> handshake_manager 0x78fe5300a0: adding handshaker security [0x78fe36c800] 
> at index 1 LogPlayLevel: UAT: 02-17 16:23:05.435 14296 14912 D UE : 
> [2023.02.17-12.23.05:435][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\timer_generic.cc::367::1: TIMER 
> 0x78fe530108: SET 21100 now 5011 call 0x78fe530138[0x7969167388] 
> LogPlayLevel: UAT: 02-17 16:23:05.435 14296 14912 D UE : 
> [2023.02.17-12.23.05:435][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\iomgr\timer_generic.cc::404::1: .. 
> add to shard 12 with queue_deadline_cap=6008 => is_first_timer=false 
> LogPlayLevel: UAT: 02-17 16:23:05.435 14296 14912 D UE : 
> [2023.02.17-12.23.05:435][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\channel\handshaker.cc::129::1: 
> handshake_manager 0x78fe5300a0: error="No Error" shutdown=0 index=0, 
> args={endpoint=0x78fd80e780, args=0xb479c26c32e0 {size=9: 
> grpc.primary_user_agent=grpc-c++/1.35.0, 
> grpc.client_channel_factory=0x7920c00c20, 
> grpc.channel_credentials=0x7920510740, grpc.server_uri=dns:///
> api-studio.project.ai, grpc.subchannel_pool=0x796cd3b200, 
> grpc.default_authority=api-studio.project.ai, grpc.http2_scheme=https, 
> grpc.security_con nector=0x7920ed3200, 
> grpc.subchannel_address=ipv4:xx.xxx.xxx.xxx:443}, 
> read_buffer=0xb47951a27c00 (length=0), exit_early=0} LogPlayLevel: UAT: 
> 02-17 16:23:05.435 14296 14912 D UE : 
> [2023.02.17-12.23.05:435][279]LogProject: GRPC 
> C:\Projects\project\grpc\src\core\lib\channel\handshaker.cc::176::1: 
> handshake_manager 0x78fe5300a0: calling handshaker http_connect 
> [0x78fe540010] at index 

[grpc-io] Re: grpc.io

2023-02-22 Thread 'Richard Belleville' via grpc.io
> Alternatively, you could check out the artifacts hosted at piwheels.

To install from piwheels, run the following:

pip install --extra-index-url=https://www.piwheels.org/simple grpcio 
grpcio-tools

This will avoid a from-source build, which is usually difficult on 
resource-constrained Raspberry Pis.

If you're still seeing a required glibc version higher than the one 
installed (as indicated by ldd --version), then you might consider updating 
your distro with sudo apt dist-upgrade.
On Thursday, February 16, 2023 at 6:30:41 PM UTC-8 Jeffrey Berg wrote:

> @Amanda
>
> I assume you either got it to work or have given up by now.  So, this is 
> for others.
> When you run the pip install with --no-binary option, it builds from a 
> compressed archive (from source code).
> I have built the grpc libraries from source before, and there are a ton of 
> dependencies, so keep these in mind:
>
>    - If you are building on an ARM system, it's going to take a long 
>      time. It took 3-4 hours on my system.
>    - Watch a separate tab with htop running, and monitor your memory usage:
>       - If memory runs out, your system will crash, but it will look like 
>         it's still running if you are remoted into it.
>          - In this case you can create a swap file (I use a 6 GB swap 
>            file) for extra memory during builds.
>       - My minimal hardware crashes, and even the local serial terminal 
>         is unusable (I have to reboot the system).
>    - Periodically check df -h ("disk free" with human-readable output):
>       - Python will do the build in /tmp, which often does not have 
>         enough storage allocated to complete the build.
>          - gRPC will be several GB by the time it's done.
>       - You can use "export TMPDIR=/home/yourID/someOtherDestination" to 
>         make the build take place somewhere else.
>
> Good luck everyone.
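A hedged sketch of the swap-file and TMPDIR tricks described in the checklist above (the 6 GB size and the paths are illustrative assumptions, and the swap steps require root):

```shell
# Create and enable a temporary 6 GB swap file for the duration of the build
sudo fallocate -l 6G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=6144
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Point the build workspace at a partition with enough free space
mkdir -p "$HOME/tmp-build"
export TMPDIR="$HOME/tmp-build"
pip install grpcio --no-binary=grpcio

# Clean up afterwards
sudo swapoff /swapfile && sudo rm /swapfile
```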
>
> On Tuesday, August 9, 2022 at 11:26:21 AM UTC-4 Amanda Reich wrote:
>
>> Hello! I'm still having this issue and this solution has not worked. I 
>> get the message "running setup.py install for grpcio", and it never 
>> completes the installation. I've even let it run for a full day. Are there 
>> any other solutions? 
>>
>> On Monday, June 27, 2022 at 1:59:03 AM UTC-4 obinna _Ac wrote:
>>
>>> thanks, this worked for me
>>>
>>>
>>> On Saturday, May 7, 2022 at 3:53:26 PM UTC+8 Iestyn Lloyd wrote:
>>>
 I found this thread via Google, and it's fixed for me, so I thought I'd 
 share for future Googlers.
 I tried everything, including breaking everything and having to 
 re-image my Pi from a backup.

 1. Uninstall the offending packages. 
 Using pip list, check your virtual env if you're using one, and remove 
 from there too. Remove from everywhere. Set it on fire.

 pip uninstall grpcio 
 pip uninstall grpcio-status 

 Then install an earlier version of grpcio and grpcio-status. Not sure 
 if something broke in a recent one? 

 pip install grpcio==1.44.0 --no-binary=grpcio 
 pip install grpcio-tools==1.44.0 --no-binary=grpcio-tools

 This then fixed the GLIBC_2.33 not found for me.

 On Friday, January 28, 2022 at 4:01:29 AM UTC Antonio Orozco wrote:

> That is great to know. I think I tried installing that version of 
> grpcio, but was not able to. When you get the chance, please post the 
> commands you ran to downgrade/reinstall version 1.40.0, thanks.
>
> On Thursday, January 27, 2022 at 3:43:39 PM UTC-8 Richard Mejia wrote:
>
>> My problem occurred on a freshly downloaded Raspbian Buster on a 
>> Raspberry Pi 4 when calling the Google Cloud Vision library. I found 
>> that I had grpcio==1.43.0 installed; after downgrading to 
>> grpcio==1.40.0, the problem disappeared.
>>
>> On Thursday, January 27, 2022 at 17:35:59 UTC-5, Antonio Orozco 
>> wrote:
>>
>>> No solution yet. If you really want to use python api package, you 
>>> may need to install ubuntu or other supported os. Otherwise, use other 
>>> package for C++, Go (those work for me).
>>>
>>> On Thursday, January 27, 2022 at 12:43:17 PM UTC-8 Richard Mejia 
>>> wrote:
>>>

 Hi, I have the same problem. Any solution?
 On Tuesday, January 4, 2022 at 9:33:52 UTC-5, Christopher 
 Connor wrote:

> Hi,  
>
> I am running into the same issue with Google Cloud IOT API  on 
> Raspberry PI 4.  Tried the above commands to re-install the grpcio 
> modules, 
> but still not working.  
>
>   File 
> "/home/pi/.local/lib/python3.7/site-packages/google/cloud/iot_v1/__init__.py",
>  
> line 17, in 
> from .services.device_manager import DeviceManagerClient
>   File 
> 

[grpc-io] Re: Python grpc.aio.UnaryUnaryCall and blocking asyncio?

2023-02-22 Thread 'Richard Belleville' via grpc.io
> Does it yield to the event loop while reading data

It absolutely should be, and if it's not, then we would consider it a bug. 
I will note that deserializing the protobuf is a blocking operation, so a 
very complex proto (or simply a very big one, where a copy takes a long 
time) could explain the issue you're seeing.

> Do we have to use grpc.aio.UnaryStreamCall to read() and yield 
periodically?

If the issue is the deserialization time, then I would expect this to help.

Any chance you can do a little ad-hoc measurement to determine what the 
source of the blocking is? If this is a common problem, I suppose we could 
do a native thread offload for protobuf serialization where we release the 
GIL, do the deserialization in native code, and provide an async def 
deserialization method.
On Thursday, February 16, 2023 at 10:18:55 AM UTC-8 Charles Chan wrote:

> Hello, 
>
> We are using Python 3 to implement an HTTP server using asyncio; the HTTP 
> server initiates a grpc.aio.UnaryUnaryCall to the backend to retrieve a 
> (large) file. We notice these requests seem to block other requests on 
> the asyncio server.
>
> How does grpc.aio.UnaryUnaryCall work underneath the covers? Does it yield 
> to the event loop while reading data, in between every chunk of data? Do we 
> have to use grpc.aio.UnaryStreamCall to read() and yield periodically?
>
> Thanks
>
> Note: I found another thread (
> https://groups.google.com/g/grpc-io/c/r7thQeaAYYI/m/2s8qad9ZBAAJ) but it 
> doesn't fully answer my question.
>



Re: [grpc-io] Re: Suggested approach to chain gRPC Calls

2023-02-02 Thread 'Richard Belleville' via grpc.io
1. In general, the channel will remain open indefinitely regardless of the
state of the underlying TCP connection.
2. In most cases, __del__ will probably be fine, but __del__ is not reliably
called when an object goes out of scope. It may happen an arbitrary amount
of time after that. If you're *really* worried about deterministically
closing the connection, you'll want to add an explicit close method that
closes the channel.

On Thu, Feb 2, 2023 at 6:35 AM Jens Troeger  wrote:

> Thanks, that seems to work!
>
> But it leads me to the next questions:
>
>    1. If for whatever reason the channel is closed, do I need to reopen
>    it, or does that channel instance manage disconnects itself? Or: how do I
>    keep the stub alive?
>    2. I presume that I should close the channel in the __del__() method
>    of my Servicer?
>
> Cheers,
> Jens
>
>
> On Thursday, February 2, 2023 at 11:26:09 AM UTC+10 rbel...@google.com
> wrote:
> Jens,
>
> In general, the best way to do this is to simply create the channel in
> your Servicer constructor and then use self._channel or self._stub from
> your server handlers.
>
>



[grpc-io] Re: Suggested approach to chain gRPC Calls

2023-02-01 Thread 'Richard Belleville' via grpc.io
Jens,

In general, the best way to do this is to simply create the channel in your 
Servicer constructor and then use self._channel or self._stub from your 
server handlers.

On Wednesday, February 1, 2023 at 5:24:28 PM UTC-8 Jens Troeger wrote:

> Thank you, Xuan!
>
> Hi, you're correct that channel should be created either during server 
> start or during initialization, you can then pass the stub reference to 
> server handler. 
>
> Is there a recommended approach to hook into the server 
> start/initialization? And suppose I can create a stub, how do I then pass 
> it down to the handlers? Is there state I can share for intercepts, or how 
> do I best create and access such a “global” resource?
>
> Cheers,
> Jens 
>



[grpc-io] Re: Kubernetes, NodeJS and gRPC

2023-01-19 Thread 'Richard Belleville' via grpc.io
Can you please provide more context? Your headless service should create a 
DNS entry in kubedns as you've described and the kubernetes NodeJS client 
should be able to resolve that DNS entry. Exactly what is the issue you are 
experiencing? 

On Sunday, January 15, 2023 at 9:21:44 PM UTC-8 Sharan Karthikeyan wrote:

> I have created a headless service in kubernetes for my gRPC server, but 
> I'm not able to connect to that server from the NodeJS client using a 
> connection URL like "bbl-org-server.default.svc.cluster.local" 
> ("..svc.cluster.local"). Please help me.
>
> Regards,
> Sharan
>



[grpc-io] Re: gRPC python socket_mutator

2023-01-19 Thread 'Richard Belleville' via grpc.io
As I said, you're going very far off-road here. I don't think anyone has 
ever tried this before.

On Monday, January 16, 2023 at 8:42:12 AM UTC-8 Rodrigo Alexandre wrote:

> I have tried that but I was not able to figure out how to make it work. Do 
> you have any code examples of something like that being done?
>
> On Friday, 13 January 2023 at 23:33:24 UTC rbel...@google.com wrote:
>
>> You're going very far off-road here, but you'd need to write a C 
>> extension that instantiates an instance of grpc_socket_mutator 
>> that implements the functionality that you want, then put the pointer into 
>> a Python int object and pass it as the value in the channel arg.
>>
>> On Tuesday, January 10, 2023 at 4:06:04 PM UTC-8 Rodrigo Alexandre wrote:
>>
>>> Hello everyone,
>>>
>>> I am trying to make a gRPC client to use a specific exit interface.
>>> For that, I am setting the value for the "socket_mutator" in the 
>>> channel_arguments.
>>> I defined a function that takes a socket as an argument and modifies it 
>>> however I receive a:
>>> *TypeError: Expected int, bytes, or behavior, got *
>>>
>>> Does someone know what I should place in the socket_mutator value 
>>> instead?
>>>
>>> Thank you for your time.
>>>
>>



[grpc-io] Re: raspberry pi zero

2023-01-19 Thread 'Richard Belleville' via grpc.io
I think what's happening here is that the install is falling back to a 
from-source build because PyPI doesn't have precompiled artifacts for your 
architecture. The compilation process takes too much memory for your 
machine, and the compiler process gets OOM-killed with SIGKILL.

Exactly which architecture you have depends on the model of your RasPi 
Zero: an RPi Zero W is ARMv6Z, and an RPi Zero 2 W would be ARMv8. 
Regardless, piwheels should be able to provide precompiled artifacts for 
you.
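Concretely, pointing pip at the piwheels index looks something like this (the extra-index URL is piwheels' documented endpoint; run it on the Pi itself so pip picks the matching ARM wheel):

```shell
# Prefer prebuilt ARM wheels from piwheels over an on-device source build
pip install --extra-index-url=https://www.piwheels.org/simple grpcio
```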



On Thursday, January 19, 2023 at 4:45:02 AM UTC-8 Neil Butler wrote:

> Hi,
>
> I am trying to install firebase-admin on a raspberry pi zero and when it 
> comes to installing grpcio the installation is killed: 
> [image: image.png]
>
> I have tried using Thonny also and i get "process returned with code -9"
>
> are you aware if this is a known thing with raspberry pi zero/raspberry pi 
> OS?
>
> do you have any idea how I can get it installed?
>
>
> thanks,
>
> n
>
>
> -- 
> Neil Butler,
> PDST Advisor for LCCS
> 0863872433
>
>



[grpc-io] Re: gRPC python socket_mutator

2023-01-13 Thread 'Richard Belleville' via grpc.io
You're going very far off-road here, but you'd need to write a C extension 
that instantiates an instance of grpc_socket_mutator 
that implements the functionality that you want, then put the pointer into 
a Python int object and pass it as the value in the channel arg.

On Tuesday, January 10, 2023 at 4:06:04 PM UTC-8 Rodrigo Alexandre wrote:

> Hello everyone,
>
> I am trying to make a gRPC client to use a specific exit interface.
> For that, I am setting the value for the "socket_mutator" in the 
> channel_arguments.
> I defined a function that takes a socket as an argument and modifies it 
> however I receive a:
> *TypeError: Expected int, bytes, or behavior, got *
>
> Does someone know what I should place in the socket_mutator value instead?
>
> Thank you for your time.
>



[grpc-io] Re: Can't build simple python gRPC client/server on Mac M2 (Arm64) with Bazel

2022-12-15 Thread 'Richard Belleville' via grpc.io
I think this message is the solution:

*external/com_github_grpc_grpc/src/core/lib/gpr/useful.h:109:17: error: use 
of 'auto' in lambda parameter declaration only available with '-std=c++14' 
or '-std=gnu++14'*

We require C++14 and you're not compiling with it. Have you tried adding 
build --copt=-std=c++14 to your .bazelrc file?
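A .bazelrc fragment for this; note that --cxxopt applies the flag only to C++ compiles (the --copt spelling also works but is passed to C files as well):

```
# .bazelrc in the workspace root
build --cxxopt=-std=c++14
```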

On Wednesday, December 14, 2022 at 1:32:50 AM UTC-8 o...@blix.ai wrote:

> Hi,
> I'm trying to set up my working environment on my Mac M2.
>
> I want to use Bazel for building my projects when using gRPC and I'm 
> struggling with making it build.
>
> I'm using the example from here 
>  with 
> one exception: I'm using grpc 1.51.1 instead by specifying:
> *http_archive(*
> * name = "com_github_grpc_grpc",*
> * sha256 = 
> "b55696fb249669744de3e71acc54a9382bea0dce7cd5ba379b356b12b82d4229",*
> * strip_prefix = "grpc-1.51.1",*
> * urls = ["https://github.com/grpc/grpc/archive/v1.51.1.tar.gz 
> "],*
> *)*
>
> When I try to run bazel build ... I get the following error:
>
>
> INFO: Analyzed 5 targets (141 packages loaded, 4492 targets configured).
> INFO: Found 5 targets...
> INFO: From Generating Descriptor Set proto_library @com_github_cncf_udpa//xds/service/orca/v3:pkg:
>   xds/service/orca/v3/orca.proto:14:1: warning: Import validate/validate.proto is unused.
> INFO: From Generating Descriptor Set proto_library @com_github_cncf_udpa//xds/type/v3:pkg:
>   xds/type/v3/typed_struct.proto:10:1: warning: Import validate/validate.proto is unused.
> ERROR: /home/or/.cache/bazel/_bazel_or/72d6b6069a50c7d00e12c5ccd81a12ad/external/com_github_grpc_grpc/BUILD:524:16:
>   Compiling src/core/plugin_registry/grpc_plugin_registry_extra.cc failed: (Exit 1): gcc failed:
>   error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall
>   -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF ...
>   (remaining 113 arguments skipped)
> Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
> In file included from external/com_github_grpc_grpc/src/core/lib/avl/avl.h:26,
>   from external/com_github_grpc_grpc/src/core/lib/channel/channel_args.h:40,
>   from external/com_github_grpc_grpc/src/core/lib/channel/channel_args_preconditioning.h:25,
>   from external/com_github_grpc_grpc/src/core/lib/config/core_configuration.h:25,
>   from external/com_github_grpc_grpc/src/core/plugin_registry/grpc_plugin_registry_extra.cc:21:
> external/com_github_grpc_grpc/src/core/lib/gpr/useful.h: In function 'int grpc_core::QsortCompare(const
>   absl::lts_20220623::variant&, const absl::lts_20220623::variant&)':
> external/com_github_grpc_grpc/src/core/lib/gpr/useful.h:109:17: error: use of 'auto' in lambda
>   parameter declaration only available with '-std=c++14' or '-std=gnu++14'
>   109 |   [&](const auto& x) {
>       |                 ^~~~
> In file included from external/com_github_grpc_grpc/src/core/lib/gprpp/ref_counted.h:32,
>   from external/com_github_grpc_grpc/src/core/lib/gprpp/orphanable.h:29,
>   from external/com_github_grpc_grpc/src/core/lib/gprpp/dual_ref_counted.h:28,
>   from external/com_github_grpc_grpc/src/core/lib/channel/channel_args.h:43,
>   from external/com_github_grpc_grpc/src/core/lib/channel/channel_args_preconditioning.h:25,
>   from external/com_github_grpc_grpc/src/core/lib/config/core_configuration.h:25,
>   from external/com_github_grpc_grpc/src/core/plugin_registry/grpc_plugin_registry_extra.cc:21:
> external/com_github_grpc_grpc/src/core/lib/gprpp/ref_counted_ptr.h: In member function
>   'grpc_core::RefCountedPtr& grpc_core::RefCountedPtr::operator=(grpc_core::RefCountedPtr&&)':
> external/com_github_grpc_grpc/src/core/lib/gprpp/ref_counted_ptr.h:58:16: error: 'exchange' is not a
>   member of 'std'; did you mean 'absl::lts_20220623::exchange'?
>    58 |     reset(std::exchange(other.value_, nullptr));
>       |                ^~~~
> In file included from external/com_google_absl/absl/types/optional.h:39,
>   from external/com_github_grpc_grpc/src/core/lib/channel/channel_args.h:34,
>   from 

[grpc-io] Re: ImportError: cannot import name 'shutdown_grpc_aio' from 'grpc._cython.cygrpc'

2022-11-21 Thread 'Richard Belleville' via grpc.io

In general, this sort of error implies that the installed artifact's 
platform does not match that of the execution environment. I would check 
with the maintainers of the environment about this if the environment is 
supposed to handle the installation.
On Monday, November 14, 2022 at 1:37:57 AM UTC-8 Wong Kin Sun wrote:

> Hi, I am following this tutorial on model deployment (
> https://codelabs.developers.google.com/vertex-image-deploy#6), but I ran 
> into an issue when importing the aiplatform library, which imports the grpc 
> library.
>
> When running "from google.cloud import aiplatform", I get the following 
> error message:
>
> ImportError: cannot import name 'shutdown_grpc_aio' from 
> 'grpc._cython.cygrpc' (/opt/conda/lib/python3.7/site-packages/grpc/_cython/
> cygrpc.cpython-37m-x86_64-linux-gnu.so)
>
> The entire error message can be found in the attached picture.
>
> The versions of the libraries used are as shown:
>
> google-api-core 2.10.1 
> google-api-python-client 2.55.0 
> google-cloud-aiplatform 1.17.0 
> grpcio 1.33.1 
> grpcio-gcp 0.2.2 
> grpcio-status 1.47.0
>
> I would like to ask why the import error is occurring and how it can be 
> resolved.
>
> Thank you.
>
> [image: grpc_error_message.JPG]
>



[grpc-io] Re: gRPC for the R programming language

2022-11-15 Thread 'Richard Belleville' via grpc.io
The core gRPC team does not currently have plans to extend support to R. 
Frankly, this is the first request I've heard for R support. If there is a 
need here, we'd love to hear more about it.

With that said, gRPC is an open protocol and the gRPC Core codebase is open 
source. The Core API is designed specifically for use with foreign function 
interfaces like R's. This is how we implemented Python, Ruby, PHP, etc. My 
gut says that getting a basic client working is about the size of a weekend 
project. We'd be happy to give you (or anyone else) the guidance you'd need 
to get that off the ground.

Thanks,
Richard Belleville
gRPC Team

On Tuesday, November 15, 2022 at 4:50:03 AM UTC-8 Jim Sheldon wrote:

> Hello!
>
> Are there plans to add gRPC to the R language?
> It would help coordinate work between software and science.
>
> Thanks,
> Jim
>
>
>



[grpc-io] Protobuf security bulletin

2022-09-27 Thread 'Richard Belleville' via grpc.io
The protocol buffers project has published a security bulletin affecting 
protobuf for C++ and Python: 
https://github.com/protocolbuffers/protobuf/security/advisories/GHSA-8gq9-2x98-w8hf

Please consider upgrading your protobuf version ASAP. The following gRPC 
versions have been tested with the patched protobuf versions and will work 
properly with them:

   - grpc (C++): 1.49.1, 1.48.2, 1.47.2, 1.46.5
   - grpcio and grpcio-tools (Python): 1.49.1, 1.48.2, 1.47.2, 1.46.5
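For example, for Python the upgrade is a pip pin to whichever patched release line matches your current minor version (1.49.1 shown here as an illustration):

```shell
pip install --upgrade 'grpcio==1.49.1' 'grpcio-tools==1.49.1'
```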



Re: [grpc-io] C++ - multi-vendor gRPC -C++ "dial-out" collector

2022-06-15 Thread 'Richard Belleville' via grpc.io
> Nowadays, all big players (Cisco, Huawei, Juniper, Nokia ...) support 
"yang" to model the data & gRPC with multiple encoding (JSON, GPB-KV, GPB) 
to share it across the network.

Interesting. I used to work in telecom until 2018. Back then, RESTConf 
still seemed to be the dominant format. The telecom industry's penchant for 
acronyms doesn't disappoint. I had to look up what GPB stands for even 
though I work on the gRPC team.

> This is why I recently started the development of a gRPC-C++ "dial-out" 
collector & I was asking myself if someone else is already working on 
something similar or might be interested in joining the project.

I'm not aware of any existing system specifically for OSS gRPC. I imagine 
you'd write a client interceptor that records each outgoing connection in a 
per-process datastore. Then, a separate thread would periodically send a 
batch of updates to an aggregation server. Depending on the scale of your 
system, the design of the aggregation server could get tricky.

On Wednesday, June 8, 2022 at 6:08:35 AM UTC-7 salvatore...@gmail.com wrote:

> Hi Community, 
>
> I was looking for an efficient way to collect metrics from a relatively 
> big (>1000) & multi-vendor network.
>
> Nowadays, all big players (Cisco, Huawei, Juniper, Nokia ...) support 
> "yang" to model the data & gRPC with multiple encoding (JSON, GPB-KV, GPB) 
> to share it across the network.
>
> This is why I recently started the development of a gRPC-C++ "dial-out" 
> collector & I was asking myself if someone else is already working on 
> something similar or might be interested in joining the project.
>
> The development is done with C++ using the gRPC's async API plus 
> multi-threading to maximize scalability.
>
>
> Regards,
> Salvatore.
>



[grpc-io] Re: Wrong instructions on building and installing gRPC?

2022-06-15 Thread 'Richard Belleville' via grpc.io
I just did a run-through of the instructions (on the master branch) and 
they seem to be working properly. Would you mind including a log of the 
exact commands you ran?
On Friday, June 10, 2022 at 7:21:32 AM UTC-7 Lennard wrote:

> Hi,
> I have just tried to install grpc on my local machine (Ubuntu 20.04) by 
> following these steps: 
> https://grpc.io/docs/languages/cpp/quickstart/#build-and-install-grpc-and-protocol-buffers
>
> However, when I try to run "make install", I get the error message 
> "Installing via 'make' is no longer supported. Use cmake or bazel instead."
>
> Shouldn't the aforementioned guide be updated accordingly?
> (Note: I am not asking how to fix this error. I just think an installation 
> guide on the official website should just work under normal circumstances. 
> Or did I do something wrong somewhere?)
> Best regards,
> Lennard
>



[grpc-io] Re: What is the last version of grpcio, which is compatible with Python 2.7.8?

2022-05-25 Thread 'Richard Belleville' via grpc.io
Recent answer to the same question. 


On Monday, April 18, 2022 at 3:24:48 AM UTC-7 xy l wrote:

> Hello, Guys,
> I have an embedded system with Python 2.7.8. So I need to do some 
> porting work.
> Thanks a lot
>



[grpc-io] Re: When will ProtoReflectionDescriptorDatabase go into the official release

2022-04-06 Thread 'Richard Belleville' via grpc.io
This should be in 1.46.0, which is scheduled to be released 4/19/2022. In 
the meantime, you can access this functionality in our nightly builds at 
packages.grpc.io.

On Monday, April 4, 2022 at 10:01:08 PM UTC-7 Hejin Liu wrote:

> Hi group,
>
> I'm a Python developer who's trying to add the reflection feature to my 
> project. I noticed that the ProtoReflectionDescriptorDatabase feature went 
> in last week, which is really promising to me, until I found it was not in 
> the latest official release yet. May I ask what's your plan for the next 
> Python package release? Will this feature be there, and when will you 
> release the next version?
>
> Best 
> Hejin
>



[grpc-io] Re: grpc with python 2

2022-03-30 Thread 'Richard Belleville' via grpc.io
Bhaskar,

As Vinod pointed out, yes, Python 2 support was dropped in 2020. However, 
all previous releases supporting Python 2 should theoretically continue 
working. Try pinning to grpcio==1.39.0.

Realistically speaking though, you should really consider moving off of 
Python 2. No bug or security fixes are being backported to this version, so 
you put yourself at more and more risk over time by sticking with Python 2.

On Wednesday, March 30, 2022 at 8:39:21 AM UTC-7 Vinod Lasrado wrote:

> Hi Bhaskar,
>Support for python 2 was dropped in 2020. 
>
> --Vinod
>
> On Wednesday, March 30, 2022 at 7:20:35 AM UTC+5:30 bhaskar rana wrote:
>
>> Hi Grpc team,
>>
>> I want to run Python 2 with gRPC; I did see it requires 3.5+.
>> Is there any way to run gRPC with Python 2?
>> Kindly assist.
>>
>> Regards
>> Bhaskar Rana
>>
>



[grpc-io] PyPi Prerelease Deletion Policy

2022-03-23 Thread 'Richard Belleville' via grpc.io
As a C extension supporting many platforms, each release of the grpcio PyPi 
project currently takes up over 1 GB. 
This has previously led to the project hitting its storage ceiling. To 
combat this issue, going forward, *grpcio will begin deleting its PyPi 
prereleases older than 1 year*.

Please reach out within the next week if you have any concerns with this 
new policy.

Thanks,
Richard Belleville
gRPC Team



[grpc-io] Community Feedback Needed: gRPC Routing in Production

2022-02-24 Thread 'Richard Belleville' via grpc.io
Hello,

I'm working on a set of gRPC routing APIs and would like to get some 
community feedback on usage patterns. Please answer any/all applicable 
questions:

1. Do you route external gRPC traffic through an L7 load balancer/reverse 
proxy? (for example, AWS application load balancer, GCP HTTP Load Balancer, 
nginx)

2. If yes to #1, do you also route external non-gRPC HTTP traffic through 
this load balancer/reverse proxy into your system?

3. If yes to #2, are there any hostname/port combinations on which both 
gRPC and non-gRPC HTTP paths exist? For example, foo.com:80/v1/restapi 
serves REST traffic while foo.com:80/fooorg.widget.WidgetService/ serves 
gRPC

4. Where do your workloads run? AWS? Azure? GCP? On-prem? Some combination 
of these?

Thank you!
Richard Belleville
gRPC Team



Re: [grpc-io] Slow gRPC communication with large file in Python

2022-02-08 Thread 'Richard Belleville' via grpc.io
Josh,

I don't think I'm able to reproduce with your repo. I'm getting something
like 0.2s on my desktop:

(venv) rbellevi@rbell:~/Dev/tmp/grpc_min$ python3 grpc_client.py
0.28313207626342773s  18506294 photons in 1008640 bins
0.14323067665100098s  18506294 photons in 1008640 bins
(venv) rbellevi@rbell:~/Dev/tmp/grpc_min$ python3 grpc_client.py
0.23985695838928223s  18506294 photons in 1008640 bins
0.13980460166931152s  18506294 photons in 1008640 bins

Also, your requirements.txt includes "grpc=1.0.0". I'm assuming this is
just a typo. I used "grpcio".

Maybe try running cProfile to generate a profile of the repro on your
machine and sharing that here?

On Tue, Feb 8, 2022 at 10:34 AM Josh Parks  wrote:

> I'm trying to do a large array transfer (10-50MB) over gRPC in python and
> it's quite slow (5-10 seconds, both client and server on localhost). I've
> tried both streaming and unary requests, and they both seem to run slowly.
>
> For more details/conversation, here's the stackoveflow question:
> https://stackoverflow.com/questions/70993553/grpc-slow-serialization-on-large-dataset
>
> And for the minimum reproducible example:
> https://github.com/parksj10/grpc_min
>
> Any help/guidance much appreciated!!!
>
>
>



[grpc-io] Re: How do I build WHL packages after changing the code

2022-01-12 Thread 'Richard Belleville' via grpc.io
The grpcio package doesn't actually depend on protobuf. Only generated code 
(i.e. _pb2.py and _pb2_grpc.py) has this dependency. Instead, you'll want 
to build the protobuf wheel from the protobuf repository and use that 
alongside an off-the-shelf grpcio wheel.

Also, to answer the obvious follow-up question ("Why do we have a vendored 
protobuf directory in our repo at all, then?"): the grpcio-tools wheel *does* 
build in the libprotobuf native code to generate the aforementioned _pb2.py 
and _pb2_grpc.py files.

On Thursday, January 6, 2022 at 10:33:54 PM UTC-8 hr d wrote:

> I added some log codes in grpc/third_party/protobuf/src/google/protobuf 
> code, to help locate the problem, i install grpc python from source, but 
> The change did not take effect. Please give me some advice, thanks.
>
>
> *Supported Python Versions*
>
> Python 3.7.9
>
> *What operating system (Linux, Windows,...) and version?*
>
> Ubuntu 18.04.5 LTS
>
> *What did you do?*
>
> I added some log codes in grpc/third_party/protobuf/src/google/protobuf 
> code, then i compiled protobuf , compiled grpc and install grpc python 
> from source
>
>
>
>
>
>
> mkdir -p "third_party/protobuf/cmake/build"
> cd "third_party/protobuf/cmake/build"
> cmake -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release ..
> make -j14 install
> popd
> make -j14
>
> i  modified setup.py, add this code:
> *CORE_C_FILES = filter(lambda x: 'third_party/protobuf' not in x, 
> CORE_C_FILES)*
> *PROTOBUF_INCLUDE = (os.path.join('/usr', 'include', 'protobuf'),)*
> EXTENSION_INCLUDE_DIRECTORIES = ((PYTHON_STEM,) + CORE_INCLUDE + 
> ABSL_INCLUDE +
>  ADDRESS_SORTING_INCLUDE + CARES_INCLUDE +
>  RE2_INCLUDE + PROTOBUF_INCLUDE + 
> SSL_INCLUDE + UPB_INCLUDE +
>  UPB_GRPC_GENERATED_INCLUDE +
>  UPBDEFS_GRPC_GENERATED_INCLUDE +
>  XXHASH_INCLUDE + ZLIB_INCLUDE)
>
> then i run:
>
>
> pip install -r requirements.txt
> GRPC_PYTHON_BUILD_WITH_CYTHON=1 pip install .
>
> *What did you expect to see?*
>
> Changes for Protobuf take effect
>
> *What did you see instead?*
>
> Changes for Protobuf did not take effect  
>
>
>
>



[grpc-io] Re: How Does grpcio - Python handle interceptor exceptions?

2021-12-01 Thread 'Richard Belleville' via grpc.io
I think the wrapping you're wondering about lives in grpc/_interceptor.py. 
There's one such code block for each of the four arities.

I hope that answers your question. Let me know if you want any more info.

On Thursday, November 18, 2021 at 3:17:29 AM UTC-8 shmis...@gmail.com wrote:

> If an exception happens in one of the trailing interceptors,
> gRPC somehow replaces the response with a _FailureOutcome
> and makes it look like the interceptor had no problem.
>
> So I thought there should be somewhere that wraps the
> user-overridden method with try / except,
> and, when an Exception is raised, replaces the response with a
> _FailureOutcome.
>
> *pseudocode*
>
> ```python
> # somewhere deep in grpcio ...
>
> def run_user_defined_interceptor_method(continuation, client_call_details, request):
>     try:
>         response = intercept_unary_unary(continuation, client_call_details, request)
>     except Exception as e:
>         response = _FailureOutcome(exception=e)
>     return response
> ```
> so even if I run into an exception in an interceptor, the interceptor
> handles it well; eventually the client code, if it doesn't handle the
> error, bumps into the exception.
>
> I'm quite sure this is how it works, because the code after the point
> where the exception happened doesn't run.
>
> I can't find where *those wrappings* happen, like in the pseudocode.
>
> The class that implements the abstract method "intercept_unary_unary"
> contains no more than abstract code,
>
> so, as I said, there should be something more under grpcio.
>
> ...
>
> This design was impressive because
> it looks like JavaScript's then() / catch() async pattern, but is not
> actually async (in the way it propagates errors), which makes it easy
> to trace errors.
>
> Most of all, on the client side it looks like a real network problem
> even when the problem came from an interceptor,
>
> and at the enterprise level it looks quite safe from unexpected errors.
>
> I have no experience with using gRPC at the enterprise level; I just
> got interested in gRPC. I'm a senior freshman now.
>



[grpc-io] Re: helloworld example build mode problem,default is debug,how to change it to release?

2021-10-27 Thread 'Richard Belleville' via grpc.io
This is really more of a cmake question, but a release build can be created 
with cmake -DCMAKE_BUILD_TYPE=Release

On Wednesday, October 20, 2021 at 8:38:08 PM UTC-7 tang102...@gmail.com 
wrote:

> I have built gRPC on Ubuntu 16.04. When I build the helloworld example, I 
> find that gdb can run it, so I realise it runs in debug mode now. How can I 
> use release mode? I used CMake's release setting but it did not work.



[grpc-io] Re: ImportError: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.33' not found

2021-10-27 Thread 'Richard Belleville' via grpc.io

How did you install grpcio? If you used pip install grpcio, can you please 
include the installation logs?

Did this work on previous versions?
On Monday, October 25, 2021 at 8:33:16 PM UTC-7 Antonio Orozco wrote:

> Traceback (most recent call last):
>   File "fpl_notifications.py", line 4, in <module>
> *from google.cloud import pubsub_v1*
>   File "/home/antonio/.local/lib/python3.7/site-packages/google/cloud/pubsub_v1/__init__.py", line 17, in <module>
> from google.cloud.pubsub_v1 import types
>   File "/home/antonio/.local/lib/python3.7/site-packages/google/cloud/pubsub_v1/types.py", line 25, in <module>
> from google.api_core import gapic_v1
>   File "/home/antonio/.local/lib/python3.7/site-packages/google/api_core/gapic_v1/__init__.py", line 16, in <module>
> from google.api_core.gapic_v1 import config
>   File "/home/antonio/.local/lib/python3.7/site-packages/google/api_core/gapic_v1/config.py", line 23, in <module>
> import grpc
>   File "/home/antonio/.local/lib/python3.7/site-packages/grpc/__init__.py", line 22, in <module>
> from grpc import _compression
>   File "/home/antonio/.local/lib/python3.7/site-packages/grpc/_compression.py", line 15, in <module>
> from grpc._cython import cygrpc
> *ImportError: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.33' not 
> found* (required by 
> /home/antonio/.local/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-arm-linux-gnueabihf.so)
>
> On Monday, October 25, 2021 at 8:22:28 PM UTC-7 Antonio Orozco wrote:
>
>> Hello,
>>
>> Running into the following issue on a Raspberry Pi 4
>>
>> $ lsb_release -a
>> No LSB modules are available.
>> Distributor ID: Raspbian
>> Description:Raspbian GNU/Linux 10 (buster)
>> Release:10
>> Codename:   buster
>> $ uname -a
>> Linux liverpool 5.10.63-v7l+ #1459 SMP Wed Oct 6 16:41:57 BST 2021 armv7l 
>> GNU/Linux
>>
>>
>> Python 3.7.3 (default, Jan 22 2021, 20:04:44)
>> [GCC 8.3.0] on linux
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> from grpc._cython import cygrpc
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>>   File "/home/antonio/.local/lib/python3.7/site-packages/grpc/__init__.py", line 22, in <module>
>> from grpc import _compression
>>   File "/home/antonio/.local/lib/python3.7/site-packages/grpc/_compression.py", line 15, in <module>
>> from grpc._cython import cygrpc
>> ImportError: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.33' not 
>> found (required by 
>> /home/antonio/.local/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-arm-linux-gnueabihf.so)
>>
>



[grpc-io] Re: One client many servers

2021-10-27 Thread 'Richard Belleville' via grpc.io
> In case you decide for a streaming interface:
> As far as I know, for a streaming interface, you define one message type,
> which you can then send multiple times. But it is not possible to send
> different kinds of messages to the peer over one streaming interface.
> If you want to send only one type of message, then the streaming
> interface is good enough.

This is not true. Suppose you want to send either a FooMsg or a BarMsg as a 
request message; then just do this:

message RequestMsg {
  oneof msg {
    FooMsg foo = 1;
    BarMsg bar = 2;
  }
}

On Tuesday, October 26, 2021 at 11:57:01 PM UTC-7 Eberhard Ludwig wrote:

> In case you decide for streaming interface.
> As far as I know, for a streaming interface, you define one message type, 
> which you can then send multiple times. But it is not possible to send 
> different kind of messages to the peer over one streaming interface.
> If you want to send only one type of message, then the streaming interface 
> is good enough.
>
> If you plan to have a real bi-directional message exchange, then it is 
> worth to look for another technology.
> I tried finalmq; it looks quite
> promising.
>
> Cheers
> Eberhard
>
> Fabiano Ferronato schrieb am Montag, 18. Oktober 2021 um 18:09:45 UTC+2:
>
>> I have a problem to solve: one computer (PC) will send requests to many 
>> devices (e.g. RPi). The devices will execute the request and respond.
>>
>> Is it possible to use gRPC ? 
>>
>> From the documentation (Introduction) it shows the opposite: clients 
>> sending requests to a server. So maybe I'm going the wrong way choosing 
>> gRPC.
>>
>> Any help is much appreciated.
>>
>>
>>  
>>
>



[grpc-io] Re: `GLIBC_2.33' not found

2021-10-20 Thread 'Richard Belleville' via grpc.io
If the precompiled binaries do not meet the constraints of your runtime 
environment, you can also build from source using pip install --no-binary. 
Alternatively, you could check out the artifacts hosted at piwheels.

On Wednesday, October 20, 2021 at 4:56:36 AM UTC-7 p.o.seidon wrote:

> I use gRPC in my program, where it reads
>
> import grpc
>
> which calls 
>
> from grpc import _compression
>
> which calls 
>
> from grpc._cython import cygrpc
>
> which causes 
>
> builtins.ImportError: /lib/arm-linux-gnueabihf/libc.so.6: version 
> `GLIBC_2.33' not found (required by 
> /usr/local/lib/python3.7/dist-packages/grpc/_cython/
> cygrpc.cpython-37m-arm-linux-gnueabihf.so)
>
> I am on a RasPi 4 / 8 GB, Raspberry OS / Buster, installed a few days ago. 
> I installed gRPC by issuing
>
> sudo pip3 install grpcio -U
> sudo pip3 install grpcio-tools -U
>
> Issuing ldd --version yields
>
> ldd (Debian GLIBC 2.28-10+rpt2+rpi1) 2.28
>
> What am I supposed to do now?
>
> Cheers Paul
>
>
>



[grpc-io] Re: Python: Using response streaming api from a done callback

2021-09-15 Thread 'Richard Belleville' via grpc.io
So this is an interesting problem. It certainly is unintuitive behavior. 
I'm also not sure if we should change it. Let me start by explaining the 
internals of gRPC Python a little bit.

A server-streaming RPC call requires the cooperation of two threads: the 
thread provided by the client application calling __next__ repeatedly 
(thread A) and a thread created by the gRPC library that drives the event 
loop in the C extension, which ultimately uses a mechanism like epoll 
(thread B). Under the hood, __next__ (thread A) just checks to see if 
thread B has received a response from the server and, if so, returns it to 
the client code. Normally, this works out just fine.

But thread B has some other responsibilities, including running any RPC 
callbacks. This means that in the scenario you described above, thread A 
and thread B are actually the same thread. So when __next__ is called, 
there is no separate thread to drive the event loop and receive the 
responses.

So that's the cause for the deadlock you described. Now, you might say that 
this is an easy problem to solve. Why not just run the callbacks on a *new* 
thread? 
Then there is no deadlock in this scenario. True. But we've found that 
additional Python threads kill performance because they're all contending 
for the GIL. Doing this at the library level could slow down *many* existing 
workloads. We've actually put quite a bit of effort into *reducing* the 
number of threads we use in the 
library. There are some options we could consider to make this work out of 
the box without destroying performance, but it's going to take some thought 
and careful benchmarking.

For the moment, I'd recommend that you not initiate an RPC from the 
callback handler and instead use the callback handler just to notify 
another thread that your application has ownership of, whether that's the 
same thread as the unary RPC was initiated from or some other thread that 
you've created yourself.
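A stdlib-only sketch of that hand-off. queue.Queue stands in for whatever notification mechanism you choose, and the commented line marks where the streaming RPC would actually be driven; the string put on the queue below simulates the gRPC future passed to the done callback:

```python
import queue
import threading

ready = queue.Queue()


def on_done(future):
    # Runs on gRPC's internal event-loop thread: do NOT call next() on a
    # streaming response here. Just hand the finished future off.
    ready.put(future)


def consume(results):
    # Runs on an application-owned thread, so driving a streaming RPC
    # here cannot deadlock the event loop.
    future = ready.get()
    # for response in stub.singleStreamApi(request): ...  (real code)
    results.append(future)


results = []
worker = threading.Thread(target=consume, args=(results,))
worker.start()
on_done("finished-future")  # simulates fut.add_done_callback(on_done) firing
worker.join()
```

In real code you would register the first function with `fut.add_done_callback(on_done)` and start the streaming call only on the worker thread.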

On Wednesday, September 15, 2021 at 1:09:22 AM UTC-7 Reino Ruusu wrote:

> A further clarification: The thread is not waiting for the future but 
> returns to the event loop. The callback function is definitely executed and 
> the deadlock happens in the call to next(). Also, the same callback 
> function is successful in synchronously making other single-single api 
> calls, but the single-streming call is deadlocked.
>
> keskiviikko 15. syyskuuta 2021 klo 10.49.27 UTC+3 Reino Ruusu kirjoitti:
>
>> Of course I meant to write add_done_callback() instead of 
>> set_done_callback().
>>
>> To clarify, the code looks like this:
>>
>> it = stub.singleStreamApi(...)
>> next(it) # <-- This works as expected
>>
>> fut = stub.singleSingleApi.future(...)
>> def callback(fut):
>> it = stub.singleStreamApi(...)
>> next(it) # <-- This gets stuck in a deadlock
>> fut.add_done_callback(callback)
>>
>> keskiviikko 15. syyskuuta 2021 klo 10.40.46 UTC+3 Reino Ruusu kirjoitti:
>>
>>> I have a case in which a call is made to a 
>>> single-request-streaming-response api (through 
>>> UnaryStreamMultiCallable.__call__()). This api is invoked from a callback 
>>> that is registered using set_done_callback() to a future object returned by 
>>> a call to UnaryUnaryMultiCallable.future(), so that the streaming is 
>>> started asynchronously as soon as the previous call is finished.
>>>
>>> This causes the iterator that is returned for the streaming response to 
>>> deadlock in the first next() call, irrespective of whether the stream is 
>>> producing messages or an exception.
>>>
>>> The streaming call works as expected when called from some other context 
>>> than the done-callback of the previous asynchronous call. This makes me 
>>> suspect that some resource related to the channel is locked during the 
>>> callback execution, resulting in a deadlock in the call to the stream's 
>>> iterator.
>>>
>>> Is there some way around this?
>>>
>>> BR,
>>> -- 
>>> Reino Ruusu
>>>
>>>



[grpc-io] Re: GRPC support in Micropython?

2021-09-15 Thread 'Richard Belleville' via grpc.io
I don't see a way that this would work without significant effort. For 
starters, gRPC expects to be run on top of Linux, Windows, or MacOS. There 
are some forks that make the stack work on BSD, but that's not much 
different from Linux. Based on some quick investigation, Micropython is not 
only the interpreter, but 
also the operating system. You'd have to rewrite the lower layers of the 
Python gRPC stack to hook into the Micropython networking stack.

The second difficulty is that gRPC Python is implemented as a C extension. 
That is, the majority of the codebase is actually in C++, not Python. We 
offer a from-source distribution, but you'd likely have to play with things 
to get a cross-compilation environment set up properly.

There *is* an unofficial gRPC Python stack implemented in pure Python. I 
haven't used it personally, so I can't vouch for it, but it's possible that 
it would be easier to get it to run on Micropython.

On Friday, September 10, 2021 at 9:25:08 AM UTC-7 Ofir wrote:

> Hi,
> Is there a known way to use GRPC in micropython?
>
>



[grpc-io] Re: grpc.channel_ready_future throw inactive rpc error

2021-06-16 Thread 'Richard Belleville' via grpc.io
It seems that the client is failing to connect to the server:

"failed to connect to all addresses",

Are you sure that the server is listening on that port? Try running netstat 
on the server to verify. Are there firewall rules getting in the way? Try 
using nmap from the client to verify.
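Independent of gRPC, you can also check reachability from the client with a plain TCP probe (stdlib only; the host and port below come from the snippet in the question):

```python
import socket


def port_open(host, port, timeout=2.0):
    # Returns True if a TCP connection to host:port succeeds within timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# port_open("192.168.0.176", 1234) returning False means nothing is
# listening there (or a firewall is dropping the connection), which
# matches the StatusCode.UNAVAILABLE "failed to connect to all addresses".
```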

On Friday, June 4, 2021 at 9:05:47 PM UTC-7 sheikh...@gmail.com wrote:

> Hi Team,
>
> the below code throws exception
>
>
> try:
>     with grpc.insecure_channel('192.168.0.176:1234') as channel:
>         grpc.channel_ready_future(channel).result(timeout=10)
> except grpc.FutureTimeoutError:
>     sys.exit('Error connecting to server')
>
>
>
>
>
> = EXCEPTION =
> Traceback (most recent call last):
>   File "/Users/sheikhjebran/Desktop/new/mySocket.py", line 53, in <module>
> advertising()
>   File "/Users/sheikhjebran/Desktop/new/mySocket.py", line 48, in advertising
> response = stub.RequestAdvertisement(Nothing)
>   File 
> "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/grpc/_channel.py",
>  
> line 946, in __call__
> return _end_unary_response_blocking(state, call, False, None)
>   File 
> "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/grpc/_channel.py",
>  
> line 849, in _end_unary_response_blocking
> raise _InactiveRpcError(state)
> grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated 
> with:
> status = StatusCode.UNAVAILABLE
> details = "failed to connect to all addresses"
> debug_error_string = 
> "{"created":"@1622865794.382882000","description":"Failed to pick 
> subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3009,"referenced_errors":[{"created":"@1622865794.382874000","description":"failed
>  
> to connect to all 
> addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":398,"grpc_status":14}]}"
> >
>
>
>



[grpc-io] Re: __call__() require one positional argument 'request'

2021-06-16 Thread 'Richard Belleville' via grpc.io

There's not a lot of information to go on here, but your error message is 
pretty clear in one respect:

TypeError: __call__() missing 1 required positional argument: 'request'

You need to supply a request when you invoke your stub (which is presumably 
what you're doing in this snippet).
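Even an RPC with "no arguments" takes a request message; if the proto message has no fields, you still pass an empty instance, e.g. `stub.Reset(mysocket_pb2.Empty())` (hypothetical generated names). The TypeError itself is just ordinary Python behavior for a required positional argument, as this grpc-free sketch shows:

```python
def Reset(request, timeout=None):
    # Same shape as a generated stub method: 'request' is positional
    # and required, even when the message carries no data.
    return "ok"


# Calling with no request reproduces the error from the traceback:
try:
    Reset()
except TypeError as exc:
    message = str(exc)

# Passing a (possibly empty) request object fixes it; in real code this
# would be stub.Reset(mysocket_pb2.Empty()) with your generated message.
status = Reset(object())
```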
On Friday, June 4, 2021 at 8:51:23 PM UTC-7 sheikh...@gmail.com wrote:

> Hi Team,
>
> I have created a proto file for a method which does not take any argument 
> nor return anything,
>
> but when I run my code it throws the below exception:
>
>
> Connected to pydev debugger (build 211.7442.45)
> 1041
> 1041
> Traceback (most recent call last):
>   File "/Applications/PyCharm 
> CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1483, in 
> _exec
> pydev_imports.execfile(file, globals, locals)  # execute the script
>   File "/Applications/PyCharm 
> CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py",
>  
> line 18, in execfile
> exec(compile(contents+"\n", file, 'exec'), glob, loc)
>   File "/Users/sheikhjebran/Desktop/new/mySocket.py", line 52, in <module>
> sample()
>   File "/Users/sheikhjebran/Desktop/new/mySocket.py", line 35, in sample
> stub.Reset()
> TypeError: __call__() missing 1 required positional argument: 'request'
> python-BaseException
>
> Process finished with exit code 1
>
>



Re: [grpc-io] GRPC Server implementation: How to Read data from a file using Python?

2021-05-14 Thread 'Richard Belleville' via grpc.io
I see. Both of these approaches are equally valid. The latter approach
would be a unary RPC and require the server to buffer the entire
spreadsheet in memory, meaning that it wouldn't be able to scale to very
large datasets. However, this would also be the simpler approach in terms
of implementation -- everything can be written synchronously.

The former approach is more robust if you need to scale to large datasets.
This would be a client-unary, server-streaming RPC and the server wouldn't
have to buffer the whole dataset in memory.

A couple of pitfalls specific to Python. The sync stack works with a fixed
sized thread pool, which limits the number of clients that can connect to
your backend at once. In the case of a unary RPC, this is less of an issue.
If you go with the streaming approach, however, misbehaving clients could
eat up a thread indefinitely until you have none left to service requests.
If this is a concern for you, you could use the asyncio stack instead. A
last caveat for Python is that it will be slower than other languages you
could choose, including Go, Java, and C++, all of which are well supported
gRPC implementations which will be more performant.
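To make the streaming approach concrete, here is a minimal sketch of the server-side reading loop, using the stdlib csv module as a stand-in for the spreadsheet. In a real servicer each row would be yielded as a response message (e.g. `yield data_pb2.Row(id=row[0], value=row[1])` — `data_pb2.Row` is an illustrative name, not from this thread):

```python
# Sketch of the server-streaming approach: read the file one row at a
# time, so the server never buffers the whole dataset in memory.
import csv

def stream_rows(csv_path):
    """Yield spreadsheet rows one at a time without loading the whole file."""
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            # In a servicer method you would build and yield a response
            # message from `row` here instead of yielding the raw list.
            yield row
```

Because the function is a generator, a gRPC Python servicer method with the same shape is returned directly to the framework, which pulls rows on demand as the client consumes the stream.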

Thanks,
Richard Belleville



On Fri, May 14, 2021 at 2:14 PM Raja Omer  wrote:

> My confusion is regarding my approach, specifically that how to serve a
> client call for excel data using a  GRPC server? Do I have to store the
> data in some database and then read it into a message and send back on a
> row by row basis, or can I just read from the Excel file directly in the gRPC
> server Python file and send the entire Excel data as one message response?
> Would any of these approaches be correct and will the server python file be
> the right place for this implementation?
> On Saturday, May 15, 2021 at 2:02:06 AM UTC+5 rbel...@google.com wrote:
>
>> > Do I have to read the excel in this step using python or do I have to
>> make a database connection and pull the excel data from a database?
>>
>> This question doesn't actually seem to have anything to do with gRPC. Is
>> there something I'm missing?
>>
>> On Fri, May 14, 2021 at 1:35 PM Raja Omer  wrote:
>>
>>> I am new to gRPC, and want to know how to write a gRPC server which can
>>> read from a file and send the data to a gRPC client in Python?
>>>
>>> Scenario implementation: I have an excel file with two columns, ID and
>>> value. My understanding for implementation of this scenario is :
>>> 1- I will have to define a message in a protocol buffer file having
>>> these two attributes.
>>> 2- Inside my protocol buffer file, I will define a service which will
>>> send the data to the client.
>>> 3- Using GRPC tools I will then create _pb2.py and _pb2_grpc.py files.
>>> 4- Now I will write the GRPC server file in python. Inside this file I
>>> will override or define the service I have written in the protocol buffer
>>> file. My confusion is regarding this step. Do I have to read the excel in
>>> this step using python or do I have to make a database connection and pull
>>> the excel data from a database?
>>>
>>> Many thanks!
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "grpc.io" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to grpc-io+u...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/grpc-io/a59b9544-13c0-42fd-afd8-284be0ea34cfn%40googlegroups.com
>>> 
>>> .
>>>
>> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/23a8d925-1420-44a4-9b9c-8c35d29c86a7n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAOew_sHZ0mDCFsVtH5EuDpMgBg8%3DU9d0pKdhRwfQCB1H-A%2BzPg%40mail.gmail.com.


Re: [grpc-io] GRPC Server implementation: How to Read data from a file using Python?

2021-05-14 Thread 'Richard Belleville' via grpc.io
> Do I have to read the excel in this step using python or do I have to
make a database connection and pull the excel data from a database?

This question doesn't actually seem to have anything to do with gRPC. Is
there something I'm missing?

On Fri, May 14, 2021 at 1:35 PM Raja Omer  wrote:

> I am new to gRPC, and want to know how to write a gRPC server which can read
> from a file and send the data to a gRPC client in Python?
>
> Scenario implementation: I have an excel file with two columns, ID and
> value. My understanding for implementation of this scenario is :
> 1- I will have to define a message in a protocol buffer file having these
> two attributes.
> 2- Inside my protocol buffer file, I will define a service which will send
> the data to the client.
> 3- Using GRPC tools I will then create _pb2.py and _pb2_grpc.py files.
> 4- Now I will write the GRPC server file in python. Inside this file I
> will override or define the service I have written in the protocol buffer
> file. My confusion is regarding this step. Do I have to read the excel in
> this step using python or do I have to make a database connection and pull
> the excel data from a database?
>
> Many thanks!
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/a59b9544-13c0-42fd-afd8-284be0ea34cfn%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAOew_sH%2BNmLTU2co6Gc7-caZEvh7u1ab0oYzHzTYv3b7YvkuEg%40mail.gmail.com.


[grpc-io] gRPC-Core Release 1.37.0

2021-04-20 Thread 'Richard Belleville' via grpc.io
This is the 1.37.0 (gilded) release announcement for gRPC-Core and the 
wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. The latest 
release notes are here.

*Core*

   - Bump up minimum supported clang to 4.0. (#25443)
   - Use URI form of address for channelz listen node. (#25785)
   - Implement CSDS (xDS Config Dump). (#25038)
   - Don't assume that c-ares won't retry failed writes in 
   grpc_core::GrpcPolledFdWindows::SendVUDP. (#25726)
   - Fix an infinite read loop with SRV record resolution on Windows. (#25672)
   - xDS status notifier. (#25321)
   - Remove CAS loops in global subchannel pool and simplify subchannel 
   refcounting. (#25485)
   - Add missing security field to channelz Socket. (#25593)
   - Disable check_call_host when server_verification_option is not 
   GRPC_TLS_SERVER_VERIFICATION. (#25577)

*C++*

   - Remove fault injection environment variable guard. (#25792)
   - Implement C++ Admin Interface API. (#25753)
   - cmake: Reflect minor version change in SONAME for C++ and C#. (#25617)
   - xDS Client-Side Fault Injection. (#24354)

*C#*

   - Add buildTransitive directory to NuGet package. (#25385)
   - Reduce Grpc.Core NuGet size by generating separate .so with/without 
   debug symbols for grpc_csharp_ext. (#25729)
   - Make Grpc C# work on aarch64 Linux. (#25717)
   - Add support for additional protoc arguments in Grpc.Tools. (#25374)
   - Use explicit native extension loading whenever possible. (#25490)

*Python*

   - Use BoringSSL asm optimizations in aarch64 wheel source build. (#25453)
   - Clarify guarantees about the grpc.Future interface. (#25383)
   - Use cross-compilation to build Python armv7 wheels. (#25704)
   - [Aio] Add time_remaining method to ServicerContext. (#25719)
   - Standardize all environment variable boolean configuration in Python's 
   setup.py. (#25444)
   - Cross-compile Python aarch64 wheels with dockcross. (#25418)
   - Fix signal safety issue. (#25394)

*Ruby*

   - Cherry-pick PR #25429 "Add ruby 3.0 support for mac binary packages" 
   to 1.37.x. (#25869)
   - Include GRPC::GenericService from root namespace. (#25153)
   - Support for PSM security. (#25330)


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/f7f2eee2-8ccb-4bfe-ae00-cd96d71310ddn%40googlegroups.com.


[grpc-io] Re: Binding network interface to gRPC channel [python]

2021-01-27 Thread 'Richard Belleville' via grpc.io
I'm assuming you're asking about a client channel rather than a server 
channel, since for a server channel it's as simple as binding to an IP 
owned by the network interface you want.

On the client side, this is determined by your routing table. Check your 
route table with `ip route show` and ensure that the kernel will route 
traffic to your desired server over the desired network interface.
On Thursday, January 21, 2021 at 10:19:24 AM UTC-8 Max wrote:

> Is there a way to bind a gRPC channel to a network interface (e.g. eth0)?
> The moment i configure a macsec interface it automatically tries to 
> connect via that interface, which doesn't work.
> Netns might work, however, it breaks my MySQL connection on localhost.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d6b2ede3-5964-4564-8e11-85ca9d2bb12bn%40googlegroups.com.


[grpc-io] Re: Cannot generate Python modules from containerd's API .proto files

2020-12-09 Thread 'Richard Belleville' via grpc.io

Can you please include the runtime errors you're encountering?
On Wednesday, December 9, 2020 at 11:39:45 AM UTC-8 harald@gmx.net 
wrote:

> The containerd project  doesn't 
> publish a Python package for its containerd gRPC-based API, so I want to 
> generate Python modules for it myself. For API version 1.3 of containerd, 
> the .proto files can be found in 
> https://github.com/containerd/containerd/tree/release/1.3/api. 
> Unfortunately, I'm hitting two road blocks when trying to generate Python 
> modules from the API .proto files and when to run them.
>
> For example, the some of the containerd API .proto files reference 
> protobuf plugin .proto files using an (for lack of a better term on my 
> side) absolute import path: api/events/container.proto 
> 
>  does 
> an:
>
> import weak "
> github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
>
> However, the Python grpc compiler always wants to resolve such references 
> in the local file system and containerd's sources have 
> ./protobuf/plugin/fieldpath.proto 
> 
>  -- 
> so this won't ever resolve correctly (using -I ...), because it lacks the 
> github.com/containerd/containerd path elements. Please note that the 
> Python grpc protocol compiler resolves such import paths in form of 
> chopping down the full import path into parts and then searching its -I 
> ... include directories.
>
> Trying the lazy route by simply copying these sources to 
> vendor/github.com/... inside the containerd source tree will later 
> cause *runtime errors *when trying to use the generated Python modules: 
> this is because the grpc compiler considers the same containerd API .proto 
> file in two locations (paths) to be separate instances. In consequence we 
> get duplicate modules which unfortunately now try to register with grpc for 
> the same protocol element names. Consequently, the gRPC Python runtime thus 
> throws an error and terminates.
>
> How can I correctly get this resolved when using python3 -m 
> grpc_tools.protoc ...? What am I missing here or getting wrong?
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ce257795-7a29-4bdb-8bc9-57538c00eb33n%40googlegroups.com.


[grpc-io] Re: python-speech client library error originating from sync_posix.cc

2020-10-06 Thread 'Richard Belleville' via grpc.io
I think that's unlikely. We run the Core tests with thousands of instances 
in parallel regularly. Were you able to gather any more information on this 
failure?

On Thursday, October 1, 2020 at 11:37:54 AM UTC-7 ersc...@google.com wrote:

> Could this possibly be the result of multiple test runs occurring 
> concurrently?
>
>
> On Thursday, October 1, 2020 at 11:10:15 AM UTC-7, Richard Belleville 
> wrote:
>>
>> This looks like a system-level failure of pthread_mutex_lock. 
>> Unfortunately, the actual return value doesn't appear to be printed here 
>> (it would be helpful if we could get that). So the cause could be any of 
>> the following:
>>
>> *EINVAL *The *mutex* was created with the protocol attribute having the 
>> value PTHREAD_PRIO_PROTECT and the calling thread's priority is higher than 
>> the mutex's current priority ceiling.
>>
>> *EINVAL *The value specified by *mutex* does not refer to an initialized 
>> mutex object.
>>
>> *EAGAIN *The mutex could not be acquired because the maximum number of 
>> recursive locks for *mutex* has been exceeded.
>>
>> *EDEADLK *The current thread already owns the mutex.
>>
>> On Thursday, October 1, 2020 at 10:52:23 AM UTC-7 ersc...@google.com 
>> wrote:
>>
>>>
>>> Hello,
>>>
>>> I need to understand under what conditions the pthread_mutex_lock check 
>>> fails 
>>> 
>>> .
>>>
>>> Context: I'm trying to diagnose a flaky test. Intermittently, one of the 
>>> Google Cloud Speech API tests fails with the following message 
>>> :
>>>
>>> ```
>>>
>>> = test session starts 
>>> ==
>>> platform linux -- Python 3.7.7, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- 
>>> /workspace/speech/cloud-client/.nox/py-3-7/bin/python
>>> cachedir: .pytest_cache
>>> rootdir: /workspace, configfile: pytest.ini
>>> collecting ... collected 22 items
>>>
>>> beta_snippets_test.py::test_transcribe_file_with_enhanced_model PASSED   [  
>>> 4%]
>>> beta_snippets_test.py::test_transcribe_file_with_metadata PASSED [  
>>> 9%]
>>> beta_snippets_test.py::test_transcribe_file_with_auto_punctuation PASSED [ 
>>> 13%]
>>> beta_snippets_test.py::test_transcribe_diarization PASSED[ 
>>> 18%]
>>> beta_snippets_test.py::test_transcribe_multichannel_file E0828 
>>> 10:27:18.708634490   13111 sync_posix.cc:67]   assertion failed: 
>>> pthread_mutex_lock(mu) == 0
>>> Fatal Python error: Aborted
>>>
>>> Thread 0x7f2c36e0e600 (most recent call first):
>>>   File "/usr/local/lib/python3.7/codecs.py", line 322 in decode
>>>   File 
>>> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>>>  line 484 in snap
>>>   File 
>>> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>>>  line 570 in readouterr
>>>   File 
>>> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>>>  line 657 in read_global_capture
>>>   File 
>>> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>>>  line 718 in item_capture
>>> nox > Command pytest --junitxml=sponge_log.xml failed with exit code -6
>>> nox > Session py-3.7 failed.
>>>
>>> ```
>>>
>>>
>>>
>>> Details:
>>>
>>>- Original issue 
>>>
>>>- New location of issue 
>>> (*Note*: the 
>>>tests have recently moved GitHub repos for reasons unrelated to this 
>>> error.)
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/cf5f2207-5b3b-4f45-9b60-8576b8717417n%40googlegroups.com.


[grpc-io] Re: python-speech client library error originating from sync_posix.cc

2020-10-01 Thread 'Richard Belleville' via grpc.io
This looks like a system-level failure of pthread_mutex_lock. 
Unfortunately, the actual return value doesn't appear to be printed here 
(it would be helpful if we could get that). So the cause could be any of 
the following:

*EINVAL *The *mutex* was created with the protocol attribute having the 
value PTHREAD_PRIO_PROTECT and the calling thread's priority is higher than 
the mutex's current priority ceiling.

*EINVAL *The value specified by *mutex* does not refer to an initialized 
mutex object.

*EAGAIN *The mutex could not be acquired because the maximum number of 
recursive locks for *mutex* has been exceeded.

*EDEADLK *The current thread already owns the mutex.

On Thursday, October 1, 2020 at 10:52:23 AM UTC-7 ersc...@google.com wrote:

>
> Hello,
>
> I need to understand under what conditions the pthread_mutex_lock check 
> fails 
> 
> .
>
> Context: I'm trying to diagnose a flaky test. Intermittently, one of the 
> Google Cloud Speech API tests fails with the following message 
> :
>
> ```
>
> = test session starts 
> ==
> platform linux -- Python 3.7.7, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- 
> /workspace/speech/cloud-client/.nox/py-3-7/bin/python
> cachedir: .pytest_cache
> rootdir: /workspace, configfile: pytest.ini
> collecting ... collected 22 items
>
> beta_snippets_test.py::test_transcribe_file_with_enhanced_model PASSED   [  
> 4%]
> beta_snippets_test.py::test_transcribe_file_with_metadata PASSED [  
> 9%]
> beta_snippets_test.py::test_transcribe_file_with_auto_punctuation PASSED [ 
> 13%]
> beta_snippets_test.py::test_transcribe_diarization PASSED[ 
> 18%]
> beta_snippets_test.py::test_transcribe_multichannel_file E0828 
> 10:27:18.708634490   13111 sync_posix.cc:67]   assertion failed: 
> pthread_mutex_lock(mu) == 0
> Fatal Python error: Aborted
>
> Thread 0x7f2c36e0e600 (most recent call first):
>   File "/usr/local/lib/python3.7/codecs.py", line 322 in decode
>   File 
> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>  line 484 in snap
>   File 
> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>  line 570 in readouterr
>   File 
> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>  line 657 in read_global_capture
>   File 
> "/workspace/speech/cloud-client/.nox/py-3-7/lib/python3.7/site-packages/_pytest/capture.py",
>  line 718 in item_capture
> nox > Command pytest --junitxml=sponge_log.xml failed with exit code -6
> nox > Session py-3.7 failed.
>
> ```
>
>
>
> Details:
>
>- Original issue 
>
>- New location of issue 
> (*Note*: the 
>tests have recently moved GitHub repos for reasons unrelated to this 
> error.)
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d8849bea-838a-4519-97f2-fc0dd5f14b77n%40googlegroups.com.


[grpc-io] Re: Sending pickle object as client request

2020-09-23 Thread 'Richard Belleville' via grpc.io
I would expect there to be more to the error message than just 
"client_class". I'm assuming there's an indentation problem in your 
original post and that the instantiation of the "client_class" message is 
happening within the Get_Data handler. What happens if you try to 
instantiate a "client_class" outside of the handler, on the main thread. Do 
you get a more illuminating error message?
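As background for that experiment: pickle serializes a class by reference (module plus qualified name), so pickle.loads only succeeds where the same class definition is importable — which is one way a server running in a different process can fail to unpickle what the client sent. A minimal sketch of the round trip (GetHash is illustrative, standing in for the Get_Hash class from the question):

```python
# Pickling a class object stores a reference to it, not its code, so the
# unpickling side must be able to import the identical definition.
import pickle

class GetHash:
    def get_hash(self, values):
        return sum(hash(v) for v in values)

payload = pickle.dumps(GetHash)   # pickles the class object itself
restored = pickle.loads(payload)  # succeeds only where GetHash is importable
```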
On Wednesday, September 23, 2020 at 9:20:45 AM UTC-4 Jatin Sharma wrote:

> I want to send a class to the grpc server. So, I am pickling the class and 
> sharing as a bytes format in the message.
> The .proto file(serverdata.proto) looks like below:
>
> syntax = "proto3";
>
> service DataProvider{
> rpc Get_Data(client_class) returns (result);
> }
>
> message client_class{
> bytes class_str = 1;
> }
>
> message result{
> int64 res = 1;
> }
>
> client.py file looks like below:
>
> import grpc
> import serverdata_pb2
> import serverdata_pb2_grpc
> import pickle
> import pandas as pd
>
>
> class Get_Hash():
> def get_hash(self,df):
> return pd.util.hash_pandas_object(df).sum()
> 
> a = serverdata_pb2.client_class()
> a.class_str = pickle.dumps(Get_Hash)
>
> channel = grpc.insecure_channel('localhost:50051')
> # create a stub (client)
> stub = serverdata_pb2_grpc.DataProviderStub(channel)
> response = stub.Get_Data(a)
>
> print(response.res)
>
> On running this client, I'm getting the following error:
>
> Traceback (most recent call last):
>   File "client.py", line 18, in 
> response = stub.Get_Data(a)
>   File "/home/jatin/.local/lib/python3.8/site-packages/grpc/_channel.py", 
> line 826, in __call__
> return _end_unary_response_blocking(state, call, False, None)
>   File "/home/jatin/.local/lib/python3.8/site-packages/grpc/_channel.py", 
> line 729, in _end_unary_response_blocking
> raise _InactiveRpcError(state)
> grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated 
> with:
> status = StatusCode.UNKNOWN
> details = "Exception calling application: client_class"
> debug_error_string = 
> "{"created":"@1600867086.557068214","description":"Error received from peer 
> ipv6:[::1]:50051","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"Exception
>  
> calling application: client_class","grpc_status":2}"
> >
>
> I'm unable to resolve this error. I checked the type of the pickle file. 
> It was 'bytes'. So, I changed the type in the .proto file to bytes. I'd be 
> grateful if someone can help me resolve this error.
>
> Regards,
> Jatin
>
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/94820718-28c9-4238-bbf9-9169f7448a52n%40googlegroups.com.


[grpc-io] Re: grpc-python bidirectional streaming delivery status

2020-08-12 Thread 'Richard Belleville' via grpc.io
gRPC does not handle application-level acknowledgement of receipt of 
messages. If you want to do this, you'll have to add it into your protocol. 
Your method might look something like this:


syntax = "proto3";

import "google/protobuf/empty.proto";

message MetricRequest {
  ...
}

message MetricResponse {
  oneof payload {
    google.protobuf.Empty ack = 1;
    ... // Whatever the original intended response was.
  }
}

service MultiGreeter {
  rpc MetricReport (MetricRequest) returns (stream MetricResponse) {}
}

Your server would then immediately acknowledge receipt of a client request 
by sending a MetricResponse with only the ack field set.
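In Python, the servicer logic might look like this minimal sketch, with plain dicts standing in for MetricResponse messages (in real code you would yield pb2.MetricResponse instances built from the generated types):

```python
# Ack-first server streaming: the first yielded response acknowledges
# delivery, and the real payload follows whenever it is ready.
def metric_report(request):
    """Server-streaming handler sketch: acknowledge first, answer later."""
    yield {"ack": True}                       # immediate delivery ack
    yield {"result": f"processed {request}"}  # the eventual real response
```

The client then treats the first message on the stream as confirmation that its request reached the application layer, and any later messages as the substantive reply.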

Thanks,
Richard
On Thursday, August 6, 2020 at 6:27:50 PM UTC-7 sud@gmail.com wrote:

> Hi all,
> I must be missing something simple. I'm using the python client to 
> generate RPC messages for a bidirectional streaming service. The server is 
> in java and generates messages very rarely. How can i at a grpc level know 
> if messages have been delivered to the business logic on the server side?
>
> This is an example i see at many places:
>
> response = stub.MetricReport(iter(repeated_metric()))
>
> for r in response:
>
>//Do something with response
>
>
> However, this seems to be a blocking call and like i said the server 
> rarely responds. What can i look at from a grpc level to get delivery 
> status?
>
> Thanks
> Sudharsan
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e58be813-276e-4f09-bbc6-1927a446180bn%40googlegroups.com.


[grpc-io] Re: gRFC L69: Allow Call Credentials to be Specified in grpc_google_default_credentials_create

2020-07-06 Thread 'Richard Belleville' via grpc.io
I've just realized that L69 is already taken by a proposal in the draft
stage. I've updated the proposal title to "L69: Allow Call Credentials to
be Specified in grpc_google_default_credentials_create". Pardon the mistake.

On Mon, Jul 6, 2020 at 12:38 PM Richard Belleville 
wrote:

> All,
>
> I've drafted a proposal to change the constructor for google default
> credentials in the Core API to accommodate a corner case that is not
> currently served well by the API. Please take a look
> .
>
> Thanks,
> Richard Belleville
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAOew_sHL%2BTfAcoQmdvk%2BbtZn1jzHVfE4m1FF3Ck4XTAqEwgNAA%40mail.gmail.com.


[grpc-io] gRFC L69: Allow Call Credentials to be Specified in grpc_google_default_credentials_create

2020-07-06 Thread 'Richard Belleville' via grpc.io
All,

I've drafted a proposal to change the constructor for google default 
credentials in the Core API to accommodate a corner case that is not 
currently served well by the API. Please take a look 
.

Thanks,
Richard Belleville

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/1cd97392-df63-4434-bae8-27a7660ee892n%40googlegroups.com.


[grpc-io] Re: How to use a specific version of protoc ?

2020-07-06 Thread 'Richard Belleville' via grpc.io

You should be able to specify your preferred version of the protobuf repo 
in your WORKSPACE file. As you can see, we only provide a default revision 
of the protobuf repo when none is specified by the workspace pulling us in. 
However, we only test against this version in CI, so it's possible that 
you'll have to manually work through build issues when you supply your 
desired version.
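A minimal WORKSPACE sketch of that, under the assumption that grpc_deps() skips com_google_protobuf when the workspace has already declared it (the URL and strip_prefix below are illustrative and untested):

```python
# WORKSPACE (sketch): declare the pinned protobuf *before* grpc_deps(),
# so gRPC's default revision is never registered.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "com_google_protobuf",
    strip_prefix = "protobuf-3.9.1",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/v3.9.1.tar.gz"],
)

load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()

load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")
grpc_extra_deps()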
On Monday, July 6, 2020 at 12:22:38 AM UTC-7 jao...@gmail.com wrote:

> Hi all,
>
>
>
> I'd like to use a specific version of protoc (3.9.1) when building a 
> library that depends on grpc
>
>
> In the WORKSPACE I have the following lines to load grpc dependencies:
>
>
>
> *  load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")*
>
> *  grpc_deps()*
>
> *  load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", 
> "grpc_extra_deps")*
>
> *  grpc_extra_deps()*
>
>
> If I use recent grpc source (1.27+), it uses a version of protobuf that I 
> can't set automatically.
>
>
> Is it possible to use a specific version of protoc in this case ?
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/be827bd1-0c1b-4755-8212-a25564f8536bn%40googlegroups.com.


Re: [grpc-io] gRFC L65: Additional PyPI Packages for gRPC Python

2020-05-15 Thread 'Richard Belleville' via grpc.io
Kailash,

This idea has been floated in the past. I think there are two levels of 
value embedded in your proposal. The first is a simple bundle package. No 
code, just a grpc==1.XX package declaring dependencies on other packages, 
including:

   - grpcio==1.XX
   - protobuf==(some version of protobuf we've tested against)
   - grpcio-channelz==1.XX
   - grpcio-reflection==1.XX
   - grpcio-health-checking==1.XX
   - grpcio-status==1.XX

That list intentionally omits grpc-testing and grpc-tools, because both of 
those should be build-time-only dependencies (though that may change in 
the future). As far as I can see, the only downside here is download size. 
Basically everyone is going to be using grpcio and protobuf, but basically *no 
one* is going to be using the other packages, based on the code I see 
trolling Github. You could argue that that's simply because very few people 
even know these packages exist. Perhaps. But that claim isn't exactly 
falsifiable.

Even so, downloading a couple of megabytes of Python code that you're not 
going to use isn't the end of the world, especially when you have the 
option to download the individual packages you need after doing a little 
research on the topic.

So, overall, a dependency bundling package seems like a slam dunk.

The second level of value is a package that, in your own words, "err(s) ... 
toward sane defaults". I assume this means things like automatically adding 
a reflection servicer, health servicer, channelz server, etc. We wrap gRPC 
Python at Google internally to do this sort of thing by default. But there 
are some problems with this. Suppose a new user instantiates a server not 
knowing that all of these bells and whistles are activated by default. 
They're now paying the cost at runtime. Even worse, the reflection server 
and channelz server could pose a security problem.

There are also API considerations here. If you add new classes/functions to 
the grpc package, what Python module do you put it under? grpc? That module 
is already occupied by the code in the grpcio package. That problem isn't 
insurmountable. But it does mean trouble with name clashes. So we'd have to 
name the function that gives you a bells-and-whistles server something 
besides grpc.server, which is already taken. Otherwise, it would constitute 
an API regression. So now you've got a different function, 
grpc.bells_and_whistles_server() that no one 
knows about. It's not the most obvious name, so how likely are people to 
use it? You'd have to rewrite all of the documentation/examples to 
recommend its use (along with loud disclaimers about when/why *not* to use 
it). It's (perhaps) doable, but it's a lot of effort for what (at the 
moment) seems like a low level of demand.

Finally, it's worth pointing out that the current proposal isn't 
necessarily the end of the line. It leaves us room to go down either of the 
above two paths in the future if we see enough demand from the community 
for it.

On Thursday, May 14, 2020 at 6:00:22 PM UTC-7 hsa...@gmail.com wrote:

> To clarify, I am suggesting that you consider solving the package name 
> confusion by using the gRPC name for a meaningful package that does not 
> exist yet. Another positive with this approach is that its intuitive -  "if 
> I type pip install grpc, I expect to get a working gRPC package 
> installed."  You can err this package toward sane defaults and depend on 
> protobuf too - 99% of your users use it with protobuf anyways..
>
>
>
> On Thu, May 14, 2020 at 1:32 PM 'Lidi Zheng' via grpc.io <
> grp...@googlegroups.com> wrote:
>
>> Richard had an idea that we could create a bundle named 
>> `grpcio[protobuf]`, which includes peripheral packages. After all, gRPC 
>> team wants to keep the implementation agnostic to codec, so they weren't 
>> packed into the main package.
>>
>> It's a good idea, this gRFC is for package name confusion. For the new 
>> bundle package, I can start another proposal.
>>
>> On Wednesday, May 13, 2020 at 8:18:51 PM UTC-7 hsa...@gmail.com wrote:
>>
>>> Have you considered using the name for a new meta-package bundle for 
>>> related packages?
>>>
>>>- grpcio
>>>- grpcio-status
>>>- grpcio-channelz
>>>- grpcio-reflection
>>>- grpcio-health-checking
>>>- 
>>>
>>> Or even a kitchen sink package that includes grpcio-testing and 
>>> grpcio-tools. 
>>>
>>> On Wed, May 13, 2020 at 7:32 PM 'Lidi Zheng' via grpc.io <
>>> grp...@googlegroups.com> wrote:
>>>
 Abstract:

 gRPC Python is uploaded as "grpcio" on PyPI, but there is another 
 package named "grpc". Hence, some users are confused. This document 
 proposes to upload additional packages named "grpc" and "grpc-*" to guide 
 users to official packages.

 gRFC: 
 https://github.com/lidizheng/proposal/blob/L65-python-package-name/L65-python-package-name.md
 PR: 

[grpc-io] Re: create channel credential from access token only (python)

2020-04-22 Thread 'Richard Belleville' via grpc.io
To expand on that, you can use either a UDS or a TCP connection over 
loopback for this. Take a look at the local_channel_credentials 
documentation for more info. It's more or less a dummy credential that 
ensures the transport is one of these two, but will allow you to add call 
credentials.
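
A minimal sketch of what that looks like, assuming grpcio >= 1.26 (local 
credentials are experimental; the token and endpoint are placeholders):

```python
import grpc

# Call credentials carrying only the access token.
call_creds = grpc.access_token_call_credentials("my-access-token")

# Local channel credentials: a dummy transport credential that requires the
# connection to be UDS or TCP loopback, but still permits call credentials.
local_creds = grpc.local_channel_credentials(grpc.LocalConnectionType.LOCAL_TCP)

# Compose them and open the channel as if it were a secure one.
channel_creds = grpc.composite_channel_credentials(local_creds, call_creds)
channel = grpc.secure_channel("localhost:50051", channel_creds)
```

Creating the channel does not connect eagerly, so this works even before a 
server is listening.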
On Wednesday, April 22, 2020 at 10:36:27 AM UTC-7 Srini Polavarapu wrote:

> I believe credentials can be added to an insecure channel only if the 
> channel is local (e.g. UDS). See more details here: 
> https://github.com/grpc/grpc/pull/20875
>
> On Tuesday, April 21, 2020 at 2:35:29 PM UTC-7 davidk...@gmail.com wrote:
>
>> I am creating a channel as follows:
>>
>> call_credentials = grpc.access_token_call_credentials(token)
>> root_credentials = grpc.ssl_channel_credentials(certificate)
>> credentials = grpc.composite_channel_credentials(root_credentials, 
>> call_credentials)
>> channel = grpc.secure_channel(endpoint, credentials=credentials)
>>
>> Suppose I am not using SSL (no certificate) but just an access token, 
>> how do I create a channel as above?
>> Thanks.
>>
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b5a6d064-0961-4234-88a5-819483f13a06%40googlegroups.com.


[grpc-io] gRFC L64: Python Runtime Proto Parsing

2020-04-10 Thread 'Richard Belleville' via grpc.io
Hey everyone,

I've just written up a proposal for a feature I've been working on for a 
while now. It will allow users to import Protobuf message types and 
services from ".proto" files on the filesystem at runtime.

gRFC: 
https://github.com/gnossen/proposal/blob/runtime_proto_parsing/L64-python-runtime-proto-parsing.md
PR: https://github.com/grpc/proposal/pull/175

Thanks!
Richard



[grpc-io] Re: gRPC Python transition from manylinux1 to manylinux2010

2020-04-03 Thread 'Richard Belleville' via grpc.io
grpcio 1.28.1 has been released. It is the first version without support 
for manylinux1.
On Tuesday, December 3, 2019 at 10:57:11 AM UTC-8 veb...@google.com wrote:

> TL;DR: gRPC Python on Linux will transition from manylinux1 to 
> manylinux2010 in early 2020. If you use a pip version earlier than 19, 
> please upgrade it to 19 or higher to continue downloading binary packages 
> rather than from-source packages.
>
> gRPC Python currently distributes manylinux1 binary wheels on Linux to 
> cover the various distributions compliant with manylinux1 (PEP-513). 
> Since manylinux1 has been superseded by manylinux2010 (PEP-571), gRPC 
> Python will transition from manylinux1 to manylinux2010 in 2020. Since 
> gRPC 1.24.3, gRPC Python has been distributed for both manylinux1 and 
> manylinux2010: ~80% of pip downloads are for manylinux2010, ~14% for 
> manylinux1, and the rest for source packages. Most of the manylinux1 
> downloads are driven not by OS capability but by an older pip version.
>
> manylinux2010 provides several benefits. Because it mandates glibc 2.12 
> or higher, wheels can be built with a more modern toolchain and can take 
> advantage of more recent OS features, which can improve performance.
>
> Once the transition happens, a pip version lower than 19 may result in 
> downloading a source package and building it from source instead of 
> downloading a pre-built binary wheel. This is far slower than installing 
> a binary wheel and may fail due to missing build tools.
>
> Since manylinux2010 is based on CentOS 6, gRPC Python won't work on Linux 
> systems that lack glibc 2.12 or higher and libstdc++ 6.0.13 or higher 
> (notably CentOS 5). Please consider upgrading your OS, or use gRPC 1.26, 
> the last version distributed with a binary wheel for manylinux1.
>
>
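
For readers unsure whether they will get the manylinux2010 wheel, the 
checks below cover both requirements (generic commands, not from the 
announcement):

```shell
# manylinux2010 wheels need pip >= 19.0; check what you have:
python3 -m pip --version

# If it is older, upgrade with:
#   python3 -m pip install --upgrade "pip>=19"

# manylinux2010 also requires glibc >= 2.12 on the host (Linux only):
ldd --version 2>/dev/null | head -n 1 || true
```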



[grpc-io] grpcio 1.28.0 Python Package Deleted

2020-04-02 Thread 'Richard Belleville' via grpc.io
Yesterday, 4/1/2020, at ~7:00PM PDT, grpcio 1.28.0 was uploaded to PyPI. 
After several hours, https://github.com/grpc/grpc/issues/22546 was filed on 
the gRPC GitHub repo, detailing an import failure for Python 3.5 users. We 
initially began work on a 1.28.1 patch to fix the issue, but after the 
impact on our users became clear, we instead deleted the 1.28.0 release. 
This is not an action we take lightly. We delete packages only when there 
is a clear negative impact on a broad subset of our user base. We will 
follow up with a 1.28.1 release patched to resolve the issue in the coming 
days. In addition, our continuous integration system will be modified to 
incorporate release-blocking tests verifying continued compatibility with 
Python 3.5. We apologize for any inconvenience this may have caused.



[grpc-io] Re: Building GRPC via Bazel without BoringSSL

2020-03-03 Thread 'Richard Belleville' via grpc.io

"We are currently exploring defining a fake `boringssl` local_repository 
which has a single target (`@boringssl//:ssl` which points to system 
provided openssl)"

This actually sounds ideal to me. Compare what the protobuf repo does for 
Python headers.

If you come up with a robust repository rule to pull in the system openssl, 
consider making it available as an example. I imagine it will be useful to 
many besides yourself.
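
For the record, a stub along the lines described might look roughly like 
this (the target names and link flags are assumptions on my part; gRPC also 
consumes @boringssl//:crypto, so both targets likely need stubbing):

```starlark
# WORKSPACE: shadow the real boringssl with a local stub.
new_local_repository(
    name = "boringssl",
    path = "third_party/openssl_stub",
    build_file = "//third_party:openssl_stub.BUILD",
)
```

```starlark
# third_party/openssl_stub.BUILD: forward to the system OpenSSL.
package(default_visibility = ["//visibility:public"])

cc_library(
    name = "ssl",
    linkopts = ["-lssl", "-lcrypto"],
)

cc_library(
    name = "crypto",
    linkopts = ["-lcrypto"],
)
```

Whether this holds up depends on how closely the system OpenSSL's ABI 
matches what gRPC expects from BoringSSL, so treat it as a starting point.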
On Tuesday, March 3, 2020 at 3:30:35 PM UTC-8 priy...@gmail.com wrote:

> Hello,
>
> I am trying to compile GRPC without BoringSSL. Looks like this is 
> supported via cmake. Is there a way to also support it via Bazel? My 
> current understanding is that the Bazel build unconditionally depends on 
> @boringssl//:ssl.
>
> We are using Bazel to build a custom binary that links in aws-cpp-sdk and 
> grpc. It seems like aws-cpp-sdk wants openssl while grpc seems to want 
> boringssl. We are currently exploring defining a fake `boringssl` 
> local_repository which has a single target (`@boringssl//:ssl` which points 
> to system provided openssl). Would love to know if this is likely to be a 
> fool's errand.
>
> Regards!
>
>



Re: [grpc-io] Re: Error building grpc with bazel on Ubuntu 18.04.4

2020-02-03 Thread 'Richard Belleville' via grpc.io

Nisarg,

I'm not able to reproduce.

I ran under the ubuntu:18.04 docker image with Bazel 2.0:

root@6cd6278876b8:/grpc# tools/bazel --version
INFO: Running bazel wrapper (see //tools/bazel for details), bazel version 
2.0.0 will be used instead of system-wide bazel installation.
bazel 2.0.0

I installed just a few system packages:
root@6cd6278876b8:/grpc# history
1  cd /grpc
2  apt-get update -y && apt-get install -y python python3 clang
3  tools/bazel build //:all
4  tools/bazel --version
5  history

And the build completed successfully:
INFO: Analyzed 98 targets (30 packages loaded, 2910 targets configured).
INFO: Found 98 targets...
INFO: Elapsed time: 83.488s, Critical Path: 16.32s
INFO: 1364 processes: 1364 processwrapper-sandbox.
INFO: Build completed successfully, 1535 total actions

I am using master instead of the latest release. Perhaps that's what's 
making the difference?
On Monday, February 3, 2020 at 10:28:40 AM UTC-8 Nicolas Noble wrote:

> This one indicates something is amiss with your compiler environment. 
> stdarg.h is supposed to be a system header.
>
> On Mon, Feb 3, 2020 at 10:16 AM Nisarg Shah  
> wrote:
>
>> Thanks Nicolas, I tried building ti with bazel 1.0.0 and it fails with 
>> this error -
>>
>> $ ~/private/bazel-1.0.0-linux-x86_64 build :all
>>> Starting local Bazel server and connecting to it...
>>> INFO: Writing tracer profile to 
>>> '/home/nisargs/.cache/bazel/_bazel_nisargs/509de2f44a35a9c68f4268d75d0fe17a/command.profile.gz'
>>> DEBUG: 
>>> /home/nisargs/.cache/bazel/_bazel_nisargs/509de2f44a35a9c68f4268d75d0fe17a/external/bazel_toolchains/rules/rbe_repo/checked_in.bzl:226:13:
>>>  
>>> rbe_msan not using checked in configs; Bazel version 1.0.0 was 
>>> picked/selected with '["9.0.0", "10.0.0"]' compatible configs but none 
>>> match the 'env = {"ABI_LIBC_VERSION": "glibc_2.19", "ABI_VERSION": "clang", 
>>> "BAZEL_COMPILER": "clang", "BAZEL_HOST_SYSTEM": "i686-unknown-linux-gnu", 
>>> "BAZEL_TARGET_CPU": "k8", "BAZEL_TARGET_LIBC": "glibc_2.19", 
>>> "BAZEL_TARGET_SYSTEM": "x86_64-unknown-linux-gnu", "CC": "clang", 
>>> "CC_TOOLCHAIN_NAME": "linux_gnu_x86", "BAZEL_LINKOPTS": 
>>> "-lc++:-lc++abi:-lm"}', 'config_repos = None',and/or 'create_cc_configs = 
>>> True' passed as attrs
>>> INFO: SHA256 (
>>> https://boringssl.googlesource.com/boringssl/+archive/83da28a68f32023fd3b95a8ae94991a07b1f6c62.tar.gz)
>>>  
>>> = f1fde09c75c73890a6453943b7e4161b34f3d4f0f0478bc6325f73d18086f190
>>> DEBUG: Rule 'boringssl' indicated that a canonical reproducible form can 
>>> be obtained by modifying arguments sha256 = 
>>> "f1fde09c75c73890a6453943b7e4161b34f3d4f0f0478bc6325f73d18086f190"
>>> DEBUG: Call stack for the definition of repository 'boringssl' which is 
>>> a http_archive (rule definition at 
>>> /home/nisargs/.cache/bazel/_bazel_nisargs/509de2f44a35a9c68f4268d75d0fe17a/external/bazel_tools/tools/build_defs/repo/http.bzl:262:16):
>>>  - /nobackup/grpc-temp/grpc/bazel/grpc_deps.bzl:105:9
>>>  - /nobackup/grpc-temp/grpc/WORKSPACE:5:1
>>> INFO: Analyzed 96 targets (52 packages loaded, 3117 targets configured).
>>> INFO: Found 96 targets...
>>> ERROR: 
>>> /home/nisargs/.cache/bazel/_bazel_nisargs/509de2f44a35a9c68f4268d75d0fe17a/external/upb/BUILD:57:1:
>>>  
>>> C++ compilation of rule '@upb//:upb' failed (Exit 1) clang failed: error 
>>> executing command /s/std/bin/clang -U_FORTIFY_SOURCE -fstack-protector 
>>> -Wall -Wthread-safety -Wself-assign -fcolor-diagnostics 
>>> -fno-omit-frame-pointer -MD -MF 
>>> bazel-out/k8-fastbuild/bin/external/upb/_objs/upb/port.pic.d ... (remaining 
>>> 20 argument(s) skipped)
>>>
>>> Use --sandbox_debug to see verbose messages from the sandbox
>>> In file included from external/upb/upb/port.c:2:
>>> external/upb/upb/upb.h:12:10: fatal error: 'stdarg.h' file not found
>>> #include 
>>>  ^~
>>> 1 error generated.
>>> INFO: Elapsed time: 197.128s, Critical Path: 3.69s
>>> INFO: 0 processes.
>>> FAILED: Build did NOT complete successfully
>>>
>>
>> Here is clang version info
>>
>> $ /s/std/bin/clang --version
>>> clang version 8.0.0 (trunk 340542)
>>> Target: x86_64-unknown-linux-gnu
>>> Thread model: posix
>>> InstalledDir: /s/std/bin
>>>
>>
>> Thanks
>> Nisarg
>>
>>
>> On Mon, Feb 3, 2020 at 10:49 AM Nicolas Noble  
>> wrote:
>>
>>> We're not Bazel 2.0 ready yet.
>>>
>>> On Sun, Feb 2, 2020 at 2:33 PM  wrote:
>>>
 I tried building it with v1.26.0 tag instead of v1.25.0, and now I get 
 the following error -

 $ ~/private/bazel-2.0.0-linux-x86_64 build :all
 Starting local Bazel server and connecting to it...
 DEBUG: 
 /home/nisargs/.cache/bazel/_bazel_nisargs/4e4151a9a278b0177b97ca89f46caad9/external/bazel_toolchains/rules/rbe_repo/version_check.bzl:68:9:
  

 Current running Bazel is ahead of bazel-toolchains repo. Please 

[grpc-io] gRPC-Core Release 1.26.0

2020-01-06 Thread 'Richard Belleville' via grpc.io
This is the 1.26.0 (*gon*) release announcement for gRPC-Core and the 
wrapped languages C++, C#, Objective-C, Python, PHP and Ruby. Full release 
notes are available on the gRPC GitHub releases page.

Core
   
   - Fix compression filter crash on empty payload. (#21315)
   - Ensure awake pollset_work threads exist on Windows. (#19311)
   - Disable client_idle_filter. (#20910)
   - Remove gpr_get/set_allocation_functions. (#20462)
   - Security audit response. (#20839)

C++
   
   - Automatically disable testing frameworks if gRPC_BUILD_TESTS=OFF. 
   (#20976)
   - Do not build channelz when gRPC_USE_PROTO_LITE. (#21011)
   - Add options for all codegen plugins. (#20629)
   - gRPC-C++ podspec follows gRPC versioning. (#20977)
   - Issue 19208: Fix pollset_set_del_fd to cleanup all fd references. 
   (#20452)
   - De-duplicate .proto file processing. (#20537)
   - cmake: Add VERSION and SOVERSION properties to libraries. (#20770)

C#
   
   - C#: Fix Unobserved Task Exception problem for cancelled calls with 
   unexhausted response stream. (#21202)
   - Fix C# sending empty payloads with gzip compression. (#21266)
   - C#: fix #20782. (#20859)

Objective-C
   
   - Update GRPCUnaryResponseHandler with generics. (#21316)

Python
   
   - Release Python 3.8 wheels for Windows. (#21271)
   - Release Python 3.8 wheel on macOS. (#21270)
   - Fix issue with exception being out of scope in Python 3. (#20314)
   - [AIO] Implement the shutdown process for AIO server and completion 
   queue. (#20805)
   - Attempt to drop support for Python 3.4. (#20789)
   - AIO Unified call interface. (#20824)
   - Make sure Core is aware of gevent Cython objects. (#20891)
   - [bazel] Add an ability to call an optional custom plugin for 
   py_proto_library and py_grpc_library. (#20846)
