Hi Kevin, performance largely depends on how you organize your client and
backend. https://grpc.io/docs/guides/performance/ is a good starting
point for getting better performance.
Also, gRPC's flow control starts from a small window size, so it
usually takes some time to reach maximum throughput.
If you have two proto files, you may want to do the steps in the
example twice: once for main.proto and once for commonutil.proto.
The rest is pretty regular CMake wiring; you can either have a single proto
target that includes both generated sources, or two proto targets with a
dependency between them.
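The second option might look roughly like this in CMake. This is a sketch only: the target names (proto_common, proto_main) are illustrative, and it assumes the *.pb.cc / *.grpc.pb.cc files have already been generated by protoc and that gRPC was made available (e.g. via FetchContent, which exports the grpc++ and libprotobuf targets):

```cmake
# Two proto targets, with the main target depending on the common one.
add_library(proto_common commonutil.pb.cc commonutil.grpc.pb.cc)
target_link_libraries(proto_common PUBLIC grpc++ libprotobuf)

add_library(proto_main main.pb.cc main.grpc.pb.cc)
target_link_libraries(proto_main PUBLIC proto_common)
```

With PUBLIC linkage, anything that links proto_main also sees proto_common and the gRPC/protobuf headers, which is usually what you want for generated code.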
gRPC Core needs a C++ toolchain to build.
On Monday, January 17, 2022 at 7:15:15 AM UTC-8 Fridolin Siegmund wrote:
> Hi!
> I want to integrate a gRPC client in an existing C project. As a
> standalone example (in C!) it compiles and works fine (CMake with
> FetchContent to fetch grpc [
> https://github
How long did it take for this solution to work? I ran the first new install
command and it is taking forever to install. Just wanted to see if this is
normal or not.
On Monday, May 9, 2022 at 9:36:24 PM UTC-6 Kenny Tovar wrote:
> Thank you for this. I had the same issue and your instructions r
Bazel isn't designed for producing library artifacts to consume elsewhere.
Although `bazel build` does produce static and shared library files, the
intended use is to define your app or library that wants to use gRPC as a
Bazel target and add gRPC to its dependencies.
If you just want to grab the library files, probably using c
I don't think there is one right way to do this. If the client doesn't need
to know the result of the long-running job, the gRPC part is simple: the
server just needs to manage its own work, which could be as simple as
spawning a dedicated thread to handle that job.
If the client needs the result, it's just a normal rp
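The fire-and-forget variant can be sketched in plain C++ without any gRPC types; the function name and the job-id token below are made up for illustration. Inside the server's RPC handler you would kick off the work and return immediately:

```cpp
#include <chrono>
#include <string>
#include <thread>

// Hypothetical sketch of what the body of the server's RPC handler could do
// for a fire-and-forget job. A real server would also record the job
// somewhere so its lifetime outlives the RPC.
std::string StartLongRunningJob() {
    std::thread worker([] {
        // ... the actual long-running work goes here ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    });
    worker.detach();      // the RPC returns without waiting for the job
    return "job-42";      // token the client could use to poll for status later
}
```

Detaching is the simplest form; a production server would more likely hand the job to a worker pool it owns so it can shut down cleanly.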
Traces aren't that helpful for understanding what happened if you don't know
gRPC's internals. Maybe GRPC_TRACE=http will give you some clue as to whether
it's a network issue or not.
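For reference, GRPC_TRACE and GRPC_VERBOSITY are environment variables read by the gRPC C core; the binary name below is a placeholder for your own client or server:

```shell
# Restrict tracing to HTTP/2 transport events and raise log verbosity.
export GRPC_TRACE=http
export GRPC_VERBOSITY=debug
# ./your_grpc_client   # placeholder: run your own binary here
echo "GRPC_TRACE=$GRPC_TRACE"
```

The trace output goes to stderr, so redirect it to a file if the run is long.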
As for the client latency, I don't understand it either. The server code looks
good, and since it's working well without heavy worklo
We have a mechanism to limit the memory used by a process. To make sure
that there are no violators, we rely on the maxrss of the process. We check
maxrss every few minutes to see whether there has been a memory spike
beyond the permitted value.
We have a gRPC server, and what we are seeing is tha
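One caveat worth noting with this scheme: maxrss is a high-water mark, so it only ever grows and never reflects memory that has since been freed. A minimal way to read it on POSIX systems (the helper name is mine):

```cpp
#include <sys/resource.h>

// Returns the process's peak resident set size as reported by the kernel.
// ru_maxrss is a high-water mark: sampling it periodically detects past
// spikes, but it never decreases even after memory is released.
long PeakRss() {
    struct rusage usage {};
    getrusage(RUSAGE_SELF, &usage);
    return usage.ru_maxrss;   // kilobytes on Linux, bytes on macOS
}
```

If you need the *current* RSS rather than the peak, you have to read it elsewhere (e.g. /proc/self/status on Linux).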
Could you elaborate on your suggestion? I can't see why it's useful. Usually
a streaming-request reactor needs some state in order to write a series of
messages to the peer.
On Thursday, March 3, 2022 at 10:02:30 AM UTC-8 Brent Edwards wrote:
> This is for a service with callback processing enable
AFAIK there is no way to do that. You might manage it by
using SerializationTraits, which handles (de)serialization, but that's not
recommended since it isn't designed for that purpose.
On Saturday, May 1, 2021 at 2:44:49 PM UTC-7 abo...@gmail.com wrote:
> Hi,
> I have a C++ based gRPC based bidirecti
Could you give us a small repro we can run, along with the output of the full
leak report, so we can see what's leaked?
On Wednesday, September 22, 2021 at 6:57:26 PM UTC-7 AmirM wrote:
> Hi
>
> I wrapped a C++ gRPC server in a Docker container, and I observed that every time I
> ran the server container, memory usage would incr
From the Valgrind output, if Finish was called properly on the
ClientAsyncResponseReader, I suspect that the server was killed while serving
the client, resulting in leaked active call objects.
On Thu, May 26, 2022 at 3:16 AM Vishal Kaushik <
vishal.kaus...@anzocontrols.com> wrote:
> Thanks
Thanks for your email. Since I am using gRPC to connect to a machine and run
it, you won't be able to replicate it. But you could do a TeamViewer session
with me for a few minutes, and I think you would understand the problem easily.
I ran Valgrind and it gave me many lines of messages. A few of them are below:
I have read the tutorial and googled this question, but I still have some
confusion. Here is my understanding of the differences:
1. Synchronous and callback gRPC manage the request/response queues and the
threading model themselves, but asynchronous gRPC lets the user provide the
thread management. Am I right?
2. sync an