[grpc-io] Re: Crash on RpcMethod constructor

2020-02-12 Thread 'Stanley Cheung' via grpc.io
Unfortunately grpc version 1.2.6 is probably too old for us to support (it 
was released almost 3 years ago). Please use a newer version of grpc.

On Sunday, February 2, 2020 at 7:44:51 PM UTC-8, armstr...@gmail.com wrote:
>
> I'm running OS X 10.15.3 with gRPC installed via Homebrew, which gives gRPC 
> version v1.2.6 and protobuf v3.11.2 
>
>
> RpcMethod(const char* name, RpcType type,
>           const std::shared_ptr<ChannelInterface>& channel)
>     : name_(name),
>       method_type_(type),
>       channel_tag_(channel->RegisterMethod(name)) {}
>
>
>
> and "channel->RegisterMethod" is at an invalid address (0x0)
>
>
>
> If I roll back protobuf to version 3.8.0 it works as expected.
>
>
>
> Anyone seen this before?
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/52d85522-93cb-4fc2-9022-7e54b2114d0c%40googlegroups.com.


Re: [grpc-io] [Python] GRPC Server performance bottleneck

2020-02-12 Thread 'Lidi Zheng' via grpc.io
I answered on SO; let's continue the discussion there.

On Tue, Feb 11, 2020 at 9:58 PM  wrote:

>
> I have asked this question in SO
>
> https://stackoverflow.com/questions/60181972/python-grpc-server-performance-bottleneck
>
> but I just would like to try my luck here as well...
>
>
> I have written a gRPC server that contains multiple RPC services; some are
> unary and some are server-side streaming.
>
> It connects to a gRPC Kubernetes server, so I am using the Python
> Kubernetes client to query the server.
>
> Currently I am having some performance problems: I think that when multiple
> requests come in, they are buffered until a worker frees up, and only then
> can the incoming requests be served.
>
> def startServer():
>     global server
>     server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
>     servicer_grpc.add_Servicer_to_server(Servicer(), server)
>     server.add_insecure_port('[::]:' + str(port))
>     server.start()
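The buffering the poster describes is inherent to a fixed-size thread pool: once all `max_workers` threads are busy, further calls queue up until a worker frees. A standalone sketch (pure Python, no gRPC required) that demonstrates the effect with a stand-in handler:

```python
import time
from concurrent import futures

def slow_handler(_request):
    """Stand-in for an RPC handler that takes about 0.2 s."""
    time.sleep(0.2)
    return "done"

def run(max_workers, n_requests):
    """Submit n_requests calls to a fixed-size pool; return elapsed seconds."""
    with futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        start = time.monotonic()
        list(pool.map(slow_handler, range(n_requests)))
        return time.monotonic() - start

# With 2 workers, 4 requests run in two "waves" (~0.4 s total);
# with 4 workers they run concurrently in one wave (~0.2 s).
print(f"2 workers: {run(2, 4):.2f}s  4 workers: {run(4, 4):.2f}s")
```

So raising `max_workers` helps exactly when requests spend their time waiting (I/O-bound handlers); for CPU-bound handlers the GIL limits what extra threads can do.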
>
> My questions are:
>
> 1. How can I improve my performance? Will adding more max_workers to the
>    ThreadPoolExecutor help?
> 2. How can I diagnose the problem and isolate what is causing the slowdown?
> 3. I am wondering whether the size of the response matters here, since I am
>    streaming a bytestring to the client. Is there a way to measure the size
>    of the response, and does it matter in Python gRPC?
>
> In general, how do you diagnose your Python gRPC servers so that you know
> where to improve?
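For questions 2 and 3 above, one low-tech starting point is to wrap each servicer method with a decorator that records latency and response size. This is a sketch, not real gRPC API; the servicer and method names are invented, and a real protobuf response would expose `ByteSize()` where this falls back to `len()`:

```python
import functools
import time

STATS = {}  # handler name -> list of (seconds, response_size_in_bytes)

def instrumented(handler):
    """Record wall-clock latency and response size for each call."""
    @functools.wraps(handler)
    def wrapper(self, request, context):
        start = time.monotonic()
        response = handler(self, request, context)
        elapsed = time.monotonic() - start
        # Real protobuf messages expose ByteSize(); this sketch falls
        # back to len() for plain bytes.
        size = response.ByteSize() if hasattr(response, "ByteSize") else len(response)
        STATS.setdefault(handler.__name__, []).append((elapsed, size))
        return response
    return wrapper

class Servicer:  # hypothetical servicer, not a generated class
    @instrumented
    def GetChunk(self, request, context):
        return b"x" * 1024  # stand-in for a serialized response

Servicer().GetChunk(None, None)
print(STATS["GetChunk"][0])  # (elapsed_seconds, 1024)
```

For server-streaming methods the decorator would instead need to wrap the returned generator and accumulate per-message sizes, but the idea is the same: measure before guessing.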
>



[grpc-io] http://empower.sh : A cloud-native dev framework, based on gRPC and Protobuf

2020-02-12 Thread Yannick Buron
Hello all,

I'm sharing here a project I have been working on for quite some time now: 
http://empower.sh. It is intended to be a backend framework usable by any 
developer with only basic knowledge of Go.

It goes way beyond micro-frameworks like Micro or Gin, and it is not a monolith 
like Buffalo. It is heavily inspired by the Odoo framework, an ERP I used for 
more than 10 years and one of the most productive and underestimated 
back-office frameworks out there, with one caveat: it is also a monolith. With 
the Empower stack I want to keep the same productivity but with microservice 
patterns, so each team can be fully autonomous and manage its own services.

My goal is to build libraries that manage CRUD operations in a microservices 
context. For example, we can define in a model X a many2one field linked to 
a model Y in another service; when we create a new X, the libraries will 
ask the other service to check that the Y referenced in the many2one field 
exists.
This is only a simple example. CRUD in microservices is complex to manage, 
yet common, which is why I think we absolutely need libraries to manage it 
and to share the work.
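As an illustration only (the names are invented and not part of the actual Empower API), the many2one existence check could look like this: before creating an X, the X service asks the service owning Y whether the referenced id resolves:

```python
class YClient:
    """Stand-in for a gRPC client to the service that owns model Y."""
    def __init__(self, known_ids):
        self._known_ids = set(known_ids)

    def exists(self, y_id):
        # In a real system this would be an RPC to the Y service.
        return y_id in self._known_ids

def create_x(payload, y_client):
    """Create an X record only if its many2one reference to Y resolves."""
    y_id = payload["y_id"]
    if not y_client.exists(y_id):
        raise ValueError(f"referenced Y {y_id!r} does not exist")
    return {"id": 1, **payload}

y = YClient(known_ids={"y-1", "y-2"})
print(create_x({"name": "foo", "y_id": "y-1"}, y))
```

The hard parts the author alludes to start here: what to do when the Y service is down, or when Y is deleted after the check succeeds, which is why shared libraries for this are attractive.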

To be as productive as possible, I need each service to follow standard 
patterns for inter-service communication. This is where I lean heavily on 
gRPC, to fight network latency and keep speed as close as possible to that 
of a monolithic ERP framework.
Also, Protobuf is a first-class citizen inside the ORM itself, so we can 
serialize a model at any time and no translation is needed when building 
the gRPC request. I also use code generation extensively, to avoid the 
effect of "you use the framework, you learn the framework, not the 
language".

I invite you to read my post on Medium for more details on the framework's 
architecture: 
https://medium.com/@yannick.b/how-should-be-designed-the-ideal-golang-crud-backend-36c8f874c6a7.
Go, gRPC, Protobuf, and microservices are awesome technologies and patterns 
that solve real problems. I think some of the most experienced backend 
developers are on this mailing list, and I hope to get feedback from some 
of you.

Thanks for your attention, I hope you'll enjoy this PoC.




[grpc-io] Re: Flow Control for data heavy application (grpc-java)

2020-02-12 Thread lokeshj1703
Sorry for the late reply. We are actually using a bidirectional stream 
observer. Here is a model implementation of the stream observer.

Multiple clients can be connected to the server. Clients send chunks of 
data (16 MB chunks) to the server for processing.

ClientRequestObserver#onNext() {
  // Add the request to the pending queue.
  // If the queue is full, the request is rejected and the client retries
  // after some time. These rejected requests create a lot of garbage in
  // the system because the requests are large.
}

In the background, a service polls the pending-request queue and processes 
the entries. The requirement is to be able to throttle the flow of client 
requests based on the pending-queue size.
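The pending-queue behaviour described above can be sketched language-agnostically; here is a minimal Python version (the grpc-java specifics are out of scope) showing the reject-when-full policy that produces the garbage:

```python
import queue

class PendingRequests:
    """Bounded queue: onNext() enqueues or rejects, a background worker drains."""
    def __init__(self, capacity):
        self._queue = queue.Queue(maxsize=capacity)

    def on_next(self, request):
        """Called per incoming chunk; returns False to signal 'retry later'."""
        try:
            self._queue.put_nowait(request)
            return True
        except queue.Full:
            # A rejected request was already received and deserialized,
            # so the (large) chunk immediately becomes garbage.
            return False

    def poll(self):
        """Background worker pulls the next pending request."""
        return self._queue.get_nowait()

pending = PendingRequests(capacity=2)
print([pending.on_next(b"chunk") for _ in range(3)])  # [True, True, False]
```

The point of the thread is that rejecting after receipt is too late: the fix is to stop *reading* new messages while the queue is full, which is what gRPC's transport-level flow control (isReady / manual request(n) in grpc-java) provides.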

On Wednesday, January 29, 2020 at 5:05:23 AM UTC+5:30, Penn (Dapeng) Zhang 
wrote:
>
> I assume your RPC is of unary type (correct me if that's not the case). You can 
> (1) use NettyServerBuilder.maxConcurrentCallsPerConnection() to limit the 
> number of concurrent calls per client channel; (2) in the server application 
> implementation, send responses slowly if possible (e.g. sleep a little 
> before sending out the response when the server is too busy). To limit the 
> total number of connections to the server, the discussion in 
> https://github.com/grpc/grpc-java/issues/1886 may help.
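Option (1) above can be approximated in any server with a counting semaphore around the handler. A sketch only (not the grpc-java implementation, which enforces the limit at the HTTP/2 layer):

```python
import threading

class ConcurrencyLimiter:
    """Allow at most `limit` handlers in flight; reject the rest immediately."""
    def __init__(self, limit):
        self._slots = threading.BoundedSemaphore(limit)

    def call(self, handler, *args):
        # Non-blocking acquire: a full server rejects rather than queues,
        # mirroring RESOURCE_EXHAUSTED behaviour.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("server busy, try again later")
        try:
            return handler(*args)
        finally:
            self._slots.release()

limiter = ConcurrencyLimiter(limit=1)
print(limiter.call(lambda x: x * 2, 21))  # 42
```

Note this still differs from real flow control: the request bytes have already been read off the socket before the limiter runs, which is exactly the garbage problem the original poster wants to avoid.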
> On Friday, January 17, 2020 at 1:42:28 AM UTC-8 lokes...@gmail.com wrote:
>
>> Apache Ratis is a Java implementation of the Raft consensus protocol and 
>> uses gRPC as its transport. Currently we have multiple clients connecting 
>> to a server. The server has limited resources available to handle the 
>> client requests, and it fails the requests it cannot handle. These 
>> resources are in the application layer. Since client requests can be large, 
>> failing them creates a lot of garbage. We want to push back on the 
>> clients until resources become available, without creating a lot of garbage.
>> Based on my understanding, flow control in gRPC works by controlling the 
>> amount of data buffered at the receiver. In our use case we want the server 
>> to hold no more than x requests awaiting processing. Let's assume that the 
>> server enqueues the requests it receives in a queue for processing (I don't 
>> think the isReady control would work in this scenario?). Is it possible for 
>> the server to limit the number of requests it receives from the clients? Is 
>> it possible for the server to stop receiving data from the socket?
>>
>
