[grpc-io] grpc ruby error: `block in marshal_proc': undefined method `encode' for String:Class (NoMethodError)

2021-09-15 Thread Bo Tian
Hi All,

I got the error `block in marshal_proc': undefined method `encode' for 
String:Class (NoMethodError) when sending a protobuf message to the server. 
I wonder if someone can help me with this.

Actually, there is an encode method referenced in the generated service Ruby 
file. Here is the snippet:

class Service
  include GRPC::GenericService

  self.marshal_class_method = :encode
  self.unmarshal_class_method = :decode
  self.service_name = 'opentelemetry.proto.collector.metrics.v1.MetricsService'

  # For performance reasons, it is recommended to keep this RPC
  # alive for the entire life of the application.
  rpc :Export, ::ExportServiceRequest, ::ExportServiceResponse
end

Stub = Service.rpc_stub_class

Here is the call stack:

6: from ./metrics.rb:87:in `main'
5: from /Users/bo/.rvm/gems/ruby-2.6.6/gems/grpc-1.40.0-universal-darwin/src/ruby/lib/grpc/generic/service.rb:171:in `block (3 levels) in rpc_stub_class'
4: from /Users/bo/.rvm/gems/ruby-2.6.6/gems/grpc-1.40.0-universal-darwin/src/ruby/lib/grpc/generic/client_stub.rb:179:in `request_response'
3: from /Users/bo/.rvm/gems/ruby-2.6.6/gems/grpc-1.40.0-universal-darwin/src/ruby/lib/grpc/generic/interceptors.rb:170:in `intercept!'
2: from /Users/bo/.rvm/gems/ruby-2.6.6/gems/grpc-1.40.0-universal-darwin/src/ruby/lib/grpc/generic/client_stub.rb:180:in `block in request_response'
1: from /Users/bo/.rvm/gems/ruby-2.6.6/gems/grpc-1.40.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:352:in `request_response'

/Users/bo.tian/.rvm/gems/ruby-2.6.6/gems/grpc-1.40.0-universal-darwin/src/ruby/lib/grpc/generic/rpc_desc.rb:35:in `block in marshal_proc': undefined method `encode' for String:Class (NoMethodError)
Thanks,
Bo



[grpc-io] Re: Python: Using response streaming api from a done callback

2021-09-15 Thread 'Richard Belleville' via grpc.io
So this is an interesting problem. It certainly is unintuitive behavior. 
I'm also not sure if we should change it. Let me start by explaining the 
internals of gRPC Python a little bit.

A server-streaming RPC call requires the cooperation of two threads: the 
thread provided by the client application calling __next__ repeatedly 
(thread A) and a thread created by the gRPC library that drives the event 
loop in the C extension, which ultimately uses a mechanism like epoll 
(thread B). Under the hood, __next__ (thread A) just checks to see if 
thread B has received a response from the server and, if so, returns it to 
the client code. Normally, this works out just fine.

But thread B has some other responsibilities, including running any RPC 
callbacks. This means that in the scenario you described above, thread A 
and thread B are actually the same thread. So when __next__ is called, 
there is no separate thread to drive the event loop and receive the 
responses.

So that's the cause for the deadlock you described. Now, you might say that 
this is an easy problem to solve. Why not just run the callbacks on a *new* 
thread? 
Then there is no deadlock in this scenario. True. But we've found that 
additional Python threads kill performance because they're all contending 
for the GIL. Doing this at the library level could slow down *many* existing 
workloads. We've actually put quite a bit of effort into *reducing* the 
number of threads we use in the 
library. There are some options we could consider to make this work out of 
the box without destroying performance, but it's going to take some thought 
and careful benchmarking.

For the moment, I'd recommend that you not initiate an RPC from the 
callback handler. Instead, use the callback handler just to notify another 
thread that your application owns, whether that's the same thread the unary 
RPC was initiated from or some other thread that you've created yourself.
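
As a rough sketch of that pattern (reusing the placeholder names from the 
quoted code below, so `stub`, `request`, and the RPC names are assumptions 
rather than real API): the callback only signals, and a thread your 
application owns starts the streaming call.

import threading

# Sketch only: `stub` and `request` are placeholders from the quoted example.
def run(stub, request):
    done = threading.Event()

    def stream_when_done():
        # Runs on a thread the application owns, not on gRPC's callback thread.
        done.wait()  # block until the unary RPC's callback fires
        for response in stub.singleStreamApi(request):
            pass  # process each streamed response here

    worker = threading.Thread(target=stream_when_done)
    worker.start()

    fut = stub.singleSingleApi.future(request)
    # The callback only signals; it never calls next() on a streaming response,
    # so gRPC's internal thread stays free to drive the event loop.
    fut.add_done_callback(lambda _: done.set())
    worker.join()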

On Wednesday, September 15, 2021 at 1:09:22 AM UTC-7 Reino Ruusu wrote:

> A further clarification: The thread is not waiting for the future but 
> returns to the event loop. The callback function is definitely executed and 
> the deadlock happens in the call to next(). Also, the same callback 
> function is successful in synchronously making other single-single api 
> calls, but the single-streaming call is deadlocked.
>
> keskiviikko 15. syyskuuta 2021 klo 10.49.27 UTC+3 Reino Ruusu kirjoitti:
>
>> Of course I meant to write add_done_callback() instead of 
>> set_done_callback().
>>
>> To clarify, the code looks like this:
>>
>> it = stub.singleStreamApi(...)
>> next(it) # <-- This works as expected
>>
>> fut = stub.singleSingleApi.future(...)
>> def callback(fut):
>>     it = stub.singleStreamApi(...)
>>     next(it) # <-- This gets stuck in a deadlock
>> fut.add_done_callback(callback)
>>
>> keskiviikko 15. syyskuuta 2021 klo 10.40.46 UTC+3 Reino Ruusu kirjoitti:
>>
>>> I have a case in which a call is made to a 
>>> single-request-streaming-response api (through 
>>> UnaryStreamMultiCallable.__call__()). This api is invoked from a callback 
>>> that is registered using set_done_callback() to a future object returned by 
>>> a call to UnaryUnaryMultiCallable.future(), so that the streaming is 
>>> started asynchronously as soon as the previous call is finished.
>>>
>>> This causes the iterator that is returned for the streaming response to 
>>> deadlock in the first next() call, irrespective of whether the stream is 
>>> producing messages or an exception.
>>>
>>> The streaming call works as expected when called from some other context 
>>> than the done-callback of the previous asynchronous call. This makes me 
>>> suspect that some resource related to the channel is locked during the 
>>> callback execution, resulting in a deadlock in the call to the stream's 
>>> iterator.
>>>
>>> Is there some way around this?
>>>
>>> BR,
>>> -- 
>>> Reino Ruusu
>>>
>>>



[grpc-io] Re: GRPC support in Micropython?

2021-09-15 Thread 'Richard Belleville' via grpc.io
I don't see a way that this would work without significant effort. For 
starters, gRPC expects to be run on top of Linux, Windows, or MacOS. There 
are some forks that make the stack work on BSD, but that's not much 
different from Linux. Based on some quick investigation, Micropython is not 
only the interpreter, but 
also the operating system. You'd have to rewrite the lower layers of the 
Python gRPC stack to hook into the Micropython networking stack.

The second difficulty is that gRPC Python is implemented as a C extension. 
That is, the majority of the codebase is actually in C++, not Python. We 
offer a from-source distribution, but you'd likely have to play with things 
to get a cross-compilation environment set up properly.

There *is* an unofficial gRPC Python stack implemented in pure Python. 
I haven't used it personally, so I 
can't vouch for it, but it's possible that it would be easier to get it to 
run on Micropython.

On Friday, September 10, 2021 at 9:25:08 AM UTC-7 Ofir wrote:

> Hi,
> Is there a known way to use GRPC in micropython?
>
>



[grpc-io] Re: Send Dynamic gRPC messages without defining the message structure in proto

2021-09-15 Thread 'sanjay...@google.com' via grpc.io
Which language do you want to do this in? 

On Sunday, September 12, 2021 at 9:39:16 PM UTC-7 vasantha...@gmail.com 
wrote:

> Hi All! 
>
> How can we send a dynamic gRPC message without defining the message model 
> in a proto file?
>
>
> Say, for example, I want to send an int from the server to a client, but 
> I don't want to define a message with an int property in it. I want to 
> serialize the int data to a binary format in the business logic and write 
> it directly to the gRPC stream.
>
> Please let me know if you have any input on this.
> Thanks in advance. 
>
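
For reference, a minimal Python sketch of one way to do what is described 
above with grpcio: if a method is created with request_serializer and 
response_deserializer left as None, the library passes raw bytes through 
unchanged. The address and method path below are placeholders.

import struct

import grpc

channel = grpc.insecure_channel("localhost:50051")   # placeholder target
exchange = channel.unary_unary(
    "/example.DynamicService/Exchange",   # placeholder method path
    request_serializer=None,      # None => request must already be bytes
    response_deserializer=None,   # None => response is returned as raw bytes
)

payload = struct.pack(">i", 42)   # serialize an int to 4 big-endian bytes
raw_reply = exchange(payload)     # bytes in, bytes out
(value,) = struct.unpack(">i", raw_reply)

The server side would need a matching generic handler (for example, 
grpc.unary_unary_rpc_method_handler with its deserializer and serializer 
left as None) so that it also sees the payload as raw bytes; other languages 
have equivalents such as custom marshallers or codecs.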



[grpc-io] Re: question on GRPC DNS resolution

2021-09-15 Thread 'sanjay...@google.com' via grpc.io
Which gRPC language are you using? Is it C++? If so, this could be an 
issue with the "ares" DNS resolver used by default. I suggest reproducing 
the issue with logging enabled and filing an issue on GitHub.
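
For example, here is a sketch of how resolver traces could be captured. It 
is shown in Python, but GRPC_DNS_RESOLVER, GRPC_VERBOSITY, and GRPC_TRACE 
are read by the C core, so the same environment variables apply to a C++ 
binary; the target address is a placeholder.

import os

# These variables are read by the gRPC C core, so set them (or export them in
# the process environment) before gRPC is loaded and the channel is created.
os.environ["GRPC_VERBOSITY"] = "DEBUG"
os.environ["GRPC_TRACE"] = "cares_resolver"   # trace the default c-ares resolver
# os.environ["GRPC_DNS_RESOLVER"] = "native"  # uncomment to test the workaround

import grpc

channel = grpc.insecure_channel("some-intranet-host:50051")  # placeholder target
# Forcing the channel to connect triggers name resolution and the trace output.
grpc.channel_ready_future(channel).result(timeout=10)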

On Sunday, September 12, 2021 at 11:37:05 PM UTC-7 sureshb...@gmail.com 
wrote:

> In our intranet, GRPC servers are running on different computers. While 
> many of the servers connect without any issue, some servers fail with the 
> error "DNS resolution failed for service : :"
>
> The issue can be resolved by setting the environment variable 
> *GRPC_DNS_RESOLVER* to *native*.
>
> *Questions:*
>
>    1. Since all the servers are running in the same network, what could 
>    be the reason behind DNS resolution failing for only some servers?
>    2. Does setting *GRPC_DNS_RESOLVER* to *native* degrade name 
>    resolution performance, or does it have any other side effects?
>
> We need to decide whether this environment variable should be set as part 
> of our product installation to avoid similar issues in customer 
> environments.
>
> Please let me know if any other input, such as log files, is required for 
> analyzing this issue.
>



[grpc-io] Re: Python: Using response streaming api from a done callback

2021-09-15 Thread 'Reino Ruusu' via grpc.io
A further clarification: The thread is not waiting for the future but 
returns to the event loop. The callback function is definitely executed and 
the deadlock happens in the call to next(). Also, the same callback 
function is successful in synchronously making other single-single api 
calls, but the single-streaming call is deadlocked.

keskiviikko 15. syyskuuta 2021 klo 10.49.27 UTC+3 Reino Ruusu kirjoitti:

> Of course I meant to write add_done_callback() instead of 
> set_done_callback().
>
> To clarify, the code looks like this:
>
> it = stub.singleStreamApi(...)
> next(it) # <-- This works as expected
>
> fut = stub.singleSingleApi.future(...)
> def callback(fut):
>     it = stub.singleStreamApi(...)
>     next(it) # <-- This gets stuck in a deadlock
> fut.add_done_callback(callback)
>
> keskiviikko 15. syyskuuta 2021 klo 10.40.46 UTC+3 Reino Ruusu kirjoitti:
>
>> I have a case in which a call is made to a 
>> single-request-streaming-response api (through 
>> UnaryStreamMultiCallable.__call__()). This api is invoked from a callback 
>> that is registered using set_done_callback() to a future object returned by 
>> a call to UnaryUnaryMultiCallable.future(), so that the streaming is 
>> started asynchronously as soon as the previous call is finished.
>>
>> This causes the iterator that is returned for the streaming response to 
>> deadlock in the first next() call, irrespective of whether the stream is 
>> producing messages or an exception.
>>
>> The streaming call works as expected when called from some other context 
>> than the done-callback of the previous asynchronous call. This makes me 
>> suspect that some resource related to the channel is locked during the 
>> callback execution, resulting in a deadlock in the call to the stream's 
>> iterator.
>>
>> Is there some way around this?
>>
>> BR,
>> -- 
>> Reino Ruusu
>>
>>



[grpc-io] Re: Python: Using response streaming api from a done callback

2021-09-15 Thread 'Reino Ruusu' via grpc.io
Of course I meant to write add_done_callback() instead of 
set_done_callback().

To clarify, the code looks like this:

it = stub.singleStreamApi(...)
next(it) # <-- This works as expected

fut = stub.singleSingleApi.future(...)
def callback(fut):
    it = stub.singleStreamApi(...)
    next(it) # <-- This gets stuck in a deadlock
fut.add_done_callback(callback)

keskiviikko 15. syyskuuta 2021 klo 10.40.46 UTC+3 Reino Ruusu kirjoitti:

> I have a case in which a call is made to a 
> single-request-streaming-response api (through 
> UnaryStreamMultiCallable.__call__()). This api is invoked from a callback 
> that is registered using set_done_callback() to a future object returned by 
> a call to UnaryUnaryMultiCallable.future(), so that the streaming is 
> started asynchronously as soon as the previous call is finished.
>
> This causes the iterator that is returned for the streaming response to 
> deadlock in the first next() call, irrespective of whether the stream is 
> producing messages or an exception.
>
> The streaming call works as expected when called from some other context 
> than the done-callback of the previous asynchronous call. This makes me 
> suspect that some resource related to the channel is locked during the 
> callback execution, resulting in a deadlock in the call to the stream's 
> iterator.
>
> Is there some way around this?
>
> BR,
> -- 
> Reino Ruusu
>
>



[grpc-io] Python: Using response streaming api from a done callback

2021-09-15 Thread 'Reino Ruusu' via grpc.io
I have a case in which a call is made to a 
single-request-streaming-response api (through 
UnaryStreamMultiCallable.__call__()). This api is invoked from a callback 
that is registered using set_done_callback() to a future object returned by 
a call to UnaryUnaryMultiCallable.future(), so that the streaming is 
started asynchronously as soon as the previous call is finished.

This causes the iterator that is returned for the streaming response to 
deadlock in the first next() call, irrespective of whether the stream is 
producing messages or an exception.

The streaming call works as expected when called from some other context 
than the done-callback of the previous asynchronous call. This makes me 
suspect that some resource related to the channel is locked during the 
callback execution, resulting in a deadlock in the call to the stream's 
iterator.

Is there some way around this?

BR,
-- 
Reino Ruusu
