ping :)

On Tuesday, April 4, 2017 at 9:14:45 AM UTC+3, David Edery wrote:
>
>
>
> On Friday, March 31, 2017 at 10:49:32 PM UTC+3, Eric Anderson wrote:
>>
>> On Mon, Mar 27, 2017 at 10:11 PM, David Edery <
>> da...@intuitionrobotics.com> wrote:
>>
>>> 4. Create a request observer (of type 
>>> StreamObserver<StreamingRecognizeRequest>) by calling the speech client's 
>>> streamingRecognize function (the client is of type SpeechGrpc.SpeechStub)
>>>
>>> I haven't gotten into the details (yet), but I'm sure there's network 
>>> activity in the flow described above. I know it because I got an exception 
>>> for doing network activity when this flow was executed on the main (UI) 
>>> thread (which doesn't allow network I/O).
>>>
>>
>> #4 creates an RPC. So that's where the I/O should be.
>>
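
For reference, a minimal sketch of that step (assuming the generated Cloud
Speech v1 stubs on the classpath and the public speech.googleapis.com endpoint;
credentials setup is omitted):

    // Step #4 is the call that creates the RPC, i.e. where the network I/O
    // starts, so it must not run on the Android main (UI) thread.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("speech.googleapis.com", 443)
        .build();
    SpeechGrpc.SpeechStub stub = SpeechGrpc.newStub(channel);

    StreamObserver<StreamingRecognizeResponse> responseObserver =
        new StreamObserver<StreamingRecognizeResponse>() {
          @Override public void onNext(StreamingRecognizeResponse response) { /* handle result */ }
          @Override public void onError(Throwable t) { /* handle failure */ }
          @Override public void onCompleted() { /* server closed the stream */ }
        };

    StreamObserver<StreamingRecognizeRequest> requestObserver =
        stub.streamingRecognize(responseObserver);
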
>> "the frontend doesn't want the addition traffic" == RPC calls are ok but 
>>> anything else would be suspected as DDoS? (depends of course on the 
>>> frequency of the keep alive)
>>>
>>
>> I can't speak authoritatively, but I think it's more about the load and 
>> the lack of billing. If you aren't careful, keepalive pings can very easily 
>> eat up a significant portion of network/CPU. They are also mostly invisible, 
>> so it's very easy not to notice the unnecessary load.
>>
>>>>> Is there a way to know what the state of the channel is? I saw that 
>>>>> grpc-java issue #28 should address this with the 
>>>>> ManagedChannel.getState/notifyWhenStateChanged APIs (rel 1.2.0), but it's 
>>>>> not implemented yet.
>>>>>
>>>>
>>>> Nope. Note that the API wouldn't tell you anything in this case, since 
>>>> the problem isn't likely caused by gRPC going idle. But if it were 
>>>> implemented, it would provide you a way to "kick" gRPC into eagerly making 
>>>> a TCP connection.
>>>>
>>>
>>> A:
>>> So if I understand correctly (and please correct me if I'm wrong), once 
>>> the state API is available the flow would be something like:
>>> 1. Create the channel (as described above) with idleTimeout + a listener 
>>> on connectivity state changes
>>> 2. On a connectivity state change, go back to #1
>>> 3. Prior to using the channel, call getState(true) to eagerly connect it 
>>> (in case the idleTimeout was reached) if it is not connected, and then do 
>>> the actual streaming work
>>>
>>
>> #2 should be calling getState(true). #3 should never be necessary; 
>> getState(true) basically does the first half of setting up an RPC, making 
>> sure that a connection is available, but then doesn't send an RPC.
>>
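
A hypothetical sketch of that flow once the APIs from grpc-java issue #28 land
(getState(boolean)/notifyWhenStateChanged are not implemented in 1.2.0, so the
signatures below follow the proposed API, not a released one):

    // Re-arm the watcher on every state change and "kick" the channel with
    // getState(true) instead of rebuilding it.
    void watchChannel(final ManagedChannel channel) {
      // Passing 'true' asks the channel to exit idle and connect eagerly.
      ConnectivityState current = channel.getState(true);
      channel.notifyWhenStateChanged(current, new Runnable() {
        @Override
        public void run() {
          // State left 'current': kick the connection again and keep watching.
          watchChannel(channel);
        }
      });
    }
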
>
> Just to be sure that I understand the flow - for #2, when the connectivity 
> state changes, I don't need to rebuild the whole channel; I just need to 
> call getState(true). Right?
>
>
>>> B:
>>> Today, in step #1 (which doesn't include idleTimeout), if channel != null 
>>> && !channel.isShutdown && !channel.isTerminated I call channel.shutdownNow 
>>> and immediately create a new ManagedChannel (which means - the way I 
>>> understand it - that there's a channel in the process of shutting down 
>>> while I immediately create another channel which is wiring up). Just to 
>>> validate this point - is this flow ok? (shutting down one channel 
>>> instance while creating another channel for the same host).
>>>
>>
>> Shutting down a channel while creating another to the same host is safe. 
>> I probably would just check isShutdown; isTerminated can take some time 
>> since it needs to release resources. Semi-unrelated, but isTerminated == 
>> true implies isShutdown == true.
>>
>
> great. will use only isShutdown
>  
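
For completeness, a sketch of the recycling step (B) with that change; the
host/port and the channel field name are placeholders:

    // Recycle the channel, checking only isShutdown().
    if (channel != null && !channel.isShutdown()) {
      // Let the old channel tear down in the background; creating a new
      // channel to the same host while this one shuts down is safe.
      channel.shutdownNow();
    }
    channel = ManagedChannelBuilder
        .forAddress("speech.googleapis.com", 443)
        .build();
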
>
>>
>>> Given the future A and the current B, I assume that I will still need to 
>>> take care of the channel shutdown at the end of the streaming operation. 
>>> idleTimeout will not take care of it once the channel has been active, 
>>> right? From the documentation of idleTimeout: "By default the channel will 
>>> never go to idle mode after it leaves the initial idle mode". Is this a 
>>> correct assumption?
>>> Does the above flow (A+B) sound reasonable as a solution to an 
>>> always-ready channel requirement?
>>>
>>
>> Hmm... that documentation is a bit misleading. I just sent out a PR to 
>> improve it <https://github.com/grpc/grpc-java/pull/2870>.
>>
>> idleTimeout doesn't shut down a channel, but it would cause it to go idle 
>> (i.e., drop the TCP connection). The part of the documentation you linked to 
>> starts with "by default"; that means "if you don't call idleTimeout."
>>
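
So opting in would look roughly like this (the timeout value is only
illustrative):

    // With idleTimeout set, the channel drops its TCP connection after the
    // given period of inactivity, but the ManagedChannel itself stays usable;
    // the next RPC (or getState(true)) reconnects it.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("speech.googleapis.com", 443)
        .idleTimeout(5, TimeUnit.MINUTES)
        .build();
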
>
> Thank you for clarifying
>
>
> There's another, probably unrelated issue with a channel that reaches the 
> streaming limit - if I stream more than 65 seconds using the same 
> channel, I get an exception. I assume that the source of this exception is 
> the Speech API itself and not internal gRPC logic (is my assumption 
> correct?). Currently I'm handling this by:
> A. Not streaming more than 65 seconds of audio data
> B. Once I get the final result from the Speech API, immediately creating 
> another channel using the flow described above
>
> If my assumption is correct, I guess that's the way to avoid the 
> exception. If not, is there a way to reuse the channel by calling a kind 
> of "reset" function? (just like your suggestion above for #2, where the 
> channel should be reused by calling getState(true) instead of creating a 
> new channel)
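
A rough sketch of workaround A as described (streamStartMillis, requestObserver
and the exact cutoff are placeholders; the setter name assumes the v1
StreamingRecognizeRequest proto):

    // Half-close the request stream before the ~65s audio limit is reached;
    // after the final result arrives, a new channel/stream is created (B).
    private static final long MAX_STREAM_MILLIS = 60_000;  // stay under the limit

    void sendAudioChunk(byte[] chunk) {
      if (System.currentTimeMillis() - streamStartMillis > MAX_STREAM_MILLIS) {
        requestObserver.onCompleted();  // half-close; wait for the final result
        return;
      }
      requestObserver.onNext(StreamingRecognizeRequest.newBuilder()
          .setAudioContent(ByteString.copyFrom(chunk))
          .build());
    }
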
>
>
>
