[grpc-io] [grpc-go] ClientConn exposes WaitForConnection API

2023-06-07 Thread yishu
Hey grpc community,
I'm sorry if this has already been asked; I did a quick search and didn't
find an answer.

Is it possible for ClientConn to expose a WaitForConnection API that blocks
until the connectivity state becomes READY? Something like:
```
// WaitForConnection blocks until the connection state is READY, the
// deadline expires, or the ClientConn is closed. A zero deadline means
// wait indefinitely.
func (cc *ClientConn) WaitForConnection(deadline time.Time) {
    ctx := context.Background()
    if !deadline.IsZero() {
        var cancel context.CancelFunc
        ctx, cancel = context.WithDeadline(ctx, deadline)
        defer cancel()
    }
    for {
        state := cc.GetState()
        if state == connectivity.Ready {
            return
        }
        // WaitForStateChange returns false once ctx is done or cc is closed.
        if !cc.WaitForStateChange(ctx, state) {
            return
        }
    }
}
```
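
For context, something along these lines can already be built on the caller's
side with the existing connectivity API (the waitForReady helper and the
target address below are just illustrative, not part of grpc-go); a
WaitForConnection method on ClientConn would simply make this pattern
first-class:
```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
	"google.golang.org/grpc/credentials/insecure"
)

// waitForReady blocks until cc reaches READY, the context is done, or the
// ClientConn is closed. It reports whether READY was actually reached.
func waitForReady(ctx context.Context, cc *grpc.ClientConn) bool {
	cc.Connect() // ask an idle ClientConn to start connecting
	for {
		state := cc.GetState()
		if state == connectivity.Ready {
			return true
		}
		// WaitForStateChange returns false when ctx expires or cc is closed.
		if !cc.WaitForStateChange(ctx, state) {
			return false
		}
	}
}

func main() {
	cc, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer cc.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if !waitForReady(ctx, cc) {
		panic("connection did not become READY in time")
	}
	// connection is READY; issue RPCs here
}
```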
yishu

-- 
戴翊書
Yi-Shu Tai
Software Engineer @Dropbox

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAK%2B0oQG7_t8ShAzCu7ZLg6O86JL%3DtMUnM3fX9v%2BVTnYGf78CAg%40mail.gmail.com.


[grpc-io] gRPC assertion failure in 1.7.2

2023-06-07 Thread Shivteja Ayyagari
Hello,

I am facing a crash in the gRPC library:

Backtrace
#0  0x7f0a93b7547f in __pthread_kill_implementation () from
/home/tmp_work_dirs/ard/tmp-1685948391-17258/tmp/lib64/libc.so.6

#1  0x7f0a93b2b3c2 in raise () from
/home/tmp_work_dirs/ard/tmp-1685948391-17258/tmp/lib64/libc.so.6

#2  0x7f0a93b1643f in abort () from
/home/tmp_work_dirs/ard/tmp-1685948391-17258/tmp/lib64/libc.so.6

#3  0x7f0a93ec6a11 in call_start_batch (exec_ctx=exec_ctx@entry
=0x7f0a91a14c50, call=call@entry=0x7f0a644ee6b8, ops=ops@entry
=0x7f0a91a14ce0, nops=nops@entry=1, notify_tag=notify_tag@entry
=0x7f0a64013f80, is_notify_tag_closure=is_notify_tag_closure@entry
=0) at src/core/lib/surface/call.c:2032

Apparently the completion queue has already been drained; next is still being
called on it, so next returns false, which triggers the assert. Is there a way
to determine whether this is actually the root cause?

Further details:

(gdb) p *((cq_next_data*)(call->cq+1))

$53 = {queue = {queue_lock = {atm = 0}, queue = {head = 139682470484976,
padding = '\000' , tail = 0x7f0a644b34a8, stub = {next =
139682609301216}}, num_queue_items = 4},

  things_queued_ever = 21872, pending_events = 0, shutdown_called = true}

Pending events is zero.

Is there any documentation explaining in detail how the completion queue
works?
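
For context, my current understanding of the expected completion queue
lifecycle is sketched below (against the C core surface API, not my exact
code; drain_and_destroy_cq is just an illustrative helper). The key ordering
is that no grpc_call_start_batch may target the queue once shutdown has
begun, and the queue is destroyed only after next has returned
GRPC_QUEUE_SHUTDOWN:
```c
#include <grpc/grpc.h>
#include <grpc/support/time.h>

/* Shut a completion queue down and drain it before destroying it. */
static void drain_and_destroy_cq(grpc_completion_queue* cq) {
  grpc_completion_queue_shutdown(cq);
  for (;;) {
    grpc_event ev = grpc_completion_queue_next(
        cq, gpr_inf_future(GPR_CLOCK_REALTIME), NULL);
    if (ev.type == GRPC_QUEUE_SHUTDOWN) break; /* fully drained */
    /* handle any remaining tags here (ev.tag, ev.success) */
  }
  grpc_completion_queue_destroy(cq);
}
```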
-- 



Thanks & Regards,

Shivteja Ayyagari

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAAdBGgUkUab2rGF%3DUt7ZQHv97zP_ar4XBQoXDQzFDnZYqNYjEA%40mail.gmail.com.


[grpc-io] Re: Upgrading grpcio (+grpcio-tools) from 1.48.1 to 1.54.2 caused a massive memory leak

2023-06-07 Thread 'Richard Belleville' via grpc.io
> If allowed, I can provide a link to my project on GitHub

Absolutely, please share. There's not much we can investigate with only the 
information provided so far.

> Trying other versions older than 1.48.1 - the same result - massive 
memory leak.

Is this a typo? Do you mean versions *newer* than 1.48.1 show the memory 
leak, or do you really mean "older"?

On Sunday, June 4, 2023 at 2:11:42 AM UTC-7 Jerry wrote:

> [image: grpcio.jpg]
> Some instances of my app: green and blue were fully rolled back to the 
> previous version; for yellow, only grpcio (+grpcio-tools) was downgraded to 
> 1.48.1. Trying other versions older than 1.48.1 - the same result - massive 
> memory leak.
>
> If allowed, I can provide a link to my project on GitHub
>
> Regards, Jerry.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ecc33a66-ddcf-4928-9fb7-3202ce70a4a4n%40googlegroups.com.


[grpc-io] Re: Live camera streaming using grpc python

2023-06-07 Thread 'Richard Belleville' via grpc.io
Sanket, you've shared your servicer, but you haven't shared the code that's 
enqueueing data into your camera_buffer. If you have a single writer 
enqueueing objects into a global buffer at a constant rate, then when you 
have one active RPC, you'll be getting the expected result -- each frame is 
delivered to the client. But if you have two connected clients you'll have 
two readers on your queue and each frame will be received by only one of 
them. So you'll basically be sending even frames to one client and odd 
frames to another.
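
To see that splitting effect in isolation (no gRPC involved), here is a
minimal sketch: one producer and two consumers sharing a single queue.Queue,
where each item is delivered to exactly one consumer.
```python
import queue
import threading

buf = queue.Queue()

def consumer(name):
    # Each get() removes the item from the shared queue, so with two
    # consumers each one sees only about half of the "frames".
    while True:
        item = buf.get()
        if item is None:  # sentinel: stop
            break
        print(name, "got frame", item)

threads = [threading.Thread(target=consumer, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()

for i in range(10):    # producer enqueues 10 "frames"
    buf.put(i)
for _ in threads:      # one stop sentinel per consumer
    buf.put(None)
for t in threads:
    t.join()
```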

What you do instead will depend on the sort of behavior you want. If you 
are okay with potential loss of frames on a stream, then you can do 
something very simple. Keep a global variable with a single frame, along 
with a threading.Lock and a threading.Condition. The writer will signal all 
awaiting threads each time it changes the frame. Each stream will be 
waiting on the condition variable, read the frame when signalled, and send 
it on its stream.
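
Roughly, a sketch of that lossy approach (class and method names here are
illustrative, not from your code): a single shared slot holding the latest
frame plus a Condition that the camera thread notifies on every update; each
RPC handler waits on it and yields whatever frame is current.
```python
import threading
import time

class LatestFrame:
    """Holds only the most recent frame; slow readers may skip frames."""
    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._version = 0

    def publish(self, frame):
        # Called by the camera thread for every captured frame.
        with self._cond:
            self._frame = frame
            self._version += 1
            self._cond.notify_all()  # wake every waiting stream

    def wait_for_new(self, last_version):
        # Block until a frame newer than last_version is available.
        with self._cond:
            while self._version == last_version:
                self._cond.wait()
            return self._frame, self._version

latest = LatestFrame()

# Inside the servicer (sketch, using your generated camera_stream_pb2):
# def CameraStream(self, request, context):
#     version = 0
#     while context.is_active():
#         frame, version = latest.wait_for_new(version)
#         yield camera_stream_pb2.Frame(frame=frame,
#                                       timestamp=int(time.time()))
```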

If you can't tolerate the potential loss of frames, then you'll need a data 
structure a little more complicated. Each frame needs to be kept in memory 
until all readers have received it. Only then can it be purged. You'd keep 
frames in a global buffer. Each time you pull a frame from the buffer, 
increase a count on the frame. Whichever reader happens to read it last 
will observe that the count has reached the global number of readers and 
can purge it from memory. Then you'll need to factor in the fact that 
clients can come and go, so that the total number of readers may change at 
any time.
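
If it helps, here is a sketch of a related but somewhat simpler way to get
the no-loss behavior: instead of reference-counting frames in one shared
buffer, give every connected client its own queue and fan each frame out to
all of them, so a frame is only dropped once the owning client has pulled it
(the FrameFanout name and its methods are illustrative, not an existing API).
The trade-off is that memory grows if a client falls behind.
```python
import queue
import threading

class FrameFanout:
    """Fan-out buffer: every subscriber gets its own queue of frames."""
    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = []

    def subscribe(self):
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

    def unsubscribe(self, q):
        with self._lock:
            self._subscribers.remove(q)

    def publish(self, frame):
        # Called by the camera thread; every subscriber receives the frame.
        with self._lock:
            for q in self._subscribers:
                q.put(frame)

fanout = FrameFanout()

# Inside the servicer (sketch):
# def CameraStream(self, request, context):
#     q = fanout.subscribe()
#     try:
#         while context.is_active():
#             yield camera_stream_pb2.Frame(frame=q.get(),
#                                           timestamp=int(time.time()))
#     finally:
#         fanout.unsubscribe(q)
```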

Regardless, the issue isn't really with gRPC. This is more about 
multithreading in Python.
On Wednesday, June 7, 2023 at 10:19:39 AM UTC-7 yas...@google.com wrote:

> Are you tied to gRPC Python or could you also experiment with another 
> language?
>
> On Saturday, June 3, 2023 at 12:16:42 AM UTC-7 Sanket Kumar Mali wrote:
>
>> my proto file
>>
>> syntax = "proto3";
>>
>> package camera_stream;
>>
>> // Camera frame message
>> message Frame {
>>   bytes frame = 1;
>>   int64 timestamp = 2;
>> }
>>
>> // Camera stream service definition
>> service CameraStream {
>>   // Method to connect and start receiving camera frames
>>   rpc CameraStream(Empty) returns (stream Frame) {}
>> }
>> // Empty message
>> message Empty {}
>>
>> On Saturday, 3 June 2023 at 12:32:28 UTC+5:30 Sanket Kumar Mali wrote:
>>
>>> my server code
>>>
>>> camera_buffer = queue.Queue(maxsize=20)
>>>
>>> # Define the gRPC server class
>>> class CameraStreamServicer(camera_stream_pb2_grpc.CameraStreamServicer):
>>> def __init__(self):
>>> self.clients = []
>>>
>>> def CameraStream(self, request, context):
>>> global camera_buffer
>>> # Add the connected client to the list
>>> self.clients.append(context)
>>> try:
>>> while True:
>>> print("size: ",camera_buffer.qsize())
>>> frame = camera_buffer.get(timeout=1)  # Get a frame 
>>> from the buffer
>>>
>>> # Continuously send frames to the client
>>> for client in self.clients:
>>> try:
>>> response = camera_stream_pb2.Frame()
>>> response.frame = frame
>>> response.timestamp = int(time.time())
>>> yield response
>>> except grpc.RpcError:
>>> # Handle any errors or disconnections
>>> self.clients.remove(context)
>>> print("Client disconnected")
>>> except Exception as e:
>>> print("unlnown error: ", e)
>>>
>>>
>>> in a separate thread I am getting frames from the camera and populating 
>>> the buffer
>>>
>>>
>>> On Monday, 22 May 2023 at 12:16:57 UTC+5:30 torpido wrote:
>>>
 What happens if you run the same process in parallel, and serve in each 
 one a different client?
 just to make sure that there is no issue with the bandwidth in the 
 server.

 I would also set debug logs for gRPC to get more info

 Can you share the RPC and server code you are using? Seems like it 
 should be a *request-streaming RPC*.
 On Saturday, May 13, 2023 at 16:21:41 UTC+3, Sanket Kumar Mali wrote:

> Hi,
> I am trying to implement a live camera streaming setup using gRPC 
> Python. I was able to stream camera frames (1280x720) at 30 fps to a single 
> client. But whenever I try to consume the stream from multiple clients, the 
> frame rate seems to get divided (e.g. if I connect two clients, the frame 
> rate becomes 15 fps).
> I am looking for a pointer to where I am going wrong. I would appreciate any 
> clue on the right way to achieve multi-client streaming.
>
> Thanks
>



[grpc-io] Re: Live camera streaming using grpc python

2023-06-07 Thread 'yas...@google.com' via grpc.io
Are you tied to gRPC Python or could you also experiment with another 
language?


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/b66d7ceb-3b1f-4d4a-87a9-eb48c5db2447n%40googlegroups.com.


[grpc-io] Re: Updates on gRPC C# (Grpc.Core) support

2023-06-07 Thread Zaphod Stardust
@Jan Tattermusch:
I understand that the "maintenance mode" period for Grpc.Core is now over.
Does that mean Grpc.Core is now officially deprecated?

Which rules / support policies apply now?
-> No more fixes (even for security), apart from those contributed by the 
open source community?

Thanks for clarifying!

Jan Tattermusch wrote on Tuesday, May 3, 2022 at 11:25:07 UTC+2:

> Hello gRPC C# Users!
>
> In May 2021 we announced that 
> Grpc.Core (the original C# implementation of gRPC) became "maintenance 
> only" and that grpc-dotnet will be the recommended implementation going 
> forward. We also announced that Grpc.Core will become deprecated in the 
> future.
>
> While all the above is still the plan, we are making some adjustments 
> based on the user feedback we received. We also wanted to publish more 
> details about the plan and its technical execution. All the important 
> updates are summarized in the following sections of this announcement.
>
> Grpc.Core maintenance period will be extended by 1 more year (until May 
> 2023)
>
> Originally we planned to deprecate the Grpc.Core implementation in May 
> 2022, but the feedback we received from users has indicated that extending 
> the maintenance period would make sense. Without going too much into the 
> details, the main points of the feedback can be summarized as:
>
>    - The main blocker for deprecating Grpc.Core is the lack of support of 
>      the legacy .NET Framework in grpc-dotnet. The desire to migrate off the 
>      legacy .NET framework is often there, but migrating workloads from .NET 
>      Framework to .NET Core / .NET 6 simply takes time and effort.
>
>    - Grpc.Core is a very important technology for enabling migration off 
>      .NET Framework (since it enables piece-by-piece migration by 
>      interconnecting components on newer .NET platforms with components that 
>      remain on .NET Framework), so supporting it for a little longer can 
>      (somewhat paradoxically) help users migrate off it faster.
>
> As a result, we are delaying the deprecation of Grpc.Core until May 2023 
> (1 year from now, and 2 years after the original announcement). Until 
> then, Grpc.Core will remain supported in "maintenance mode", as 
> described below.
>
> Since the plan to deprecate Grpc.Core has now been publicly known for a 
> while and since the main reason we are extending the maintenance period is 
> to deal with the issues related to the legacy .NET Framework (and migration 
> off it), we also want to clarify what exactly will be covered by the 
> "Grpc.Core maintenance" going forward:
>
>    - The main goal of keeping Grpc.Core alive is to maintain the ability to 
>      run gRPC C# clients and servers on the legacy .NET Framework on Windows. 
>      This will be taken into account when considering issues / fixes.
>
>    - We will only provide critical and security fixes going forward. This 
>      is to minimize the maintenance costs and reflects the fact that 
>      grpc-dotnet is the recommended implementation to use.
>
>    - There will be no new features for Grpc.Core. Note that since Grpc.Core 
>      is moving to a maintenance branch (see section below), there will also 
>      be no new features coming from the native C-core layer.
>
>    - There will be no new platform support and portability work. The focus 
>      will be on continuing support for the legacy .NET Framework on Windows 
>      (where there is no alternative implementation to use) and the list of 
>      supported platforms will not be expanded (e.g. we will not work towards 
>      better support for Unity, Xamarin, Alpine Linux etc.). We will likely 
>      drop support for platforms that have so far been considered 
>      "experimental" (e.g. Unity and Xamarin), since they are also hard to 
>      test and maintain.
>
>    - Work to support new .NET versions (.NET 6, .NET 7, …) will be kept to a 
>      minimum (or not done at all) since those .NET versions fully support 
>      grpc-dotnet.
>
>    - No more performance work: since the main purpose of Grpc.Core is to 
>      maintain interoperability with the legacy .NET Framework, there will be 
>      less focus on performance. We do not expect any significant performance 
>      drops, but performance may degrade over time if tradeoffs between 
>      performance and maintainability are needed.
>
> Grpc.Core moves to a maintenance branch in the grpc/grpc repository (while 
> other actively developed packages move to grpc/grpc-dotnet repository)
>
> To simplify the maintenance of Grpc.Core, we decided to move the 
> Grpc.Core implementation to a maintenance branch (v1.46.x 
> on the grpc/grpc repository), where it will continue to receive security 
> and critical fixes, but will not slow down the development of the native 
> C-core library it is based on.