[grpc-io] Re: Grpc excessive default memory usage

2020-05-13 Thread ravi . y0102
Thanks Eric for the reply.

A couple of things. I am not quite able to follow when you say that gRPC Java is 
not optimized for this type of large file transfer. My individual message size is 
only 1024 bytes: the program reads that many bytes from the file at a time and 
passes them to the stream observer's onNext() call, so each message sent on the 
wire should be only a little over 1 KB.

Also, I just changed the gRPC version to *1.29.0* and saw that the native 
memory usage is almost zero with this version. The high usage that I saw  
was on version *1.21.0*. 

Regards,

On Thursday, May 14, 2020 at 3:26:12 AM UTC+5:30, Eric Gribkoff wrote:
>
> gRPC Java may not be optimized for this type of large file transfer, and 
> is likely copying the provided bytes more than once which could explain why 
> you see additional direct memory allocation. These should be all released 
> once the data goes out over the wire. It looks from your code snippet that 
> the entire file is immediately buffered - we would expect to see the memory 
> consumption decrease as bytes are sent out to the client.
>
> Further investigation of this would probably be best carried out on the 
> grpc java repository at https://github.com/grpc/grpc-java/ if you'd care 
> to file an issue/reproduction case there.
>
> Thanks,
>
> Eric
> On Monday, May 11, 2020 at 3:22:35 AM UTC-7 ravi@gmail.com wrote:
>
>> I have been testing gRPC behaviour for larger message size on my machine. 
>> I have got a single client to which I am streaming a video file of 603mb 
>> via streaming gRPC. I ran into OOM while testing and found that in case of 
>> slow clients, response messages were getting queued up and I was getting 
>> below error msg:
>> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 
>> 16777216 byte(s) of direct memory (used: 1873805512, max: 1890582528)
>>
>>
>> Now this is fixable via flow control (using the onReady() channel handler), but 
>> out of curiosity I increased my direct memory to 4GB via the 
>> -XX:MaxDirectMemorySize=4g JVM flag to force queuing up of all response 
>> messages so that the client can consume at its own pace. It completed 
>> successfully, but I observed that I ended up using 2.4GB of direct memory. I 
>> checked it via usedDirectMemory() exposed by Netty for ByteBufAllocatorMetric.
>>
>> *Isn't this too much for a 603mb file, as it is 3 times the total file 
>> size?* Below is the code snippet that I am using:
>>
>> stream = new FileInputStream(file);
>> byte[] buffer = new byte[1024];
>> ByteBufAllocator byteBufAllocator = ByteBufAllocator.DEFAULT;
>> int length;
>> while ((length = stream.read(buffer)) > 0) {
>>     response.onNext(VideoResponse.newBuilder()
>>         .setVideoBytes(ByteString.copyFrom(buffer))
>>         .build());
>>     if (ByteBufAllocator.DEFAULT instanceof ByteBufAllocatorMetricProvider) {
>>         ByteBufAllocatorMetric metric =
>>             ((ByteBufAllocatorMetricProvider) byteBufAllocator).metric();
>>         System.out.println(metric.usedDirectMemory() / (1024 * 1024));
>>     }
>> }
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e9584663-f06e-4a22-9307-520409511eab%40googlegroups.com.


[grpc-io] Re: GRPC Seg faults during streaming

2020-05-13 Thread 'Yang Gao' via grpc.io
This seems like some kind of memory corruption. I suggest you run the 
program under a memory tool such as AddressSanitizer.

On Wednesday, May 13, 2020 at 3:04:06 AM UTC-7 deepankar wrote:

> I am using gRPC for streaming audio streams. My gRPC client runs fine most 
> of the time, but crashes occasionally. I have implemented an async client 
> using CompletionQueue.
>
> Here is the stack trace of my application seg fault.
>
> 0x7f60957976b6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
> 0x7f6095797701 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
> 0x7f609579823f in __cxa_pure_virtual () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
> 0x7f60961f5302 in grpc::CompletionQueue::AsyncNextInternal(void**, bool*, gpr_timespec) () from /usr/local/lib/libgrpc++.so.1
> 0x7f60965a4edf in grpc::ByteBuffer::Swap(grpc::ByteBuffer*) () from /usr/local/unimrcp/plugin/vairecog.so
> 0x7f60965ac353 in CCACallStream::reset() () from /usr/local/unimrcp/plugin/vairecog.so
> 0x7f60965bc6bb in std::_Bind (CCACallStream::*)()> (CCACallStream*)> ()>* std::__addressof (CCACallStream::*)()> (CCACallStream*)> ()> >(std::_Bind (CCACallStream*)> ()>&) () from /usr/local/unimrcp/plugin/vairecog.so
>
> I also get some grpc core dumps though I am not sure if it is related to 
> this problem.
>
> #0 0x7fef70ec8f00 in grpc_cq_end_op(grpc_completion_queue*, void*, grpc_error*, void (*)(void*, grpc_cq_completion*), void*, grpc_cq_completion*) () from /usr/local/lib/libgrpc.so.7
> #1 0x7fef7157d421 in grpc_impl::internal::AlarmImpl::Set(grpc::CompletionQueue*, gpr_timespec, void*)::{lambda(void*, grpc_error*)#1}::_FUN(void*, grpc_error*) () from /usr/local/lib/libgrpc++.so.1
> #2 0x7fef70ea6e31 in grpc_core::ExecCtx::Flush() () from /usr/local/lib/libgrpc.so.7
> #3 0x7fef70eb7e68 in ?? () from /usr/local/lib/libgrpc.so.7
> #4 0x7fef70f695b3 in ?? () from /usr/local/lib/libgrpc.so.7
> #5 0x7fef738116ba in start_thread (arg=0x7fef6a7fc700) at pthread_create.c:333
> #6 0x7fef7354741d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
>
> Thanks in advance for any help.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3ea8bcd4-4720-418e-b423-745c87f11364%40googlegroups.com.


Re: [grpc-io] gRFC L65: Additional PyPI Packages for gRPC Python

2020-05-13 Thread Kailash Sethuraman
Have you considered using the name for a new meta-package that bundles the
related packages?

   - grpcio
   - grpcio-status
   - grpcio-channelz
   - grpcio-reflection
   - grpcio-health-checking
   -

Or even a kitchen sink package that includes grpcio-testing and
grpcio-tools.

On Wed, May 13, 2020 at 7:32 PM 'Lidi Zheng' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Abstract:
>
> gRPC Python is uploaded as "grpcio" on PyPI, but there is another package
> named "grpc". Hence, some users are confused. This document proposes to
> upload additional packages named "grpc" and "grpc-*" to guide users to
> official packages.
>
> gRFC:
> https://github.com/lidizheng/proposal/blob/L65-python-package-name/L65-python-package-name.md
> PR: https://github.com/grpc/proposal/pull/177
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/482e22f0-4044-4f7e-81b9-179f70ac5be5%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAFEPE9QMiUp_JV_7atXn%3DOrYR0FSwDt_efdrgB2%3DkNikG_GE%3Dw%40mail.gmail.com.


[grpc-io] gRFC L65: Additional PyPI Packages for gRPC Python

2020-05-13 Thread 'Lidi Zheng' via grpc.io
Abstract:

gRPC Python is uploaded as "grpcio" on PyPI, but there is another package 
named "grpc". Hence, some users are confused. This document proposes to 
upload additional packages named "grpc" and "grpc-*" to guide users to 
official packages.

gRFC: 
https://github.com/lidizheng/proposal/blob/L65-python-package-name/L65-python-package-name.md
PR: https://github.com/grpc/proposal/pull/177

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/482e22f0-4044-4f7e-81b9-179f70ac5be5%40googlegroups.com.


Re: [grpc-io] Synchronize gRPC threads / calls using shared_futures and share data between threads in C++

2020-05-13 Thread Jeff Steger
I think the use case you are describing is better implemented using a
condition variable. All threads wait on the condition variable, and
NewCommandMsg signals them all. The threads then execute and can wait again
on the same condition. This is how the producer/consumer pattern works, which
is essentially what you are implementing.
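
Roughly, as a sketch (untested, and assuming the generated types from your
.proto - Command::Service, CommandRequest, CommandMessage - plus a made-up
name for the generated header; I follow the method names from your code), it
could look like this:

#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>

#include <grpcpp/grpcpp.h>
#include "command.grpc.pb.h"  // made-up name for your generated header

class CommandServiceImpl final : public Command::Service {
  std::mutex mu_;
  std::condition_variable cv_;
  uint64_t seq_ = 0;        // incremented once per NewCommandMsg call
  CommandMessage latest_;   // last command received

  grpc::Status RegisterCommand(grpc::ServerContext* context,
                               const CommandRequest* /*request*/,
                               grpc::ServerWriter<CommandMessage>* writer) override {
    uint64_t seen = 0;
    while (!context->IsCancelled()) {
      CommandMessage copy;
      {
        std::unique_lock<std::mutex> lock(mu_);
        // Wait (with a timeout so cancellation is eventually noticed) until a
        // command newer than the last one this thread wrote has arrived.
        if (!cv_.wait_for(lock, std::chrono::seconds(1),
                          [&] { return seq_ > seen; })) {
          continue;  // timed out, re-check IsCancelled()
        }
        seen = seq_;
        copy = latest_;
      }
      writer->Write(copy);  // each registered client writes the new command once
    }
    return grpc::Status::CANCELLED;
  }

  grpc::Status NewCommandMsg(grpc::ServerContext* /*context*/,
                             const CommandMessage* request,
                             google::protobuf::Empty* /*response*/) override {
    {
      std::lock_guard<std::mutex> lock(mu_);
      latest_ = *request;
      ++seq_;
    }
    cv_.notify_all();  // wake every waiting RegisterCommand thread
    return grpc::Status::OK;
  }
};

Note that each RegisterCommand thread only picks up the most recent command if
several arrive between wakeups; since you guarantee one NewCommandMsg call at a
time, that should be fine. The timeout in wait_for is only there so the loop
can notice client cancellation.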

On Wed, May 13, 2020 at 4:36 PM  wrote:

> Hey,
> for a project I start understanding how gRPC works. For this I implemented
> the following setup:
>
> A C++ server using the sync API offers two services
> RegisterCommand(streaming) and NewCommandMsg(blocking). This is the .proto
> definition:
>
> service Command {
> rpc RegisterCloud (CommandRequest) returns (stream CommandMessage) {}
> rpc NewCommandMsg (CommandMessage) returns (google.protobuf.Empty) {}
> }
> what do I try to achieve?
>
> Multiple clients shall call RegisterCommand and the server shall block
> inside the procedure until a call to NewCommandMsg happened (I guarantee
> that only one single call happens at a time). If NewCommandMsg is called,
> the argument CommandMessage shall be transported to every thread of
> RegisterCommand (I understood every call is handled in a thread), the
> thread shall be unblocked and the CommandMessage shall be written to the
> stream. After that, the threads of RegisterCommand shall be blocked again
> and wait for the next call to NewCommandMsg. Later, the NewCommandMsg will
> be replaced by a single non-grpc thread.
>
> What did I already do
>
> I read a lot about (shared) futures, promise, mutex and conditional
> variables in C++ and implemented the following code.
>
> class CommandServiceImpl final : public Command::Service {
>     // To my understanding these are common for all threads
>     std::promise<CommandMessage> newCommandPromise;
>     std::shared_future<CommandMessage> newCommandFuture =
>         this->newCommandPromise.get_future();
>
>     // To my understanding this is executed in its own thread
>     Status RegisterCommand(ServerContext* context, const CommandRequest* request,
>                            ServerWriter<CommandMessage>* writer) override {
>         // Each thread gets its own copy of the shared future
>         std::shared_future<CommandMessage> future = this->newCommandFuture;
>         while (!context->IsCancelled()) {
>             future.wait();
>             (void)future.get();
>             std::cout << "distributing command" << std::endl;
>             // actual writing would happen here
>         }
>
>         return Status::CANCELLED;
>     }
>
>     // To my understanding this is executed in its own thread
>     Status NewCommandMsg(ServerContext* context, const CommandMessage* request,
>                          google::protobuf::Empty* response) override {
>         std::promise<CommandMessage> promise = std::move(this->newCommandPromise);
>
>         std::cout << "new command received" << std::endl;
>         promise.set_value(*request);
>
>         // Provide a new promise for the next call.
>         // In my evaluation phase, I guarantee that only one client at a time
>         // will call NewCommandMsg.
>         std::promise<CommandMessage> cleanPromise;
>         this->newCommandPromise = std::move(cleanPromise);
>
>         return Status::OK;
>     }
> };
> What happens with that code
>
> After one or multiple concurrent calls to RegisterCommand, the server
> blocks and after a call to NewCommandMessage, the future.wait() unblocks,
> which is expected. After that, of course future.wait() is always
> non-blocking, so that the threads run in an infinite loop. But it may only
> run exactly once and then wait for new data to be available.
>
> It seems that it is not possible to "reuse" an existing future. Any ideas
> on how to achieve my goal?
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/78dbca9e-9ece-4632-a18a-6a2bd12f93d6%40googlegroups.com
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAA-WHumDH3n2TdQVu%2BkT6UT16qKSPkpoYh_UwEfsv24ijN9uTA%40mail.gmail.com.


[grpc-io] Re: Grpc excessive default memory usage

2020-05-13 Thread 'ericgr...@google.com' via grpc.io
gRPC Java may not be optimized for this type of large file transfer, and is 
likely copying the provided bytes more than once which could explain why 
you see additional direct memory allocation. These should be all released 
once the data goes out over the wire. It looks from your code snippet that 
the entire file is immediately buffered - we would expect to see the memory 
consumption decrease as bytes are sent out to the client.

Further investigation of this would probably be best carried out on the 
grpc java repository at https://github.com/grpc/grpc-java/ if you'd care to 
file an issue/reproduction case there.

Thanks,

Eric
On Monday, May 11, 2020 at 3:22:35 AM UTC-7 ravi@gmail.com wrote:

> I have been testing gRPC behaviour for larger message size on my machine. 
> I have got a single client to which I am streaming a video file of 603mb 
> via streaming gRPC. I ran into OOM while testing and found that in case of 
> slow clients, response messages were getting queued up and I was getting 
> below error msg:
> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 
> byte(s) of direct memory (used: 1873805512, max: 1890582528)
>
>
> Now this is fixable via flow control (using the onReady() channel handler), but 
> out of curiosity I increased my direct memory to 4GB via the 
> -XX:MaxDirectMemorySize=4g JVM flag to force queuing up of all response 
> messages so that the client can consume at its own pace. It completed 
> successfully, but I observed that I ended up using 2.4GB of direct memory. I 
> checked it via usedDirectMemory() exposed by Netty for ByteBufAllocatorMetric.
>
> *Isn't this too much for a 603mb file, as it is 3 times the total file 
> size?* Below is the code snippet that I am using:
>
> stream = new FileInputStream(file);
> byte[] buffer = new byte[1024];
> ByteBufAllocator byteBufAllocator = ByteBufAllocator.DEFAULT;
> int length;
> while ((length = stream.read(buffer)) > 0) {
>     response.onNext(VideoResponse.newBuilder()
>         .setVideoBytes(ByteString.copyFrom(buffer))
>         .build());
>     if (ByteBufAllocator.DEFAULT instanceof ByteBufAllocatorMetricProvider) {
>         ByteBufAllocatorMetric metric =
>             ((ByteBufAllocatorMetricProvider) byteBufAllocator).metric();
>         System.out.println(metric.usedDirectMemory() / (1024 * 1024));
>     }
> }
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4052a036-6eb7-4490-b49b-ec5c527001b4%40googlegroups.com.


[grpc-io] Synchronize gRPC threads / calls using shared_futures and share data between threads in C++

2020-05-13 Thread lange . timo22
Hey,
for a project I am starting to learn how gRPC works. For this I implemented the 
following setup:

A C++ server using the sync API offers two services RegisterCommand(streaming) 
and NewCommandMsg(blocking). This is the .proto definition:

service Command {
    rpc RegisterCloud (CommandRequest) returns (stream CommandMessage) {}
    rpc NewCommandMsg (CommandMessage) returns (google.protobuf.Empty) {}
}
What am I trying to achieve?

Multiple clients shall call RegisterCommand and the server shall block inside 
the procedure until a call to NewCommandMsg happened (I guarantee that only one 
single call happens at a time). If NewCommandMsg is called, the argument 
CommandMessage shall be transported to every thread of RegisterCommand (I 
understood every call is handled in a thread), the thread shall be unblocked 
and the CommandMessage shall be written to the stream. After that, the threads 
of RegisterCommand shall be blocked again and wait for the next call to 
NewCommandMsg. Later, the NewCommandMsg will be replaced by a single non-grpc 
thread.

What did I already do

I read a lot about (shared) futures, promises, mutexes and condition variables 
in C++ and implemented the following code.

class CommandServiceImpl final : public Command::Service {
    // To my understanding these are common for all threads
    std::promise<CommandMessage> newCommandPromise;
    std::shared_future<CommandMessage> newCommandFuture =
        this->newCommandPromise.get_future();

    // To my understanding this is executed in its own thread
    Status RegisterCommand(ServerContext* context, const CommandRequest* request,
                           ServerWriter<CommandMessage>* writer) override {
        // Each thread gets its own copy of the shared future
        std::shared_future<CommandMessage> future = this->newCommandFuture;
        while (!context->IsCancelled()) {
            future.wait();
            (void)future.get();
            std::cout << "distributing command" << std::endl;
            // actual writing would happen here
        }

        return Status::CANCELLED;
    }

    // To my understanding this is executed in its own thread
    Status NewCommandMsg(ServerContext* context, const CommandMessage* request,
                         google::protobuf::Empty* response) override {
        std::promise<CommandMessage> promise = std::move(this->newCommandPromise);

        std::cout << "new command received" << std::endl;
        promise.set_value(*request);

        // Provide a new promise for the next call.
        // In my evaluation phase, I guarantee that only one client at a time
        // will call NewCommandMsg.
        std::promise<CommandMessage> cleanPromise;
        this->newCommandPromise = std::move(cleanPromise);

        return Status::OK;
    }
};
What happens with that code

After one or more concurrent calls to RegisterCommand, the server blocks, and 
after a call to NewCommandMsg the future.wait() unblocks, which is expected. 
After that, of course, future.wait() is always non-blocking, so the threads 
run in an infinite loop. But the loop should run exactly once per command and 
then wait for new data to become available.

It seems that it is not possible to "reuse" an existing future. Any ideas on 
how to achieve my goal?

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/78dbca9e-9ece-4632-a18a-6a2bd12f93d6%40googlegroups.com.


[grpc-io] Re: CreateInsecureChannelFromFd() - ownership of the given file descriptor.

2020-05-13 Thread Krzysztof Rybak
So what is the proper way to run gRPC based on fds? I use gRPC tagged 
v1.19.1.
The problem is that when I hand the fd on the client side 
to CreateInsecureChannelFromFd(), the client doesn't reconnect when the server 
is restarted. From the strace log, the client does a shutdown() on the given fd 
and doesn't recover from that state.

In my application I do close() on the fd passed to 
CreateInsecureChannelFromFd(), create a new socket and pass it to 
CreateInsecureChannelFromFd(), but from the previous response this is not a 
solution.
I also tried (on the client side) not closing the passed fd, but creating a new 
socket and passing that fd to CreateInsecureChannelFromFd() - this caused 
assertion errors in gRPC.
I also tried just leaving it without any handling, but it looks like the gRPC 
client cannot recover when, for example, the server is restarted.

A simple example with one client (based on the helloworld example) is below - 
is that way correct? (I intentionally omitted checking return codes and other 
parts of the program.)

server side:

  int server_fd;
  struct sockaddr_in address;

  server_fd = socket(AF_INET, SOCK_STREAM, 0);

  address.sin_family = AF_INET;
  address.sin_addr.s_addr = INADDR_ANY;
  address.sin_port = htons(PORT);

  bind(server_fd, (struct sockaddr *)&address, sizeof(address));
  listen(server_fd, 0);

  int flags = fcntl(server_fd, F_GETFL);
  fcntl(server_fd, F_SETFL, flags | O_NONBLOCK);

  GreeterServiceImpl service;
  ServerBuilder builder;
  builder.RegisterService(&service);
  std::unique_ptr<Server> server(builder.BuildAndStart());

  int client_fd = -1;
  while (1) {
    client_fd = accept(server_fd, nullptr, nullptr);
    if (client_fd == -1) continue;

    flags = fcntl(client_fd, F_GETFL);
    fcntl(client_fd, F_SETFL, flags | O_NONBLOCK);

    ::grpc::AddInsecureChannelFromFd(server.get(), client_fd);
  }


client side:

  int client_fd;
  struct sockaddr_in serv_addr;

  client_fd = socket(AF_INET, SOCK_STREAM, 0);

  serv_addr.sin_family = AF_INET;
  serv_addr.sin_port = htons(PORT);

  inet_pton(AF_INET, "127.0.0.1", &serv_addr.sin_addr);

  connect(client_fd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));

  int flags = fcntl(client_fd, F_GETFL);
  fcntl(client_fd, F_SETFL, flags | O_NONBLOCK);

  auto channel =
      grpc::CreateInsecureChannelFromFd(Greeter::service_full_name(), client_fd);

  GreeterClient greeter(channel);

  while (1) {
    // calling SayHello -> stub->SayHello; when status is not ok, should I handle it?
  }
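
For completeness, the retry handling I have been experimenting with looks
roughly like this (only a sketch of what I described above, and I am not sure
it is the right approach; dial_new_fd() is a made-up helper wrapping the
socket()/connect()/fcntl() calls from the client code, and say_hello_ok()
stands for a SayHello call that reports whether the returned status was OK):

  // Made-up helper: does the socket()/connect()/fcntl() sequence above and
  // returns a connected, non-blocking fd, or -1 on error.
  int dial_new_fd();

  // Made-up wrapper around greeter.SayHello() that returns status.ok().
  bool say_hello_ok(GreeterClient& greeter);

  while (1) {
    int fd = dial_new_fd();
    if (fd == -1) { sleep(1); continue; }

    auto channel =
        grpc::CreateInsecureChannelFromFd(Greeter::service_full_name(), fd);
    GreeterClient greeter(channel);

    // Keep using this channel until an RPC fails, then drop it (gRPC owns the
    // fd it was created from) and dial a fresh socket.
    while (say_hello_ok(greeter)) {
      sleep(1);
    }
  }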

 
On Wednesday, May 6, 2020 at 19:36:44 UTC+2, Mark D. Roth wrote:
>
> gRPC takes ownership of the fd when you pass it to 
> CreateInsecureChannelFromFd(), so you don't need to shut it down or close 
> it.
>
> On Tuesday, May 5, 2020 at 4:33:23 AM UTC-7 krzyszt...@gmail.com wrote:
>
>> Hi,
>> I'm creating a grpc channel for the grpc client using 
>> function CreateInsecureChannelFromFd() and giving the file descriptor to 
>> it. 
>> Should I handle the connection for the given file descriptor afterwards 
>> in application or is it handled in grpc framework?
>>
>> For example on application side when messages are not delivered I 
>> reconnect using shutdown() or close() etc. 
>> Is it necessary? From strace log I found multiple calls to close() on the 
>> same fd like some are called by application and some by grpc framework.
>>
>> Sorry, if something is written badly - my first post here.
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/4db72044-dbd3-4171-8aa9-fe7d67cb0929%40googlegroups.com.


[grpc-io] Re: GRPC version to use for Googleapi build

2020-05-13 Thread 'stanle...@google.com' via grpc.io
I can answer question 4 for now: the minimum gcc version required is 4.9.
https://grpc.io/about/#officially-supported-languages-and-platforms

On Monday, May 11, 2020 at 1:56:33 PM UTC-7 ushun...@gmail.com wrote:

>
> I was trying to do a C++ build of *googleapis* (latest source from 
> https://github.com/googleapis/googleapis.git) on Linux, using the latest 
> grpc version 1.28.1 (and protobuf 3.11.2) , but the build produced some 
> errors that indicated that the grpc version might not be compatible with 
> the googleapis version. 
>
>
> My questions are:
>
>
> 1. What is the *latest stable version* of googleapis that I should be 
> using? BTW, how do I check/confirm which version of the source I have 
> checked out?
>
>
> 2. Where could I find the changelog(s) for the googleapis? 
>
>
> 3. How do I find the correct GRPC version to use with a particular version 
> of googleapis?
>
>
> 4. Is there a minimum g++ version required (I was using 4.8.2)?
>
>
> Thanks in advance!
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7585b9ac-3c9d-46ff-b527-d4274a2cbd6f%40googlegroups.com.


Re: [grpc-io] Re: CreateInsecureChannelFromFd() - ownership of the given file descriptor.

2020-05-13 Thread Krzysztof Rybak
I'm afraid that in my case a channel created from an fd behaves differently.
Just to mention, I use gRPC version tagged v1.19.1; maybe this was modified
later.

A simple test I did is based on the helloworld example (
https://github.com/grpc/grpc/tree/v1.19.1/examples/cpp/helloworld/{greeter_client.cc,greeter_server.cc}
):

   - original version

I periodically (every second) send a SayHello message from the client to the
server, and the server responds. After a few messages I stop the server binary.
Then the client prints:

Greeter received: RPC failed
> 14: channel is in state TRANSIENT_FAILURE
>

But when the server is started again, the client is able to reconnect and
continues to send and receive SayHello messages.
From the strace logs it looks like the gRPC framework on the client side saved
the server information passed to grpc::CreateChannel().
When the connection is broken, the gRPC framework calls shutdown() and close()
on the fd that was used (sendmsg() and recvmsg() were called on that fd).
While the server is down, the client periodically creates a new socket,
tries to connect, and eventually closes the fd. But when the server is back
again, the connect succeeds and communication resumes.


   - fd-based version

In my main application I use TIPC and Unix domain sockets, but for this
test an AF_INET socket is used. The scenario is the same, but after a server
relaunch the client cannot reconnect. From strace it looks like the client
hasn't saved information about the passed fd (no call to getsockname()).
Here is how I initialize the client and server based on an fd (return codes
are intentionally omitted (no errors), as are other parts of the code; there
is only one client):
server side:

  int server_fd;
  struct sockaddr_in address;

  server_fd = socket(AF_INET, SOCK_STREAM, 0);

  address.sin_family = AF_INET;
  address.sin_addr.s_addr = INADDR_ANY;
  address.sin_port = htons(PORT);

  bind(server_fd, (struct sockaddr *)&address, sizeof(address));
  listen(server_fd, 0);

  int flags = fcntl(server_fd, F_GETFL);
  fcntl(server_fd, F_SETFL, flags | O_NONBLOCK);

  GreeterServiceImpl service;
  ServerBuilder builder;
  builder.RegisterService(&service);
  std::unique_ptr<Server> server(builder.BuildAndStart());

  int client_fd = -1;
  while (1) {
    client_fd = accept(server_fd, nullptr, nullptr);
    if (client_fd == -1) continue;

    flags = fcntl(client_fd, F_GETFL);
    fcntl(client_fd, F_SETFL, flags | O_NONBLOCK);

    ::grpc::AddInsecureChannelFromFd(server.get(), client_fd);
  }

client side:

  int client_fd;
  struct sockaddr_in serv_addr;

  client_fd = socket(AF_INET, SOCK_STREAM, 0);

  serv_addr.sin_family = AF_INET;
  serv_addr.sin_port = htons(PORT);

  inet_pton(AF_INET, "127.0.0.1", &serv_addr.sin_addr);

  connect(client_fd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));

  int flags = fcntl(client_fd, F_GETFL);
  fcntl(client_fd, F_SETFL, flags | O_NONBLOCK);

  auto channel =
      grpc::CreateInsecureChannelFromFd(Greeter::service_full_name(), client_fd);
  GreeterClient greeter(channel);



On Wed, May 6, 2020 at 19:36, 'Mark D. Roth' via grpc.io wrote:

> gRPC takes ownership of the fd when you pass it to
> CreateInsecureChannelFromFd(), so you don't need to shut it down or close
> it.
>
> On Tuesday, May 5, 2020 at 4:33:23 AM UTC-7 krzyszt...@gmail.com wrote:
>
>> Hi,
>> I'm creating a grpc channel for the grpc client using
>> function CreateInsecureChannelFromFd() and giving the file descriptor to
>> it.
>> Should I handle the connection for the given file descriptor afterwards
>> in application or is it handled in grpc framework?
>>
>> For example on application side when messages are not delivered I
>> reconnect using shutdown() or close() etc.
>> Is it necessary? From strace log I found multiple calls to close() on the
>> same fd like some are called by application and some by grpc framework.
>>
>> Sorry, if something is written badly - my first post here.
>>
>> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/9da96e2f-5547-4bf0-a21d-e64432a67cd9%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAP74dWotJ2GVj%2B0EK1409-ObFxZOVGOY%2BzLW%3DWtkeKTpyrBXCA%40mail.gmail.com.


[grpc-io] Memory access violation inside AsyncNextInternal

2020-05-13 Thread afshin . pir
Hi

I have written a small asynchronous gRPC server, but sometimes I get a 
strange *memory access violation* inside gRPC code after finishing a 
request. I get the exception in **AsyncNextInternal**:
CompletionQueue::NextStatus CompletionQueue::AsyncNextInternal(
    void** tag, bool* ok, gpr_timespec deadline) {
  for (;;) {
    auto ev = grpc_completion_queue_next(cq_, deadline, nullptr);
    switch (ev.type) {
      case GRPC_QUEUE_TIMEOUT:
        return TIMEOUT;
      case GRPC_QUEUE_SHUTDOWN:
        return SHUTDOWN;
      case GRPC_OP_COMPLETE:
        auto core_cq_tag =
            static_cast<::grpc::internal::CompletionQueueTag*>(ev.tag);
        *ok = ev.success != 0;
        *tag = core_cq_tag;
        if (core_cq_tag->FinalizeResult(tag, ok)) {  // <<-- Here
          return GOT_EVENT;
        }
        break;
    }
  }
}
It happens because *core_cq_tag* is a dangling pointer, and dereferencing it to 
call *FinalizeResult()* causes the memory access violation. Since this is an 
internal pointer and not what I provide as a tag, any idea why this may happen? 
I'm using gRPC 1.27.3, I have both an *Alarm* and *AsyncNotifyWhenDone()* 
registered for my connection info, and I delete my own corresponding data when 
the *AsyncNotifyWhenDone()* callback is called.
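
For context, my understanding of the tag contract in the async API is that
every operation I start (including the Alarm and AsyncNotifyWhenDone) produces
exactly one event on the completion queue, and whatever the tag points at must
stay alive until that event has been drained. This is roughly how I think that
rule can be enforced (illustrative only, made-up names, not my actual code):

#include <grpcpp/grpcpp.h>

// Per-call bookkeeping: count operations that were started but whose
// completion has not yet been seen on the queue, and free the state only when
// nothing can still reference it (assumes every async op on the call passes
// this object as its tag, and that no further ops start once pending is 0).
struct CallState {
  int pending = 0;

  void OnStarted() { ++pending; }   // call before starting each async op
  void OnCompleted() {              // call for each event drained from the queue
    if (--pending == 0) delete this;
  }
};

void DrainLoop(grpc::CompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    static_cast<CallState*>(tag)->OnCompleted();
  }
}
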
As an additional question, what happens if a CompletionQueue is shut down 
before a registered alarm expires? Will that alarm be ignored or delivered?

Best Regards

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/afa4d595-10ab-46dc-ad5b-63cb107107a8%40googlegroups.com.


[grpc-io] GRPC Seg faults during streaming

2020-05-13 Thread deepankar
I am using gRPC for streaming audio streams. My gRPC client runs fine most 
of the time, but crashes occasionally. I have implemented an async client 
using CompletionQueue.

Here is the stack trace of my application seg fault.

0x7f60957976b6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
0x7f6095797701 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
0x7f609579823f in __cxa_pure_virtual () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
0x7f60961f5302 in grpc::CompletionQueue::AsyncNextInternal(void**, bool*, gpr_timespec) () from /usr/local/lib/libgrpc++.so.1
0x7f60965a4edf in grpc::ByteBuffer::Swap(grpc::ByteBuffer*) () from /usr/local/unimrcp/plugin/vairecog.so
0x7f60965ac353 in CCACallStream::reset() () from /usr/local/unimrcp/plugin/vairecog.so
0x7f60965bc6bb in std::_Bind (CCACallStream*)> ()>* std::__addressof (CCACallStream*)> ()> >(std::_Bind (CCACallStream*)> ()>&) () from /usr/local/unimrcp/plugin/vairecog.so

I also get some grpc core dumps though I am not sure if it is related to 
this problem.

#0 0x7fef70ec8f00 in grpc_cq_end_op(grpc_completion_queue*, void*, grpc_error*, void (*)(void*, grpc_cq_completion*), void*, grpc_cq_completion*) () from /usr/local/lib/libgrpc.so.7
#1 0x7fef7157d421 in grpc_impl::internal::AlarmImpl::Set(grpc::CompletionQueue*, gpr_timespec, void*)::{lambda(void*, grpc_error*)#1}::_FUN(void*, grpc_error*) () from /usr/local/lib/libgrpc++.so.1
#2 0x7fef70ea6e31 in grpc_core::ExecCtx::Flush() () from /usr/local/lib/libgrpc.so.7
#3 0x7fef70eb7e68 in ?? () from /usr/local/lib/libgrpc.so.7
#4 0x7fef70f695b3 in ?? () from /usr/local/lib/libgrpc.so.7
#5 0x7fef738116ba in start_thread (arg=0x7fef6a7fc700) at pthread_create.c:333
#6 0x7fef7354741d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thanks in advance for any help.

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/c0cb145a-e005-4f3a-b9e2-6037af759ae1%40googlegroups.com.