[grpc-io] docker containers connection refused

2022-05-17 Thread Fabrizio Marangio
Hi all, I am setting up a Hyperledger Fabric network using docker-compose. 
I have some containers, and some of them (the orderers) need to talk to each 
other to elect a Raft leader. When I look at the gRPC logs I see this type of error:

[core]pickfirstBalancer: UpdateSubConnState: 0xc000496280, 
{TRANSIENT_FAILURE connection error: desc = "transport: Error while dialing 
dial tcp 172.18.0.3:8050: connect: connection refused"}

I tried adding the host names from the compose file to the hosts file 
(I'm using Ubuntu 20.04), but without success.

Can you help me with this issue?

This is my docker-compose yaml:

version: '2'

networks:
  fabric-ca:

services:
  tls-ca:
    container_name: tls-ca
    image: hyperledger/fabric-ca:1.5.2
    command: sh -c 'fabric-ca-server start -d -b tls-ca-admin:tls-ca-adminpw --port 7052'
    environment:
      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CA_NAME=tls-ca
      - FABRIC_CA_SERVER_CSR_HOSTS=tls-ca
      - FABRIC_CA_SERVER_CSR_CN=tls-ca
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - /tmp/hyperledger/tls-ca:/tmp/hyperledger/fabric-ca
      - /tmp/hyperledger/assets:/assets
    networks:
      - fabric-ca
    ports:
      - 7052:7052

  ordererCA1:
    container_name: ordererCA1
    image: hyperledger/fabric-ca:1.5.2
    command: sh -c 'fabric-ca-server start -d -b ordererCA1-admin:ordererCA1-adminpw --port 7053'
    environment:
      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=ordererCA1
      - FABRIC_CA_SERVER_CSR_HOSTS=ordererCA1
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - /tmp/hyperledger/ordCA1/ca:/tmp/hyperledger/fabric-ca
      - /tmp/hyperledger/assets:/assets
      - /tmp/hyperledger/ordCA1/:/tmp/hyperledger/fabric-ca-enrollment
    networks:
      - fabric-ca
    ports:
      - 7053:7053

  ordererCA2:
    container_name: ordererCA2
    image: hyperledger/fabric-ca:1.5.2
    command: sh -c 'fabric-ca-server start -d -b ordererCA2-admin:ordererCA2-adminpw --port 8053'
    environment:
      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=ordererCA2
      - FABRIC_CA_SERVER_CSR_HOSTS=ordererCA2
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - /tmp/hyperledger/ordCA2/ca:/tmp/hyperledger/fabric-ca
      - /tmp/hyperledger/assets:/assets
      - /tmp/hyperledger/ordCA2/:/tmp/hyperledger/fabric-ca-enrollment
    networks:
      - fabric-ca
    ports:
      - 8053:8053

  ordererCA3:
    container_name: ordererCA3
    image: hyperledger/fabric-ca:1.5.2
    command: sh -c 'fabric-ca-server start -d -b ordererCA3-admin:ordererCA3-adminpw --port 9053'
    environment:
      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=ordererCA3
      - FABRIC_CA_SERVER_CSR_HOSTS=ordererCA3
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - /tmp/hyperledger/ordCA3/ca:/tmp/hyperledger/fabric-ca
      - /tmp/hyperledger/assets:/assets
      - /tmp/hyperledger/ordCA3/:/tmp/hyperledger/fabric-ca-enrollment
    networks:
      - fabric-ca
    ports:
      - 9053:9053

  rca-org1:
    container_name: rca-org1
    image: hyperledger/fabric-ca:1.5.2
    command: sh -c 'fabric-ca-server start -d -b rca-org1-admin:rca-org1-adminpw --port 7054'
    environment:
      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=rca-org1
      - FABRIC_CA_SERVER_CSR_HOSTS=rca-org1
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - /tmp/hyperledger/org1/ca:/tmp/hyperledger/fabric-ca
      - /tmp/hyperledger/assets:/assets
      - /tmp/hyperledger/org1/:/tmp/hyperledger/fabric-ca-enrollment
    networks:
      - fabric-ca
    ports:
      - 7054:7054

  rca-org2:
    container_name: rca-org2
    image: hyperledger/fabric-ca:1.5.2
    command: sh -c 'fabric-ca-server start -d -b rca-org2-admin:rca-org2-adminpw --port 7055'
    environment:
      - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=rca-org2
      - FABRIC_CA_SERVER_CSR_HOSTS=rca-org2
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - /tmp/hyperledger/org2/ca:/tmp/hyperledger/fabric-ca
      - /tmp/hyperledger/assets:/assets
      - /tmp/hyperledger/org2/:/tmp/hyperledger/fabric-ca-enrollment
    networks:
      - fabric-ca
    ports:
      - 7055:7055

  peer1-org1:
    container_name: peer1-org1
    image: hyperledger/fabric-peer:2.4
    environment:
      - CORE_PEER_ID=peer1-org1
      - CORE_PEER_ADDRESS=peer1-org1:7051
      - CORE_PEER_LOCALMSPID=org1MSP
      -
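Note: the compose file above is truncated and does not show the orderer services 
themselves. Two things worth checking: containers on the user-defined fabric-ca 
network already resolve each other by service name, so editing /etc/hosts on the 
Ubuntu host has no effect inside the containers; and "connection refused" on 
172.18.0.3:8050 usually means the target orderer container is not listening on 
that port yet (for example because it exited at startup or is bound to a 
different address). As a rough sketch only, an orderer entry would need to sit 
on the same network and listen on the dialed port; the service name, image tag 
and volume paths below are assumptions, since the real orderer definitions are 
not shown:

  orderer2-org0:
    container_name: orderer2-org0               # hypothetical name
    image: hyperledger/fabric-orderer:2.4
    environment:
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0   # listen on all interfaces
      - ORDERER_GENERAL_LISTENPORT=8050         # must match the port the other orderers dial
    volumes:
      - /tmp/hyperledger/org0/orderer2:/tmp/hyperledger/orderer   # assumed layout
    networks:
      - fabric-ca
    ports:
      - 8050:8050

Checking docker logs for the orderer container that owns 172.18.0.3 is usually 
the fastest way to see why it is not accepting connections.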

[grpc-io] memory leak in grpc

2022-05-17 Thread Vishal Kaushik
Hi,
I am facing a severe memory leak while using gRPC on my machine. If anyone has 
any idea what is wrong in my code, please help:
*SERVER CODE:*
/** \file
* \brief Example code for Simple Open EtherCAT master
*
* Usage : simple_test [ifname1]
* ifname is NIC interface, f.e. eth0
*
* This is a minimal test.
*
* (c)Arthur Ketels 2010 - 2011
*/

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 

#include 
#include 
#include 
#include 
#include 

#include "absl/memory/memory.h"
#include "robot.grpc.pb.h"

using grpc::Server;
using grpc::ServerAsyncResponseWriter;
using grpc::ServerAsyncWriter;

using grpc::ServerBuilder;
using grpc::ServerCompletionQueue;
using grpc::ServerContext;
using grpc::Status;
using RobotExample::Robot;
using RobotExample::RobotStatus;
using RobotExample::RobotStatusRequest;

#include 
#include 
#include 

#include "etherCatConfig.h"
#include "ethercat.h"

#define EC_TIMEOUTMON 500

char IOmap[4096];
OSAL_THREAD_HANDLE thread1;
int expectedWKC;
boolean needlf;
volatile int wkc;
boolean inOP;
uint8 currentgroup = 0;

volatile int64_t current_time = 0;

int64_t g_ControlWord = 0;
int64_t g_ModesOfOperation = 0;
int64_t g_ProfileAcceleration = 0;
int64_t g_ProfileDeceleration = 0;
int64_t g_ProfileVelocity = 0;
int64_t g_TargetVelocity = 0;
int64_t g_TargetPosition = 0;
int64_t g_MaxProfileVelocity = 0;
int64_t g_positive_torquee_limit = 0;
int64_t g_negative_torquee_limit = 0;

int64_t g_ControlWord_2 = 0;
int64_t g_ModesOfOperation_2 = 0;
int64_t g_ProfileAcceleration_2 = 0;
int64_t g_ProfileDeceleration_2 = 0;
int64_t g_ProfileVelocity_2 = 0;
int64_t g_TargetVelocity_2 = 0;
int64_t g_TargetPosition_2 = 0;
int64_t g_MaxProfileVelocity_2 = 0;
int64_t g_positive_torquee_limit_2 = 0;
int64_t g_negative_torquee_limit_2 = 0;

int64_t g_ControlWord_3 = 0;
int64_t g_ModesOfOperation_3 = 0;
int64_t g_ProfileAcceleration_3 = 0;
int64_t g_ProfileDeceleration_3 = 0;
int64_t g_ProfileVelocity_3 = 0;
int64_t g_TargetVelocity_3 = 0;
int64_t g_TargetPosition_3 = 0;
int64_t g_MaxProfileVelocity_3 = 0;
int64_t g_positive_torquee_limit_3 = 0;
int64_t g_negative_torquee_limit_3 = 0;

int64_t g_ControlWord_4 = 0;
int64_t g_ModesOfOperation_4 = 0;
int64_t g_ProfileAcceleration_4 = 0;
int64_t g_ProfileDeceleration_4 = 0;
int64_t g_ProfileVelocity_4 = 0;
int64_t g_TargetVelocity_4 = 0;
int64_t g_TargetPosition_4 = 0;
int64_t g_MaxProfileVelocity_4 = 0;
int64_t g_positive_torquee_limit_4 = 0;
int64_t g_negative_torquee_limit_4 = 0;


int64_t g_IOSensors_Channel_9_Output = 0;
int64_t g_IOSensors_Channel_10_Output = 0;
int64_t g_IOSensors_Channel_11_Output = 0;
int64_t g_IOSensors_Channel_12_Output = 0;

class RequestHandlerBase {
 public:
  virtual void Proceed(bool succes) = 0;
};

class GetStatusHandler : public RequestHandlerBase {
 public:
  // Take in the "service" instance (in this case representing an asynchronous
  // server) and the completion queue "cq" used for asynchronous communication
  // with the gRPC runtime.
  GetStatusHandler(Robot::AsyncService* service, ServerCompletionQueue* cq)
      : service_(service),
        cq_(cq),
        responder_(&ctx_),
        status_(CREATE),
        number_(0) {
    // Invoke the serving logic right away.
    Proceed();
  }

  void Proceed(bool succes = true) override {
    if (status_ == CREATE) {
      // Make this instance progress to the PROCESS state.
      status_ = PROCESS;

      service_->RequestgetStatus(&ctx_, &request_, &responder_, cq_, cq_, this);
    } else if (status_ == PROCESS) {
      // each CallData object should create only one new CallData
      if (number_ == 0) {
        std::cout << new GetStatusHandler(service_, cq_) << std::endl;
        number_++;
      }

      /*if (number_++ >= 1) // we want to send the response 3 times (for
         whatever reason)
      {
        status_ = FINISH;
        responder_.Finish(Status::OK, this);
        std::cout << "finished\n";
      }
      else*/
      {
        if (succes) {
          InputsProcessImage* input = &g_inputsProcessImage;
          // reinterpret_cast(&g_outputsProcessImage);
          status_reply_.set_current_position(input->Joint1_1st_transmit_PDO_Mapping_Position_actual_value);
          status_reply_.set_status(input->Joint1_1st_transmit_PDO_Mapping_Statusword);
          status_reply_.set_modes_of_operation(input->Joint1_1st_transmit_PDO_Mapping_Modes_of_operation_display);
          status_reply_.set_modes_of_operation_2(input->Joint2_1st_transmit_PDO_Mapping_Modes_of_operation_display);
          status_reply_.set_modes_of_operation_3(input->Joint3_1st_transmit_PDO_Mapping_Modes_of_operation_display);
          status_reply_.set_modes_of_operation_4(input->Joint4_1st_transmit_PDO_Mapping_Modes_of_operation_display);

          status_reply_.set_error_code_1(input->Joint1_1st_transmit_PDO_Mapping_Error_code);
          // status_reply_.set_current_speed(input->joint1_1st_Transmit_PDO_mapping_Velocity_Actual_Value);

          status_reply_.set_current_position_2(input->Joint2_1st_transmit_PDO_Mapping_Position_actual_value);
          status_reply_.set_status_2(input->Joint2_1st_transmit_PDO_Mapping_Statusword);
          // status_reply_.set_modes_of_operation_2(input->joint2_1st_Transmit_PDO_mapping_Mode_of_Operation_Display);
          status_reply_.set
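One likely source of the leak is visible in the handler above: every request 
allocates a new GetStatusHandler, but the FINISH branch that would call 
responder_.Finish(...) and free the handler is commented out, so completed 
handlers (and their request/reply messages) are never released. For comparison, 
here is a minimal sketch of the usual completion-queue lifecycle from the async 
server examples, adapted to the names in the posted code. It assumes getStatus 
is a plain unary RPC with a ServerAsyncResponseWriter<RobotStatus> responder 
(the responder's declaration is not shown above; if it is actually a streaming 
ServerAsyncWriter, Finish takes only the Status, but the final delete this is 
the same):

  void Proceed(bool succes = true) override {
    if (status_ == CREATE) {
      status_ = PROCESS;
      service_->RequestgetStatus(&ctx_, &request_, &responder_, cq_, cq_, this);
    } else if (status_ == PROCESS) {
      // Spawn the handler that will serve the next incoming getStatus call.
      if (number_ == 0) {
        new GetStatusHandler(service_, cq_);
        number_++;
      }
      // ... populate status_reply_ from the EtherCAT process image as above ...
      status_ = FINISH;
      responder_.Finish(status_reply_, Status::OK, this);  // unary responder assumed
    } else {
      // FINISH: the Finish tag has come back from the completion queue, so this
      // RPC is done. Without this delete, one handler object (plus its request
      // and reply messages) is leaked per getStatus call.
      delete this;
    }
  }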

Re: [grpc-io] keep channel alive without activity

2022-05-17 Thread Rajat Goyal
Yes, Sanjay, your understanding is correct.

We solved this as follows:

   a) The client sends a dummy request to the server every minute. This is an
actual request defined in the protobuf, marked with a request type such as
"dummy request".

   b) On receiving each such request, the server responds with a dummy
response, which the client ignores based on the response type.

  Earlier we only had part (a) and the server simply ignored the dummy
requests; the issue was solved completely once we implemented part (b) as
well. A rough sketch of the client side is below.
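For illustration only, the client side of such an application-level ping in
grpc-java might look roughly like this; the Ping RPC, the PingRequest and
PingResponse messages, and the KeepAliveServiceGrpc stub are assumed names,
not taken from the actual service in this thread:

  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  import io.grpc.StatusRuntimeException;

  public final class AppLevelKeepAlive {
      // Sends one dummy Ping per minute; the server must answer with a dummy
      // PingResponse (part b above), otherwise a proxy such as an ALB may
      // still treat the connection as idle.
      public static ScheduledExecutorService start(
              KeepAliveServiceGrpc.KeepAliveServiceBlockingStub stub) {
          ScheduledExecutorService scheduler =
                  Executors.newSingleThreadScheduledExecutor();
          scheduler.scheduleAtFixedRate(() -> {
              try {
                  // Part (a): dummy request. Part (b): the server replies and
                  // the client simply discards the response.
                  PingResponse ignored = stub.ping(PingRequest.getDefaultInstance());
              } catch (StatusRuntimeException e) {
                  // The LB or server dropped the connection; trigger the
                  // normal reconnect path here.
              }
          }, 1, 1, TimeUnit.MINUTES);
          return scheduler;
      }
  }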

Regards,
Rajat

On Tue, 17 May, 2022, 8:18 am Sanjay Pujare, 
wrote:

> Comments inline below:
>
> On Mon, May 16, 2022 at 8:35 AM Rajat Goyal 
> wrote:
>
>> Hi Sanjay / Grpc team,
>>
>> I have implemented RPC-based regular pings, i.e. I am sending
>> a dummy request each minute to the LB once connected from the client side.
>> I observed the following:
>> a) This method works fine if there is no request from the server
>> side. This way the connection stays alive for many hours without issues.
>>
>
> "request from the server side": just want to clarify what is being said.
> Do you mean a response message, since a server can only send response messages
> back? Or did you mean the server sending requests on a different connection to
> another server?
>
>
>
>> b) But this method doesn't work if there is some bi-directional
>> request from the server. The moment the client receives a response from the
>> server, the connection is dropped by the LB after exactly 5 minutes, even if
>> I am sending a regular dummy request from the client side every minute.
>>
>
> Again I would like to clarify your "bi-directional request from the
> server": do you mean a bi-di RPC into the server where the server is sending
> responses to the client? And in such a case the LB drops the connection
> after 5 mins?
>
>
>>
>> I also checked the server logs: the dummy request is being received every
>> minute, which means the client is sending its regular 1-minute ping, but the
>> LB is still dropping the connection.
>> When there is no response from the server side, the connection is not
>> dropped by the LB.
>>
>> Can you please help me understand what gRPC / the LB might be doing in both cases?
>>
>
> To summarize my understanding of what you are saying: when a connection
> is established through the ALB to a gRPC backend, the connection stays alive
> indefinitely if the client only sends a dummy RPC every minute. This dummy
> RPC has a dummy request but no response (only header(s) and status code).
> As soon as you send any real RPCs where the server sends any response
> messages, the LB does not keep the connection alive but drops it 5
> minutes after the last non-dummy RPC. Is this correct?
>
>
>
>>
>> Regards,
>> Rajat
>>
>>
>> On Tue, Jan 11, 2022 at 4:01 AM Sanjay Pujare 
>> wrote:
>>
>>> Check this
>>> https://stackoverflow.com/questions/66818645/http2-ping-frames-over-aws-alb-grpc-keepalive-ping
>>>
>>> "*ALB does not support the HTTP2 ping frames*."
>>>
>>> On Mon, Jan 10, 2022 at 12:16 PM Rajat Goyal 
>>> wrote:
>>>
 The ALB is configured with an idle timeout of 5 minutes.
 I configured the bi-di client with:
  keepAliveWithoutCalls(true).keepAliveTime(90, TimeUnit.SECONDS).keepAliveTimeout(10, TimeUnit.SECONDS)
 while the server is configured with:
  permitKeepAliveWithoutCalls(true).permitKeepAliveTime(1, TimeUnit.MINUTES)

 But I received "INTERNAL: HTTP/2 error code: PROTOCOL_ERROR, Received Rst
 Stream" after exactly 5 minutes, which looks like the ALB has dropped the
 connection.

 Any idea how we can keep an idle connection alive?



 On Mon, Jan 10, 2022 at 10:39 PM Rajat Goyal <
 rajatgoyal247...@gmail.com> wrote:

> Hi Sanjay,
>
>  I see that the bi-directional StreamObserver object gets the onError()
> callback in case of any network error.
>
> Isn't that already done by some heartbeat mechanism? If so, shouldn't the
> connection at the ALB stay active thanks to those ping-pong packets?
>
> Regards,
> Rajat
>
> On Mon, 10 Jan, 2022, 10:33 pm Sanjay Pujare, 
> wrote:
>
>> This may probably help?
>> https://grpc.io/blog/grpc-on-http2/#keeping-connections-alive ?
>>
>> On Mon, Jan 10, 2022 at 8:54 AM Rajat Goyal <
>> rajatgoyal247...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>   Gentle reminder for any resolution for above.
>>>
>>> Regards,
>>> Rajat
>>>
>>> On Sun, 9 Jan, 2022, 6:50 pm Rajat Goyal, <
>>> rajatgoyal247...@gmail.com> wrote:
>>>
 Hi,

  We have a system where clients open a bi-directional gRPC stream
 to an ALB, which proxies it to one of the active servers. So:

 client <--bi-di--> ALB <--bi-di--> server

 In case of any connection failure, the client reconnects, since we
 want to keep a bi-di channel open.

 Question is :

[grpc-io] Re: gRPC stuck in epoll_wait state

2022-05-17 Thread 'AJ Heller' via grpc.io
If you're still having this issue, it would be worth trying to upgrade to 
gRPC v1.46.0 or newer. The default polling engine has been removed, so if 
there is still an underlying bug in gnmi or gRPC, it may show up in some 
other way.
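As a quick way to check whether the old default polling engine itself is
implicated on the affected 1.32/1.33 build (a diagnostic suggestion, not a
known fix), the engine can be overridden per-process with the
GRPC_POLL_STRATEGY environment variable, e.g.:

  sudo GRPC_POLL_STRATEGY=epoll1 gnmi-cli set 
"device:virtual-device,name:net_vhost0,host:host1,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-0,port-type:LINK"

If the request completes with epoll1 (or poll) but hangs with the default
engine, that narrows the problem down considerably.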

On Monday, December 13, 2021 at 4:43:01 PM UTC-8 nupur uttarwar wrote:

> Hello,
>
> We are using the gnmi-cli client to configure ports, which sends a unary RPC 
> request over gRPC.
>
> E.g.: sudo gnmi-cli set 
> "device:virtual-device,name:net_vhost0,host:host1,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-0,port-type:LINK"
>
> This was working fine with gRPC version 1.17.2. We are trying to upgrade 
> gRPC and the other dependent modules used in our project. After upgrading to 
> version 1.33, the gnmi client's send request gets stuck in epoll_wait indefinitely. 
> Here is the backtrace:
>
> 0x7f85e9bc380e in epoll_wait () from /lib64/libc.so.6
>
> (gdb) bt
>
> #0  0x7f85e9bc380e in epoll_wait () from /lib64/libc.so.6
>
> #1  0x7f85eb642864 in pollable_epoll(pollable*, long) () from 
> /usr/local/lib/libgrpc.so.12
>
> #2  0x7f85eb6432e9 in pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.12
>
> #3  0x7f85eb64acd5 in pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.12
>
> #4  0x7f85eb652cde in grpc_pollset_work(grpc_pollset*, 
> grpc_pollset_worker**, long) () from /usr/local/lib/libgrpc.so.12
>
> #5  0x7f85eb6b9c50 in cq_pluck(grpc_completion_queue*, void*, 
> gpr_timespec, void*) () from /usr/local/lib/libgrpc.so.12
>
> #6  0x7f85eb6b9ed3 in grpc_completion_queue_pluck () from 
> /usr/local/lib/libgrpc.so.12
>
> #7  0x7f85ea856f2b in 
> grpc::CoreCodegen::grpc_completion_queue_pluck(grpc_completion_queue*, 
> void*, gpr_timespec, void*) ()
>
>from /usr/local/lib/libgrpc++.so.1
>
> #8  0x005db71e in grpc::CompletionQueue::Pluck 
> (this=0x7ffec74be7e0, tag=0x7ffec74be840)
>
> at /usr/local/include/grpcpp/impl/codegen/completion_queue.h:316
>
> #9  0x005e7467 in 
> grpc::internal::BlockingUnaryCallImpl<gnmi::SetRequest, 
> gnmi::SetResponse>::BlockingUnaryCallImpl (this=0x7ffec74beaa0,
>
> channel=<optimized out>, method=..., context=0x7ffec74beea0, 
> request=..., result=0x7ffec74bec40)
>
> at /usr/local/include/grpcpp/impl/codegen/client_unary_call.h:69
>
> #10 0x005d5dab in 
> grpc::internal::BlockingUnaryCall<gnmi::SetRequest, gnmi::SetResponse> 
> (result=0x7ffec74be670, request=...,
>
> context=0x7ffec74bebf0, method=..., channel=<optimized out>) at 
> /usr/local/include/grpcpp/impl/codegen/client_unary_call.h:38
>
> #11 gnmi::gNMI::Stub::Set (this=<optimized out>, 
> context=context@entry=0x7ffec74beea0, request=..., 
> response=response@entry=0x7ffec74bec40)
>
> at p4proto/p4rt/proto/p4/gnmi/gnmi.grpc.pb.cc:101
>
> #12 0x0041de62 in gnmi::Main (argc=-951325536, 
> argv=0x7ffec74bee20) at /usr/include/c++/10/bits/unique_ptr.h:173
>
> #13 0x7f85e9aea1e2 in __libc_start_main () from /lib64/libc.so.6
>
> #14 0x0041a06e in _start () at /usr/include/c++/10/new:175
>
>  
>
> Comparing the successful and unsuccessful logs, I can see that gRPC gets 
> stuck in the epoll_wait state waiting for an OP_COMPLETE event after 
> grpc_call_start_batch is started. 
>
> After investigating further, I can see that this issue started in 
> version 1.32.0, specifically after this commit: 
> https://github.com/grpc/grpc/pull/23372. Just before this commit, it 
> works fine.
>
> Attached are the logs with GRPC_TRACE=all,-timer_check,-timer and 
> GRPC_VERBOSITY=DEBUG for reference. List of the logs attached:
>
>
>- Trace logs with gRPC version 1.32.0 for unsuccessful request - 
>https://gist.github.com/nupuruttarwar/f97bbd7f339843c45ab48a10be065f0b 
>- Trace logs with gRPC version 1.32.0 for successful request before 
>abseil synchronization was enabled (at commit 
>52cde540a4768eea7a7a1ad0f21c99f6b51eedf7) - 
>https://gist.github.com/nupuruttarwar/2d36e56a791a88690ce4ac9fb01666f7 
>- Trace logs with gRPC version 1.17.2 for successful request - 
>https://gist.github.com/nupuruttarwar/62d6bcb277309fc878d7f348d57c3fb6 
>
> Any idea why this is happening? Please let me know if you need more logs 
> or any other information to assist further.
>
>  
>
> Thanks,
>
> Nupur Uttarwar
>


[grpc-io] what is Grpc c++ Release version matching to minimum gcc 4.8.1

2022-05-17 Thread Rajesh Venmanad
Hi,
  I have a question about how to find the minimum GCC version required by a
specific gRPC C++ release.
  I am aware that the current release of gRPC C++ requires GCC 5.1 at minimum.

  For the 1.41 release (https://github.com/grpc/grpc/tree/v1.41.x/src/cpp) it
is listed as GCC 4.9+ for Linux-based OSes.
  I don't see the same kind of information for earlier releases.

  Please share which releases of gRPC C++ map to a minimum of GCC 4.8.1, if
someone knows.

Can someone help me out?

Regards
Rajesh
 
