Re: [vpp-dev] question on memif configuration

2022-01-12 Thread Damjan Marion via lists.fd.io
I am not able to help with your libmemif-related questions, but I can answer the
first one.

What you see is OK. The slave is always the ring producer: in the s2m direction
the slave enqueues packets onto the ring, and in the m2s direction the slave
enqueues empty buffers. So from your output it is clear that the s2m ring is
empty and the m2s ring is full of empty buffers.
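
To make the numbers above concrete, here is a minimal sketch of the head/tail
arithmetic (just the usual memif convention of free-running 16-bit head and tail
counters, not libmemif API; the values are taken from your show memif output):

#include <stdint.h>
#include <stdio.h>

/* The producer advances head, the consumer advances tail; both are
   free-running 16-bit counters, so the number of in-flight descriptors is
   head - tail (with natural 16-bit wraparound). */
static uint16_t
ring_in_flight (uint16_t head, uint16_t tail)
{
  return (uint16_t) (head - tail);
}

int
main (void)
{
  /* s2m ring: head 9, tail 9 -> 0 descriptors, i.e. no packets pending */
  printf ("s2m in flight: %u\n", ring_in_flight (9, 9));
  /* m2s ring: head 2057, tail 9 -> 2048 descriptors, i.e. the whole ring
     is filled with empty buffers enqueued by the slave */
  printf ("m2s in flight: %u\n", ring_in_flight (2057, 9));
  return 0;
}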

— 
Damjan

> On 11.01.2022., at 19:09, vipin allawadhi  wrote:
> 
> 
> Hello Experts,
> 
> I have a question on memif configuration. One of our applications connects to
> VPP via memif. The following is the memif configuration for this connection:
> 
> vpp# show memif memif0/2
> 
> interface memif0/2
>   remote-name "XYZ"
>   remote-interface "memif_conn"
>   socket-id 0 id 2 mode ip
>   flags admin-up connected
>   listener-fd 50 conn-fd 51
>   num-s2m-rings 1 num-m2s-rings 1 buffer-size 0 num-regions 2
>   region 0 size 65792 fd 56
>   region 1 size 264241152 fd 59
> master-to-slave ring 0:
>   region 0 offset 32896 ring-size 2048 int-fd 66
>   head 2057 tail 9 flags 0x interrupts 9
> slave-to-master ring 0:
>   region 0 offset 0 ring-size 2048 int-fd 62
>   head 9 tail 9 flags 0x0001 interrupts 0
> vpp#
> 
> One question related to the above output: the slave-to-master ring's head and
> tail point to the same index even though the ring size is 2048. Is this correct?
> In the case of the master-to-slave ring, the head and tail indices differ by
> 2048, which is exactly the ring size. Let us know your opinion on this.
> 
> Another (major) problem is that when we send multiple messages of 64K bytes
> from slave to master in a tight loop, those messages are received corrupted on
> the master side. By corrupted I mean that the content of a succeeding message
> is written over the content of the previous one. The same is observed for
> messages from master to slave. When we send a single message from slave to
> master we do not see any problem, but if we increase the message sending rate
> we hit this problem immediately.
> 
> This is how we send messages from slave to master; the master is expected to
> respond to each received message.
> 
> #define MAX_COUNT 100
> for (tmp = 0; tmp < MAX_COUNT; tmp++)
> {
>     memif_send_msg (0, 0, data_len, data);
> }
> 
> memif_send_msg (int index, int q_id, uint16_t data_len, void *data)
> {
>   uint64_t count = 1;
>   memif_connection_t *c = &memif_connection[index];
>   if (c->conn == NULL)
>     {
>       INFO ("No connection at index %d. Returning Failure ...\n", index);
>       return SM_RC_ERROR;
>     }
> 
>   uint16_t tx, i;
>   int err = MEMIF_ERR_SUCCESS;
>   uint32_t seq = 0;
>   struct timespec start, end;
> 
>   memif_conn_args_t *args = &(c->args);
>   icmpr_flow_mode_t transport_mode = (icmpr_flow_mode_t) args->mode;
> 
>   memset (&start, 0, sizeof (start));
>   memset (&end, 0, sizeof (end));
> 
>   timespec_get (&start, TIME_UTC);
>   while (count)
>     {
>       i = 0;
>       /* allocate up to MAX_MEMIF_BUFS descriptors from the shared ring */
>       err = memif_buffer_alloc (c->conn, q_id, c->tx_bufs,
>                                 MAX_MEMIF_BUFS > count ? count : MAX_MEMIF_BUFS,
>                                 &tx, 128);
>       if ((err != MEMIF_ERR_SUCCESS) && (err != MEMIF_ERR_NOBUF_RING))
>         {
>           INFO ("memif_buffer_alloc: %s Returning Failure...\n",
>                 memif_strerror (err));
>           return SM_RC_ERROR;
>         }
>       c->tx_buf_num += tx;
> 
>       /* fill the allocated buffers, two at a time */
>       while (tx)
>         {
>           while (tx > 2)
>             {
>               memif_generate_packet ((void *) c->tx_bufs[i].data,
>                                      &c->tx_bufs[i].len, c->ip_addr, c->ip_daddr,
>                                      c->hw_daddr, data, data_len, transport_mode);
>               memif_generate_packet ((void *) c->tx_bufs[i + 1].data,
>                                      &c->tx_bufs[i + 1].len, c->ip_addr, c->ip_daddr,
>                                      c->hw_daddr, data, data_len, transport_mode);
>               i += 2;
>               tx -= 2;
>             }
> 
>           /* generate the last remaining one */
>           if (tx)
>             {
>               memif_generate_packet ((void *) c->tx_bufs[i].data,
>                                      &c->tx_bufs[i].len, c->ip_addr, c->ip_daddr,
>                                      c->hw_daddr, data, data_len, transport_mode);
>               i++;
>               tx--;
>             }
>         }
> 
>       /* hand the filled buffers over to the peer */
>       err = memif_tx_burst (c->conn, q_id, c->tx_bufs, c->tx_buf_num, &tx);
>       if (err != MEMIF_ERR_SUCCESS)
>         {
>           INFO ("memif_tx_burst: %s Returning Failure...\n",
>                 memif_strerror (err));
>           return SM_RC_ERROR;
>         }
> 
>       c->tx_buf_num -= tx;
>       c->tx_counter += tx;
>       count -= tx;
>     }
> }
> 
> We doubt the way we are invoking the "memif_buffer_alloc" function: we are not
> incrementing the "tx_bufs" pointer, so at every invocation of the
> "memif_send_msg" function we try to allocate the same "tx buffer" array again
> and may end up overwriting the previously written content. This may garble the
> previous message content if it has not yet been sent to memif.
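> 
> To make that suspicion concrete, the variant we have in mind (purely an
> untested sketch; "alloc_behind_pending" is just an illustrative name, the other
> structures and fields are the same as in the code above) would offset each
> allocation by the number of buffers that are still pending, so that a new
> memif_buffer_alloc call cannot hand back descriptors whose contents are still
> waiting for memif_tx_burst:
> 
> static int
> alloc_behind_pending (memif_connection_t *c, int q_id, uint16_t wanted,
>                       uint16_t *got)
> {
>   /* leave the first c->tx_buf_num entries alone: they were allocated by an
>      earlier call and have not been handed to memif_tx_burst yet */
>   uint16_t room = MAX_MEMIF_BUFS - c->tx_buf_num;
>   if (wanted > room)
>     wanted = room;
> 
>   int err = memif_buffer_alloc (c->conn, q_id, c->tx_bufs + c->tx_buf_num,
>                                 wanted, got, 128);
>   if (err == MEMIF_ERR_SUCCESS || err == MEMIF_ERR_NOBUF_RING)
>     c->tx_buf_num += *got;
>   return err;
> }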
> 
> Could you please go through this analysis and share your inputs?
> 
> Thanks
> Vipin A.
> 
> 
> 

