On 11/06/2014 02:59 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:
+int lg_odp_pktio_lookup(const char *dev, pktio_entry_t *pktio_entry)
+{
+       int ret = -1;
+       char ipc_shm_name[ODPH_RING_NAMESIZE];
+       size_t ring_size;
+
+       if (!(lg_odp_shm_lookup_pktio(dev) == 0 &&
+             odp_shm_lookup_ipc("shm_packet_pool") == 0)) {
+               ODP_DBG("pid %d unable to find ipc object: %s.\n",
+                       getpid(), dev);
+               goto error;
+       }
So, the role of the new odp_shm_lookup_ipc() call is just to check whether a
shm has been created. I'm against adding xxx_ipc() versions of existing API
calls. You should be able to just use the normal lookup. Maybe ODP (global
init) needs to be informed that the application runs in multi-process mode,
so that ODP internal data structures are mapped in a way that lets multiple
processes look them up.
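
Roughly what I mean, as a sketch only (odp_init_global_mp() and
ODP_INIT_MULTI_PROCESS are placeholder names, not existing API, and I assume
odp_shm_lookup() returns ODP_SHM_INVALID when the name is not found):

#include <odp.h>

static int attach_from_second_process(const char *dev)
{
        odp_shm_t shm;

        /* placeholder call: tell ODP at global init that several processes
         * will run, so that internal tables end up in process-shared memory */
        if (odp_init_global_mp(ODP_INIT_MULTI_PROCESS))
                return -1;

        /* after that, the normal lookup works from any process */
        shm = odp_shm_lookup(dev);
        if (shm == ODP_SHM_INVALID)
                return -1;

        return odp_shm_addr(shm) ? 0 : -1;
}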

It is internal implementation; this lookup has to work across two separate processes. The _ipc suffix may be confusing here, but it is just a function that checks whether such a shm exists or not. It could be renamed to pktio_in_shm(dev) and pool_in_shm(name). I also need to think about how to remove the predefined value "shm_packet_pool"; I skipped that during clean up.
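
Something like this is what I have in mind for those helpers (just a sketch;
probing with shm_open() is one option, and the real shm names may carry an
internal prefix):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* return 0 if a POSIX shm object with this name exists, -1 otherwise */
static int shm_exists(const char *name)
{
        int fd = shm_open(name, O_RDONLY, 0);

        if (fd < 0)
                return -1;
        close(fd);
        return 0;
}

static int pktio_in_shm(const char *dev)
{
        return shm_exists(dev);
}

static int pool_in_shm(const char *name)
{
        return shm_exists(name);
}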

Taras also mentioned adding such a flag to odp_global_init().


If you check e.g. the "Multi-process Sample Application" chapter in the DPDK
sample apps guide, it gives a cleaner framework for how to handle the
multi-process case.
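
For reference, the secondary process side there boils down to roughly this
(the object names are just examples):

#include <rte_eal.h>
#include <rte_mempool.h>
#include <rte_ring.h>

int secondary_attach(int argc, char **argv)
{
        struct rte_ring *ring;
        struct rte_mempool *pool;

        /* started with --proc-type=secondary; EAL attaches to the memory
         * already set up by the primary process */
        if (rte_eal_init(argc, argv) < 0)
                return -1;

        /* objects created by the primary are found by plain name lookup */
        ring = rte_ring_lookup("pktio_ring");
        pool = rte_mempool_lookup("pktio_pool");

        return (ring && pool) ? 0 : -1;
}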

+
+       ODP_DBG("pid %d odp_shm_lookup_ipc found shared object\n",
+               getpid());
+       ring_size =  PKTIO_IPC_ENTRIES * sizeof(void *) +
+               sizeof(odph_ring_t);
+
+       memset(ipc_shm_name, 0, ODPH_RING_NAMESIZE);
+       memcpy(ipc_shm_name, dev, strlen(dev));
+       memcpy(ipc_shm_name + strlen(dev), "_r", 2);
+
+       /* allocate shared memory for buffers needed to be produced */
+       odp_shm_t shm = odp_shm_reserve(ipc_shm_name, ring_size,
+                       ODP_CACHE_LINE_SIZE,
+                       ODP_SHM_PROC_NOCREAT);
This should be replaced by the normal (multi-process capable) lookup.

Why not reuse odp_shm_reserve() here? I can put it into a separate function, no problem, or even better just replace it with a shm_open() call. My point is that once we have a clean-up function for odp_shm_reserve(), we can use it for that memory as well.
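
For comparison, with a multi-process capable lookup the two mappings would
reduce to something like this (sketch only, reusing ODPH_RING_NAMESIZE from
this file and assuming odp_shm_lookup() returns ODP_SHM_INVALID on a miss):

static void *ipc_ring_map(const char *dev, const char *suffix)
{
        char name[ODPH_RING_NAMESIZE];
        odp_shm_t shm;

        /* build "<dev>_r" or "<dev>_p", same naming as above */
        snprintf(name, sizeof(name), "%s%s", dev, suffix);

        shm = odp_shm_lookup(name);
        if (shm == ODP_SHM_INVALID)
                return NULL;

        return odp_shm_addr(shm);
}

/* e.g.
 *      pktio_entry->s.ipc_r = ipc_ring_map(dev, "_r");
 *      pktio_entry->s.ipc_p = ipc_ring_map(dev, "_p");
 */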


+
+       pktio_entry->s.ipc_r = odp_shm_addr(shm);
+       if (!pktio_entry->s.ipc_r) {
+               ODP_DBG("pid %d unable to find ipc ring %s name\n",
+                       getpid(), dev);
+               goto error;
+       }
+
+       memcpy(ipc_shm_name + strlen(dev), "_p", 2);
+       /* allocate shared memory for produced ring buffer handlers. Those
+        * buffers will be cleaned up after they are produced by the other
+        * process.
+        */
+       shm = odp_shm_reserve(ipc_shm_name, ring_size,
+                       ODP_CACHE_LINE_SIZE,
+                       ODP_SHM_PROC_NOCREAT);
Same thing here.

-Petri

+
+       pktio_entry->s.ipc_p = odp_shm_addr(shm);
+       if (!pktio_entry->s.ipc_p) {
+               ODP_DBG("pid %d unable to find ipc ring %s name\n",
+                       getpid(), dev);
+               goto error;
+       }
+
+       pktio_entry->s.type = ODP_PKTIO_TYPE_IPC;
+
+       /* Get info about remote pool */
+       struct pktio_info *pinfo = lng_map_shm_pool_info(dev,
+                                       ODP_SHM_PROC_NOCREAT);
+       pktio_entry->s.ipc_pool_base = map_remote_pool(pinfo);
+       pktio_entry->s.ipc_pkt_size = pinfo->shm_pkt_size;
+
+       /* @todo: to simplify the linux-generic implementation we create a
+        * pool for packets from the IPC queue. On receive the implementation
+        * copies packets to that pool. Later we can try to reuse the
+        * original pool without copying packets.
+        */
+       pktio_entry->s.pkt_sock.pool = lg_odp_alloc_and_create_pool(
+               pinfo->shm_pkt_pool_size, pinfo->shm_pkt_size);
+
+       ret = 0;
+       ODP_DBG("%s OK.\n", __func__);
+error:
+       /* @todo free shm on error (api not implemented yet) */
+       return ret;
+}
+


