04/01/2022 03:41, eagost...@nvidia.com:
> From: Elena Agostini <eagost...@nvidia.com>
>
> Enable the possibility to make a GPU memory area accessible from
> the CPU.
>
> GPU memory has to be allocated via rte_gpu_mem_alloc().
>
> This patch allows the gpudev library to pin, through the GPU driver,
> a chunk of GPU memory and to return a memory pointer usable
> by the CPU to access the GPU memory area.
>
> Signed-off-by: Elena Agostini <eagost...@nvidia.com>
[...]
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Pin a chunk of GPU memory to make it accessible from the CPU
You should define exactly what "pin" means.
Which properties should we expect?

> + * using the memory pointer returned by the function.

Which function should return the pointer?
rte_gpu_mem_pin is returning an int.

> + * GPU memory has to be allocated via rte_gpu_mem_alloc().

Why is pinning not done by rte_gpu_mem_alloc()?
Should it be a flag?

> + *
> + * @param dev_id
> + *   Device ID requiring pinned memory.
> + * @param size
> + *   Number of bytes to pin.
> + *   Requesting 0 will do nothing.
> + * @param ptr
> + *   Pointer to the GPU memory area to be pinned.
> + *   NULL is a no-op accepted value.
> +
> + * @return
> + *   A pointer to the pinned GPU memory usable by the CPU, otherwise NULL and rte_errno is set:
> + *   - ENODEV if invalid dev_id
> + *   - EINVAL if reserved flags

Which reserved flags?

> + *   - ENOTSUP if operation not supported by the driver
> + *   - E2BIG if size is higher than limit
> + *   - ENOMEM if out of space

Is out of space relevant for pinning?

> + *   - EPERM if driver error
> + */
> +__rte_experimental
> +int rte_gpu_mem_pin(int16_t dev_id, size_t size, void *ptr);
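
To make the return-type concern concrete, here is a rough usage sketch
(not part of the patch). It assumes the existing two-argument
rte_gpu_mem_alloc()/rte_gpu_mem_free() from gpudev, and assumes the
proposed rte_gpu_mem_pin() follows the usual gpudev convention of
returning 0 on success and a negative value with rte_errno set, which
is what the int prototype suggests despite the doc comment above:

#include <stdio.h>
#include <rte_errno.h>
#include <rte_gpudev.h>

static void *
pin_gpu_buffer(int16_t dev_id, size_t size)
{
	void *ptr;

	/* Existing gpudev allocator: returns a GPU memory pointer or NULL. */
	ptr = rte_gpu_mem_alloc(dev_id, size);
	if (ptr == NULL)
		return NULL;

	/* Proposed API: int return, so assume 0 / negative + rte_errno. */
	if (rte_gpu_mem_pin(dev_id, size, ptr) < 0) {
		printf("pin failed: %s\n", rte_strerror(rte_errno));
		rte_gpu_mem_free(dev_id, ptr);
		return NULL;
	}

	/*
	 * With an int return there is no separate CPU pointer to hand back,
	 * so the caller can only reuse 'ptr'. If the CPU-visible address is
	 * meant to differ from the GPU address, the prototype needs an
	 * output parameter or a pointer return value.
	 */
	return ptr;
}

Depending on the intended semantics, either the doc comment or the
prototype has to change.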