Call for Papers - ICITS'20 - Bogota, Colombia

2019-06-30 Thread Maria Lemos
* Proceedings by Springer. Indexed by Scopus, ISI, etc.





ICITS'20 - The 2020 International Conference on Information Technology & Systems

5 - 7 February 2020, Bogotá, Colombia

http://www.icits.me/ 



Scope

ICITS'20 - The 2020 International Conference on Information Technology & 
Systems, to be held in Bogotá, Colombia, 5 - 7 February 2020, is an 
international forum for researchers and practitioners to present and discuss 
the most recent innovations, trends, results, experiences and concerns across 
the several perspectives of Information Technology & Systems.

We are pleased to invite you to submit your papers to ICITS'20. They can be 
written in English, Spanish or Portuguese. All submissions will be reviewed on 
the basis of relevance, originality, importance and clarity.



Topics

Submitted papers should be related to one or more of the main themes proposed 
for the Conference:

A) Information and Knowledge Management (IKM);

B) Organizational Models and Information Systems (OMIS);

C) Software and Systems Modeling (SSM);

D) Software Systems, Architectures, Applications and Tools (SSAAT);

E) Multimedia Systems and Applications (MSA);

F) Computer Networks, Mobility and Pervasive Systems (CNMPS);

G) Intelligent and Decision Support Systems (IDSS);

H) Big Data Analytics and Applications (BDAA);

I) Human-Computer Interaction (HCI);

J) Ethics, Computers and Security (ECS);

K) Health Informatics (HIS);

L) Information Technologies in Education (ITE);



Submission and Decision

Submitted papers written in English (up to a 10-page limit) must comply with 
the format of the Advances in Intelligent Systems and Computing series (see the 
Instructions for Authors at the Springer website), must not have been published 
before, must not be under review for any other conference or publication, and 
must not include any information leading to the authors’ identification. 
Therefore, the authors’ names, affiliations and bibliographic references should 
not be included in the version for evaluation by the Scientific Committee. This 
information should only be included in the camera-ready version, saved in Word 
or LaTeX format and also in PDF format. These files must be accompanied by the 
completed Consent to Publish form, in a ZIP file, and uploaded to the 
conference management system.

Submitted papers written in Spanish or Portuguese (up to a 15-page limit) must 
comply with the format of RISTI - Revista Ibérica de Sistemas e Tecnologias de 
Informação (download the instructions/template for authors in Spanish or 
Portuguese), must not have been published before, must not be under review for 
any other conference or publication, and must not include any information 
leading to the authors’ identification. Therefore, the authors’ names, 
affiliations and bibliographic references should not be included in the version 
for evaluation by the Scientific Committee. This information should only be 
included in the camera-ready version, saved in Word format. These files must be 
uploaded to the conference management system in a ZIP file.

All papers will be subjected to a “double-blind review” by at least two members 
of the Scientific Committee.

Based on the Scientific Committee's evaluation, a paper can be rejected or 
accepted by the Conference Chairs. In the latter case, it can be accepted as a 
paper or a poster.

The authors of papers accepted as posters must produce and print a poster to be 
exhibited during the Conference. This poster must follow an A1 or A2 vertical 
format. The Conference may include Work Sessions where these posters are 
presented and orally discussed, with a 7-minute limit per poster.

The authors of accepted papers will have 15 minutes to present their work in a 
Conference Work Session; approximately 5 minutes of discussion will follow each 
presentation.



Publication and Indexing

Papers accepted as posters are not published; they are only exhibited, 
presented and discussed during the conference.

To ensure that a paper accepted as a paper is published, at least one of the 
authors must be fully registered by the 8th of November 2019, and the paper 
must comply with the suggested layout and page limit. Additionally, all 
recommended changes must be addressed by the authors before they submit the 
camera-ready version.

No more than one paper per registration will be published. An extra fee must be 
paid for the publication of additional papers, with a maximum of one additional 
paper per registration. One registration permits the participation of only one 
author in the conference.

Papers written in English and accepted and registered w

Re: [PATCH v5 08/12] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing

2019-06-30 Thread Chia-I Wu
On Fri, Jun 28, 2019 at 5:13 AM Gerd Hoffmann  wrote:
>
> Use gem reservation helpers and direct reservation_object_* calls
> instead of ttm.
>
> v5: fix fencing (Chia-I Wu).
> v3: Also attach the array of gem objects to the virtio command buffer,
> so we can drop the object references in the completion callback.  Needed
> because ttm fence helpers grab a reference for us, but gem helpers
> don't.
There are other places
>
> Signed-off-by: Gerd Hoffmann 
> Acked-by: Daniel Vetter 
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |  6 ++-
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 62 +++---
>  drivers/gpu/drm/virtio/virtgpu_vq.c| 17 ---
>  3 files changed, 43 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
> b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 98d646789d23..356d27132388 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -120,9 +120,9 @@ struct virtio_gpu_vbuffer {
>
> char *resp_buf;
> int resp_size;
> -
> virtio_gpu_resp_cb resp_cb;
>
> +   struct virtio_gpu_object_array *objs;
> struct list_head list;
>  };
>
> @@ -311,7 +311,9 @@ void virtio_gpu_cmd_context_detach_resource(struct 
> virtio_gpu_device *vgdev,
> uint32_t resource_id);
>  void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
>void *data, uint32_t data_size,
> -  uint32_t ctx_id, struct virtio_gpu_fence *fence);
> +  uint32_t ctx_id,
> +  struct virtio_gpu_object_array *objs,
> +  struct virtio_gpu_fence *fence);
>  void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
>   uint32_t resource_id, uint32_t 
> ctx_id,
>   uint64_t offset, uint32_t level,
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
> b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 0caff3fa623e..ae6830aa38c9 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -105,14 +105,11 @@ static int virtio_gpu_execbuffer_ioctl(struct 
> drm_device *dev, void *data,
> struct drm_virtgpu_execbuffer *exbuf = data;
> struct virtio_gpu_device *vgdev = dev->dev_private;
> struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
> -   struct drm_gem_object *gobj;
> struct virtio_gpu_fence *out_fence;
> -   struct virtio_gpu_object *qobj;
> int ret;
> uint32_t *bo_handles = NULL;
> void __user *user_bo_handles = NULL;
> -   struct list_head validate_list;
> -   struct ttm_validate_buffer *buflist = NULL;
> +   struct virtio_gpu_object_array *buflist = NULL;
> int i;
> struct ww_acquire_ctx ticket;
> struct sync_file *sync_file;
> @@ -155,15 +152,10 @@ static int virtio_gpu_execbuffer_ioctl(struct 
> drm_device *dev, void *data,
> return out_fence_fd;
> }
>
> -   INIT_LIST_HEAD(&validate_list);
> if (exbuf->num_bo_handles) {
> -
> bo_handles = kvmalloc_array(exbuf->num_bo_handles,
> -  sizeof(uint32_t), GFP_KERNEL);
> -   buflist = kvmalloc_array(exbuf->num_bo_handles,
> -  sizeof(struct ttm_validate_buffer),
> -  GFP_KERNEL | __GFP_ZERO);
> -   if (!bo_handles || !buflist) {
> +   sizeof(uint32_t), GFP_KERNEL);
> +   if (!bo_handles) {
> ret = -ENOMEM;
> goto out_unused_fd;
> }
> @@ -175,25 +167,22 @@ static int virtio_gpu_execbuffer_ioctl(struct 
> drm_device *dev, void *data,
> goto out_unused_fd;
> }
>
> -   for (i = 0; i < exbuf->num_bo_handles; i++) {
> -   gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
> -   if (!gobj) {
> -   ret = -ENOENT;
> -   goto out_unused_fd;
> -   }
> -
> -   qobj = gem_to_virtio_gpu_obj(gobj);
> -   buflist[i].bo = &qobj->tbo;
> -
> -   list_add(&buflist[i].head, &validate_list);
> +   buflist = virtio_gpu_array_from_handles(drm_file, bo_handles,
> +   
> exbuf->num_bo_handles);
> +   if (!buflist) {
> +   ret = -ENOENT;
> +   goto out_unused_fd;
> }
> kvfree(bo_handles);
> bo_handles = NULL;
> }
>
> -   ret = virtio_gpu_object_list_validate(&ticket, &valid

Re: [PATCH v5 09/12] drm/virtio: rework virtio_gpu_object_create fencing

2019-06-30 Thread Chia-I Wu
On Fri, Jun 28, 2019 at 5:13 AM Gerd Hoffmann  wrote:
>
> Use gem reservation helpers and direct reservation_object_* calls
> instead of ttm.
>
> v5: fix fencing (Chia-I Wu).
> v3: Due to using the gem reservation object it is initialized and ready
> for use before calling ttm_bo_init, so we can also drop the tricky fence
> logic which checks whenever the command is in flight still.  We can
> simply fence our object before submitting the virtio command and be done
> with it.
>
> Signed-off-by: Gerd Hoffmann 
> Acked-by: Daniel Vetter 
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h|  2 +
>  drivers/gpu/drm/virtio/virtgpu_object.c | 55 ++---
>  drivers/gpu/drm/virtio/virtgpu_vq.c |  4 ++
>  3 files changed, 27 insertions(+), 34 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
> b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 356d27132388..c4b266b6f731 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -267,6 +267,7 @@ void virtio_gpu_free_vbufs(struct virtio_gpu_device 
> *vgdev);
>  void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
> struct virtio_gpu_object *bo,
> struct virtio_gpu_object_params *params,
> +   struct virtio_gpu_object_array *objs,
> struct virtio_gpu_fence *fence);
>  void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
>uint32_t resource_id);
> @@ -329,6 +330,7 @@ void
>  virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
>   struct virtio_gpu_object *bo,
>   struct virtio_gpu_object_params *params,
> + struct virtio_gpu_object_array *objs,
>   struct virtio_gpu_fence *fence);
>  void virtio_gpu_ctrl_ack(struct virtqueue *vq);
>  void virtio_gpu_cursor_ack(struct virtqueue *vq);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c 
> b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 82bfbf983fd2..fa0ea22c68b0 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -97,7 +97,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device 
> *vgdev,
>  struct virtio_gpu_object **bo_ptr,
>  struct virtio_gpu_fence *fence)
>  {
> +   struct virtio_gpu_object_array *objs = NULL;
> struct virtio_gpu_object *bo;
> +   struct ww_acquire_ctx ticket;
> size_t acc_size;
> int ret;
>
> @@ -123,12 +125,29 @@ int virtio_gpu_object_create(struct virtio_gpu_device 
> *vgdev,
> }
> bo->dumb = params->dumb;
>
> +   if (fence) {
> +   objs = virtio_gpu_array_alloc(1);
> +   objs->objs[0] = &bo->gem_base;
> +   drm_gem_object_get(objs->objs[0]);
> +
> +   ret = drm_gem_lock_reservations(objs->objs, objs->nents,
> +   &ticket);
We can use virtio_gpu_object_reserve when there is only one object.

> +   if (ret == 0)
> +   reservation_object_add_excl_fence(objs->objs[0]->resv,
> + &fence->f);
Similar to in execbuffer, this might need to be moved to after
virtio_gpu_cmd_resource_create_*.
> +   }
> +
> if (params->virgl) {
> -   virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, fence);
> +   virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
> + objs, fence);
> } else {
> -   virtio_gpu_cmd_create_resource(vgdev, bo, params, fence);
> +   virtio_gpu_cmd_create_resource(vgdev, bo, params,
> +  objs, fence);
> }
>
> +   if (fence)
> +   drm_gem_unlock_reservations(objs->objs, objs->nents, &ticket);
objs might have been freed.
> +
> virtio_gpu_init_ttm_placement(bo);
> ret = ttm_bo_init(&vgdev->mman.bdev, &bo->tbo, params->size,
>   ttm_bo_type_device, &bo->placement, 0,
> @@ -139,38 +158,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device 
> *vgdev,
> if (ret != 0)
> return ret;
>
> -   if (fence) {
> -   struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
> -   struct list_head validate_list;
> -   struct ttm_validate_buffer mainbuf;
> -   struct ww_acquire_ctx ticket;
> -   unsigned long irq_flags;
> -   bool signaled;
> -
> -   INIT_LIST_HEAD(&validate_list);
> -   memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
> -
> -   /* use a gem reference since unref list

Re: [PATCH v5 08/12] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing

2019-06-30 Thread Chia-I Wu
(pressed Send too early)

On Sun, Jun 30, 2019 at 11:20 AM Chia-I Wu  wrote:
>
> On Fri, Jun 28, 2019 at 5:13 AM Gerd Hoffmann  wrote:
> >
> > Use gem reservation helpers and direct reservation_object_* calls
> > instead of ttm.
> >
> > v5: fix fencing (Chia-I Wu).
> > v3: Also attach the array of gem objects to the virtio command buffer,
> > so we can drop the object references in the completion callback.  Needed
> > because ttm fence helpers grab a reference for us, but gem helpers
> > don't.
> There are other places
There are other places where a vbuffer uses objects, such as in
transfers.  Do we need to make sure the objects are alive in those
places?

> >
> > Signed-off-by: Gerd Hoffmann 
> > Acked-by: Daniel Vetter 
> > ---
> >  drivers/gpu/drm/virtio/virtgpu_drv.h   |  6 ++-
> >  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 62 +++---
> >  drivers/gpu/drm/virtio/virtgpu_vq.c| 17 ---
> >  3 files changed, 43 insertions(+), 42 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
> > b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > index 98d646789d23..356d27132388 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > @@ -120,9 +120,9 @@ struct virtio_gpu_vbuffer {
> >
> > char *resp_buf;
> > int resp_size;
> > -
> > virtio_gpu_resp_cb resp_cb;
> >
> > +   struct virtio_gpu_object_array *objs;
> > struct list_head list;
> >  };
> >
> > @@ -311,7 +311,9 @@ void virtio_gpu_cmd_context_detach_resource(struct 
> > virtio_gpu_device *vgdev,
> > uint32_t resource_id);
> >  void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
> >void *data, uint32_t data_size,
> > -  uint32_t ctx_id, struct virtio_gpu_fence *fence);
> > +  uint32_t ctx_id,
> > +  struct virtio_gpu_object_array *objs,
> > +  struct virtio_gpu_fence *fence);
> >  void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
> >   uint32_t resource_id, uint32_t 
> > ctx_id,
> >   uint64_t offset, uint32_t level,
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
> > b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > index 0caff3fa623e..ae6830aa38c9 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> > @@ -105,14 +105,11 @@ static int virtio_gpu_execbuffer_ioctl(struct 
> > drm_device *dev, void *data,
> > struct drm_virtgpu_execbuffer *exbuf = data;
> > struct virtio_gpu_device *vgdev = dev->dev_private;
> > struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
> > -   struct drm_gem_object *gobj;
> > struct virtio_gpu_fence *out_fence;
> > -   struct virtio_gpu_object *qobj;
> > int ret;
> > uint32_t *bo_handles = NULL;
> > void __user *user_bo_handles = NULL;
> > -   struct list_head validate_list;
> > -   struct ttm_validate_buffer *buflist = NULL;
> > +   struct virtio_gpu_object_array *buflist = NULL;
> > int i;
> > struct ww_acquire_ctx ticket;
> > struct sync_file *sync_file;
> > @@ -155,15 +152,10 @@ static int virtio_gpu_execbuffer_ioctl(struct 
> > drm_device *dev, void *data,
> > return out_fence_fd;
> > }
> >
> > -   INIT_LIST_HEAD(&validate_list);
> > if (exbuf->num_bo_handles) {
> > -
> > bo_handles = kvmalloc_array(exbuf->num_bo_handles,
> > -  sizeof(uint32_t), GFP_KERNEL);
> > -   buflist = kvmalloc_array(exbuf->num_bo_handles,
> > -  sizeof(struct 
> > ttm_validate_buffer),
> > -  GFP_KERNEL | __GFP_ZERO);
> > -   if (!bo_handles || !buflist) {
> > +   sizeof(uint32_t), GFP_KERNEL);
> > +   if (!bo_handles) {
> > ret = -ENOMEM;
> > goto out_unused_fd;
> > }
> > @@ -175,25 +167,22 @@ static int virtio_gpu_execbuffer_ioctl(struct 
> > drm_device *dev, void *data,
> > goto out_unused_fd;
> > }
> >
> > -   for (i = 0; i < exbuf->num_bo_handles; i++) {
> > -   gobj = drm_gem_object_lookup(drm_file, 
> > bo_handles[i]);
> > -   if (!gobj) {
> > -   ret = -ENOENT;
> > -   goto out_unused_fd;
> > -   }
> > -
> > -   qobj = gem_to_virtio_gpu_obj(gobj);
> > -   buflist[i].bo = &qobj->tbo;
> > -
> > -   list_add(&buflist[i].head, &validate_list);
> > +

Re: [PATCH v4 11/12] drm/virtio: switch from ttm to gem shmem helpers

2019-06-30 Thread Chia-I Wu
On Fri, Jun 28, 2019 at 3:49 AM Gerd Hoffmann  wrote:
>
> > >  static inline struct virtio_gpu_object*
> > >  virtio_gpu_object_ref(struct virtio_gpu_object *bo)
>
> > The last users of these two helpers are removed with this patch.  We
> > can remove them.
>
> patch 12/12 does that.
I meant virtio_gpu_object_ref/unref, which are still around after patch 12.
>
> > > +   bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
> > > +   bo->base.base.funcs = &virtio_gpu_gem_funcs;
> > Move this to virtio_gpu_create_object.
>
> Fixed.
>
> > > +   ret = drm_gem_shmem_pin(&obj->base.base);
> > The bo is attached for its entire lifetime, at least currently.  Maybe
> > we can use drm_gem_shmem_get_pages_sgt (and get rid of obj->pages).
>
> Already checked this.
> We can't due to the iommu quirks.
>
> cheers,
>   Gerd
>
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v4 08/12] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing

2019-06-30 Thread Chia-I Wu
On Fri, Jun 28, 2019 at 3:34 AM Gerd Hoffmann  wrote:
>
>   Hi,
>
> > > --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> > > @@ -120,9 +120,9 @@ struct virtio_gpu_vbuffer {
> > >
> > > char *resp_buf;
> > > int resp_size;
> > > -
> > > virtio_gpu_resp_cb resp_cb;
> > >
> > > +   struct virtio_gpu_object_array *objs;
> > This can use a comment (e.g., objects referenced by the vbuffer)
>
> IMHO this is obvious ...
>
> > >  void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
> > >void *data, uint32_t data_size,
> > > -  uint32_t ctx_id, struct virtio_gpu_fence 
> > > *fence);
> > > +  uint32_t ctx_id, struct virtio_gpu_fence 
> > > *fence,
> > > +  struct virtio_gpu_object_array *objs);
> > Can we keep fence, which is updated, as the last parameter?
>
> Fixed.
>
> > > +   if (buflist) {
> > > +   for (i = 0; i < exbuf->num_bo_handles; i++)
> > > +   
> > > reservation_object_add_excl_fence(buflist->objs[i]->resv,
> > > + &out_fence->f);
> > > +   drm_gem_unlock_reservations(buflist->objs, buflist->nents,
> > > +   &ticket);
> > > +   }
> > We used to unlock after virtio_gpu_cmd_submit.
> >
> > I guess, the fence is considered signaled (because its seqno is still
> > 0) until after virtio_gpu_cmd_submit.  We probably don't want other
> > processes to see the semi-initialized fence.
>
> Good point.  Fixed.
>
> > >  out_memdup:
> > > kfree(buf);
> > >  out_unresv:
> > > -   ttm_eu_backoff_reservation(&ticket, &validate_list);
> > > -out_free:
> > > -   virtio_gpu_unref_list(&validate_list);
> > Keeping out_free to free buflist seems just fine.
>
> We don't need the separate label though ...
>
> > > +   drm_gem_unlock_reservations(buflist->objs, buflist->nents, 
> > > &ticket);
> > >  out_unused_fd:
> > > kvfree(bo_handles);
> > > -   kvfree(buflist);
> > > +   if (buflist)
> > > +   virtio_gpu_array_put_free(buflist);
>
> ... and the buflist is released here if needed.
>
> But we need if (buflist) for drm_gem_unlock_reservations too.  Fixed.
>
> > > -
> > > -   list_del(&entry->list);
> > > -   free_vbuf(vgdev, entry);
> > > }
> > > wake_up(&vgdev->ctrlq.ack_queue);
> > >
> > > if (fence_id)
> > > virtio_gpu_fence_event_process(vgdev, fence_id);
> > > +
> > > +   list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
> > > +   if (entry->objs)
> > > +   virtio_gpu_array_put_free(entry->objs);
> > > +   list_del(&entry->list);
> > We are clearing the list.  I guess list_del is not needed.
> > > +   free_vbuf(vgdev, entry);
>
> This just shuffles around the code.  Dropping list_del() is unrelated
> and should be a separate patch.
Fair point.  We now loop the list twice and I was just looking for
chances for micro-optimizations.
>
> Beside that I'm not sure it actually can be dropped.  free_vbuf() will
> not kfree() the vbuf but keep it cached in a freelist instead.
vbuf is created with kmem_cache_zalloc which always zeros the struct.

>
> cheers,
>   Gerd
>


Re: [PATCH v4 02/12] drm/virtio: switch virtio_gpu_wait_ioctl() to gem helper.

2019-06-30 Thread Chia-I Wu
On Fri, Jun 28, 2019 at 3:05 AM Gerd Hoffmann  wrote:
>
> On Wed, Jun 26, 2019 at 04:55:20PM -0700, Chia-I Wu wrote:
> > On Wed, Jun 19, 2019 at 11:07 PM Gerd Hoffmann  wrote:
> > >
> > > Use drm_gem_reservation_object_wait() in virtio_gpu_wait_ioctl().
> > > This also makes the ioctl run lockless.
> > The userspace has a BO cache to avoid freeing BOs immediately but to
> > reuse them on next allocations.  The BO cache checks if a BO is busy
> > before reuse, and I am seeing a big negative perf impact because of
> > slow virtio_gpu_wait_ioctl.  I wonder if this helps.
>
> Could help indeed (assuming it checks with NOWAIT).
Yeah, that is the case.
>
> How many objects does userspace check in one go typically?  Maybe it
> makes sense to add an ioctl which checks a list, to reduce the system
> call overhead.
One.  The cache sorts BOs by the time they are freed, and checks only
the first (compatible) BO.  If it is idle, cache hit.  Otherwise,
cache miss.  A new ioctl probably won't help.


>
> > > +   if (args->flags & VIRTGPU_WAIT_NOWAIT) {
> > > +   obj = drm_gem_object_lookup(file, args->handle);
> > Don't we need a NULL check here?
>
> Yes, we do.  Will fix.
>
> thanks,
>   Gerd
>


Re: [PATCH v1 09/33] drm/qxl: drop use of drmP.h

2019-06-30 Thread Gerd Hoffmann
On Sun, Jun 30, 2019 at 08:18:58AM +0200, Sam Ravnborg wrote:
> Drop use of the deprecated drmP.h header file.
> While touching the files divided includes in blocks,
> and when needed sort the blocks.
> Fix fallout.
> 
> Signed-off-by: Sam Ravnborg 
> Cc: Dave Airlie 
> Cc: Gerd Hoffmann 
> Cc: virtualization@lists.linux-foundation.org
> Cc: spice-de...@lists.freedesktop.org

Acked-by: Gerd Hoffmann 



Re: [PATCH v1 27/33] drm/virtgpu: drop use of drmP.h

2019-06-30 Thread Gerd Hoffmann
On Sun, Jun 30, 2019 at 08:19:16AM +0200, Sam Ravnborg wrote:
> Drop use of the deprecated drmP.h header file.
> Fix fallout by adding missing include files.
> 
> Signed-off-by: Sam Ravnborg 
> Cc: David Airlie 
> Cc: Gerd Hoffmann 
> Cc: Daniel Vetter 
> Cc: virtualization@lists.linux-foundation.org

Acked-by: Gerd Hoffmann 



Re: [PATCH v1 31/33] drm/bochs: drop use of drmP.h

2019-06-30 Thread Gerd Hoffmann
On Sun, Jun 30, 2019 at 08:19:20AM +0200, Sam Ravnborg wrote:
> Drop use of the deprecated drmP.h header file.
> Made bochs.h self-contained and then fixed
> fallout in remaining files.
> Several unused includes was dropped in the process.
> 
> Signed-off-by: Sam Ravnborg 
> Cc: Gerd Hoffmann 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: virtualization@lists.linux-foundation.org

Acked-by: Gerd Hoffmann 
