Re: [PATCH] dma-fence: dma-buf synchronization
On 12-07-12 00:29, Rob Clark wrote:
> From: Rob Clark
>
> A dma-fence can be attached to a buffer which is being filled or consumed
> by hw, to allow userspace to pass the buffer without waiting to another
> device. For example, userspace can call the page_flip ioctl to display the
> next frame of graphics after kicking the GPU, but while the GPU is still
> rendering. The display device sharing the buffer with the GPU would
> attach a callback to get notified when the GPU's rendering-complete IRQ
> fires, to update the scan-out address of the display, without having to
> wake up userspace.
>
> A dma-fence is a transient, one-shot deal. It is allocated and attached
> to a dma-buf's list of fences. When the one that attached it is done with
> the pending operation, it can signal the fence, removing it from the
> dma-buf's list of fences:
>
>   + dma_buf_attach_fence()
>   + dma_fence_signal()
>
> Other drivers can access the current fence on the dma-buf (if any),
> which increments the fence's refcnt:
>
>   + dma_buf_get_fence()
>   + dma_fence_put()
>
> The one pending on the fence can add an async callback (and optionally
> cancel it, for example to recover from GPU hangs):
>
>   + dma_fence_add_callback()
>   + dma_fence_cancel_callback()
>
> Or wait synchronously (optionally with a timeout, or from atomic context):
>
>   + dma_fence_wait()

Waiting for an undefined time from atomic context is probably
not a good idea. However, just checking non-blocking whether the fence
has passed would be fine.

> A default software-only implementation is provided, which can be used
> by drivers attaching a fence to a buffer when they have no other means
> for hw sync. But a memory-backed fence is also envisioned, because it
> is common that GPUs can write to, or poll on, some memory location for
> synchronization. For example:
>
>   fence = dma_buf_get_fence(dmabuf);
>   if (fence->ops == &mem_dma_fence_ops) {
>           dma_buf *fence_buf;
>           mem_dma_fence_get_buf(fence, &fence_buf, &offset);
>           ... tell the hw the memory location to wait on ...
>   } else {
>           /* fall back to sw sync */
>           dma_fence_add_callback(fence, my_cb);
>   }

This will probably have to be done at dma-buf attach time instead,
so drivers that support both know whether an interrupt needs to be
inserted in the command stream or not.

> The memory location is itself backed by dma-buf, to simplify mapping
> to the device's address space, an idea borrowed from Maarten Lankhorst.
>
> NOTE: the memory location fence is not implemented yet; the above is
> just for explaining how it would work.
>
> On SoC platforms, if some other hw mechanism is provided for synchronizing
> between IP blocks, it could be supported as an alternate implementation
> with its own fence ops in a similar way.
>
> The other non-sw implementations would wrap the add/cancel_callback and
> wait fence ops, so that they can keep track of whether a device not
> supporting hw sync is waiting on the fence, and in this case should
> arrange to

Standardizing an errno for the case where the device already signalled
the fence would be nice.

> call dma_fence_signal() at some point after the condition has changed,
> to notify other devices waiting on the fence. If there are no sw
> waiters, this can be skipped to avoid waking the CPU unnecessarily.

Can this be done inside interrupt context? I could insert some
semaphores into intel that would block execution, but I would
save a context switch if intel could release the command blocking
from inside irq context.
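For concreteness, the kind of irq-context release being asked about would
look roughly like this. This is only a sketch: the callback prototype is
assumed (the patch excerpt does not show it), and struct intel_waiter and
RING_SEMAPHORE_RELEASE are invented names, not anything from i915.

    /* Callback registered via dma_fence_add_callback(). If callbacks
     * may run directly in the signalling driver's irq context, the
     * hw semaphore that was inserted into the command stream can be
     * released right here, saving the context switch a worker-based
     * release would cost. */
    static void intel_release_blocking_cb(struct dma_fence *fence, void *data)
    {
            struct intel_waiter *w = data;

            /* release the semaphore blocking command execution */
            writel(1, w->mmio + RING_SEMAPHORE_RELEASE);
    }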
> The intention is to provide a userspace interface (presumably via eventfd)
> later, to be used in conjunction with dma-buf's mmap support for sw access
> to buffers (or for userspace apps that would prefer to do their own
> synchronization).

I'll have to look at this more in the morning, but I see no barrier to
this being used with dmabufmgr right now.

The fence lock should probably not be static but shared with the
dmabufmgr code, with _locked variants.

Oh, and in your example code I noticed inconsistent use of spin_lock
and spin_lock_irqsave; do you intend it to be used in hardirq context?

~Maarten
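To make the hardirq question concrete: if dma_fence_signal() may be
called from an irq handler, every other path taking the same lock must
disable interrupts, or it can deadlock against the irq arriving on the
same CPU. A sketch only; the list field names are assumptions, but the
static lock matches what the patch is said to use:

    static DEFINE_SPINLOCK(fence_lock);    /* the static lock in question */

    /* process-context path, e.g. attaching a callback: */
    unsigned long flags;

    spin_lock_irqsave(&fence_lock, flags);
    list_add_tail(&cb->node, &fence->cb_list);
    spin_unlock_irqrestore(&fence_lock, flags);

    /* a plain spin_lock(&fence_lock) here would be safe only if no
     * user of the lock, including dma_fence_signal(), ever runs in
     * hardirq context */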
Re: [PATCH] dma-fence: dma-buf synchronization
On Wed, Jul 11, 2012 at 6:49 PM, Maarten Lankhorst wrote:
> On 12-07-12 00:29, Rob Clark wrote:
>> [...]
>>
>> Or wait synchronously (optionally with a timeout, or from atomic context):
>>
>>   + dma_fence_wait()
>
> Waiting for an undefined time from atomic context is probably
> not a good idea. However, just checking non-blocking whether the fence
> has passed would be fine.

yeah, the intention was to use a short timeout or non-blocking if from
atomic ctxt, or interruptible with whatever timeout if non-atomic (for
example, to implement a CPU_PREP sort of ioctl)

>> [...]
>>
>>   fence = dma_buf_get_fence(dmabuf);
>>   if (fence->ops == &mem_dma_fence_ops) {
>>           dma_buf *fence_buf;
>>           mem_dma_fence_get_buf(fence, &fence_buf, &offset);
>>           ... tell the hw the memory location to wait on ...
>>   } else {
>>           /* fall back to sw sync */
>>           dma_fence_add_callback(fence, my_cb);
>>   }
>
> This will probably have to be done at dma-buf attach time instead,
> so drivers that support both know whether an interrupt needs to be
> inserted in the command stream or not.

probably a hint, i.e. adding a flags parameter to attach(), would do
the job?

>> The other non-sw implementations would wrap the add/cancel_callback and
>> wait fence ops, so that they can keep track of whether a device not
>> supporting hw sync is waiting on the fence, and in this case should
>> arrange to
>
> Standardizing an errno for the case where the device already signalled
> the fence would be nice.

I was just using EINVAL, but perhaps there is a better choice?
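Whichever errno gets standardized, the caller-side pattern would look
about like this. A sketch only: -EINVAL is what the current patch
returns for an already-signalled fence, and the callback invocation
style is an assumption.

    ret = dma_fence_add_callback(fence, my_cb);
    if (ret == -EINVAL) {
            /* the fence was already signalled before we could attach:
             * the buffer is idle, so run the completion path directly */
            my_cb(fence, data);
            ret = 0;
    } else if (ret < 0) {
            return ret;     /* some other failure */
    }

A dedicated errno (say, -ENOENT) would let callers distinguish "already
signalled, just proceed" from a genuine argument error.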
>> call dma_fence_signal() at some point after the condition has changed,
>> to notify other devices waiting on the fence. If there are no sw
>> waiters, this can be skipped to avoid waking the CPU unnecessarily.
>
> Can this be done inside interrupt context? I could insert some
> semaphores into intel that would block execution, but I would
> save a context switch if intel could release the command blocking
> from inside irq context.

yeah, it was the intention that signal() could be called from an irq
handler directly (and that registered cbs can be called from atomic
ctxt.. which is sufficient if they just have to bang a register or two;
otherwise they can schedule a worker)

>> The intention is to provide a userspace interface (presumably via eventfd)
>> later, to be used in conjunction with dma-buf's mmap support for sw access
>> to buffers (or for userspace apps that would prefer to do their own
>> synchronization).
>
> I'll have to look at this more in the morning, but I see no barrier to
> this being used with dmabufmgr right now.
>
> The fence lock should probably not be static but shared with the
> dmabufmgr code, with _locked variants.
>
> Oh, and in your example code I noticed inconsistent use of spin_lock
> and spin_lock_irqsave; do you intend it to be used in hardirq context?

oh, whoops, I started w/ spin_lock() and then realized I wanted signal()
from irq handlers, and forgot to update all the other places where
spin_lock() was used

BR,
-R
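The two callback styles described above, sketched side by side. The
callback prototype and the display-driver names (struct my_display,
SCANOUT_ADDR, retire_work) are assumptions for illustration:

    /* style 1: trivial work, done directly in atomic context --
     * just bang a register or two */
    static void flip_cb(struct dma_fence *fence, void *data)
    {
            struct my_display *disp = data;

            /* latch the new scan-out address for the next vblank */
            writel(disp->next_scanout, disp->mmio + SCANOUT_ADDR);
    }

    /* style 2: anything heavier gets punted to process context;
     * schedule_work() is safe to call from atomic/irq context */
    static void retire_cb(struct dma_fence *fence, void *data)
    {
            struct my_display *disp = data;

            schedule_work(&disp->retire_work);
    }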
[PATCH] dma-fence: dma-buf synchronization
From: Rob Clark

A dma-fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer without waiting to another
device. For example, userspace can call the page_flip ioctl to display the
next frame of graphics after kicking the GPU, but while the GPU is still
rendering. The display device sharing the buffer with the GPU would
attach a callback to get notified when the GPU's rendering-complete IRQ
fires, to update the scan-out address of the display, without having to
wake up userspace.

A dma-fence is a transient, one-shot deal. It is allocated and attached
to a dma-buf's list of fences. When the one that attached it is done with
the pending operation, it can signal the fence, removing it from the
dma-buf's list of fences:

  + dma_buf_attach_fence()
  + dma_fence_signal()

Other drivers can access the current fence on the dma-buf (if any),
which increments the fence's refcnt:

  + dma_buf_get_fence()
  + dma_fence_put()

The one pending on the fence can add an async callback (and optionally
cancel it, for example to recover from GPU hangs):

  + dma_fence_add_callback()
  + dma_fence_cancel_callback()

Or wait synchronously (optionally with a timeout, or from atomic context):

  + dma_fence_wait()

A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync. But a memory-backed fence is also envisioned, because it
is common that GPUs can write to, or poll on, some memory location for
synchronization. For example:

  fence = dma_buf_get_fence(dmabuf);
  if (fence->ops == &mem_dma_fence_ops) {
          dma_buf *fence_buf;
          mem_dma_fence_get_buf(fence, &fence_buf, &offset);
          ... tell the hw the memory location to wait on ...
  } else {
          /* fall back to sw sync */
          dma_fence_add_callback(fence, my_cb);
  }

The memory location is itself backed by dma-buf, to simplify mapping
to the device's address space, an idea borrowed from Maarten Lankhorst.

NOTE: the memory location fence is not implemented yet; the above is
just for explaining how it would work.

On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with its own fence ops in a similar way.

The other non-sw implementations would wrap the add/cancel_callback and
wait fence ops, so that they can keep track of whether a device not
supporting hw sync is waiting on the fence, and in this case should
arrange to call dma_fence_signal() at some point after the condition has
changed, to notify other devices waiting on the fence. If there are no
sw waiters, this can be skipped to avoid waking the CPU unnecessarily.

The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).
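Putting the API together, the page_flip example from the first paragraph
would flow roughly as below. This is a sketch against the function names
listed above: the exact signatures are not shown in this excerpt, and the
fence constructor (my_gpu_fence_create) and flip_cb are hypothetical.

  /* GPU driver, at command submission: */
  fence = my_gpu_fence_create(gpu);            /* hypothetical */
  dma_buf_attach_fence(dmabuf, fence);
  /* ... kick the GPU ... */

  /* display driver, handling the page_flip ioctl: */
  fence = dma_buf_get_fence(dmabuf);           /* takes a reference */
  if (fence)
          dma_fence_add_callback(fence, flip_cb);
  /* else: no pending fence, the flip can happen immediately */

  /* GPU driver, from its rendering-complete irq handler: */
  dma_fence_signal(fence);     /* runs callbacks, detaches from dma-buf */

  /* display driver, once the flip has completed: */
  dma_fence_put(fence);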
---
 drivers/base/Makefile     |    2 +-
 drivers/base/dma-buf.c    |    3 +
 drivers/base/dma-fence.c  |  325 ++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-buf.h   |    3 +
 include/linux/dma-fence.h |  118 ++++++++++++++
 5 files changed, 450 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/dma-fence.c
 create mode 100644 include/linux/dma-fence.h

diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 5aa2d70..6e9f217 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y += power/
 obj-$(CONFIG_HAS_DMA) += dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-fence.o
 obj-$(CONFIG_ISA) += isa.o
 obj-$(CONFIG_FW_LOADER) += firmware_class.o
 obj-$(CONFIG_NUMA) += node.o
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 24e88fe..b053236 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -39,6 +39,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	dmabuf = file->private_data;
 
+	WARN_ON(!list_empty(&dmabuf->fence_list));
+
 	dmabuf->ops->release(dmabuf);
 	kfree(dmabuf);
 	return 0;
@@ -119,6 +121,7 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 	mutex_init(&dmabuf->lock);
 	INIT_LIST_HEAD(&dmabuf->attachments);
+	INIT_LIST_HEAD(&dmabuf->fence_list);
 
 	return dmabuf;
 }
diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c
new file mode 100644
index 0000000..a94ed01
--- /dev/null
+++ b/drivers/base/dma-fence.c
@@ -0,0 +1,325 @@
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Texas Instruments
+ * Author: Rob Clark
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2
[...]