On 8/19/2016 11:33 PM, Chris Wilson wrote:
On Fri, Aug 19, 2016 at 02:13:07PM +0530, akash.g...@intel.com wrote:
 static void *guc_get_write_buffer(struct intel_guc *guc)
 {
-       return NULL;
+       /* FIXME: Should this check be covered by a lock? */
+       if (!guc->log.relay_chan)
+               return NULL;
+
+       /* Just get the base address of a new sub-buffer and copy data into
+        * it ourselves. NULL will be returned in no-overwrite mode if all
+        * sub-buffers are full. Could have used relay_write() to indirectly
+        * copy the data, but that would have been a bit convoluted, as we
+        * need to write to only certain locations inside a sub-buffer, which
+        * cannot be done without using relay_reserve() along with
+        * relay_write(). So it's better to use relay_reserve() alone.
+        */
+       return relay_reserve(guc->log.relay_chan, 0);
 }
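
So relay_reserve(chan, 0) just hands back the base of the current
sub-buffer (or NULL in no-overwrite mode once every sub-buffer is full)
and the caller places the data itself, roughly along these lines
(the offset/length names below are only illustrative):

        dst = relay_reserve(guc->log.relay_chan, 0);
        if (dst)
                memcpy(dst + log_data_offset, log_data, log_data_len);

rather than appending sequentially via relay_write().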

You have to chase through the code a long way to check whether or not
the allocation is correct.

Please do consider adding a check such as

        GEM_BUG_ON(guc->log.relay_chan->size < guc->log.vma->size);
(near the allocation)

Fine, will add a check after the allocation, though I am not sure how
useful it will be, since we trust relay to allocate memory for the
sub-buffers as per the requested 'subbuf_size'.

subbuf_size = guc->log.vma->obj->base.size;
n_subbufs = 8;
guc_log_relay_chan = relay_open(NULL, NULL, subbuf_size,
                                n_subbufs, &relay_callbacks, dev_priv);


GEM_BUG_ON(guc->log.relay_chan->subbuf_size < guc->log.vma->obj->base.size);



        GEM_BUG_ON(write_offset + buffer_size > guc->log.relay_chan->size);
(before the memcpy, or whatever is appropriate).

There is a check already for read_offset/write_offset before the memcpy.
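Roughly of this shape (wrap-around handling and the exact variable
names elided here):

        /* read_offset/write_offset come from the GuC-updated log buffer
         * state, so sanitize them before using them for the copy.
         */
        if (unlikely(read_offset > buffer_size ||
                     write_offset > buffer_size))
                return;

        memcpy(dst_data + read_offset, src_data + read_offset,
               write_offset - read_offset);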

I think it would be better to add this check
GEM_BUG_ON(guc->log.relay_chan->subbuf_size < guc->log.vma->obj->base.size);

just before
return relay_reserve(guc->log.relay_chan, 0);
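
i.e. the function would then end up looking like:

static void *guc_get_write_buffer(struct intel_guc *guc)
{
        if (!guc->log.relay_chan)
                return NULL;

        /* ... comment about relay_reserve() vs relay_write() as above ... */

        /* relay should have allocated sub-buffers at least as big as the
         * log buffer object we copy from.
         */
        GEM_BUG_ON(guc->log.relay_chan->subbuf_size <
                   guc->log.vma->obj->base.size);

        return relay_reserve(guc->log.relay_chan, 0);
}

which documents the size assumption right at the point where the
sub-buffer is handed out.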

Best regards
Akash

Just to leave some clues to the reader as to what is going on.
-Chris
