On ma, 2015-04-13 at 12:32 +0100, Tvrtko Ursulin wrote:
> Hi,
> 
> On 04/07/2015 01:23 PM, Joonas Lahtinen wrote:
> > Add a straightforward test that allocates a BO that is bigger than
> > (by 1 page currently) the mappable aperture, tests mmap access to it
> > by CPU directly and through GTT in sequence.
> >
> > Currently it is expected for the GTT access to gracefully fail as
> > all objects are attempted to get pinned to GTT completely for mmap
> > access. Once the partial view support is merged to kernel, the test
> > should pass for all parts.
> >
> > Signed-off-by: Joonas Lahtinen <joonas.lahti...@linux.intel.com>
> > ---
> >   tests/gem_mmap_gtt.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 68 insertions(+)
> >
> > diff --git a/tests/gem_mmap_gtt.c b/tests/gem_mmap_gtt.c
> > index 55c66a2..bf3627c 100644
> > --- a/tests/gem_mmap_gtt.c
> > +++ b/tests/gem_mmap_gtt.c
> > @@ -41,6 +41,10 @@
> >   #include "drmtest.h"
> >   #include "igt_debugfs.h"
> >
> > +#ifndef PAGE_SIZE
> > +#define PAGE_SIZE 4096
> > +#endif
> > +
> >   static int OBJECT_SIZE = 16*1024*1024;
> >
> >   static void set_domain(int fd, uint32_t handle)
> > @@ -258,6 +262,68 @@ test_write_gtt(int fd)
> >   }
> >
> >   static void
> > +test_huge_bo(int fd)
> > +{
> > +   uint32_t bo;
> > +   char *ptr_cpu;
> > +   char *ptr_gtt;
> > +   char *cpu_pattern;
> > +   uint64_t mappable_aperture_pages = gem_mappable_aperture_size() /
> > +                                      PAGE_SIZE;
> > +   uint64_t huge_object_size = (mappable_aperture_pages + 1) * PAGE_SIZE;
> > +   uint64_t last_offset = huge_object_size - PAGE_SIZE;
> > +
> > +   cpu_pattern = malloc(PAGE_SIZE);
> > +   igt_assert(cpu_pattern);
> 
> I'd be tempted to use 4k from the stack for simplicity.

It's not nice to allocate two 4k objects from the stack, so let's
just not.
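
(For illustration only, the stack variant would be roughly:

	char cpu_pattern[PAGE_SIZE];	/* 4k on the stack */

	memset(cpu_pattern, 0xaa, sizeof(cpu_pattern));

and a second such pattern, e.g. a separate one for the GTT checks as
suggested further down, would already mean two 4k arrays in the same
frame.)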

> > +   memset(cpu_pattern, 0xaa, PAGE_SIZE);
> > +
> > +   bo = gem_create(fd, huge_object_size);
> > +
> > +   ptr_cpu = gem_mmap__cpu(fd, bo, 0, huge_object_size,
> > +                           PROT_READ | PROT_WRITE);
> > +   if (!ptr_cpu) {
> > +           igt_warn("Not enough free memory for huge BO test!\n");
> > +           goto out;
> 
> Free address space or free memory?
> 

It is not really relevant to the test which condition caused it, but
yeah, I'll correct the error message to 'Can not allocate memory'.

> Also, igt_require so test skips in that case?
> 

Ack, I'll use igt_require_f(), because the condition is a bit unclear
without the text.
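
Roughly like this (just a sketch, the exact message wording may still
change):

	ptr_cpu = gem_mmap__cpu(fd, bo, 0, huge_object_size,
				PROT_READ | PROT_WRITE);
	igt_require_f(ptr_cpu, "Can not allocate memory\n");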

> > +   }
> > +
> > +   /* Test read/write to first/last page with CPU. */
> > +   memcpy(ptr_cpu, cpu_pattern, PAGE_SIZE);
> > +   igt_assert(memcmp(ptr_cpu, cpu_pattern, PAGE_SIZE) == 0);
> > +
> > +   memcpy(ptr_cpu + last_offset, cpu_pattern, PAGE_SIZE);
> > +   igt_assert(memcmp(ptr_cpu + last_offset, cpu_pattern, PAGE_SIZE) == 0);
> > +
> > +   igt_assert(memcmp(ptr_cpu, ptr_cpu + last_offset, PAGE_SIZE) == 0);
> > +
> > +   munmap(ptr_cpu, huge_object_size);
> > +   ptr_cpu = NULL;
> > +
> > +   ptr_gtt = gem_mmap__gtt(fd, bo, huge_object_size,
> > +                           PROT_READ | PROT_WRITE);
> > +   if (!ptr_gtt) {
> > +           igt_debug("Huge BO GTT mapping not supported!\n");
> > +           goto out;
> 
> igt_require as above? Hm, although ideally test would be able to detect 
> the feature (once it is added to the kernel) so it could assert here.
> 

I think the point is somewhat that UMP should not need to know/care
about it. Before introducing the feature the above will always fail, and
after introducing it, it will always succeed (unless there is less than
1MB aperture space available). So I think it should be good as it is.

> > +   }
> > +
> > +   /* Test read/write to first/last page through GTT. */
> > +   set_domain(fd, bo);
> > +
> > +   igt_assert(memcmp(ptr_gtt, cpu_pattern, PAGE_SIZE) == 0);
> > +   igt_assert(memcmp(ptr_gtt + last_offset, cpu_pattern, PAGE_SIZE) == 0);
> > +
> > +   memset(ptr_gtt, 0x55, PAGE_SIZE);
> > +   igt_assert(memcmp(ptr_gtt + last_offset, cpu_pattern, PAGE_SIZE) == 0);
> > +
> > +   memset(ptr_gtt + last_offset, 0x55, PAGE_SIZE);
> > +   igt_assert(memcmp(ptr_gtt, ptr_gtt + last_offset, PAGE_SIZE) == 0);
> 
> Comments for the above would be nice just to explain what is being 
> tested and how.
> 

The level of commenting was already higher than what I noticed in other
tests, but I'll add a few more.
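
Something along these lines (code as in the patch above, the comments
are the addition):

	/* The CPU path already filled the first and last page with 0xaa;
	 * check that both are visible through the GTT mapping. */
	igt_assert(memcmp(ptr_gtt, cpu_pattern, PAGE_SIZE) == 0);
	igt_assert(memcmp(ptr_gtt + last_offset, cpu_pattern, PAGE_SIZE) == 0);

	/* Writing the first page through the GTT must not disturb the last
	 * page and vice versa; afterwards both pages hold 0x55 and should
	 * compare equal. */
	memset(ptr_gtt, 0x55, PAGE_SIZE);
	igt_assert(memcmp(ptr_gtt + last_offset, cpu_pattern, PAGE_SIZE) == 0);

	memset(ptr_gtt + last_offset, 0x55, PAGE_SIZE);
	igt_assert(memcmp(ptr_gtt, ptr_gtt + last_offset, PAGE_SIZE) == 0);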

> Won't the last test have side effects with partial views since it is 
> accessing beginning and end of the object? Would it be better to memcmp 
> against a pattern on stack or in heap like cpu_pattern?
> 
> Will you support two simultaneous partial views or the last memcmp will 
> cause a lot of partial view creation/destruction?

Yes, there will be multiple partial views, but that's all internal to the
kernel implementation. The above access pattern should be supported.

Regards, Joonas

> 
> > +
> > +   munmap(ptr_gtt, huge_object_size);
> > +out:
> > +   gem_close(fd, bo);
> > +   free(cpu_pattern);
> > +}
> > +
> > +static void
> >   test_read(int fd)
> >   {
> >     void *dst;
> > @@ -395,6 +461,8 @@ igt_main
> >             run_without_prefault(fd, test_write_gtt);
> >     igt_subtest("write-cpu-read-gtt")
> >             test_write_cpu_read_gtt(fd);
> > +   igt_subtest("huge-bo")
> > +           test_huge_bo(fd);
> >
> >     igt_fixture
> >             close(fd);
> >
> 
> Regards,
> 
> Tvrtko

