> -#define HCA_CAP_MR_PGSIZE_4K 1
> -#define HCA_CAP_MR_PGSIZE_64K 2
> -#define HCA_CAP_MR_PGSIZE_1M 4
> -#define HCA_CAP_MR_PGSIZE_16M 8
> +#define HCA_CAP_MR_PGSIZE_4K 0x80000000
> +#define HCA_CAP_MR_PGSIZE_64K 0x40000000
> +#define HCA_CAP_MR_PGSIZE_1M 0x20000000
> +#define HCA_CAP_MR_PGSIZE_16M 0x10000000
Not sure I understand what this has to do with things... is this an
unrelated fix?

> +static int ehca_is_mem_hugetlb(unsigned long addr, unsigned long size)

This is rather awful -- another call to get_user_pages() to iterate
over all the vmas... I would suggest extending ib_umem_get() to check
the vmas and adding a member to struct ib_umem to say whether the
memory is entirely covered by hugetlb pages or not.

> +	ret = ehca_is_mem_hugetlb(virt, length);
> +	switch (ret) {
> +	case 0: /* mem is not from hugetlb */
> +		hwpage_size = PAGE_SIZE;
> +		break;
> +	case 1:
> +		if (length <= EHCA_MR_PGSIZE4K
> +		    && PAGE_SIZE == EHCA_MR_PGSIZE4K)
> +			hwpage_size = EHCA_MR_PGSIZE4K;
> +		else if (length <= EHCA_MR_PGSIZE64K)
> +			hwpage_size = EHCA_MR_PGSIZE64K;
> +		else if (length <= EHCA_MR_PGSIZE1M)
> +			hwpage_size = EHCA_MR_PGSIZE1M;
> +		else
> +			hwpage_size = EHCA_MR_PGSIZE16M;
> +		break;
> +	default: /* out of mem */
> +		ib_mr = ERR_PTR(-ENOMEM);
> +		goto reg_user_mr_exit1;

It seems like it would be better to just assume the memory is not from
a hugetlb if ehca_is_mem_hugetlb() fails its memory allocation and
fall back to the PAGE_SIZE case rather than failing entirely.

Also if someone runs a kernel with 64K pages on a machine where they
end up being simulated from 4K pages, do you have the same issue with
the hypervisor ganging together non-contiguous pages?

 - R.
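For concreteness, a rough, untested sketch of the ib_umem_get()
suggestion above -- the "hugetlb" member name and its placement are
purely illustrative, not an existing field:

	/* include/rdma/ib_umem.h: new member */
	struct ib_umem {
		...
		int	hugetlb; /* 1 iff entirely hugetlb-backed */
	};

	/* in ib_umem_get(), which already has vma_list in hand
	 * for its existing get_user_pages() call */
	umem->hugetlb = 1;
	...
	ret = get_user_pages(current, current->mm, cur_base,
			     min_t(int, npages,
				   PAGE_SIZE / sizeof (struct page *)),
			     1, !umem->writable, page_list, vma_list);
	...
	for (i = 0; i < ret; i++)
		if (!vma_list || !is_vm_hugetlb_page(vma_list[i]))
			umem->hugetlb = 0;

Then ehca (and any other low-level driver) could just test
umem->hugetlb instead of walking all the vmas a second time.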
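Similarly, the fallback suggested above would amount to folding the
allocation-failure case into the non-hugetlb case -- again only an
untested sketch of the quoted switch:

	switch (ehca_is_mem_hugetlb(virt, length)) {
	case 1:		/* region is entirely hugetlb-backed */
		if (length <= EHCA_MR_PGSIZE4K
		    && PAGE_SIZE == EHCA_MR_PGSIZE4K)
			hwpage_size = EHCA_MR_PGSIZE4K;
		else if (length <= EHCA_MR_PGSIZE64K)
			hwpage_size = EHCA_MR_PGSIZE64K;
		else if (length <= EHCA_MR_PGSIZE1M)
			hwpage_size = EHCA_MR_PGSIZE1M;
		else
			hwpage_size = EHCA_MR_PGSIZE16M;
		break;
	default:	/* not hugetlb, or the check itself ran out
			 * of memory: fall back to small pages */
		hwpage_size = PAGE_SIZE;
		break;
	}

Registering with PAGE_SIZE pages always works; the hugetlb check is
only an optimization, so its failure shouldn't fail the whole memory
registration.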