On 09/12/2014 23:43, Roland Dreier wrote:
> I was getting ready to apply the ODP series, but then I noticed:
>
> On Tue, Nov 11, 2014 at 8:36 AM, Haggai Eran wrote:
>> diff --git a/drivers/infiniband/core/uverbs_main.c
>> b/drivers/infiniband/core/uverbs_main.c
>> index 71ab83fde472..97402502879
On Wed, Dec 10, 2014 at 12:36 AM, Weiny, Ira wrote:
>> On Mon, Dec 8, 2014 at 2:23 AM, Weiny, Ira wrote:
>> 1. add a struct ib_device_attr field to struct ib_device
>> 2. when the device registers itself with the IB core, go and run the
>> query_device verb with the param being pointer to that f
>
> On Sun, Dec 7, 2014 at 4:23 PM, Weiny, Ira wrote:
> > I don't think this is a bad idea, however, the complication is determining
> > the kmem_cache object size in that case. How about we limit this value to
> > < 2K until such time as there is a need for larger support?
>
> Can we just get
>
> On Mon, Dec 8, 2014 at 2:23 AM, Weiny, Ira wrote:
>
> >> I find it very annoying that upper level drivers replicate in
> >> different ways elements from the IB device attributes returned by
> >> ib_query_device. I met that in multiple drivers and upcoming designs
> >> for which I do code rev
I was getting ready to apply the ODP series, but then I noticed:
On Tue, Nov 11, 2014 at 8:36 AM, Haggai Eran wrote:
> diff --git a/drivers/infiniband/core/uverbs_main.c
> b/drivers/infiniband/core/uverbs_main.c
> index 71ab83fde472..974025028790 100644
> --- a/drivers/infiniband/core/uverbs_mai
On Sun, Dec 7, 2014 at 6:58 PM, Matan Barak wrote:
> On 12/7/2014 2:59 PM, Or Gerlitz wrote:
>> On 12/7/2014 2:22 PM, Matan Barak wrote:
>>> Applications might want to create a CQ on n different cores
>> You mean like an IRQ can flush on a mask potentially made of multiple
>> CPUs?
> Sort of. In
On Tue, Dec 9, 2014 at 7:49 PM, Roland Dreier wrote:
> Thanks, applied for 3.19.
Can you please rebase your public clone today so we can see where
things are 1-2 days before you issue the pull request to Linus? Where
does ODP stand from your POV?
Or.
On Mon, Dec 8, 2014 at 1:48 AM, Yuval Shaia wrote:
> 1. Add indication whether feature is supported or not.
> 2. Add descriptions of all features.
> Without this fix there is no way to tell whether a feature is unsupported or
> whether its description simply does not exist.
The problem with this patch is that the c
Thanks, applied for 3.19.
On Sun, Dec 7, 2014 at 7:05 PM, Yuval Shaia wrote:
> This patch merely makes the code nicer and more readable.
> Instead of checking for DPDP on every loop cycle, the check is moved out of
> the loop.
A few short comments:
1. avoid saying "this patch does this and that" in the change-log
2. change the
On Mon, Dec 8, 2014 at 7:45 PM, Yuval Shaia wrote:
> This value is used to calculate max_qp_dest_rdma.
> The default value of 4 brings us to 16, while the HW supports 128
> (max_requester_per_qp).
> Although this value can be changed by module param, it is best that the
> default be optimized.
Do you have an
On 12/8/2014 1:39 PM, Albert Chu wrote:
> Outputting the remote node and port helps system administrators service
> the fabric more quickly. In addition, it aids fabric monitoring efforts
> that scan the log.
>
> Example output before this patch:
>
> perfmgr_log_errors: ERR 543C: VL15
On 12/7/2014 10:08 PM, Chuck Lever wrote:
On Dec 7, 2014, at 5:20 AM, Sagi Grimberg wrote:
On 12/4/2014 9:41 PM, Shirley Ma wrote:
On 12/04/2014 10:43 AM, Bart Van Assche wrote:
On 12/04/14 17:47, Shirley Ma wrote:
What's the history of this patch?
http://lists.openfabrics.org/piperma
13 matches