On 3/17/2015 2:19 AM, Kim Phillips wrote:
> On Mon, 16 Mar 2015 12:02:51 +0200
> Horia Geantă <horia.gea...@freescale.com> wrote:
> 
>> On 3/4/2015 2:23 AM, Kim Phillips wrote:
>>> Only potential problem is getting the crypto API to set the GFP_DMA
>>> flag in the allocation request, but presumably a
>>> CRYPTO_TFM_REQ_DMA crt_flag can be made to handle that.
>>
>> Seems there are quite a few places that do not use the
>> {aead,ablkcipher,ahash}_request_alloc() API to allocate crypto requests.
>> Among them, IPsec and dm-crypt.
>> I've looked at the code and I don't think it can be converted to use
>> crypto API.
> 
> why not?

It would imply having two memory allocations: one for the crypto request
and another for the rest of the data bundled with the request (for IPsec,
that would be the ESN, space for the IV, and sg entries for
authenticated-only data and the sk_buff extension, if needed).

Trying to have a single allocation by making the ESN, IV, etc. part of the
request private context would require modifying tfm.reqsize on the fly.
This won't work without adding some kind of locking for the tfm.

> 
>> This means that the CRYPTO_TFM_REQ_DMA would be visible to all of these
>> places. Some of the maintainers do not agree, as you've seen.
> 
> would modifying the crypto API to either have a different
> *_request_alloc() API, and/or adding calls to negotiate the GFP mask
> between crypto users and drivers, e.g., get/set_gfp_mask, work?

I think what DaveM asked for was the change to be transparent.

Besides converting to *_request_alloc(), it seems that all the other
options require some extra awareness from the user.
Could you elaborate on the idea above?

> 
>> An alternative would be for talitos to use the page allocator to get 1 /
>> 2 pages at probe time (4 channels x 32 entries/channel x 64B/descriptor
>> = 8 kB), dma_map_page the area and manage it internally for talitos_desc
>> hw descriptors.
>> What do you think?
> 
> There's a comment in esp_alloc_tmp(): "Use spare space in skb for
> this where possible," which is ideally where we'd want to be (esp.

Ok, I'll check that. But note the "where possible" - finding room in the
skb to avoid the allocation won't always be possible, and then we're back
to square one.

> because that memory could already be DMA-able).  Your above
> suggestion would be in the opposite direction of that.

The proposal:
- removes DMA (un)mapping on the fast path
- avoids requesting DMA-able memory for more than is actually needed
(CRYPTO_TFM_REQ_DMA forces the entire request to be mappable, not only
its private context)
- for caam, has the added benefit of speeding up the search below for
the offending descriptor in the SW ring from O(n) to O(1):
for (i = 0; CIRC_CNT(head, tail + i, JOBR_DEPTH) >= 1; i++) {
        sw_idx = (tail + i) & (JOBR_DEPTH - 1);

        if (jrp->outring[hw_idx].desc ==
            jrp->entinfo[sw_idx].desc_addr_dma)
                break; /* found */
}
(drivers/crypto/caam/jr.c - caam_dequeue)

Horia
