The zerocopy path ultimately calls iov_iter_get_pages(), whose step
function for ITER_KVEC iterators simply returns -EFAULT. Taking the
non-zerocopy path for ITER_KVECs up front avoids that pointless attempt
and fallback.

See https://lore.kernel.org/lkml/20150401023311.gl29...@zeniv.linux.org.uk/T/#u
for a discussion of why zerocopy for vmalloc data is not a good idea. 
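As an illustrative sketch only (not part of the patch): the guard added
in both hunks below boils down to the check shown here. tls_iter_is_kvec()
is a hypothetical helper named just for this example; the patch open-codes
the test instead.

	#include <linux/socket.h>	/* struct msghdr */
	#include <linux/uio.h>		/* struct iov_iter, ITER_KVEC */

	/*
	 * Kernel-backed iterators (ITER_KVEC) have no user pages to pin,
	 * so iov_iter_get_pages() would just fail with -EFAULT; skip the
	 * zerocopy attempt for them and take the copy path directly.
	 */
	static inline bool tls_iter_is_kvec(const struct msghdr *msg)
	{
		return msg->msg_iter.type & ITER_KVEC;
	}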

Discovered while testing NBD traffic encrypted with ktls.

Fixes: c46234ebb4d1 ("tls: RX path for ktls")
Signed-off-by: Doron Roberts-Kedes <doro...@fb.com>
---
 net/tls/tls_sw.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 4618f1c31137..ef15e35232dd 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -426,7 +426,8 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
                        full_record = true;
                }
 
-               if (full_record || eor) {
+               bool iskvec = msg->msg_iter.type & ITER_KVEC;
+               if (!iskvec && (full_record || eor)) {
                        ret = zerocopy_from_iter(sk, &msg->msg_iter,
                                try_to_copy, &ctx->sg_plaintext_num_elem,
                                &ctx->sg_plaintext_size,
@@ -804,7 +805,8 @@ int tls_sw_recvmsg(struct sock *sk,
                        page_count = iov_iter_npages(&msg->msg_iter,
                                                     MAX_SKB_FRAGS);
                        to_copy = rxm->full_len - tls_ctx->rx.overhead_size;
-                       if (to_copy <= len && page_count < MAX_SKB_FRAGS &&
+                       bool iskvec = msg->msg_iter.type & ITER_KVEC;
+                       if (!iskvec && to_copy <= len && page_count < MAX_SKB_FRAGS &&
                            likely(!(flags & MSG_PEEK)))  {
                                struct scatterlist sgin[MAX_SKB_FRAGS + 1];
                                int pages = 0;
-- 
2.17.1
