From patchwork Wed Jun 19 20:25:33 2019
X-Patchwork-Submitter: Neil Horman
X-Patchwork-Id: 1119002
X-Patchwork-Delegate: davem@davemloft.net
From: Neil Horman <nhorman@tuxdriver.com>
To: netdev@vger.kernel.org
Cc: Neil Horman, Matteo Croce, "David S. Miller"
Subject: [PATCH net] af_packet: Block execution of tasks waiting for transmit
 to complete in AF_PACKET
Date: Wed, 19 Jun 2019 16:25:33 -0400
Message-Id: <20190619202533.4856-1-nhorman@tuxdriver.com>
X-Mailer: git-send-email 2.20.1

When an application is run that:
a) Sets its scheduler to be SCHED_FIFO
and
b) Opens a memory mapped AF_PACKET socket, and sends frames with the
MSG_DONTWAIT flag cleared,
it's possible for the application to hang forever in the kernel.

This occurs because, when waiting, the code in tpacket_snd calls
schedule(), which under normal circumstances allows other tasks to run,
including ksoftirqd, which in some cases is responsible for freeing the
transmitted skb (in AF_PACKET the skb destructor flips the status bit of
the transmitted frame back to available, allowing the transmitting task
to complete).

However, when the calling application is SCHED_FIFO, its priority is
such that the schedule() call immediately places the task back on the
cpu, preventing ksoftirqd from freeing the skb, which in turn prevents
the transmitting task from detecting that the transmission is complete.

We can fix this by converting the schedule() call to a completion
mechanism.  By using a completion queue, we force the calling task, when
it detects there are no more frames to send, to schedule itself off the
cpu until such time as the last transmitted skb is freed, allowing
forward progress to be made.

Tested by myself and the reporter, with good results.

Applies to the net tree.
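For reference, a minimal userspace reproducer along the lines below can
trigger the hang on an unpatched kernel. It is a sketch only: the ring
geometry, RT priority, interface argument and frame contents are
illustrative assumptions, not part of this patch.

/* repro.c - sketch of the hang: a SCHED_FIFO task doing a blocking
 * send() on a memory mapped AF_PACKET tx ring (TPACKET_V1).
 * Build: gcc -o repro repro.c
 * Run as root, pinned to one cpu: taskset -c 0 ./repro <ifname>
 */
#include <stdio.h>
#include <string.h>
#include <sched.h>
#include <unistd.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>

int main(int argc, char **argv)
{
	struct sched_param sp = { .sched_priority = 50 };  /* illustrative */
	struct tpacket_req req = {                         /* illustrative */
		.tp_block_size = 4096,
		.tp_block_nr   = 64,
		.tp_frame_size = 4096,
		.tp_frame_nr   = 64,
	};
	struct sockaddr_ll ll = { 0 };
	struct tpacket_hdr *hdr;
	unsigned char *ring, *data;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
		return 1;
	}

	/* a) make the task SCHED_FIFO */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
		perror("sched_setscheduler");

	/* b) memory mapped AF_PACKET socket with a tx ring */
	fd = socket(AF_PACKET, SOCK_RAW, 0);
	setsockopt(fd, SOL_PACKET, PACKET_TX_RING, &req, sizeof(req));
	ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
		    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	ll.sll_family	= AF_PACKET;
	ll.sll_protocol = htons(ETH_P_ALL);
	ll.sll_ifindex	= if_nametoindex(argv[1]);
	bind(fd, (struct sockaddr *)&ll, sizeof(ll));

	/* hand one frame to the kernel: 60 bytes of broadcast junk */
	hdr  = (struct tpacket_hdr *)ring;
	data = ring + TPACKET_HDRLEN - sizeof(struct sockaddr_ll);
	memset(data, 0xff, 60);
	hdr->tp_len    = 60;
	hdr->tp_status = TP_STATUS_SEND_REQUEST;

	/* MSG_DONTWAIT cleared: tpacket_snd() waits for the skb to be
	 * freed, which on an unpatched kernel can spin forever under
	 * SCHED_FIFO because ksoftirqd never gets to run */
	if (send(fd, NULL, 0, 0) < 0)
		perror("send");
	return 0;
}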
Miller" --- net/packet/af_packet.c | 42 +++++++++++++++++++++++++++++++----------- net/packet/internal.h | 2 ++ 2 files changed, 33 insertions(+), 11 deletions(-) diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index a29d66da7394..e65f4ef18a06 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -358,7 +358,8 @@ static inline struct page * __pure pgv_to_page(void *addr) return virt_to_page(addr); } -static void __packet_set_status(struct packet_sock *po, void *frame, int status) +static void __packet_set_status(struct packet_sock *po, void *frame, int status, + bool call_complete) { union tpacket_uhdr h; @@ -381,6 +382,8 @@ static void __packet_set_status(struct packet_sock *po, void *frame, int status) BUG(); } + if (po->wait_on_complete && call_complete) + complete(&po->skb_completion); smp_wmb(); } @@ -1148,6 +1151,14 @@ static void *packet_previous_frame(struct packet_sock *po, return packet_lookup_frame(po, rb, previous, status); } +static void *packet_next_frame(struct packet_sock *po, + struct packet_ring_buffer *rb, + int status) +{ + unsigned int next = rb->head != rb->frame_max ? rb->head+1 : 0; + return packet_lookup_frame(po, rb, next, status); +} + static void packet_increment_head(struct packet_ring_buffer *buff) { buff->head = buff->head != buff->frame_max ? buff->head+1 : 0; @@ -2360,7 +2371,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, #endif if (po->tp_version <= TPACKET_V2) { - __packet_set_status(po, h.raw, status); + __packet_set_status(po, h.raw, status, false); sk->sk_data_ready(sk); } else { prb_clear_blk_fill_status(&po->rx_ring); @@ -2400,7 +2411,7 @@ static void tpacket_destruct_skb(struct sk_buff *skb) packet_dec_pending(&po->tx_ring); ts = __packet_set_timestamp(po, ph, skb); - __packet_set_status(po, ph, TP_STATUS_AVAILABLE | ts); + __packet_set_status(po, ph, TP_STATUS_AVAILABLE | ts, true); } sock_wfree(skb); @@ -2600,6 +2611,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) int len_sum = 0; int status = TP_STATUS_AVAILABLE; int hlen, tlen, copylen = 0; + void *tmp; mutex_lock(&po->pg_vec_lock); @@ -2647,16 +2659,21 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) size_max = dev->mtu + reserve + VLAN_HLEN; do { + + if (po->wait_on_complete && need_wait) { + wait_for_completion(&po->skb_completion); + po->wait_on_complete = 0; + } + ph = packet_current_frame(po, &po->tx_ring, TP_STATUS_SEND_REQUEST); - if (unlikely(ph == NULL)) { - if (need_wait && need_resched()) - schedule(); - continue; - } + + if (likely(ph == NULL)) + break; skb = NULL; tp_len = tpacket_parse_header(po, ph, size_max, &data); + if (tp_len < 0) goto tpacket_error; @@ -2699,7 +2716,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) tpacket_error: if (po->tp_loss) { __packet_set_status(po, ph, - TP_STATUS_AVAILABLE); + TP_STATUS_AVAILABLE, false); packet_increment_head(&po->tx_ring); kfree_skb(skb); continue; @@ -2719,7 +2736,9 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) } skb->destructor = tpacket_destruct_skb; - __packet_set_status(po, ph, TP_STATUS_SENDING); + __packet_set_status(po, ph, TP_STATUS_SENDING, false); + if (!packet_next_frame(po, &po->tx_ring, TP_STATUS_SEND_REQUEST)) + po->wait_on_complete = 1; packet_inc_pending(&po->tx_ring); status = TP_STATUS_SEND_REQUEST; @@ -2753,7 +2772,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) goto out_put; out_status: - __packet_set_status(po, ph, status); + 
+	__packet_set_status(po, ph, status, false);
 	kfree_skb(skb);
 out_put:
 	dev_put(dev);
@@ -3207,6 +3226,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
 	sock_init_data(sock, sk);
 
 	po = pkt_sk(sk);
+	init_completion(&po->skb_completion);
 	sk->sk_family = PF_PACKET;
 	po->num = proto;
 	po->xmit = dev_queue_xmit;
diff --git a/net/packet/internal.h b/net/packet/internal.h
index 3bb7c5fb3bff..bbb4be2c18e7 100644
--- a/net/packet/internal.h
+++ b/net/packet/internal.h
@@ -128,6 +128,8 @@ struct packet_sock {
 	unsigned int		tp_hdrlen;
 	unsigned int		tp_reserve;
 	unsigned int		tp_tstamp;
+	struct completion	skb_completion;
+	unsigned int		wait_on_complete;
 	struct net_device __rcu	*cached_dev;
 	int			(*xmit)(struct sk_buff *skb);
 	struct packet_type	prot_hook ____cacheline_aligned_in_smp;
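The completion usage above follows the standard kernel pattern: the
waiter blocks in wait_for_completion() until some other context calls
complete(), so it is always taken off the cpu regardless of scheduling
policy. Below is a minimal self-contained sketch of that pattern as a
loadable module; the module, thread name and msleep() delay are
illustrative and are not part of this patch.

// SPDX-License-Identifier: GPL-2.0
/* completion-demo.c - the wait/wake pattern in isolation. demo_fn()
 * stands in for tpacket_destruct_skb() freeing the last skb, and
 * demo_init() stands in for tpacket_snd() waiting on it.
 */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/delay.h>

static struct completion demo_completion;

static int demo_fn(void *data)
{
	/* pretend to do the deferred work, then wake the waiter */
	msleep(100);
	complete(&demo_completion);
	return 0;
}

static int __init demo_init(void)
{
	struct task_struct *t;

	init_completion(&demo_completion);
	t = kthread_run(demo_fn, NULL, "completion-demo");
	if (IS_ERR(t))
		return PTR_ERR(t);

	/* unlike a bare schedule(), this sleeps until complete() runs */
	wait_for_completion(&demo_completion);
	pr_info("completion-demo: woken by complete()\n");
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");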