From patchwork Tue Mar 5 07:12:17 2019
X-Patchwork-Submitter: Jason Wang
X-Patchwork-Id: 1051654
From: Jason Wang
To: qemu-devel@nongnu.org, peter.maydell@linaro.org
Cc: Vladislav Yasevich, Jason Wang, "Dr. David Alan Gilbert"
Date: Tue, 5 Mar 2019 15:12:17 +0800
Message-Id: <1551769940-22739-11-git-send-email-jasowang@redhat.com>
In-Reply-To: <1551769940-22739-1-git-send-email-jasowang@redhat.com>
References: <1551769940-22739-1-git-send-email-jasowang@redhat.com>
Subject: [Qemu-devel] [PULL V2 10/13] virtio-net: Allow qemu_announce_self to
 trigger virtio announcements

From: "Dr. David Alan Gilbert"

Expose the virtio-net self announcement capability and allow
qemu_announce_self() to call it.

These announces are caused by something external (i.e. the announce-self
command); they won't trigger if the migration counter is triggering
announces at the same time.

Signed-off-by: Vladislav Yasevich
Signed-off-by: Dr. David Alan Gilbert
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Jason Wang
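For context, the guest-facing side of the handshake (not touched by this
patch) works roughly as follows: the device sets VIRTIO_NET_S_ANNOUNCE in the
config status and raises a config-change interrupt; the guest driver then
sends gratuitous ARPs and acknowledges over the control queue, which clears
the bit. Below is a minimal, self-contained sketch of that guest-side
handling; the helper functions are stubs invented for illustration, only the
constants come from the virtio spec.

/* Guest-side sketch of the VIRTIO_NET_S_ANNOUNCE handshake that this patch
 * lets qemu_announce_self() trigger.  The "stub" helpers stand in for the
 * real guest driver plumbing; only the constants are from the virtio spec.
 */
#include <stdio.h>
#include <stdint.h>

#define VIRTIO_NET_S_ANNOUNCE        2   /* device sets this in config status */
#define VIRTIO_NET_CTRL_ANNOUNCE     3   /* control-queue command class       */
#define VIRTIO_NET_CTRL_ANNOUNCE_ACK 0   /* guest acks, device clears the bit */

static uint16_t read_config_status(void) { return VIRTIO_NET_S_ANNOUNCE; }       /* stub */
static void send_gratuitous_arps(void) { puts("guest: gratuitous ARPs out"); }   /* stub */
static void send_ctrl_cmd(int class, int cmd) { printf("guest: ctrl %d/%d\n", class, cmd); } /* stub */

/* What a guest driver does when the device raises a config-change interrupt. */
static void guest_handle_config_change(void)
{
    if (read_config_status() & VIRTIO_NET_S_ANNOUNCE) {
        send_gratuitous_arps();  /* refresh forwarding tables on the switches */
        send_ctrl_cmd(VIRTIO_NET_CTRL_ANNOUNCE, VIRTIO_NET_CTRL_ANNOUNCE_ACK);
    }
}

int main(void)
{
    guest_handle_config_change();
    return 0;
}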
---
 hw/net/trace-events |  1 +
 hw/net/virtio-net.c | 35 ++++++++++++++++++++++++++++++++---
 2 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/hw/net/trace-events b/hw/net/trace-events
index b237d90..3a86004 100644
--- a/hw/net/trace-events
+++ b/hw/net/trace-events
@@ -361,6 +361,7 @@ sunhme_rx_desc(uint32_t addr, int offset, uint32_t status, int len, int cr, int
 sunhme_rx_xsum_calc(uint16_t xsum) "calculated incoming xsum as 0x%x"
 
 # hw/net/virtio-net.c
+virtio_net_announce_notify(void) ""
 virtio_net_announce_timer(int round) "%d"
 virtio_net_handle_announce(int round) "%d"
 virtio_net_post_load_device(void)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 38b8974..7e2c2a6 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -150,15 +150,42 @@ static bool virtio_net_started(VirtIONet *n, uint8_t status)
         (n->status & VIRTIO_NET_S_LINK_UP) && vdev->vm_running;
 }
 
+static void virtio_net_announce_notify(VirtIONet *net)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(net);
+    trace_virtio_net_announce_notify();
+
+    net->status |= VIRTIO_NET_S_ANNOUNCE;
+    virtio_notify_config(vdev);
+}
+
 static void virtio_net_announce_timer(void *opaque)
 {
     VirtIONet *n = opaque;
-    VirtIODevice *vdev = VIRTIO_DEVICE(n);
     trace_virtio_net_announce_timer(n->announce_timer.round);
     n->announce_timer.round--;
 
-    n->status |= VIRTIO_NET_S_ANNOUNCE;
-    virtio_notify_config(vdev);
+    virtio_net_announce_notify(n);
+}
+
+static void virtio_net_announce(NetClientState *nc)
+{
+    VirtIONet *n = qemu_get_nic_opaque(nc);
+    VirtIODevice *vdev = VIRTIO_DEVICE(n);
+
+    /*
+     * Make sure the virtio migration announcement timer isn't running
+     * If it is, let it trigger announcement so that we do not cause
+     * confusion.
+     */
+    if (n->announce_timer.round) {
+        return;
+    }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE) &&
+        virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ)) {
+        virtio_net_announce_notify(n);
+    }
 }
 
 static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
@@ -2556,6 +2583,7 @@ static NetClientInfo net_virtio_info = {
     .receive = virtio_net_receive,
     .link_status_changed = virtio_net_set_link_status,
     .query_rx_filter = virtio_net_query_rxfilter,
+    .announce = virtio_net_announce,
 };
 
 static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
@@ -2692,6 +2720,7 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     qemu_announce_timer_reset(&n->announce_timer, migrate_announce_params(),
                               QEMU_CLOCK_VIRTUAL,
                               virtio_net_announce_timer, n);
+    n->announce_timer.round = 0;
 
     if (n->netclient_type) {
         /*
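The notable design choice above is the n->announce_timer.round check in
virtio_net_announce(): an externally requested announce is dropped while the
migration announce timer still has rounds pending, so the two sources never
interleave notifications to the guest. Below is a standalone model of that
guard, with invented names and no QEMU dependencies, purely to illustrate the
decision logic.

/* Toy model of the guard in virtio_net_announce(); names are illustrative,
 * not QEMU code. */
#include <stdio.h>
#include <stdbool.h>

struct model_nic {
    int  timer_rounds;    /* rounds left on the migration announce timer */
    bool guest_announce;  /* VIRTIO_NET_F_GUEST_ANNOUNCE negotiated */
    bool ctrl_vq;         /* VIRTIO_NET_F_CTRL_VQ negotiated */
};

/* True if an externally requested announce should notify the guest now. */
static bool should_notify(const struct model_nic *n)
{
    if (n->timer_rounds) {   /* migration timer is already announcing */
        return false;
    }
    return n->guest_announce && n->ctrl_vq;
}

int main(void)
{
    struct model_nic busy = { .timer_rounds = 3, .guest_announce = true, .ctrl_vq = true };
    struct model_nic idle = { .timer_rounds = 0, .guest_announce = true, .ctrl_vq = true };

    printf("while migration timer runs: %d\n", should_notify(&busy));  /* prints 0 */
    printf("after it has finished:      %d\n", should_notify(&idle));  /* prints 1 */
    return 0;
}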