From patchwork Thu Aug 3 15:28:59 2023
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 1816525
Message-ID: <2d5e23d21fdbd148d9b0d9e4c00145217c4ddd17.camel@infradead.org>
Subject: [RFC PATCH post-8.1] hw/xen: Clean up event channel 'type_val'
 handling to use union
From: David Woodhouse
To: qemu-devel
Cc: Paul Durrant, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
 "Michael S. Tsirkin", Marcel Apfelbaum
Date: Thu, 03 Aug 2023 16:28:59 +0100
A previous implementation of this stuff used a 64-bit field for all of
the port information (vcpu/type/type_val) and did atomic exchanges on
it. When I implemented that in QEMU I regretted my life choices and
just kept it simple with locking instead.

So there's no need for the XenEvtchnPort to be so simplistic. We can
use a union for the pirq/virq/interdomain information, which lets us
keep a separate bit for the 'remote domain' in interdomain ports. A
single bit is enough, since the only possible targets are loopback or
QEMU itself.

So now we can ditch PORT_INFO_TYPEVAL_REMOTE_QEMU and the horrid
manual masking, although the in-memory representation is identical, so
there's no change in the saved state ABI.

Signed-off-by: David Woodhouse
Reviewed-by: Paul Durrant
---
Thought this would be a nice cleanup to avoid abusing `type_val` for
several different purposes, and especially the top bit of it for
interdomain ports. But having done it, I find myself fairly ambivalent
about it. Does anyone feel strongly either way?

 hw/i386/kvm/xen_evtchn.c | 124 ++++++++++++++++++++-------------------
 1 file changed, 64 insertions(+), 60 deletions(-)

diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index a731738411..446ae46022 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -58,7 +58,15 @@ OBJECT_DECLARE_SIMPLE_TYPE(XenEvtchnState, XEN_EVTCHN)
 typedef struct XenEvtchnPort {
     uint32_t vcpu;      /* Xen/ACPI vcpu_id */
     uint16_t type;      /* EVTCHNSTAT_xxxx */
-    uint16_t type_val;  /* pirq# / virq# / remote port according to type */
+    union {
+        uint16_t type_val; /* pirq# / virq# / remote port according to type */
+        uint16_t pirq;
+        uint16_t virq;
+        struct {
+            uint16_t port:15;
+            uint16_t to_qemu:1; /* Only two targets; qemu or loopback */
+        } interdomain;
+    } u;
 } XenEvtchnPort;
 
 /* 32-bit compatibility definitions, also used natively in 32-bit build */
@@ -210,16 +218,16 @@ static int xen_evtchn_post_load(void *opaque, int version_id)
         XenEvtchnPort *p = &s->port_table[i];
 
         if (p->type == EVTCHNSTAT_pirq) {
-            assert(p->type_val);
-            assert(p->type_val < s->nr_pirqs);
+            assert(p->u.pirq);
+            assert(p->u.pirq < s->nr_pirqs);
 
             /*
              * Set the gsi to IRQ_UNBOUND; it may be changed to an actual
              * GSI# below, or to IRQ_MSI_EMU when the MSI table snooping
              * catches up with it.
              */
-            s->pirq[p->type_val].gsi = IRQ_UNBOUND;
-            s->pirq[p->type_val].port = i;
+            s->pirq[p->u.pirq].gsi = IRQ_UNBOUND;
+            s->pirq[p->u.pirq].port = i;
         }
     }
 
     /* Rebuild s->pirq[].gsi mapping */
@@ -243,7 +251,7 @@ static const VMStateDescription xen_evtchn_port_vmstate = {
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(vcpu, XenEvtchnPort),
         VMSTATE_UINT16(type, XenEvtchnPort),
-        VMSTATE_UINT16(type_val, XenEvtchnPort),
+        VMSTATE_UINT16(u.type_val, XenEvtchnPort),
         VMSTATE_END_OF_LIST()
     }
 };
@@ -599,14 +607,13 @@ static void unbind_backend_ports(XenEvtchnState *s)
     for (i = 1; i < s->nr_ports; i++) {
         p = &s->port_table[i];
 
-        if (p->type == EVTCHNSTAT_interdomain &&
-            (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU)) {
-            evtchn_port_t be_port = p->type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
+        if (p->type == EVTCHNSTAT_interdomain && p->u.interdomain.to_qemu) {
+            evtchn_port_t be_port = p->u.interdomain.port;
 
             if (s->be_handles[be_port]) {
                 /* This part will be overwritten on the load anyway. */
                 p->type = EVTCHNSTAT_unbound;
-                p->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
+                p->u.interdomain.port = 0;
 
                 /* Leave the backend port open and unbound too. */
                 if (kvm_xen_has_cap(EVTCHN_SEND)) {
@@ -644,7 +651,7 @@ int xen_evtchn_status_op(struct evtchn_status *status)
 
     switch (p->type) {
     case EVTCHNSTAT_unbound:
-        if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
+        if (p->u.interdomain.to_qemu) {
             status->u.unbound.dom = DOMID_QEMU;
         } else {
             status->u.unbound.dom = xen_domid;
@@ -652,22 +659,21 @@ int xen_evtchn_status_op(struct evtchn_status *status)
         break;
 
     case EVTCHNSTAT_interdomain:
-        if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
+        if (p->u.interdomain.to_qemu) {
             status->u.interdomain.dom = DOMID_QEMU;
         } else {
             status->u.interdomain.dom = xen_domid;
         }
 
-        status->u.interdomain.port = p->type_val &
-            PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
+        status->u.interdomain.port = p->u.interdomain.port;
         break;
 
     case EVTCHNSTAT_pirq:
-        status->u.pirq = p->type_val;
+        status->u.pirq = p->u.pirq;
         break;
 
     case EVTCHNSTAT_virq:
-        status->u.virq = p->type_val;
+        status->u.virq = p->u.virq;
         break;
     }
 
@@ -983,7 +989,7 @@ static int clear_port_pending(XenEvtchnState *s, evtchn_port_t port)
 static void free_port(XenEvtchnState *s, evtchn_port_t port)
 {
     s->port_table[port].type = EVTCHNSTAT_closed;
-    s->port_table[port].type_val = 0;
+    s->port_table[port].u.type_val = 0;
     s->port_table[port].vcpu = 0;
 
     if (s->nr_ports == port + 1) {
@@ -1006,7 +1012,7 @@ static int allocate_port(XenEvtchnState *s, uint32_t vcpu, uint16_t type,
         if (s->port_table[p].type == EVTCHNSTAT_closed) {
             s->port_table[p].vcpu = vcpu;
             s->port_table[p].type = type;
-            s->port_table[p].type_val = val;
+            s->port_table[p].u.type_val = val;
 
             *port = p;
 
@@ -1047,15 +1053,15 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
         return -ENOENT;
 
     case EVTCHNSTAT_pirq:
-        s->pirq[p->type_val].port = 0;
-        if (s->pirq[p->type_val].is_translated) {
+        s->pirq[p->u.pirq].port = 0;
+        if (s->pirq[p->u.pirq].is_translated) {
             *flush_kvm_routes = true;
         }
         break;
 
     case EVTCHNSTAT_virq:
-        kvm_xen_set_vcpu_virq(virq_is_global(p->type_val) ? 0 : p->vcpu,
-                              p->type_val, 0);
+        kvm_xen_set_vcpu_virq(virq_is_global(p->u.virq) ? 0 : p->vcpu,
+                              p->u.virq, 0);
         break;
 
     case EVTCHNSTAT_ipi:
@@ -1065,8 +1071,8 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
         break;
 
     case EVTCHNSTAT_interdomain:
-        if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
-            uint16_t be_port = p->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
+        if (p->u.interdomain.to_qemu) {
+            uint16_t be_port = p->u.interdomain.port;
             struct xenevtchn_handle *xc = s->be_handles[be_port];
             if (xc) {
                 if (kvm_xen_has_cap(EVTCHN_SEND)) {
@@ -1076,14 +1082,15 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
             }
         } else {
             /* Loopback interdomain */
-            XenEvtchnPort *rp = &s->port_table[p->type_val];
-            if (!valid_port(p->type_val) || rp->type_val != port ||
+            XenEvtchnPort *rp = &s->port_table[p->u.interdomain.port];
+            if (!valid_port(p->u.interdomain.port) ||
+                rp->u.interdomain.port != port ||
                 rp->type != EVTCHNSTAT_interdomain) {
                 error_report("Inconsistent state for interdomain unbind");
             } else {
                 /* Set the other end back to unbound */
                 rp->type = EVTCHNSTAT_unbound;
-                rp->type_val = 0;
+                rp->u.interdomain.port = 0;
             }
         }
         break;
@@ -1207,7 +1214,7 @@ int xen_evtchn_bind_vcpu_op(struct evtchn_bind_vcpu *vcpu)
     if (p->type == EVTCHNSTAT_interdomain ||
         p->type == EVTCHNSTAT_unbound ||
         p->type == EVTCHNSTAT_pirq ||
-        (p->type == EVTCHNSTAT_virq && virq_is_global(p->type_val))) {
+        (p->type == EVTCHNSTAT_virq && virq_is_global(p->u.virq))) {
         /*
          * unmask_port() with do_unmask==false will just raise the event
          * on the new vCPU if the port was already pending.
@@ -1352,19 +1359,15 @@ int xen_evtchn_bind_ipi_op(struct evtchn_bind_ipi *ipi)
 int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
 {
     XenEvtchnState *s = xen_evtchn_singleton;
-    uint16_t type_val;
     int ret;
 
     if (!s) {
         return -ENOTSUP;
     }
 
-    if (interdomain->remote_dom == DOMID_QEMU) {
-        type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
-    } else if (interdomain->remote_dom == DOMID_SELF ||
-               interdomain->remote_dom == xen_domid) {
-        type_val = 0;
-    } else {
+    if (interdomain->remote_dom != DOMID_QEMU &&
+        interdomain->remote_dom != DOMID_SELF &&
+        interdomain->remote_dom != xen_domid) {
         return -ESRCH;
     }
 
@@ -1375,8 +1378,8 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
     qemu_mutex_lock(&s->port_lock);
 
     /* The newly allocated port starts out as unbound */
-    ret = allocate_port(s, 0, EVTCHNSTAT_unbound, type_val,
-                        &interdomain->local_port);
+    ret = allocate_port(s, 0, EVTCHNSTAT_unbound, 0, &interdomain->local_port);
+
     if (ret) {
         goto out;
     }
@@ -1401,7 +1404,8 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
             assign_kernel_eventfd(lp->type, xc->guest_port, xc->fd);
         }
         lp->type = EVTCHNSTAT_interdomain;
-        lp->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU | interdomain->remote_port;
+        lp->u.interdomain.to_qemu = 1;
+        lp->u.interdomain.port = interdomain->remote_port;
         ret = 0;
     } else {
         /* Loopback */
@@ -1415,13 +1419,13 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
          * the port that was just allocated for the local end.
          */
         if (interdomain->local_port != interdomain->remote_port &&
-            rp->type == EVTCHNSTAT_unbound && rp->type_val == 0) {
+            rp->type == EVTCHNSTAT_unbound && !rp->u.interdomain.to_qemu) {
             rp->type = EVTCHNSTAT_interdomain;
-            rp->type_val = interdomain->local_port;
+            rp->u.interdomain.port = interdomain->local_port;
 
             lp->type = EVTCHNSTAT_interdomain;
-            lp->type_val = interdomain->remote_port;
+            lp->u.interdomain.port = interdomain->remote_port;
         } else {
             ret = -EINVAL;
         }
@@ -1440,7 +1444,6 @@
 int xen_evtchn_alloc_unbound_op(struct evtchn_alloc_unbound *alloc)
 {
     XenEvtchnState *s = xen_evtchn_singleton;
-    uint16_t type_val;
     int ret;
 
     if (!s) {
@@ -1451,18 +1454,19 @@ int xen_evtchn_alloc_unbound_op(struct evtchn_alloc_unbound *alloc)
         return -ESRCH;
     }
 
-    if (alloc->remote_dom == DOMID_QEMU) {
-        type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
-    } else if (alloc->remote_dom == DOMID_SELF ||
-               alloc->remote_dom == xen_domid) {
-        type_val = 0;
-    } else {
+    if (alloc->remote_dom != DOMID_QEMU && alloc->remote_dom != DOMID_SELF &&
+        alloc->remote_dom != xen_domid) {
         return -EPERM;
    }
 
     qemu_mutex_lock(&s->port_lock);
 
-    ret = allocate_port(s, 0, EVTCHNSTAT_unbound, type_val, &alloc->port);
+    ret = allocate_port(s, 0, EVTCHNSTAT_unbound, 0, &alloc->port);
+
+    if (!ret && alloc->remote_dom == DOMID_QEMU) {
+        XenEvtchnPort *p = &s->port_table[alloc->port];
+        p->u.interdomain.to_qemu = 1;
+    }
 
     qemu_mutex_unlock(&s->port_lock);
 
@@ -1489,12 +1493,12 @@ int xen_evtchn_send_op(struct evtchn_send *send)
 
     switch (p->type) {
     case EVTCHNSTAT_interdomain:
-        if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
+        if (p->u.interdomain.to_qemu) {
             /*
              * This is an event from the guest to qemu itself, which is
              * serving as the driver domain.
             */
-            uint16_t be_port = p->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
+            uint16_t be_port = p->u.interdomain.port;
             struct xenevtchn_handle *xc = s->be_handles[be_port];
             if (xc) {
                 eventfd_write(xc->fd, 1);
@@ -1504,7 +1508,7 @@ int xen_evtchn_send_op(struct evtchn_send *send)
             }
         } else {
             /* Loopback interdomain ports; just a complex IPI */
-            set_port_pending(s, p->type_val);
+            set_port_pending(s, p->u.interdomain.port);
         }
         break;
 
@@ -1546,8 +1550,7 @@ int xen_evtchn_set_port(uint16_t port)
 
     /* QEMU has no business sending to anything but these */
     if (p->type == EVTCHNSTAT_virq ||
-        (p->type == EVTCHNSTAT_interdomain &&
-         (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU))) {
+        (p->type == EVTCHNSTAT_interdomain && p->u.interdomain.to_qemu)) {
         set_port_pending(s, port);
         ret = 0;
     }
@@ -2057,7 +2060,7 @@ int xen_be_evtchn_bind_interdomain(struct xenevtchn_handle *xc, uint32_t domid,
     switch (gp->type) {
     case EVTCHNSTAT_interdomain:
         /* Allow rebinding after migration, preserve port # if possible */
-        be_port = gp->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
+        be_port = gp->u.interdomain.port;
         assert(be_port != 0);
         if (!s->be_handles[be_port]) {
             s->be_handles[be_port] = xc;
@@ -2078,7 +2081,8 @@ int xen_be_evtchn_bind_interdomain(struct xenevtchn_handle *xc, uint32_t domid,
     }
 
     gp->type = EVTCHNSTAT_interdomain;
-    gp->type_val = be_port | PORT_INFO_TYPEVAL_REMOTE_QEMU;
+    gp->u.interdomain.to_qemu = 1;
+    gp->u.interdomain.port = be_port;
     xc->guest_port = guest_port;
     if (kvm_xen_has_cap(EVTCHN_SEND)) {
         assign_kernel_eventfd(gp->type, guest_port, xc->fd);
@@ -2123,7 +2127,7 @@ int xen_be_evtchn_unbind(struct xenevtchn_handle *xc, evtchn_port_t port)
         /* This should never *not* be true */
        if (gp->type == EVTCHNSTAT_interdomain) {
             gp->type = EVTCHNSTAT_unbound;
-            gp->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
+            gp->u.interdomain.port = 0;
         }
 
         if (kvm_xen_has_cap(EVTCHN_SEND)) {
@@ -2277,11 +2281,11 @@ EvtchnInfoList *qmp_xen_event_list(Error **errp)
 
         info->type = p->type;
         if (p->type == EVTCHNSTAT_interdomain) {
-            info->remote_domain = g_strdup((p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) ?
+            info->remote_domain = g_strdup(p->u.interdomain.to_qemu ?
                                            "qemu" : "loopback");
-            info->target = p->type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
+            info->target = p->u.interdomain.port;
         } else {
-            info->target = p->type_val;
+            info->target = p->u.type_val; /* pirq# or virq# */
        }
         info->vcpu = p->vcpu;
         info->pending = test_bit(i, pending);
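
As a footnote for anyone double-checking the "no change in the saved
state ABI" claim above: here is a minimal standalone sketch (not part
of the patch) showing that the union members all overlay the same 16
bits, so VMSTATE_UINT16(u.type_val, ...) emits exactly the bytes the
old packed type_val did. The flag and mask values below mirror the
PORT_INFO_TYPEVAL_* definitions the patch removes, and the decoding
assumes the compiler allocates 'port' in the low 15 bits and 'to_qemu'
in the top bit, as GCC and Clang do on the x86 hosts where this Xen
emulation runs.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the definitions removed by the patch (top bit = qemu). */
#define PORT_INFO_TYPEVAL_REMOTE_QEMU       0x8000
#define PORT_INFO_TYPEVAL_REMOTE_PORT_MASK  0x7FFF

/* Same shape as the union the patch adds to XenEvtchnPort. */
union port_info {
    uint16_t type_val;
    uint16_t pirq;
    uint16_t virq;
    struct {
        uint16_t port:15;
        uint16_t to_qemu:1;
    } interdomain;
};

int main(void)
{
    union port_info u = { 0 };

    /* An interdomain port bound to qemu, remote (backend) port 5. */
    u.interdomain.port = 5;
    u.interdomain.to_qemu = 1;

    /* Same bits as the old packed encoding, so the vmstate wire
     * format is unchanged. */
    assert(u.type_val == (PORT_INFO_TYPEVAL_REMOTE_QEMU | 5));
    assert((u.type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK) == 5);

    printf("type_val = 0x%04x\n", (unsigned)u.type_val);
    return 0;
}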