From patchwork Sun Oct 6 23:23:47 2024
X-Patchwork-Submitter: Dmitry Osipenko <dmitry.osipenko@collabora.com>
X-Patchwork-Id: 1993314
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: Akihiko Odaki, Huang Rui, Marc-André Lureau, Philippe Mathieu-Daudé,
 Gerd Hoffmann, Alex Bennée, Michael S. Tsirkin, Paolo Bonzini
Tsirkin" , Paolo Bonzini Cc: Gert Wollny , qemu-devel@nongnu.org, Gurchetan Singh , Alyssa Ross , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Alex Deucher , Stefano Stabellini , =?utf-8?q?Christian_K?= =?utf-8?q?=C3=B6nig?= , Xenia Ragiadakou , Pierre-Eric Pelloux-Prayer , Honglei Huang , Julia Zhang , Chen Jiqian , Rob Clark Subject: [PATCH v1 5/6] virtio-gpu: Support asynchronous fencing Date: Mon, 7 Oct 2024 02:23:47 +0300 Message-ID: <20241006232350.3198759-6-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.46.0 In-Reply-To: <20241006232350.3198759-1-dmitry.osipenko@collabora.com> References: <20241006232350.3198759-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 X-ZohoMailClient: External Received-SPF: pass client-ip=136.143.188.112; envelope-from=dmitry.osipenko@collabora.com; helo=sender4-pp-f112.zoho.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, RCVD_IN_VALIDITY_CERTIFIED_BLOCKED=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Support asynchronous fencing feature of virglrenderer. It allows Qemu to handle fence as soon as it's signalled instead of periodically polling the fence status. This feature is required for enabling DRM context support in Qemu because legacy fencing mode isn't supported for DRM contexts in virglrenderer. 
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-gl.c     |   3 +
 hw/display/virtio-gpu-virgl.c  | 143 ++++++++++++++++++++++++++++-----
 include/hw/virtio/virtio-gpu.h |  14 ++++
 3 files changed, 141 insertions(+), 19 deletions(-)

diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index 7c0e448b4661..53d938f23f20 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -170,6 +170,9 @@ static void virtio_gpu_gl_device_unrealize(DeviceState *qdev)
     if (gl->renderer_state >= RS_INITED) {
 #if VIRGL_VERSION_MAJOR >= 1
         qemu_bh_delete(gl->cmdq_resume_bh);
+
+        virtio_gpu_virgl_reset_async_fences(g);
+        qemu_bh_delete(gl->async_fence_bh);
 #endif
         if (virtio_gpu_stats_enabled(g->parent_obj.conf)) {
             timer_free(gl->print_stats);
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index b32ce44ba2b1..ad1d40901bff 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -891,6 +891,7 @@ static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
 void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
                                   struct virtio_gpu_ctrl_command *cmd)
 {
+    VirtIOGPUGL *gl = VIRTIO_GPU_GL(g);
     bool cmd_suspended = false;
     int ret;
@@ -992,35 +993,115 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
     trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
 
-    ret = virgl_renderer_create_fence(cmd->cmd_hdr.fence_id, 0);
-    if (ret)
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: virgl_renderer_create_fence error: %s",
-                      __func__, strerror(-ret));
+    if (gl->context_fence_enabled &&
+        (cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_INFO_RING_IDX)) {
+        uint32_t flags = 0;
+
+        ret = virgl_renderer_context_create_fence(cmd->cmd_hdr.ctx_id, flags,
+                                                  cmd->cmd_hdr.ring_idx,
+                                                  cmd->cmd_hdr.fence_id);
+        if (ret)
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: virgl_renderer_context_create_fence error: %s",
+                          __func__, strerror(ret));
+    } else {
+        ret = virgl_renderer_create_fence(cmd->cmd_hdr.fence_id, 0);
+        if (ret)
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: virgl_renderer_create_fence error: %s",
+                          __func__, strerror(-ret));
+    }
 }
 
-static void virgl_write_fence(void *opaque, uint32_t fence)
+static void virtio_gpu_virgl_async_fence_bh(void *opaque)
 {
-    VirtIOGPU *g = opaque;
+    struct virtio_gpu_virgl_context_fence *f, *f_tmp;
     struct virtio_gpu_ctrl_command *cmd, *tmp;
+    VirtIOGPU *g = opaque;
+    VirtIOGPUGL *gl = VIRTIO_GPU_GL(g);
 
-    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
-        /*
-         * the guest can end up emitting fences out of order
-         * so we should check all fenced cmds not just the first one.
-         */
-        if (cmd->cmd_hdr.fence_id > fence) {
-            continue;
+    qemu_mutex_lock(&gl->async_fence_lock);
+
+    QTAILQ_FOREACH_SAFE(f, &gl->async_fenceq, next, f_tmp) {
+        QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
+            /*
+             * the guest can end up emitting fences out of order
+             * so we should check all fenced cmds not just the first one.
+             */
+            if (cmd->cmd_hdr.fence_id > f->fence_id) {
+                continue;
+            }
+            if (cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_INFO_RING_IDX) {
+                if (cmd->cmd_hdr.ring_idx != f->ring_idx) {
+                    continue;
+                }
+                if (cmd->cmd_hdr.ctx_id != f->ctx_id) {
+                    continue;
+                }
+            } else if (f->ring_idx >= 0) {
+                /* ctx0 GL-query fences don't have ring info */
+                continue;
+            }
+            virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+            QTAILQ_REMOVE(&g->fenceq, cmd, next);
+            g_free(cmd);
         }
-        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
-        virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
-        QTAILQ_REMOVE(&g->fenceq, cmd, next);
-        g_free(cmd);
+
+        trace_virtio_gpu_fence_resp(f->fence_id);
+        QTAILQ_REMOVE(&gl->async_fenceq, f, next);
+        g_free(f);
         g->inflight--;
         if (virtio_gpu_stats_enabled(g->parent_obj.conf)) {
-            trace_virtio_gpu_dec_inflight_fences(g->inflight);
+            fprintf(stderr, "inflight: %3d (-)\r", g->inflight);
         }
     }
+
+    qemu_mutex_unlock(&gl->async_fence_lock);
+}
+
+void virtio_gpu_virgl_reset_async_fences(VirtIOGPU *g)
+{
+    struct virtio_gpu_virgl_context_fence *f, *f_tmp;
+    VirtIOGPUGL *gl = VIRTIO_GPU_GL(g);
+
+    QTAILQ_FOREACH_SAFE(f, &gl->async_fenceq, next, f_tmp) {
+        QTAILQ_REMOVE(&gl->async_fenceq, f, next);
+        g_free(f);
+    }
+}
+
+static void
+virtio_gpu_virgl_push_async_fence(VirtIOGPU *g, uint32_t ctx_id,
+                                  int64_t ring_idx, uint64_t fence_id)
+{
+    struct virtio_gpu_virgl_context_fence *f;
+    VirtIOGPUGL *gl = VIRTIO_GPU_GL(g);
+
+    f = g_new(struct virtio_gpu_virgl_context_fence, 1);
+    f->ctx_id = ctx_id;
+    f->ring_idx = ring_idx;
+    f->fence_id = fence_id;
+
+    qemu_mutex_lock(&gl->async_fence_lock);
+    QTAILQ_INSERT_TAIL(&gl->async_fenceq, f, next);
+    qemu_mutex_unlock(&gl->async_fence_lock);
+
+    qemu_bh_schedule(gl->async_fence_bh);
+}
+
+static void virgl_write_fence(void *opaque, uint32_t fence)
+{
+    VirtIOGPU *g = opaque;
+
+    virtio_gpu_virgl_push_async_fence(g, 0, -1, fence);
+}
+
+static void virgl_write_context_fence(void *opaque, uint32_t ctx_id,
+                                      uint32_t ring_idx, uint64_t fence)
+{
+    VirtIOGPU *g = opaque;
+
+    virtio_gpu_virgl_push_async_fence(g, ctx_id, ring_idx, fence);
 }
 
 static virgl_renderer_gl_context
@@ -1110,6 +1191,8 @@ void virtio_gpu_virgl_reset_scanout(VirtIOGPU *g)
         dpy_gfx_replace_surface(g->parent_obj.scanout[i].con, NULL);
         dpy_gl_scanout_disable(g->parent_obj.scanout[i].con);
     }
+
+    virtio_gpu_virgl_reset_async_fences(g);
 }
 
 void virtio_gpu_virgl_reset(VirtIOGPU *g)
@@ -1127,6 +1210,12 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
     if (qemu_egl_display) {
         virtio_gpu_3d_cbs.version = 4;
         virtio_gpu_3d_cbs.get_egl_display = virgl_get_egl_display;
+#if VIRGL_VERSION_MAJOR >= 1
+        virtio_gpu_3d_cbs.write_context_fence = virgl_write_context_fence;
+        flags |= VIRGL_RENDERER_ASYNC_FENCE_CB;
+        flags |= VIRGL_RENDERER_THREAD_SYNC;
+        gl->context_fence_enabled = true;
+#endif
     }
 #endif
 #ifdef VIRGL_RENDERER_D3D11_SHARE_TEXTURE
@@ -1160,6 +1249,13 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
     gl->cmdq_resume_bh = aio_bh_new(qemu_get_aio_context(),
                                     virtio_gpu_virgl_resume_cmdq_bh,
                                     g);
+
+    gl->async_fence_bh = aio_bh_new(qemu_get_aio_context(),
+                                    virtio_gpu_virgl_async_fence_bh,
+                                    g);
+
+    qemu_mutex_init(&gl->async_fence_lock);
+    QTAILQ_INIT(&gl->async_fenceq);
 #endif
 
     return 0;
@@ -1196,5 +1292,14 @@ GArray *virtio_gpu_virgl_get_capsets(VirtIOGPU *g)
         }
     }
 
+    if (virtio_gpu_drm_enabled(g->parent_obj.conf)) {
+        virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_DRM,
+                                   &capset_max_ver,
+                                   &capset_max_size);
+        if (capset_max_size) {
+            virtio_gpu_virgl_add_capset(capset_ids,
+                                        VIRTIO_GPU_CAPSET_DRM);
+        }
+    }
+
     return capset_ids;
 }
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 230fa0c4ee0a..e95d1619c885 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -229,6 +229,13 @@ struct VirtIOGPUClass {
                              Error **errp);
 };
 
+struct virtio_gpu_virgl_context_fence {
+    uint32_t ctx_id;
+    int64_t ring_idx;
+    uint64_t fence_id;
+    QTAILQ_ENTRY(virtio_gpu_virgl_context_fence) next;
+};
+
 /* VirtIOGPUGL renderer states */
 typedef enum {
     RS_START,    /* starting state */
@@ -246,6 +253,12 @@ struct VirtIOGPUGL {
     QEMUTimer *print_stats;
 
     QEMUBH *cmdq_resume_bh;
+
+    QEMUBH *async_fence_bh;
+    QemuMutex async_fence_lock;
+    QTAILQ_HEAD(, virtio_gpu_virgl_context_fence) async_fenceq;
+
+    bool context_fence_enabled;
 };
 
 struct VhostUserGPU {
@@ -360,5 +373,6 @@ void virtio_gpu_virgl_reset_scanout(VirtIOGPU *g);
 void virtio_gpu_virgl_reset(VirtIOGPU *g);
 int virtio_gpu_virgl_init(VirtIOGPU *g);
 GArray *virtio_gpu_virgl_get_capsets(VirtIOGPU *g);
+void virtio_gpu_virgl_reset_async_fences(VirtIOGPU *g);
 
 #endif