From patchwork Wed Oct 11 09:21:44 2023
X-Patchwork-Submitter: Juan Quintela
X-Patchwork-Id: 1846425
From: Juan Quintela
To: qemu-devel@nongnu.org
Cc: Laurent Vivier, Peter Xu, Paolo Bonzini, Markus Armbruster,
    Juan Quintela, Thomas Huth, Li Zhijian, Leonardo Bras, Eric Blake,
    Fabiano Rosas
Subject: [PULL 46/65] migration/rdma: Convert qemu_rdma_write_flush() to Error
Date: Wed, 11 Oct 2023 11:21:44 +0200
Message-ID: <20231011092203.1266-47-quintela@redhat.com>
In-Reply-To: <20231011092203.1266-1-quintela@redhat.com>
References: <20231011092203.1266-1-quintela@redhat.com>
From: Markus Armbruster

Functions that use an Error **errp parameter to return errors should
not also report them to the user, because reporting is the caller's
job.  When the caller does, the error is reported twice.  When it
doesn't (because it recovered from the error), there is no error to
report, i.e. the report is bogus.

qio_channel_rdma_writev() violates this principle: it calls
error_report() via qemu_rdma_write_flush().  I elected not to
investigate how callers handle the error, i.e. precise impact is not
known.

Clean this up by converting qemu_rdma_write_flush() to Error.
Necessitates setting an error when qemu_rdma_write_one() failed.
Since this error will go away later in this series, simply use "FIXME
temporary error message" there.

Signed-off-by: Markus Armbruster
Reviewed-by: Li Zhijian
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
Message-ID: <20230928132019.2544702-40-armbru@redhat.com>
---
 migration/rdma.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/migration/rdma.c b/migration/rdma.c
index 189932097e..1a74c6d955 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -2283,7 +2283,7 @@ retry:
  * We support sending out multiple chunks at the same time.
  * Not all of them need to get signaled in the completion queue.
  */
-static int qemu_rdma_write_flush(RDMAContext *rdma)
+static int qemu_rdma_write_flush(RDMAContext *rdma, Error **errp)
 {
     int ret;
 
@@ -2295,6 +2295,7 @@ static int qemu_rdma_write_flush(RDMAContext *rdma)
                               rdma->current_index, rdma->current_addr,
                               rdma->current_length);
     if (ret < 0) {
+        error_setg(errp, "FIXME temporary error message");
         return -1;
     }
 
@@ -2368,6 +2369,7 @@ static int qemu_rdma_write(RDMAContext *rdma,
                            uint64_t block_offset, uint64_t offset,
                            uint64_t len)
 {
+    Error *err = NULL;
     uint64_t current_addr = block_offset + offset;
     uint64_t index = rdma->current_index;
     uint64_t chunk = rdma->current_chunk;
@@ -2375,8 +2377,9 @@ static int qemu_rdma_write(RDMAContext *rdma,
 
     /* If we cannot merge it, we flush the current buffer first. */
     if (!qemu_rdma_buffer_mergeable(rdma, current_addr, len)) {
-        ret = qemu_rdma_write_flush(rdma);
+        ret = qemu_rdma_write_flush(rdma, &err);
         if (ret < 0) {
+            error_report_err(err);
             return -1;
         }
         rdma->current_length = 0;
@@ -2393,7 +2396,10 @@ static int qemu_rdma_write(RDMAContext *rdma,
 
     /* flush it if buffer is too large */
     if (rdma->current_length >= RDMA_MERGE_MAX) {
-        return qemu_rdma_write_flush(rdma);
+        if (qemu_rdma_write_flush(rdma, &err) < 0) {
+            error_report_err(err);
+            return -1;
+        }
     }
 
     return 0;
@@ -2857,10 +2863,9 @@ static ssize_t qio_channel_rdma_writev(QIOChannel *ioc,
          * Push out any writes that
          * we're queued up for VM's ram.
          */
-        ret = qemu_rdma_write_flush(rdma);
+        ret = qemu_rdma_write_flush(rdma, errp);
         if (ret < 0) {
             rdma->errored = true;
-            error_setg(errp, "qemu_rdma_write_flush failed");
             return -1;
         }
 
@@ -3002,9 +3007,11 @@ static ssize_t qio_channel_rdma_readv(QIOChannel *ioc,
  */
 static int qemu_rdma_drain_cq(RDMAContext *rdma)
 {
+    Error *err = NULL;
     int ret;
 
-    if (qemu_rdma_write_flush(rdma) < 0) {
+    if (qemu_rdma_write_flush(rdma, &err) < 0) {
+        error_report_err(err);
         return -1;
     }
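
For readers unfamiliar with the convention the commit message relies on
("the callee sets the error, the caller reports it or recovers"), here is a
minimal standalone sketch.  It deliberately uses simplified stand-ins for
QEMU's Error type, error_setg() and error_report_err() rather than the real
<qapi/error.h> API, and flush_buffer() is a made-up placeholder for a
function like qemu_rdma_write_flush():

/*
 * Minimal sketch of the errp convention, with simplified stand-ins for
 * QEMU's Error type and helpers (not the real <qapi/error.h> API).
 */
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct Error {
    char msg[256];
} Error;

/* Stand-in for error_setg(): build an Error and hand it to the caller. */
static void error_setg(Error **errp, const char *fmt, ...)
{
    va_list ap;
    Error *err;

    if (!errp) {
        return;                 /* caller is not interested in details */
    }
    err = malloc(sizeof(*err));
    if (!err) {
        abort();
    }
    va_start(ap, fmt);
    vsnprintf(err->msg, sizeof(err->msg), fmt, ap);
    va_end(ap);
    *errp = err;
}

/* Stand-in for error_report_err(): report the error once, then free it. */
static void error_report_err(Error *err)
{
    fprintf(stderr, "%s\n", err->msg);
    free(err);
}

/*
 * Callee (think qemu_rdma_write_flush() after this patch): on failure it
 * only *sets* the error via errp and returns -1; it never reports it.
 */
static int flush_buffer(int fill_level, Error **errp)
{
    if (fill_level < 0) {
        error_setg(errp, "invalid fill level %d", fill_level);
        return -1;
    }
    return 0;
}

int main(void)
{
    Error *err = NULL;

    /* Caller that cannot recover: it reports the error exactly once. */
    if (flush_buffer(-1, &err) < 0) {
        error_report_err(err);
        return 1;
    }
    return 0;
}

A caller that recovers from the failure would free the error without
printing it (QEMU has error_free() for that), and a caller that merely
passes the failure up hands its own errp straight to the callee, which is
what qio_channel_rdma_writev() does after this patch.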