From patchwork Tue Aug 29 09:23:54 2023
X-Patchwork-Submitter: Christophe Lombard <clombard@linux.ibm.com>
X-Patchwork-Id: 1827182
From: Christophe Lombard <clombard@linux.ibm.com>
To: skiboot@lists.ozlabs.org
Date: Tue, 29 Aug 2023 11:23:54 +0200
Message-ID: <20230829092354.75836-3-clombard@linux.ibm.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230829092354.75836-1-clombard@linux.ibm.com>
References: <20230829092354.75836-1-clombard@linux.ibm.com>
Subject: [Skiboot] [PATCH V5 2/2] core/pldm: Register PLDM as a blocklevel device

In the same way that ipmi-hiomap implements the PNOR access control
protocol, this patch "virtualizes" the content of the BMC flash based
on lid files.

Previously, PNOR flash partitions were described this way:
  partitionXX=NAME,start address,end address,flags

The content of each partition is now stored in a lid file. In order to
keep using the libflash library, we manually fill in the contents of a
fake flash header when offset 0 is accessed. This reproduces the
behavior of reading the flash header from the BMC via ipmi-hiomap.

When reading or writing BMC lid files, we translate the virtual address
of each 'fake' partition to the corresponding lid id.
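To illustrate the addressing scheme, here is a minimal sketch (editorial
illustration, not code from the patch): each lid file is reserved a fixed
32MB window of the virtual flash address space, reads within the first
blocks are served from the cached fake FFS header, and everything else is
translated to a (lid, offset) pair. The route_read() helper is hypothetical,
and the back-to-back window layout starting at 0 is an assumption; the real
mapping is built by lid_ids_to_vaddr_mapping() in core/pldm/pldm-lid-files.c.

/*
 * Hypothetical sketch of the read routing. The back-to-back layout
 * starting at 0 is an assumption; the real mapping comes from
 * lid_ids_to_vaddr_mapping().
 */
#include <stdint.h>

#define MEGABYTE                        (1024 * 1024)
#define VMM_SIZE_RESERVED_PER_SECTION   (32 * MEGABYTE) /* window per lid */
#define ERASE_GRANULE_DEF               0x1000          /* 4K erase block */

/* Returns 0 when the request is served from the cached fake FFS header,
 * 1 when it must be forwarded to the PLDM file I/O layer. */
static int route_read(uint64_t pos, uint64_t *lid_index, uint64_t *offset)
{
        if (pos <= ERASE_GRANULE_DEF * 0x3)
                return 0;       /* fake flash header / cached map */

        *lid_index = pos / VMM_SIZE_RESERVED_PER_SECTION;
        *offset = pos % VMM_SIZE_RESERVED_PER_SECTION;
        return 1;       /* forward to the PLDM file I/O read path */
}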
Signed-off-by: Christophe Lombard <clombard@linux.ibm.com>
---
 core/pldm/pldm-lid-files.c | 247 +++++++++++++++++++++++++++++++++++++
 include/pldm.h             |   5 +
 2 files changed, 252 insertions(+)

diff --git a/core/pldm/pldm-lid-files.c b/core/pldm/pldm-lid-files.c
index a46a73164..19f0a9e8e 100644
--- a/core/pldm/pldm-lid-files.c
+++ b/core/pldm/pldm-lid-files.c
@@ -25,6 +25,14 @@ struct pldm_lid {
 
 static LIST_HEAD(lid_files);
 
+struct pldm_ctx_data {
+        /* Members protected by the blocklevel lock */
+        struct blocklevel_device bl;
+        uint32_t total_size;
+        uint32_t erase_granule;
+        struct lock lock;
+};
+
 #define MEGABYTE (1024*1024)
 
 /*
@@ -34,6 +42,12 @@ static LIST_HEAD(lid_files);
  */
 #define VMM_SIZE_RESERVED_PER_SECTION (32 * MEGABYTE)
 
+#define ERASE_GRANULE_DEF 0x1000
+
+/* 'fake' flash header */
+struct __ffs_hdr *raw_hdr;
+size_t raw_hdr_size;
+
 /*
  * Print the attributes of lid files.
  */
@@ -159,8 +173,221 @@ out:
         return rc;
 }
 
+static uint32_t checksum(void *data, size_t size)
+{
+        uint32_t i, csum = 0;
+
+        for (i = csum = 0; i < (size / 4); i++)
+                csum ^= ((uint32_t *)data)[i];
+        return csum;
+}
+
+/* Helper functions for type safety and size safety */
+static uint32_t hdr_checksum(struct __ffs_hdr *hdr)
+{
+        return checksum(hdr, sizeof(struct __ffs_hdr));
+}
+
+static uint32_t entry_checksum(struct __ffs_entry *ent)
+{
+        return checksum(ent, sizeof(struct __ffs_entry));
+}
+
+/*
+ * Fill the __ffs structures in order to return a 'fake' flash header.
+ */
+static int lid_ids_to_header_flash(void *buf, uint64_t len)
+{
+        struct __ffs_entry *entry;
+        struct pldm_lid *lid = NULL;
+        uint32_t count, part_id, i;
+        uint32_t block_size;
+
+        /* reading the flash header has already been requested */
+        if (raw_hdr) {
+                (raw_hdr_size < len) ? memcpy(buf, raw_hdr, raw_hdr_size) :
+                                       memcpy(buf, raw_hdr, len);
+                return OPAL_SUCCESS;
+        }
+
+        /* number of lid files */
+        count = get_lids_count();
+
+        /* last member of struct __ffs_hdr is a flexible array member */
+        raw_hdr_size = sizeof(struct __ffs_hdr) + (count * sizeof(struct __ffs_entry));
+        raw_hdr = zalloc(raw_hdr_size);
+        if (!raw_hdr)
+                return OPAL_NO_MEM;
+
+        /* complete flash header
+         * Represents the on-flash layout of the FFS structures.
+         * Note: Beware that the size of the partition table is in units
+         * of block_size.
+         *
+         * @magic:       Eye catcher/corruption detector
+         * @version:     Version of the structure
+         * @size:        Size of partition table (in block_size)
+         * @entry_size:  Size of struct __ffs_entry element (in bytes)
+         * @entry_count: Number of struct __ffs_entry elements in @entries array
+         * @block_size:  Size of block on device (in bytes)
+         * @block_count: Number of blocks on device
+         * @checksum:    Header checksum
+         */
+        /* size of the cached map: block_size * raw_hdr->size
+         * raw_hdr->size = 0x3: we take a little margin in case the number
+         * of elements increases
+         */
+        block_size = ERASE_GRANULE_DEF;
+
+        raw_hdr->magic = cpu_to_be32(FFS_MAGIC);
+        raw_hdr->version = cpu_to_be32(FFS_VERSION_1);
+        raw_hdr->size = cpu_to_be32(0x3);
+        raw_hdr->entry_size = cpu_to_be32(sizeof(struct __ffs_entry));
+        raw_hdr->entry_count = cpu_to_be32(count);
+        raw_hdr->block_size = cpu_to_be32(block_size);
+        raw_hdr->block_count = cpu_to_be32(0x4000); /* value from IPMI/PNOR protocol */
+        raw_hdr->checksum = hdr_checksum(raw_hdr);
+
+        lid = list_top(&lid_files, struct pldm_lid, list);
+        part_id = 1;
+
+        for (i = 0; i < count; i++) {
+                entry = &raw_hdr->entries[i];
+
+                memcpy(entry->name, lid->name, sizeof(entry->name));
+                entry->name[FFS_PART_NAME_MAX] = '\0';
+                entry->base = cpu_to_be32(lid->start / block_size);
+                entry->size = cpu_to_be32(lid->length / block_size);
+                entry->pid = cpu_to_be32(FFS_PID_TOPLEVEL);
+                entry->id = cpu_to_be32(part_id);
+                entry->type = cpu_to_be32(0x1);
+                entry->flags = cpu_to_be32(0x0);
+                entry->actual = cpu_to_be32(lid->length);
+                entry->checksum = entry_checksum(entry);
+
+                lid = list_next(&lid_files, lid, list);
+                part_id++;
+        }
+
+        /* fill in the requester buffer */
+        (raw_hdr_size < len) ? memcpy(buf, raw_hdr, raw_hdr_size) :
+                               memcpy(buf, raw_hdr, len);
+
+        return OPAL_SUCCESS;
+}
+
+/*
+ * Search the lid member from the virtual address.
+ */
+static int vaddr_to_lid_id(uint64_t pos, uint32_t *start, uint32_t *handle,
+                           uint32_t *length)
+{
+        struct pldm_lid *lid = NULL;
+
+        list_for_each(&lid_files, lid, list) {
+                if ((pos >= lid->start) && (pos < lid->start + VMM_SIZE_RESERVED_PER_SECTION)) {
+                        *start = lid->start;
+                        *handle = lid->handle;
+                        *length = lid->length;
+                        return OPAL_SUCCESS;
+                }
+        }
+
+        return OPAL_PARAMETER;
+}
+
+static int lid_files_read(struct blocklevel_device *bl __unused,
+                          uint64_t pos, void *buf, uint64_t len)
+{
+        uint32_t lid_start, lid_handle, lid_length;
+        int rc = OPAL_SUCCESS;
+        uint64_t offset;
+
+        /* LPC is only 32bit */
+        if (pos > UINT_MAX || (pos + len) > UINT_MAX)
+                return FLASH_ERR_PARM_ERROR;
+
+        prlog(PR_TRACE, "lid files read at 0x%llx for 0x%llx\n",
+              pos, len);
+
+        if ((pos == 0) || (pos <= (ERASE_GRANULE_DEF * 0x3))) {
+                /* return the 'fake' flash header or cached map */
+                rc = lid_ids_to_header_flash(buf, len);
+        } else {
+                /* convert offset to lid id */
+                rc = vaddr_to_lid_id(pos, &lid_start,
+                                     &lid_handle, &lid_length);
+                if (rc)
+                        return rc;
+
+                /* read lid file */
+                offset = pos - lid_start;
+                rc = pldm_file_io_read_file(lid_handle, lid_length,
+                                            offset, buf, len);
+        }
+
+        return rc;
+}
+
+static int lid_files_write(struct blocklevel_device *bl __unused,
+                           uint64_t pos, const void *buf __unused,
+                           uint64_t len)
+{
+        prlog(PR_ERR, "lid files write at 0x%llx for 0x%llx\n",
+              pos, len);
+        return OPAL_UNSUPPORTED;
+}
+
+static int lid_files_erase(struct blocklevel_device *bl __unused,
+                           uint64_t pos, uint64_t len)
+{
+
+        prlog(PR_ERR, "lid files erase at 0x%llx for 0x%llx\n",
+              pos, len);
+        return OPAL_UNSUPPORTED;
+}
+
+static int get_lid_files_info(struct blocklevel_device *bl,
+                              const char **name, uint64_t *total_size,
+                              uint32_t *erase_granule)
+{
+        struct pldm_ctx_data *ctx;
+
+        ctx = container_of(bl, struct pldm_ctx_data, bl);
+        ctx->bl.erase_mask = ctx->erase_granule - 1;
+
+        if (name)
+                *name = NULL;
+        if (total_size)
+                *total_size = ctx->total_size;
+        if (erase_granule)
+                *erase_granule = ctx->erase_granule;
+
+        return OPAL_SUCCESS;
+}
+
+bool pldm_lid_files_exit(struct blocklevel_device *bl)
+{
+        struct pldm_ctx_data *ctx;
+        struct pldm_lid *lid, *tmp;
+
+        if (bl) {
+                ctx = container_of(bl, struct pldm_ctx_data, bl);
+                free(ctx);
+        }
+
+        /* free all lid entries */
+        list_for_each_safe(&lid_files, lid, tmp, list)
+                free(lid);
+
+        if (raw_hdr)
+                free(raw_hdr);
+
+        return true;
+}
+
 int pldm_lid_files_init(struct blocklevel_device **bl)
 {
+        struct pldm_ctx_data *ctx;
         uint32_t lid_files_count;
         int rc;
 
@@ -169,6 +396,18 @@ int pldm_lid_files_init(struct blocklevel_device **bl)
 
         *bl = NULL;
 
+        ctx = zalloc(sizeof(struct pldm_ctx_data));
+        if (!ctx)
+                return FLASH_ERR_MALLOC_FAILED;
+
+        init_lock(&ctx->lock);
+
+        ctx->bl.read = &lid_files_read;
+        ctx->bl.write = &lid_files_write;
+        ctx->bl.erase = &lid_files_erase;
+        ctx->bl.get_info = &get_lid_files_info;
+        ctx->bl.exit = &pldm_lid_files_exit;
+
         /* convert lid ids data to pnor structure */
         rc = lid_ids_to_vaddr_mapping();
         if (rc)
@@ -179,8 +418,16 @@ int pldm_lid_files_init(struct blocklevel_device **bl)
         prlog(PR_NOTICE, "Number of lid files: %d\n", lid_files_count);
         print_lid_files_attr();
 
+        ctx->total_size = lid_files_count * VMM_SIZE_RESERVED_PER_SECTION;
+        ctx->erase_granule = ERASE_GRANULE_DEF;
+
+        ctx->bl.keep_alive = 0;
+
+        *bl = &(ctx->bl);
+
         return OPAL_SUCCESS;
 
 err:
+        free(ctx);
         return rc;
 }
 
diff --git a/include/pldm.h b/include/pldm.h
index 9c5369edb..8622453be 100644
--- a/include/pldm.h
+++ b/include/pldm.h
@@ -43,4 +43,9 @@ int pldm_fru_dt_add_bmc_version(void);
  */
 int pldm_lid_files_init(struct blocklevel_device **bl);
 
+/**
+ * Remove lid ids data
+ */
+bool pldm_lid_files_exit(struct blocklevel_device *bl);
+
 #endif /* __PLDM_H__ */
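
For context, a hypothetical caller sketch (not part of this patch) of how
platform code could consume the new device through the existing libflash
blocklevel entry points; probe_lid_files_flash() is an invented name:

#include <libflash/blocklevel.h>
#include <pldm.h>

static int probe_lid_files_flash(void)
{
        struct blocklevel_device *bl = NULL;
        uint64_t total_size;
        uint32_t erase_granule;
        char hdr[0x1000];       /* one erase block */
        int rc;

        rc = pldm_lid_files_init(&bl);
        if (rc)
                return rc;

        /* total_size = lid count * 32MB window, erase_granule = 0x1000 */
        rc = blocklevel_get_info(bl, NULL, &total_size, &erase_granule);
        if (rc)
                goto out;

        /* offset 0 serves the fake FFS header built from the lid list */
        rc = blocklevel_read(bl, 0, hdr, sizeof(hdr));
out:
        pldm_lid_files_exit(bl);
        return rc;
}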