
[v1,00/14] nvmem: core: introduce NVMEM layouts

Message ID 20220825214423.903672-1-michael@walle.cc

Message

Michael Walle Aug. 25, 2022, 9:44 p.m. UTC
This is now the third attempt to fetch the MAC addresses from the VPD
for the Kontron sl28 boards. Previous discussions can be found here:
https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/


NVMEM cells are typically added by board code or by the devicetree. But
as the cells get more complex, there is (valid) push back from the
devicetree maintainers to not put that handling in the devicetree.

Therefore, introduce NVMEM layouts. They operate on the NVMEM device and
can add cells during runtime. That way it is possible to add more complex
cells than is possible right now with the offset/length/bits
description in the device tree. For example, you can have post processing
for individual cells (think of endian swapping, or ethernet offset
handling). You can also have cells which have no static offset, like the
ones in a u-boot environment. The last patches will convert the current
u-boot environment driver to an NVMEM layout and lift the restriction
that it only works with mtd devices. But as that will change the required
compatible strings, they are marked as RFC for now. The device tree
schema also needs to be updated, which is left out here.

For now, the layouts are selected by a specific compatible string in the
device tree. E.g. the VPD on the Kontron sl28 does (within a SPI flash node):
  compatible = "kontron,sl28-vpd", "user-otp";
or if you'd use the u-boot environment (within an MTD partition):
  compatible = "u-boot,env", "nvmem";

The "user-otp" (or "nvmem") compatible will lead to an NVMEM device, the
"kontron,sl28-vpd" (or "u-boot,env") one will then apply the specific
layout on top of the NVMEM device.

NVMEM layouts as modules?
While possible in principle, it doesn't make any sense because the NVMEM
core can't be compiled as a module. The layouts need to be available at
probe time. (That is also the reason why they are registered with
subsys_initcall().) Only if the NVMEM core itself were a module could the
layouts be modules, too.

Michael Walle (14):
  net: add helper eth_addr_add()
  of: base: add of_parse_phandle_with_optional_args()
  nvmem: core: add an index parameter to the cell
  nvmem: core: drop the removal of the cells in nvmem_add_cells()
  nvmem: core: add nvmem_add_one_cell()
  nvmem: core: introduce NVMEM layouts
  nvmem: core: add per-cell post processing
  dt-bindings: mtd: relax the nvmem compatible string
  dt-bindings: nvmem: add YAML schema for the sl28 vpd layout
  nvmem: layouts: add sl28vpd layout
  nvmem: core: export nvmem device size
  nvmem: layouts: rewrite the u-boot-env driver as a NVMEM layout
  nvmem: layouts: u-boot-env: add device node
  arm64: dts: ls1028a: sl28: get MAC addresses from VPD

 .../devicetree/bindings/mtd/mtd.yaml          |   7 +-
 .../nvmem/layouts/kontron,sl28-vpd.yaml       |  52 +++++
 .../fsl-ls1028a-kontron-kbox-a-230-ls.dts     |   8 +
 .../fsl-ls1028a-kontron-sl28-var1.dts         |   2 +
 .../fsl-ls1028a-kontron-sl28-var2.dts         |   4 +
 .../fsl-ls1028a-kontron-sl28-var4.dts         |   2 +
 .../freescale/fsl-ls1028a-kontron-sl28.dts    |  13 ++
 drivers/nvmem/Kconfig                         |   2 +
 drivers/nvmem/Makefile                        |   1 +
 drivers/nvmem/core.c                          | 183 +++++++++++----
 drivers/nvmem/imx-ocotp.c                     |   4 +-
 drivers/nvmem/layouts/Kconfig                 |  22 ++
 drivers/nvmem/layouts/Makefile                |   7 +
 drivers/nvmem/layouts/sl28vpd.c               | 144 ++++++++++++
 drivers/nvmem/layouts/u-boot-env.c            | 147 ++++++++++++
 drivers/nvmem/u-boot-env.c                    | 218 ------------------
 include/linux/etherdevice.h                   |  14 ++
 include/linux/nvmem-consumer.h                |  11 +
 include/linux/nvmem-provider.h                |  47 +++-
 include/linux/of.h                            |  25 ++
 20 files changed, 649 insertions(+), 264 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/nvmem/layouts/kontron,sl28-vpd.yaml
 create mode 100644 drivers/nvmem/layouts/Kconfig
 create mode 100644 drivers/nvmem/layouts/Makefile
 create mode 100644 drivers/nvmem/layouts/sl28vpd.c
 create mode 100644 drivers/nvmem/layouts/u-boot-env.c
 delete mode 100644 drivers/nvmem/u-boot-env.c

Comments

Michael Walle Aug. 26, 2022, 8:16 a.m. UTC | #1
On 2022-08-25 23:44, Michael Walle wrote:

> +struct nvmem_layout {
> +	const char *name;
> +	const struct of_device_id *of_match_table;
> +	int (*add_cells)(struct nvmem_device *nvmem, struct nvmem_layout 
> *layout);

This must be:
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem, struct nvmem_layout *layout);

> +	struct list_head node;
> +};
> +
Rafał Miłecki Aug. 28, 2022, 1:55 p.m. UTC | #2
On 25.08.2022 23:44, Michael Walle wrote:
> Register the device node so we can actually make use of the cells from
> within the device tree.
> 
> This obviously only works if the environment variable name can be mapped
> to the device node, which isn't always the case. Think of "_" vs "-".
> But for simple things like ethaddr, this will work.

We probably shouldn't support undocumented syntax (bindings).

I've identical local patch that waits for
[PATCH] dt-bindings: nvmem: u-boot,env: add basic NVMEM cells
https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20220703084843.21922-1-zajec5@gmail.com/
to be accepted.
Rafał Miłecki Aug. 28, 2022, 2:04 p.m. UTC | #3
On 25.08.2022 23:44, Michael Walle wrote:
> Instead of hardcoding the underlying access method mtd_read() and
> duplicating all the error handling, rewrite the driver as a nvmem
> layout which just uses nvmem_device_read() and thus works with any
> NVMEM device.
> 
> But because this is now not a device anymore, the compatible string
> will have to be changed so the device will still be probed:
>    compatible = "u-boot,env";
> to
>    compatible = "u-boot,env", "nvmem-cells";
> 
> "nvmem-cells" will tell the mtd layer to register a nvmem_device().
> "u-boot,env" will tell the NVMEM that it should apply the u-boot
> environment layout to the NVMEM device.

That's fishy but maybe we can ignore backward compatibility at this
point.

Still you need to update DT binding.
Rafał Miłecki Aug. 28, 2022, 2:06 p.m. UTC | #4
On 25.08.2022 23:44, Michael Walle wrote:
> For now, the content can be
> described by a device tree or a board file. But this only works if the
> offsets and lengths are static and don't change.

Not really true (see Broadcom's NVRAM and U-Boot's env data).
Michael Walle Aug. 28, 2022, 2:33 p.m. UTC | #5
On 2022-08-28 16:06, Rafał Miłecki wrote:
> On 25.08.2022 23:44, Michael Walle wrote:
>> For now, the content can be
>> described by a device tree or a board file. But this only works if the
>> offsets and lengths are static and don't change.
> 
> Not really true (see Broadcom's NVRAM and U-Boot's env data).

None of the drivers except those two add cells on their own. And for
those two it is not possible to add non-static cells, except maybe in
board code. This series makes it possible to add this runtime parsing
to any NVMEM device.

-michael
Michael Walle Aug. 28, 2022, 2:36 p.m. UTC | #6
On 2022-08-28 15:55, Rafał Miłecki wrote:
> On 25.08.2022 23:44, Michael Walle wrote:
>> Register the device node so we can actually make use of the cells from
>> within the device tree.
>> 
>> This obviously only works if the environment variable name can be 
>> mapped
>> to the device node, which isn't always the case. Think of "_" vs "-".
>> But for simple things like ethaddr, this will work.
> 
> We probably shouldn't support undocumented syntax (bindings).

If we only support a predefined list of variables, we can make them
device tree compatible anyway. E.g. we could have a mapping
"serial-number" <-> "serial#"

-michael

> I've identical local patch that waits for
> [PATCH] dt-bindings: nvmem: u-boot,env: add basic NVMEM cells
> https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20220703084843.21922-1-zajec5@gmail.com/
> to be accepted.
Michael Walle Aug. 28, 2022, 2:42 p.m. UTC | #7
On 2022-08-28 16:04, Rafał Miłecki wrote:
> On 25.08.2022 23:44, Michael Walle wrote:
>> Instead of hardcoding the underlying access method mtd_read() and
>> duplicating all the error handling, rewrite the driver as a nvmem
>> layout which just uses nvmem_device_read() and thus works with any
>> NVMEM device.
>> 
>> But because this is now not a device anymore, the compatible string
>> will have to be changed so the device will still be probed:
>>    compatible = "u-boot,env";
>> to
>>    compatible = "u-boot,env", "nvmem-cells";
>> 
>> "nvmem-cells" will tell the mtd layer to register a nvmem_device().
>> "u-boot,env" will tell the NVMEM that it should apply the u-boot
>> environment layout to the NVMEM device.
> 
> That's fishy but maybe we can ignore backward compatibility at
> point.

As mentioned in the cover letter, this is why this is an RFC. I
didn't see any users in the device tree, nor can I see how this
would have been used anyway, because you cannot find a cell by
its device tree node, because none is registered. So maybe we
can still change the compatible string.

-michael

> Still you need to update DT binding.
Rafał Miłecki Aug. 28, 2022, 3:05 p.m. UTC | #8
On 25.08.2022 23:44, Michael Walle wrote:
> This is now the third attempt to fetch the MAC addresses from the VPD
> for the Kontron sl28 boards. Previous discussions can be found here:
> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
> 
> 
> NVMEM cells are typically added by board code or by the devicetree. But
> as the cells get more complex, there is (valid) push back from the
> devicetree maintainers to not put that handling in the devicetree.

I dropped the ball waiting for Rob's response in the
[PATCH 0/2] dt-bindings: nvmem: support describing cells
https://lore.kernel.org/linux-arm-kernel/0b7b8f7ea6569f79524aea1a3d783665@walle.cc/T/

Before we go any further can we have a clear answer from Rob (or
Krzysztof now too?):


Is there any point in having bindings like:

compatible = "mac-address";

for NVMEM cells nodes? So systems (Linux, U-Boot) can handle them in a
more generic way?


Or do we prefer more conditional drivers code (or layouts code as in
this Michael's proposal) that will handle cells properly based on their
names?


I'm not arguing for any solution. I just want to make sure we choose the
right way to proceed.


> Therefore, introduce NVMEM layouts. They operate on the NVMEM device and
> can add cells during runtime. That way it is possible to add complex
> cells than it is possible right now with the offset/length/bits
> description in the device tree. For example, you can have post processing
> for individual cells (think of endian swapping, or ethernet offset
> handling). You can also have cells which have no static offset, like the
> ones in an u-boot environment. The last patches will convert the current
> u-boot environment driver to a NVMEM layout and lifting the restriction
> that it only works with mtd devices. But as it will change the required
> compatible strings, it is marked as RFC for now. It also needs to have
> its device tree schema update which is left out here.

So do I get it right that we want to have:

1. NVMEM drivers for providing I/O access to NVMEM devices
2. NVMEM layouts for parsing NVMEM cells and translating their content
?

I guess it sounds good and seems to be a clean solution.


One thing I believe you need to handle is replacing "cell_post_process"
callback with your layout thing.

I find it confusing to have
1. cell_post_process() CB at NVMEM device level
2. post_process() CB at NVMEM cell level


> For now, the layouts are selected by a specifc compatible string in a
> device tree. E.g. the VPD on the kontron sl28 do (within a SPI flash node):
>    compatible = "kontron,sl28-vpd", "user-otp";
> or if you'd use the u-boot environment (within an MTD patition):
>    compatible = "u-boot,env", "nvmem";
> 
> The "user-otp" (or "nvmem") will lead to a NVMEM device, the
> "kontron,sl28-vpd" (or "u-boot,env") will then apply the specific layout
> on top of the NVMEM device.
> 
> NVMEM layouts as modules?
> While possible in principle, it doesn't make any sense because the NVMEM
> core can't be compiled as a module. The layouts needs to be available at
> probe time. (That is also the reason why they get registered with
> subsys_initcall().) So if the NVMEM core would be a module, the layouts
> could be modules, too.
Michael Walle Aug. 29, 2022, 8:22 a.m. UTC | #9
Hi,

On 2022-08-28 17:05, Rafał Miłecki wrote:
> On 25.08.2022 23:44, Michael Walle wrote:
>> This is now the third attempt to fetch the MAC addresses from the VPD
>> for the Kontron sl28 boards. Previous discussions can be found here:
>> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>> 
>> 
>> NVMEM cells are typically added by board code or by the devicetree. 
>> But
>> as the cells get more complex, there is (valid) push back from the
>> devicetree maintainers to not put that handling in the devicetree.
> 
> I dropped the ball waiting for Rob's reponse in the
> [PATCH 0/2] dt-bindings: nvmem: support describing cells
> https://lore.kernel.org/linux-arm-kernel/0b7b8f7ea6569f79524aea1a3d783665@walle.cc/T/
> 
> Before we go any further can we have a clear answer from Rob (or
> Krzysztof now too?):
> 
> 
> Is there any point in having bindings like:
> 
> compatible = "mac-address";
> 
> for NVMEM cells nodes? So systems (Linux, U-Boot) can handle them in a
> more generic way?
> 
> 
> Or do we prefer more conditional drivers code (or layouts code as in
> this Michael's proposal) that will handle cells properly based on their
> names?

What do you mean by "based on their names"?

> I'm not arguing for any solution. I just want to make sure we choose 
> the
> right way to proceed.

With the 'compatible = "mac-address"', how would you detect what kind
of transformation you need to apply? You could guess ascii, yes. But
swapping bytes? You cannot guess that. So you'd need additional
information coming from the device tree. But Rob was quite clear that
this shouldn't be in the device tree:

| I still don't think trying to encode transformations of data into the DT
| is right approach.

https://lore.kernel.org/linux-devicetree/YaZ5JNCFeKcdIfu8@robh.at.kernel.org/

He also mentioned that the compatible should be on the nvmem device level
and should use specific compatible strings:
https://lore.kernel.org/linux-devicetree/CAL_JsqL55mZJ6jUyQACer2pKMNDV08-FgwBREsJVgitnuF18Cg@mail.gmail.com/

And IMHO that makes sense, because it matches the hardware and not the
NVMEM cells, which are the *content* of a memory device.

And as you can see in the sl28vpd layout, it allows you to do much more,
like checking for integrity, and makes it future-proof by supporting
different versions of the sl28vpd layout.

What if you use it with the u-boot,env? You wouldn't need it, because
u-boot,env already knows how to interpret it as an ascii string (and it
also knows the offset). In this sense, u-boot,env is already a compatible
string describing the content of an NVMEM device (and the compatible
string is at the device level).

>> Therefore, introduce NVMEM layouts. They operate on the NVMEM device 
>> and
>> can add cells during runtime. That way it is possible to add complex
>> cells than it is possible right now with the offset/length/bits
>> description in the device tree. For example, you can have post 
>> processing
>> for individual cells (think of endian swapping, or ethernet offset
>> handling). You can also have cells which have no static offset, like 
>> the
>> ones in an u-boot environment. The last patches will convert the 
>> current
>> u-boot environment driver to a NVMEM layout and lifting the 
>> restriction
>> that it only works with mtd devices. But as it will change the 
>> required
>> compatible strings, it is marked as RFC for now. It also needs to have
>> its device tree schema update which is left out here.
> 
> So do I get it right that we want to have:
> 
> 1. NVMEM drivers for providing I/O access to NVMEM devices
> 2. NVMEM layouts for parsing & NVMEM cells and translating their 
> content
> ?

Correct.

> I guess it sounds good and seems to be a clean solution.

Good to hear :)

> One thing I believe you need to handle is replacing "cell_post_process"
> callback with your layout thing.
> 
> I find it confusing to have
> 1. cell_post_process() CB at NVMEM device level
> 2. post_process() CB at NVMEM cell level

What is wrong with having a callback at both levels?

Granted, in this particular case (it is just used in one place), I still
think that it is the wrong approach to add this transformation in the
driver. The driver is supposed to give you access to the SoC's fuse box,
but it will magically change the content of a cell if the nvmem consumer
named this cell "mac-address" (which you found confusing last time and I
do too!).

The driver itself doesn't add any cells on its own, so I cannot register
a .post_process hook there. Therefore, you'd need that post_process hook
on every cell, which is equivalent to having a post_process hook at
device level.

Unless you have a better idea. I'll leave that up to NXP to fix that (or
leave it like that).

-michael
Srinivas Kandagatla Aug. 30, 2022, 1:36 p.m. UTC | #10
On 25/08/2022 22:44, Michael Walle wrote:
> NVMEM layouts are used to generate NVMEM cells during runtime. Think of
> an EEPROM with a well-defined conent. For now, the content can be
> described by a device tree or a board file. But this only works if the
> offsets and lengths are static and don't change. One could also argue
> that putting the layout of the EEPROM in the device tree is the wrong
> place. Instead, the device tree should just have a specific compatible
> string.
> 
> Right now there are two use cases:
>   (1) The NVMEM cell needs special processing. E.g. if it only specifies
>       a base MAC address offset and you need to add an offset, or it
>       needs to parse a MAC from ASCII format or some proprietary format.
>       (Post processing of cells is added in a later commit).
>   (2) u-boot environment parsing. The cells don't have a particular
>       offset but it needs parsing the content to determine the offsets
>       and length.
> 
> Signed-off-by: Michael Walle <michael@walle.cc>
> ---
>   drivers/nvmem/Kconfig          |  2 ++
>   drivers/nvmem/Makefile         |  1 +
>   drivers/nvmem/core.c           | 57 ++++++++++++++++++++++++++++++++++
>   drivers/nvmem/layouts/Kconfig  |  5 +++
>   drivers/nvmem/layouts/Makefile |  4 +++
>   include/linux/nvmem-provider.h | 38 +++++++++++++++++++++++
>   6 files changed, 107 insertions(+)
>   create mode 100644 drivers/nvmem/layouts/Kconfig
>   create mode 100644 drivers/nvmem/layouts/Makefile

An update to Documentation/driver-api/nvmem.rst would help others here.

> 
> diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
> index bab8a29c9861..1416837b107b 100644
> --- a/drivers/nvmem/Kconfig
> +++ b/drivers/nvmem/Kconfig
> @@ -357,4 +357,6 @@ config NVMEM_U_BOOT_ENV
>   
>   	  If compiled as module it will be called nvmem_u-boot-env.
>   
> +source "drivers/nvmem/layouts/Kconfig"
> +
>   endif
> diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
> index 399f9972d45b..cd5a5baa2f3a 100644
> --- a/drivers/nvmem/Makefile
> +++ b/drivers/nvmem/Makefile
> @@ -5,6 +5,7 @@
>   
>   obj-$(CONFIG_NVMEM)		+= nvmem_core.o
>   nvmem_core-y			:= core.o
> +obj-y				+= layouts/
>   
>   # Devices
>   obj-$(CONFIG_NVMEM_BCM_OCOTP)	+= nvmem-bcm-ocotp.o
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 3dfd149374a8..5357fc378700 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -74,6 +74,9 @@ static LIST_HEAD(nvmem_lookup_list);
>   
>   static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
>   
> +static DEFINE_SPINLOCK(nvmem_layout_lock);
> +static LIST_HEAD(nvmem_layouts);
> +
>   static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
>   			    void *val, size_t bytes)
>   {
> @@ -744,6 +747,56 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
>   	return 0;
>   }
>   
> +int nvmem_register_layout(struct nvmem_layout *layout)
> +{
> +	spin_lock(&nvmem_layout_lock);
> +	list_add(&layout->node, &nvmem_layouts);
> +	spin_unlock(&nvmem_layout_lock);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(nvmem_register_layout);

we should provide nvmem_unregister_layout too, so that providers can add
them in their respective drivers if they want.

> +
> +static struct nvmem_layout *nvmem_get_compatible_layout(struct device_node *np)
> +{
> +	struct nvmem_layout *p, *ret = NULL;
> +
> +	spin_lock(&nvmem_layout_lock);
> +
> +	list_for_each_entry(p, &nvmem_layouts, node) {
> +		if (of_match_node(p->of_match_table, np)) {
> +			ret = p;
> +			break;
> +		}
> +	}
> +
> +	spin_unlock(&nvmem_layout_lock);
> +
> +	return ret;
> +}
> +
> +static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
> +{
> +	struct nvmem_layout *layout;
> +
> +	layout = nvmem_get_compatible_layout(nvmem->dev.of_node);
> +	if (layout)
> +		layout->add_cells(&nvmem->dev, nvmem, layout);

Access to add_cells can crash here as we did not check it before adding
to the list.
Or we could relax the add_cells callback for use cases like imx-ocotp.


> +
> +	return 0;
> +}
> +
> +const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> +					struct nvmem_layout *layout)
> +{
> +	const struct of_device_id *match;
> +
> +	match = of_match_node(layout->of_match_table, nvmem->dev.of_node);
> +
> +	return match ? match->data : NULL;
> +}
> +EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
> +
>   /**
>    * nvmem_register() - Register a nvmem device for given nvmem_config.
>    * Also creates a binary entry in /sys/bus/nvmem/devices/dev-name/nvmem
> @@ -872,6 +925,10 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
>   	if (rval)
>   		goto err_remove_cells;
>   
> +	rval = nvmem_add_cells_from_layout(nvmem);
> +	if (rval)
> +		goto err_remove_cells;
> +
>   	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>   
>   	return nvmem;
> diff --git a/drivers/nvmem/layouts/Kconfig b/drivers/nvmem/layouts/Kconfig
> new file mode 100644
> index 000000000000..9ad3911d1605
> --- /dev/null
> +++ b/drivers/nvmem/layouts/Kconfig
> @@ -0,0 +1,5 @@
> +# SPDX-License-Identifier: GPL-2.0
> +
> +menu "Layout Types"
> +
> +endmenu
> diff --git a/drivers/nvmem/layouts/Makefile b/drivers/nvmem/layouts/Makefile
> new file mode 100644
> index 000000000000..6fdb3c60a4fa
> --- /dev/null
> +++ b/drivers/nvmem/layouts/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Makefile for nvmem layouts.
> +#
> diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
> index e710404959e7..323685841e9f 100644
> --- a/include/linux/nvmem-provider.h
> +++ b/include/linux/nvmem-provider.h
> @@ -127,6 +127,28 @@ struct nvmem_cell_table {
>   	struct list_head	node;
>   };
>   
> +/**
> + * struct nvmem_layout - NVMEM layout definitions
> + *
> + * @name:		Layout name.
> + * @of_match_table:	Open firmware match table.
> + * @add_cells:		Will be called if a nvmem device is found which
> + *			has this layout. The function will add layout
> + *			specific cells with nvmem_add_one_cell().
> + * @node:		List node.
> + *
> + * A nvmem device can hold a well defined structure which can just be
> + * evaluated during runtime. For example a TLV list, or a list of "name=val"
> + * pairs. A nvmem layout can parse the nvmem device and add appropriate
> + * cells.
> + */
> +struct nvmem_layout {
> +	const char *name;
> +	const struct of_device_id *of_match_table;

looking at this, I think it's doable to convert the existing
cell_post_process callback to layouts by adding a layout-specific
callback here.


--srini
> +	int (*add_cells)(struct nvmem_device *nvmem, struct nvmem_layout *layout);
> +	struct list_head node;
> +};
> +
>   #if IS_ENABLED(CONFIG_NVMEM)
>   
>   struct nvmem_device *nvmem_register(const struct nvmem_config *cfg);
> @@ -141,6 +163,10 @@ void nvmem_del_cell_table(struct nvmem_cell_table *table);
>   int nvmem_add_one_cell(struct nvmem_device *nvmem,
>   		       const struct nvmem_cell_info *info);
>   
> +int nvmem_register_layout(struct nvmem_layout *layout);
> +const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> +					struct nvmem_layout *layout);
> +
>   #else
>   
>   static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c)
> @@ -161,5 +187,17 @@ static inline void nvmem_del_cell_table(struct nvmem_cell_table *table) {}
>   static inline int nvmem_add_one_cell(struct nvmem_device *nvmem,
>   				     const struct nvmem_cell_info *info) {}
>   
> +static inline int nvmem_register_layout(struct nvmem_layout *layout)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
> +static inline const void *
> +nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> +			    struct nvmem_layout *layout)
> +{
> +	return NULL;
> +}
> +
>   #endif /* CONFIG_NVMEM */
>   #endif  /* ifndef _LINUX_NVMEM_PROVIDER_H */
Srinivas Kandagatla Aug. 30, 2022, 1:37 p.m. UTC | #11
On 25/08/2022 22:44, Michael Walle wrote:
> Instead of relying on the name the consumer is using for the cell, like
> it is done for the nvmem .cell_post_process configuration parameter,
> provide a per-cell post processing hook. This can then be populated by
> the NVMEM provider (or the NVMEM layout) when adding the cell.
> 
> Signed-off-by: Michael Walle <michael@walle.cc>
> ---
>   drivers/nvmem/core.c           | 16 ++++++++++++++++
>   include/linux/nvmem-consumer.h |  5 +++++
>   2 files changed, 21 insertions(+)
> 
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 5357fc378700..cbfbe6264e6c 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -52,6 +52,7 @@ struct nvmem_cell_entry {
>   	int			bytes;
>   	int			bit_offset;
>   	int			nbits;
> +	nvmem_cell_post_process_t post_process;


two post-processing callbacks for cells is confusing tbh, we could
totally move to using cell->post_process.

one idea is to point cell->post_process to nvmem->cell_post_process
during cell creation time, which should clean this up a bit.

The other option is to move to using layouts for everything.

prefixing post_process with read_ should also make it explicit that this
callback is very specific to reads only.


>   	struct device_node	*np;
>   	struct nvmem_device	*nvmem;
>   	struct list_head	node;
> @@ -468,6 +469,7 @@ static int nvmem_cell_info_to_nvmem_cell_entry_nodup(struct nvmem_device *nvmem,
>   	cell->offset = info->offset;
>   	cell->bytes = info->bytes;
>   	cell->name = info->name;
> +	cell->post_process = info->post_process;
>   
>   	cell->bit_offset = info->bit_offset;
>   	cell->nbits = info->nbits;
> @@ -1500,6 +1502,13 @@ static int __nvmem_cell_read(struct nvmem_device *nvmem,
>   	if (cell->bit_offset || cell->nbits)
>   		nvmem_shift_read_buffer_in_place(cell, buf);
>   
> +	if (cell->post_process) {
> +		rc = cell->post_process(nvmem->priv, id, index,
> +					cell->offset, buf, cell->bytes);
> +		if (rc)
> +			return rc;
> +	}
> +
>   	if (nvmem->cell_post_process) {
>   		rc = nvmem->cell_post_process(nvmem->priv, id, index,
>   					      cell->offset, buf, cell->bytes);
> @@ -1608,6 +1617,13 @@ static int __nvmem_cell_entry_write(struct nvmem_cell_entry *cell, void *buf, si
>   	    (cell->bit_offset == 0 && len != cell->bytes))
>   		return -EINVAL;
>   
> +	/*
> +	 * Any cells which have a post_process hook are read-only because we
> +	 * cannot reverse the operation and it might affect other cells, too.
> +	 */
> +	if (cell->post_process)
> +		return -EINVAL;

Post process was always implicitly for reads only; this check should
also tie up the loose ends of the cell_post_process callback.


--srini
> +
>   	if (cell->bit_offset || cell->nbits) {
>   		buf = nvmem_cell_prepare_write_buffer(cell, buf, len);
>   		if (IS_ERR(buf))
> diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
> index 980f9c9ac0bc..761b8ef78adc 100644
> --- a/include/linux/nvmem-consumer.h
> +++ b/include/linux/nvmem-consumer.h
> @@ -19,6 +19,10 @@ struct device_node;
>   struct nvmem_cell;
>   struct nvmem_device;
>   
> +/* duplicated from nvmem-provider.h */
> +typedef int (*nvmem_cell_post_process_t)(void *priv, const char *id, int index,
> +					 unsigned int offset, void *buf, size_t bytes);
> +
>   struct nvmem_cell_info {
>   	const char		*name;
>   	unsigned int		offset;
> @@ -26,6 +30,7 @@ struct nvmem_cell_info {
>   	unsigned int		bit_offset;
>   	unsigned int		nbits;
>   	struct device_node	*np;
> +	nvmem_cell_post_process_t post_process;
>   };
>   
>   /**
Srinivas Kandagatla Aug. 30, 2022, 1:37 p.m. UTC | #12
Thanks Michael for the work.

On 29/08/2022 09:22, Michael Walle wrote:
> 
>> One thing I believe you need to handle is replacing "cell_post_process"
>> callback with your layout thing.
>>
>> I find it confusing to have
>> 1. cell_post_process() CB at NVMEM device level
>> 2. post_process() CB at NVMEM cell level
> 
> What is wrong with having a callback at both levels?

we should converge these tbh, it's more than one code path dealing with
similar use cases.

I have put down some thoughts in the "[PATCH v1 06/14] nvmem: core: 
introduce NVMEM layouts" and "[PATCH v1 07/14] nvmem: core: add per-cell 
post processing" reviews.


--srini
> 
> Granted, in this particular case (it is just used in one place), I still
> think that it is the wrong approach to add this transformation in the
> driver. The driver is supposed to give you
> access to the SoC's fuse box, but it will magically change the content
> of a cell if the nvmem consumer named this cell "mac-address" (which
> you also found confusing the last time, and I do too!).
> 
> The driver itself doesn't add any cells on its own, so I cannot register
> a .post_process hook there. Therefore, you'd need that post_process hook
> on every cell, which is equivalent to have a post_process hook at
> device level.
> 
> Unless you have a better idea. I'll leave that up to NXP to fix that (or
> leave it like that).
> 
> -michael
Michael Walle Aug. 30, 2022, 2:20 p.m. UTC | #13
Hi,

On 2022-08-30 15:37, Srinivas Kandagatla wrote:
> On 25/08/2022 22:44, Michael Walle wrote:
>> Instead of relying on the name the consumer is using for the cell, 
>> like
>> it is done for the nvmem .cell_post_process configuration parameter,
>> provide a per-cell post processing hook. This can then be populated by
>> the NVMEM provider (or the NVMEM layout) when adding the cell.
>> 
>> Signed-off-by: Michael Walle <michael@walle.cc>
>> ---
>>   drivers/nvmem/core.c           | 16 ++++++++++++++++
>>   include/linux/nvmem-consumer.h |  5 +++++
>>   2 files changed, 21 insertions(+)
>> 
>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>> index 5357fc378700..cbfbe6264e6c 100644
>> --- a/drivers/nvmem/core.c
>> +++ b/drivers/nvmem/core.c
>> @@ -52,6 +52,7 @@ struct nvmem_cell_entry {
>>   	int			bytes;
>>   	int			bit_offset;
>>   	int			nbits;
>> +	nvmem_cell_post_process_t post_process;
> 
> 
> two post_processing callbacks for cells is confusing tbh, we could
> totally move to use of cell->post_process.
> 
> one idea is to point cell->post_process to nvmem->cell_post_process
> during cell creation time which should clean this up a bit.

You'll then trigger the read-only check below for all the cells
if nvmem->cell_post_process is set.

> Other option is to move to using layouts for every thing.

As mentioned in a previous reply, I can't see how it could be
achieved. The problem here is that:
  (1) the layout isn't creating the cells, the OF parser is
  (2) even if we created the cells, we wouldn't know
      which cell needs the post_process. So we are back to
      the situation above, where we have to add it to all
      the cells, making them read-only. [We depend on the
      name of the nvmem consumer to apply the hook.]

> Prefixing post_process with "read" should also make it explicit that
> this callback is specific to reads only.

good idea.

-michael

>>   	struct device_node	*np;
>>   	struct nvmem_device	*nvmem;
>>   	struct list_head	node;
>> @@ -468,6 +469,7 @@ static int nvmem_cell_info_to_nvmem_cell_entry_nodup(struct nvmem_device *nvmem,
>>   	cell->offset = info->offset;
>>   	cell->bytes = info->bytes;
>>   	cell->name = info->name;
>> +	cell->post_process = info->post_process;
>>     	cell->bit_offset = info->bit_offset;
>>   	cell->nbits = info->nbits;
>> @@ -1500,6 +1502,13 @@ static int __nvmem_cell_read(struct nvmem_device *nvmem,
>>   	if (cell->bit_offset || cell->nbits)
>>   		nvmem_shift_read_buffer_in_place(cell, buf);
>>   +	if (cell->post_process) {
>> +		rc = cell->post_process(nvmem->priv, id, index,
>> +					cell->offset, buf, cell->bytes);
>> +		if (rc)
>> +			return rc;
>> +	}
>> +
>>   	if (nvmem->cell_post_process) {
>>   		rc = nvmem->cell_post_process(nvmem->priv, id, index,
>>   					      cell->offset, buf, cell->bytes);
>> @@ -1608,6 +1617,13 @@ static int __nvmem_cell_entry_write(struct nvmem_cell_entry *cell, void *buf, si
>>   	    (cell->bit_offset == 0 && len != cell->bytes))
>>   		return -EINVAL;
>>   +	/*
>> +	 * Any cells which have a post_process hook are read-only because we
>> +	 * cannot reverse the operation and it might affect other cells, too.
>> +	 */
>> +	if (cell->post_process)
>> +		return -EINVAL;
> 
> Post process was always implicitly for reads only, this check should
> also tie the loose ends of cell_post_processing callback.
> 
> 
> --srini
>> +
>>   	if (cell->bit_offset || cell->nbits) {
>>   		buf = nvmem_cell_prepare_write_buffer(cell, buf, len);
>>   		if (IS_ERR(buf))
>> diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
>> index 980f9c9ac0bc..761b8ef78adc 100644
>> --- a/include/linux/nvmem-consumer.h
>> +++ b/include/linux/nvmem-consumer.h
>> @@ -19,6 +19,10 @@ struct device_node;
>>   struct nvmem_cell;
>>   struct nvmem_device;
>>   +/* duplicated from nvmem-provider.h */
>> +typedef int (*nvmem_cell_post_process_t)(void *priv, const char *id, int index,
>> +					 unsigned int offset, void *buf, size_t bytes);
>> +
>>   struct nvmem_cell_info {
>>   	const char		*name;
>>   	unsigned int		offset;
>> @@ -26,6 +30,7 @@ struct nvmem_cell_info {
>>   	unsigned int		bit_offset;
>>   	unsigned int		nbits;
>>   	struct device_node	*np;
>> +	nvmem_cell_post_process_t post_process;
>>   };
>>     /**
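On the ordering of the two hooks discussed above: in the quoted read path, the per-cell hook runs before the device-level cell_post_process. A user-space sketch (mock types and hypothetical hook names, not the kernel code) makes that composition order observable:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*post_process_t)(void *buf, size_t bytes);

struct mock_nvmem {
	post_process_t cell_post_process;	/* device-level hook */
};

struct mock_cell {
	struct mock_nvmem *nvmem;
	post_process_t post_process;		/* per-cell hook */
	unsigned char raw;
};

/* Mirrors the quoted __nvmem_cell_read() hunk: the per-cell hook runs
 * first, then the device-wide one; an error from either aborts. */
static int mock_cell_read(struct mock_cell *cell, unsigned char *out)
{
	int rc;

	*out = cell->raw;
	if (cell->post_process) {
		rc = cell->post_process(out, 1);
		if (rc)
			return rc;
	}
	if (cell->nvmem->cell_post_process)
		return cell->nvmem->cell_post_process(out, 1);
	return 0;
}

/* Two hooks whose composition order is observable. */
static int double_it(void *buf, size_t bytes)
{
	(void)bytes;
	*(unsigned char *)buf *= 2;
	return 0;
}

static int add_one(void *buf, size_t bytes)
{
	(void)bytes;
	*(unsigned char *)buf += 1;
	return 0;
}
```

With both hooks set, a raw value of 3 reads back as (3 * 2) + 1 = 7, not (3 + 1) * 2 = 8, showing the per-cell hook is applied first.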
Michael Walle Aug. 30, 2022, 2:24 p.m. UTC | #14
On 2022-08-30 15:36, Srinivas Kandagatla wrote:
> On 25/08/2022 22:44, Michael Walle wrote:
>> NVMEM layouts are used to generate NVMEM cells during runtime. Think 
>> of
>> an EEPROM with a well-defined content. For now, the content can be
>> described by a device tree or a board file. But this only works if the
>> offsets and lengths are static and don't change. One could also argue
>> that putting the layout of the EEPROM in the device tree is the wrong
>> place. Instead, the device tree should just have a specific compatible
>> string.
>> 
>> Right now there are two use cases:
>>   (1) The NVMEM cell needs special processing. E.g. if it only specifies
>>       a base MAC address offset and you need to add an offset, or it
>>       needs to parse a MAC from ASCII format or some proprietary format.
>>       (Post processing of cells is added in a later commit).
>>   (2) u-boot environment parsing. The cells don't have a particular
>>       offset but it needs parsing the content to determine the offsets
>>       and length.
>> 
>> Signed-off-by: Michael Walle <michael@walle.cc>
>> ---
>>   drivers/nvmem/Kconfig          |  2 ++
>>   drivers/nvmem/Makefile         |  1 +
>>   drivers/nvmem/core.c           | 57 ++++++++++++++++++++++++++++++++++
>>   drivers/nvmem/layouts/Kconfig  |  5 +++
>>   drivers/nvmem/layouts/Makefile |  4 +++
>>   include/linux/nvmem-provider.h | 38 +++++++++++++++++++++++
>>   6 files changed, 107 insertions(+)
>>   create mode 100644 drivers/nvmem/layouts/Kconfig
>>   create mode 100644 drivers/nvmem/layouts/Makefile
> 
> update to ./Documentation/driver-api/nvmem.rst would help others.

Sure. Didn't know about that one.

>> diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
>> index bab8a29c9861..1416837b107b 100644
>> --- a/drivers/nvmem/Kconfig
>> +++ b/drivers/nvmem/Kconfig
>> @@ -357,4 +357,6 @@ config NVMEM_U_BOOT_ENV
>>     	  If compiled as module it will be called nvmem_u-boot-env.
>>   +source "drivers/nvmem/layouts/Kconfig"
>> +
>>   endif
>> diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
>> index 399f9972d45b..cd5a5baa2f3a 100644
>> --- a/drivers/nvmem/Makefile
>> +++ b/drivers/nvmem/Makefile
>> @@ -5,6 +5,7 @@
>>     obj-$(CONFIG_NVMEM)		+= nvmem_core.o
>>   nvmem_core-y			:= core.o
>> +obj-y				+= layouts/
>>     # Devices
>>   obj-$(CONFIG_NVMEM_BCM_OCOTP)	+= nvmem-bcm-ocotp.o
>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>> index 3dfd149374a8..5357fc378700 100644
>> --- a/drivers/nvmem/core.c
>> +++ b/drivers/nvmem/core.c
>> @@ -74,6 +74,9 @@ static LIST_HEAD(nvmem_lookup_list);
>>     static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
>>   +static DEFINE_SPINLOCK(nvmem_layout_lock);
>> +static LIST_HEAD(nvmem_layouts);
>> +
>>   static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
>>   			    void *val, size_t bytes)
>>   {
>> @@ -744,6 +747,56 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
>>   	return 0;
>>   }
>>   +int nvmem_register_layout(struct nvmem_layout *layout)
>> +{
>> +	spin_lock(&nvmem_layout_lock);
>> +	list_add(&layout->node, &nvmem_layouts);
>> +	spin_unlock(&nvmem_layout_lock);
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(nvmem_register_layout);
> 
> we should provide nvmem_unregister_layout() too, so that providers can
> add layouts in their respective drivers if they want to.

Actually, that was the idea: that you can have layouts outside of
layouts/. I also had a nvmem_unregister_layout() but removed it because
it was dead code. Will re-add it.

>> +
>> +static struct nvmem_layout *nvmem_get_compatible_layout(struct device_node *np)
>> +{
>> +	struct nvmem_layout *p, *ret = NULL;
>> +
>> +	spin_lock(&nvmem_layout_lock);
>> +
>> +	list_for_each_entry(p, &nvmem_layouts, node) {
>> +		if (of_match_node(p->of_match_table, np)) {
>> +			ret = p;
>> +			break;
>> +		}
>> +	}
>> +
>> +	spin_unlock(&nvmem_layout_lock);
>> +
>> +	return ret;
>> +}
>> +
>> +static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
>> +{
>> +	struct nvmem_layout *layout;
>> +
>> +	layout = nvmem_get_compatible_layout(nvmem->dev.of_node);
>> +	if (layout)
>> +		layout->add_cells(&nvmem->dev, nvmem, layout);
> 
> access to add_cells can crash here, as we did not check it before
> adding it to the list.
> Or we could relax the add_cells callback for use cases like imx-ocotp.

good catch, will use layout && layout->add_cells.

>> +
>> +	return 0;
>> +}
>> +
>> +const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
>> +					struct nvmem_layout *layout)
>> +{
>> +	const struct of_device_id *match;
>> +
>> +	match = of_match_node(layout->of_match_table, nvmem->dev.of_node);
>> +
>> +	return match ? match->data : NULL;
>> +}
>> +EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
>> +
>>   /**
>>    * nvmem_register() - Register a nvmem device for given nvmem_config.
>>    * Also creates a binary entry in /sys/bus/nvmem/devices/dev-name/nvmem
>> @@ -872,6 +925,10 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
>>   	if (rval)
>>   		goto err_remove_cells;
>>   +	rval = nvmem_add_cells_from_layout(nvmem);
>> +	if (rval)
>> +		goto err_remove_cells;
>> +
>>   	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>>     	return nvmem;
>> diff --git a/drivers/nvmem/layouts/Kconfig b/drivers/nvmem/layouts/Kconfig
>> new file mode 100644
>> index 000000000000..9ad3911d1605
>> --- /dev/null
>> +++ b/drivers/nvmem/layouts/Kconfig
>> @@ -0,0 +1,5 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +
>> +menu "Layout Types"
>> +
>> +endmenu
>> diff --git a/drivers/nvmem/layouts/Makefile b/drivers/nvmem/layouts/Makefile
>> new file mode 100644
>> index 000000000000..6fdb3c60a4fa
>> --- /dev/null
>> +++ b/drivers/nvmem/layouts/Makefile
>> @@ -0,0 +1,4 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +# Makefile for nvmem layouts.
>> +#
>> diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
>> index e710404959e7..323685841e9f 100644
>> --- a/include/linux/nvmem-provider.h
>> +++ b/include/linux/nvmem-provider.h
>> @@ -127,6 +127,28 @@ struct nvmem_cell_table {
>>   	struct list_head	node;
>>   };
>>   +/**
>> + * struct nvmem_layout - NVMEM layout definitions
>> + *
>> + * @name:		Layout name.
>> + * @of_match_table:	Open firmware match table.
>> + * @add_cells:		Will be called if a nvmem device is found which
>> + *			has this layout. The function will add layout
>> + *			specific cells with nvmem_add_one_cell().
>> + * @node:		List node.
>> + *
>> + * A nvmem device can hold a well defined structure which can just be
>> + * evaluated during runtime. For example a TLV list, or a list of "name=val"
>> + * pairs. A nvmem layout can parse the nvmem device and add appropriate
>> + * cells.
>> + */
>> +struct nvmem_layout {
>> +	const char *name;
>> +	const struct of_device_id *of_match_table;
> 
> looking at this, I think it's doable to convert the existing
> cell_post_process callback to layouts by adding a layout-specific
> callback here.

can you elaborate on that?

-michael
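The registry being discussed -- nvmem_register_layout(), the matching walk in nvmem_get_compatible_layout(), and the nvmem_unregister_layout() counterpart Michael agrees to re-add -- can be sketched in user space. Everything below is a mock: a plain string comparison stands in for of_match_node(), and the spinlock is omitted.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct mock_layout {
	const char *compatible;		/* stand-in for of_match_table */
	struct mock_layout *next;
};

/* Stand-in for the nvmem_layouts list; locking omitted for brevity. */
static struct mock_layout *layouts;

static void register_layout(struct mock_layout *l)
{
	l->next = layouts;
	layouts = l;
}

static void unregister_layout(struct mock_layout *l)
{
	struct mock_layout **p;

	for (p = &layouts; *p; p = &(*p)->next) {
		if (*p == l) {
			*p = l->next;
			return;
		}
	}
}

/* Mirrors nvmem_get_compatible_layout(): walk the list and return the
 * first layout matching the device node's compatible string. */
static struct mock_layout *get_compatible_layout(const char *compatible)
{
	struct mock_layout *p;

	for (p = layouts; p; p = p->next)
		if (!strcmp(p->compatible, compatible))
			return p;
	return NULL;
}
```

A provider registered with compatible = "kontron,sl28-vpd" (or "u-boot,env") would then pick up its layout at nvmem_register() time, and a module-style layout could cleanly unregister on removal.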
Srinivas Kandagatla Aug. 30, 2022, 2:43 p.m. UTC | #15
On 30/08/2022 15:24, Michael Walle wrote:
> On 2022-08-30 15:36, Srinivas Kandagatla wrote:
>> On 25/08/2022 22:44, Michael Walle wrote:
>>> NVMEM layouts are used to generate NVMEM cells during runtime. Think of
>>> an EEPROM with a well-defined content. For now, the content can be
>>> described by a device tree or a board file. But this only works if the
>>> offsets and lengths are static and don't change. One could also argue
>>> that putting the layout of the EEPROM in the device tree is the wrong
>>> place. Instead, the device tree should just have a specific compatible
>>> string.
>>>
>>> Right now there are two use cases:
>>>   (1) The NVMEM cell needs special processing. E.g. if it only specifies
>>>       a base MAC address offset and you need to add an offset, or it
>>>       needs to parse a MAC from ASCII format or some proprietary format.
>>>       (Post processing of cells is added in a later commit).
>>>   (2) u-boot environment parsing. The cells don't have a particular
>>>       offset but it needs parsing the content to determine the offsets
>>>       and length.
>>>
>>> Signed-off-by: Michael Walle <michael@walle.cc>
>>> ---
>>>   drivers/nvmem/Kconfig          |  2 ++
>>>   drivers/nvmem/Makefile         |  1 +
>>>   drivers/nvmem/core.c           | 57 ++++++++++++++++++++++++++++++++++
>>>   drivers/nvmem/layouts/Kconfig  |  5 +++
>>>   drivers/nvmem/layouts/Makefile |  4 +++
>>>   include/linux/nvmem-provider.h | 38 +++++++++++++++++++++++
>>>   6 files changed, 107 insertions(+)
>>>   create mode 100644 drivers/nvmem/layouts/Kconfig
>>>   create mode 100644 drivers/nvmem/layouts/Makefile
>>
>> update to ./Documentation/driver-api/nvmem.rst would help others.
> 
> Sure. Didn't know about that one.
> 
>>> diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
>>> index bab8a29c9861..1416837b107b 100644
>>> --- a/drivers/nvmem/Kconfig
>>> +++ b/drivers/nvmem/Kconfig
>>> @@ -357,4 +357,6 @@ config NVMEM_U_BOOT_ENV
>>>           If compiled as module it will be called nvmem_u-boot-env.
>>>   +source "drivers/nvmem/layouts/Kconfig"
>>> +
>>>   endif
>>> diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
>>> index 399f9972d45b..cd5a5baa2f3a 100644
>>> --- a/drivers/nvmem/Makefile
>>> +++ b/drivers/nvmem/Makefile
>>> @@ -5,6 +5,7 @@
>>>     obj-$(CONFIG_NVMEM)        += nvmem_core.o
>>>   nvmem_core-y            := core.o
>>> +obj-y                += layouts/
>>>     # Devices
>>>   obj-$(CONFIG_NVMEM_BCM_OCOTP)    += nvmem-bcm-ocotp.o
>>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>>> index 3dfd149374a8..5357fc378700 100644
>>> --- a/drivers/nvmem/core.c
>>> +++ b/drivers/nvmem/core.c
>>> @@ -74,6 +74,9 @@ static LIST_HEAD(nvmem_lookup_list);
>>>     static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
>>>   +static DEFINE_SPINLOCK(nvmem_layout_lock);
>>> +static LIST_HEAD(nvmem_layouts);
>>> +
>>>   static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned 
>>> int offset,
>>>                   void *val, size_t bytes)
>>>   {
>>> @@ -744,6 +747,56 @@ static int nvmem_add_cells_from_of(struct 
>>> nvmem_device *nvmem)
>>>       return 0;
>>>   }
>>>   +int nvmem_register_layout(struct nvmem_layout *layout)
>>> +{
>>> +    spin_lock(&nvmem_layout_lock);
>>> +    list_add(&layout->node, &nvmem_layouts);
>>> +    spin_unlock(&nvmem_layout_lock);
>>> +
>>> +    return 0;
>>> +}
>>> +EXPORT_SYMBOL_GPL(nvmem_register_layout);
>>
>> we should provide nvmem_unregister_layout() too, so that providers can
>> add layouts in their respective drivers if they want to.
> 
> Actually, that was the idea; that you can have layouts outside of layouts/.
> I also had a nvmem_unregister_layout() but removed it because it was dead
> code. Will re-add it again.
> 
>>> +
>>> +static struct nvmem_layout *nvmem_get_compatible_layout(struct 
>>> device_node *np)
>>> +{
>>> +    struct nvmem_layout *p, *ret = NULL;
>>> +
>>> +    spin_lock(&nvmem_layout_lock);
>>> +
>>> +    list_for_each_entry(p, &nvmem_layouts, node) {
>>> +        if (of_match_node(p->of_match_table, np)) {
>>> +            ret = p;
>>> +            break;
>>> +        }
>>> +    }
>>> +
>>> +    spin_unlock(&nvmem_layout_lock);
>>> +
>>> +    return ret;
>>> +}
>>> +
>>> +static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
>>> +{
>>> +    struct nvmem_layout *layout;
>>> +
>>> +    layout = nvmem_get_compatible_layout(nvmem->dev.of_node);
>>> +    if (layout)
>>> +        layout->add_cells(&nvmem->dev, nvmem, layout);
>>
>> access to add_cells can crash here, as we did not check it before
>> adding it to the list.
>> Or we could relax the add_cells callback for use cases like imx-ocotp.
> 
> good catch, will use layout && layout->add_cells.
> 
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
>>> +                    struct nvmem_layout *layout)
>>> +{
>>> +    const struct of_device_id *match;
>>> +
>>> +    match = of_match_node(layout->of_match_table, nvmem->dev.of_node);
>>> +
>>> +    return match ? match->data : NULL;
>>> +}
>>> +EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
>>> +
>>>   /**
>>>    * nvmem_register() - Register a nvmem device for given nvmem_config.
>>>    * Also creates a binary entry in 
>>> /sys/bus/nvmem/devices/dev-name/nvmem
>>> @@ -872,6 +925,10 @@ struct nvmem_device *nvmem_register(const struct 
>>> nvmem_config *config)
>>>       if (rval)
>>>           goto err_remove_cells;
>>>   +    rval = nvmem_add_cells_from_layout(nvmem);
>>> +    if (rval)
>>> +        goto err_remove_cells;
>>> +
>>>       blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>>>         return nvmem;
>>> diff --git a/drivers/nvmem/layouts/Kconfig 
>>> b/drivers/nvmem/layouts/Kconfig
>>> new file mode 100644
>>> index 000000000000..9ad3911d1605
>>> --- /dev/null
>>> +++ b/drivers/nvmem/layouts/Kconfig
>>> @@ -0,0 +1,5 @@
>>> +# SPDX-License-Identifier: GPL-2.0
>>> +
>>> +menu "Layout Types"
>>> +
>>> +endmenu
>>> diff --git a/drivers/nvmem/layouts/Makefile 
>>> b/drivers/nvmem/layouts/Makefile
>>> new file mode 100644
>>> index 000000000000..6fdb3c60a4fa
>>> --- /dev/null
>>> +++ b/drivers/nvmem/layouts/Makefile
>>> @@ -0,0 +1,4 @@
>>> +# SPDX-License-Identifier: GPL-2.0
>>> +#
>>> +# Makefile for nvmem layouts.
>>> +#
>>> diff --git a/include/linux/nvmem-provider.h 
>>> b/include/linux/nvmem-provider.h
>>> index e710404959e7..323685841e9f 100644
>>> --- a/include/linux/nvmem-provider.h
>>> +++ b/include/linux/nvmem-provider.h
>>> @@ -127,6 +127,28 @@ struct nvmem_cell_table {
>>>       struct list_head    node;
>>>   };
>>>   +/**
>>> + * struct nvmem_layout - NVMEM layout definitions
>>> + *
>>> + * @name:        Layout name.
>>> + * @of_match_table:    Open firmware match table.
>>> + * @add_cells:        Will be called if a nvmem device is found which
>>> + *            has this layout. The function will add layout
>>> + *            specific cells with nvmem_add_one_cell().
>>> + * @node:        List node.
>>> + *
>>> + * A nvmem device can hold a well defined structure which can just be
>>> + * evaluated during runtime. For example a TLV list, or a list of 
>>> "name=val"
>>> + * pairs. A nvmem layout can parse the nvmem device and add appropriate
>>> + * cells.
>>> + */
>>> +struct nvmem_layout {
>>> +    const char *name;
>>> +    const struct of_device_id *of_match_table;
>>
>> looking at this, I think it's doable to convert the existing
>> cell_post_process callback to layouts by adding a layout specific
>> callback here.
> 
> can you elaborate on that?

If we relax add_cells, add nvmem_unregister_layout(), and update struct 
nvmem_layout to include a post_process callback, like:

struct nvmem_layout {
	const char *name;
	const struct of_device_id *of_match_table;
	int (*add_cells)(struct nvmem_device *nvmem, struct nvmem_layout *layout);
	struct list_head node;
	/* default callback for every cell */
	nvmem_cell_post_process_t post_process;
};

then we can move imx-ocotp to this layout style without an add_cells 
callback, and finally get rid of the cell_post_process callback from both 
nvmem_config and nvmem_device.

If a layout-specific post_process callback is available and the cell does 
not have a callback set, then we can either update the cell's post_process 
callback with this one or invoke the layout-specific callback directly.

Does that make sense?


--srini


> 
> -michael
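As an illustration of the proposal above -- a user-space mock with invented names, not the kernel structures -- a layout-level default hook plus an optional per-cell override would resolve as follows:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*post_process_t)(void *buf, size_t bytes);

struct mock_layout {
	const char *name;
	post_process_t post_process;	/* default callback for every cell */
};

struct mock_cell {
	post_process_t post_process;	/* per-cell hook, may be NULL */
	const struct mock_layout *layout;
};

/* Resolution order under the proposal: a per-cell hook wins, otherwise
 * the layout default applies, otherwise there is no post processing. */
static post_process_t resolve_hook(const struct mock_cell *cell)
{
	if (cell->post_process)
		return cell->post_process;
	if (cell->layout)
		return cell->layout->post_process;
	return NULL;
}

/* Dummy hooks so the resolution is observable by address. */
static int layout_hook(void *buf, size_t bytes)
{
	(void)buf; (void)bytes;
	return 0;
}

static int cell_hook(void *buf, size_t bytes)
{
	(void)buf; (void)bytes;
	return 0;
}
```

Under this scheme a driver like imx-ocotp would register a layout that sets only post_process, with no add_cells callback at all.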
Michael Walle Aug. 30, 2022, 3:02 p.m. UTC | #16
On 2022-08-30 16:43, Srinivas Kandagatla wrote:

>>>> diff --git a/drivers/nvmem/layouts/Makefile 
>>>> b/drivers/nvmem/layouts/Makefile
>>>> new file mode 100644
>>>> index 000000000000..6fdb3c60a4fa
>>>> --- /dev/null
>>>> +++ b/drivers/nvmem/layouts/Makefile
>>>> @@ -0,0 +1,4 @@
>>>> +# SPDX-License-Identifier: GPL-2.0
>>>> +#
>>>> +# Makefile for nvmem layouts.
>>>> +#
>>>> diff --git a/include/linux/nvmem-provider.h 
>>>> b/include/linux/nvmem-provider.h
>>>> index e710404959e7..323685841e9f 100644
>>>> --- a/include/linux/nvmem-provider.h
>>>> +++ b/include/linux/nvmem-provider.h
>>>> @@ -127,6 +127,28 @@ struct nvmem_cell_table {
>>>>       struct list_head    node;
>>>>   };
>>>>   +/**
>>>> + * struct nvmem_layout - NVMEM layout definitions
>>>> + *
>>>> + * @name:        Layout name.
>>>> + * @of_match_table:    Open firmware match table.
>>>> + * @add_cells:        Will be called if a nvmem device is found 
>>>> which
>>>> + *            has this layout. The function will add layout
>>>> + *            specific cells with nvmem_add_one_cell().
>>>> + * @node:        List node.
>>>> + *
>>>> + * A nvmem device can hold a well defined structure which can just 
>>>> be
>>>> + * evaluated during runtime. For example a TLV list, or a list of 
>>>> "name=val"
>>>> + * pairs. A nvmem layout can parse the nvmem device and add 
>>>> appropriate
>>>> + * cells.
>>>> + */
>>>> +struct nvmem_layout {
>>>> +    const char *name;
>>>> +    const struct of_device_id *of_match_table;
>>> 
>>> looking at this, I think its doable to convert the existing
>>> cell_post_process callback to layouts by adding a layout specific
>>> callback here.
>> 
>> can you elaborate on that?
> 
> If we relax add_cells + add nvmem_unregister_layout() and update
> struct nvmem_layout to include post_process callback like
> 
> struct nvmem_layout {
> 	const char *name;
> 	const struct of_device_id *of_match_table;
> 	int (*add_cells)(struct nvmem_device *nvmem, struct nvmem_layout 
> *layout);
> 	struct list_head node;
> 	/* default callback for every cell */
> 	nvmem_cell_post_process_t post_process;
> };
> 
> then we can move imx-ocotp to this layout style without add_cell
> callback, and finally get rid of cell_process_callback from both
> nvmem_config and nvmem_device.
> 
> If a layout-specific post_process callback is available and the cell does
> not have a callback set, then we can either update the cell's post_process
> callback with this one or invoke the layout-specific callback directly.
> 
> does that make sense?

Yes, I get what you mean. BUT I'm not so sure; it mixes different
things together. Layouts will add cells, analogous to
nvmem_add_cells_from_of() or nvmem_add_cells_from_table(). With
the hook above, the layout mechanism is abused to add post
processing to cells added by other means.

What is then the difference to the driver having that "global"
post process hook?

The correct place to add the per-cell hook in this case would be
nvmem_add_cells_from_of().

-michael
Srinivas Kandagatla Aug. 30, 2022, 3:23 p.m. UTC | #17
On 30/08/2022 16:02, Michael Walle wrote:
> On 2022-08-30 16:43, Srinivas Kandagatla wrote:
> 
>>>>> diff --git a/drivers/nvmem/layouts/Makefile 
>>>>> b/drivers/nvmem/layouts/Makefile
>>>>> new file mode 100644
>>>>> index 000000000000..6fdb3c60a4fa
>>>>> --- /dev/null
>>>>> +++ b/drivers/nvmem/layouts/Makefile
>>>>> @@ -0,0 +1,4 @@
>>>>> +# SPDX-License-Identifier: GPL-2.0
>>>>> +#
>>>>> +# Makefile for nvmem layouts.
>>>>> +#
>>>>> diff --git a/include/linux/nvmem-provider.h 
>>>>> b/include/linux/nvmem-provider.h
>>>>> index e710404959e7..323685841e9f 100644
>>>>> --- a/include/linux/nvmem-provider.h
>>>>> +++ b/include/linux/nvmem-provider.h
>>>>> @@ -127,6 +127,28 @@ struct nvmem_cell_table {
>>>>>       struct list_head    node;
>>>>>   };
>>>>>   +/**
>>>>> + * struct nvmem_layout - NVMEM layout definitions
>>>>> + *
>>>>> + * @name:        Layout name.
>>>>> + * @of_match_table:    Open firmware match table.
>>>>> + * @add_cells:        Will be called if a nvmem device is found which
>>>>> + *            has this layout. The function will add layout
>>>>> + *            specific cells with nvmem_add_one_cell().
>>>>> + * @node:        List node.
>>>>> + *
>>>>> + * A nvmem device can hold a well defined structure which can just be
>>>>> + * evaluated during runtime. For example a TLV list, or a list of 
>>>>> "name=val"
>>>>> + * pairs. A nvmem layout can parse the nvmem device and add 
>>>>> appropriate
>>>>> + * cells.
>>>>> + */
>>>>> +struct nvmem_layout {
>>>>> +    const char *name;
>>>>> +    const struct of_device_id *of_match_table;
>>>>
>>>> looking at this, I think its doable to convert the existing
>>>> cell_post_process callback to layouts by adding a layout specific
>>>> callback here.
>>>
>>> can you elaborate on that?
>>
>> If we relax add_cells + add nvmem_unregister_layout() and update
>> struct nvmem_layout to include post_process callback like
>>
>> struct nvmem_layout {
>>     const char *name;
>>     const struct of_device_id *of_match_table;
>>     int (*add_cells)(struct nvmem_device *nvmem, struct nvmem_layout 
>> *layout);
>>     struct list_head node;
>>     /* default callback for every cell */
>>     nvmem_cell_post_process_t post_process;
>> };
>>
>> then we can move imx-ocotp to this layout style without add_cell
>> callback, and finally get rid of cell_process_callback from both
>> nvmem_config and nvmem_device.
>>
>> If a layout-specific post_process callback is available and the cell does
>> not have a callback set, then we can either update the cell's post_process
>> callback with this one or invoke the layout-specific callback directly.
>>
>> does that make sense?
> 
> Yes I get what you mean. BUT I'm not so sure; it mixes different
> things together. Layouts will add cells, analogous to
> nvmem_add_cells_from_of() or nvmem_add_cells_from_table(). With
> the hook above, the layout mechanism is abused to add post
> processing to cells added by other means.

We are still defining what layout exactly means w.r.t. nvmem :-)

> 

There are two aspects to this as far as the nvmem core is concerned:

1> parse and add cells based on some provider-specific algorithm/structure.
2> post-process cell data before the user can see it.

In some cases we need both 1 and 2, while in other cases we just need 1 or 2.

Having a unified interface would help with maintenance and removing 
duplication.

> What is then the difference to the driver having that "global"
> post process hook?

W.r.t. post processing there should be no difference.

A cell can have no post processing, a default post processing, or a 
specific one, depending on its configuration.

> 
> The correct place to add the per-cell hook in this case would be
> nvmem_add_cells_from_of().

Yes, that is the place where it should go. We have to work on the 
details, but if the provider is associated with a layout then this 
should be doable.


--srini
> 
> -michael
Krzysztof Kozlowski Aug. 31, 2022, 7:48 a.m. UTC | #18
On 28/08/2022 18:05, Rafał Miłecki wrote:
> On 25.08.2022 23:44, Michael Walle wrote:
>> This is now the third attempt to fetch the MAC addresses from the VPD
>> for the Kontron sl28 boards. Previous discussions can be found here:
>> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>>
>>
>> NVMEM cells are typically added by board code or by the devicetree. But
>> as the cells get more complex, there is (valid) push back from the
>> devicetree maintainers to not put that handling in the devicetree.
> 
> I dropped the ball waiting for Rob's response in the
> [PATCH 0/2] dt-bindings: nvmem: support describing cells
> https://lore.kernel.org/linux-arm-kernel/0b7b8f7ea6569f79524aea1a3d783665@walle.cc/T/
> 
> Before we go any further can we have a clear answer from Rob (or
> Krzysztof now too?):
> 
> 
> Is there any point in having bindings like:
> 
> compatible = "mac-address";
> 
> for NVMEM cell nodes? So systems (Linux, U-Boot) can handle them in a
> more generic way?

I think Rob is already on this thread, but I wonder how that would work
for U-Boot? You might have multiple cards, so how would this matching
work? IOW, what problem would it solve compared to the existing
solution (an alias to the ethernet device with a mac-address field)?

Best regards,
Krzysztof
Michael Walle Sept. 1, 2022, 4:26 p.m. UTC | #19
Hi netdev maintainers,

On 2022-08-25 23:44, Michael Walle wrote:
> Add a helper to add an offset to an ethernet address. This comes in
> handy if you have a base ethernet address for multiple interfaces.
> 
> Signed-off-by: Michael Walle <michael@walle.cc>

Would it be possible to get an Ack for this patch, so I don't have
to repost this large (and still growing) series to netdev every time?

I guess it would be ok to have this go through another tree?

-michael
Andrew Lunn Sept. 1, 2022, 4:31 p.m. UTC | #20
On Thu, Sep 01, 2022 at 06:26:39PM +0200, Michael Walle wrote:
> Hi netdev maintainers,
> 
> On 2022-08-25 23:44, Michael Walle wrote:
> > Add a helper to add an offset to an ethernet address. This comes in handy
> > if you have a base ethernet address for multiple interfaces.
> > 
> > Signed-off-by: Michael Walle <michael@walle.cc>
> 
> Would it be possible to get an Ack for this patch, so I don't have
> to repost this large (and still growing) series to netdev every time?

Looks O.K. to me

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Jakub Kicinski Sept. 1, 2022, 7:54 p.m. UTC | #21
On Thu, 01 Sep 2022 18:26:39 +0200 Michael Walle wrote:
> On 2022-08-25 23:44, Michael Walle wrote:
> > Add a helper to add an offset to an ethernet address. This comes in
> > handy if you have a base ethernet address for multiple interfaces.
> > 
> > Signed-off-by: Michael Walle <michael@walle.cc>  
> 
> Would it be possible to get an Ack for this patch, so I don't have
> to repost this large (and still growing) series to netdev every time?
> 
> I guess it would be ok to have this go through another tree?

Andrew's ack is strong enough, but in case there's any doubt:

Acked-by: Jakub Kicinski <kuba@kernel.org>