Message ID: 20220901221857.2600340-1-michael@walle.cc
Series: nvmem: core: introduce NVMEM layouts
Hi Michael, Srinivas,

+ Thomas and Robert

michael@walle.cc wrote on Fri, 2 Sep 2022 00:18:37 +0200:

> This is now the third attempt to fetch the MAC addresses from the VPD
> for the Kontron sl28 boards. Previous discussions can be found here:
> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>
> NVMEM cells are typically added by board code or by the devicetree. But
> as the cells get more complex, there is (valid) push back from the
> devicetree maintainers to not put that handling in the devicetree.
>
> Therefore, introduce NVMEM layouts. They operate on the NVMEM device
> and can add cells during runtime. That way it is possible to add more
> complex cells than is possible right now with the offset/length/bits
> description in the device tree. For example, you can have post
> processing for individual cells (think of endian swapping, or ethernet
> offset handling).
>
> The imx-ocotp driver is the only user of the global post processing
> hook, convert it to nvmem layouts and drop the global post processing
> hook. Please note that this change is only compile-time tested.

These layouts are an excellent idea. I actually have a new use case for
them. Modern Ethernet switches which follow the ONIE standard [1]
contain an nvmem device with a standardized type-length-value array
carrying a lot of manufacturing information, including MAC addresses.
There is no "static" pattern there, and anyway there are so many
possible entries that it would be very tedious to list all of them in
the bindings, as each manufacturer chooses what it wants to export on
each of its devices (although reading the data sequentially and
extracting the cells is rather straightforward). Moreover, the
specification [1] does not define any storage device type, so it can be
e.g. an MTD device or an EEPROM. Having an nvmem device provider
separated from the nvmem cells provider makes complete sense; the
"layout" drivers idea proposed by Michael seems to be a perfect fit
(see the rough sketch at the end of this mail).

Srinivas, can you give us an update on what you think about this series
(not a commitment, just how you feel about it overall)?

Michael, is there a v3 in preparation? I'll try to write something on
top of your v2 otherwise.

> You can also have cells which have no static offset, like the ones in
> a u-boot environment. The last patches will convert the current u-boot
> environment driver to a NVMEM layout, lifting the restriction that it
> only works with mtd devices. But as it will change the required
> compatible strings, it is marked as RFC for now. It also needs to have
> its device tree schema updated, which is left out here. These two
> patches are not expected to be applied, but rather to show another
> example of how to use the layouts.

Actually I think these two patches make complete sense. Right now one
can only use the u-boot-env cells if the environment is stored in an
mtd device; of course this covers many cases, but not all of them, and
it would be really nice to have this first layout example merged, not
only on the mailing list.

> For now, the layouts are selected by a specific compatible string in a
> device tree. E.g. the VPD on the kontron sl28 does (within a SPI flash
> node):
>   compatible = "kontron,sl28-vpd", "user-otp";
> or if you'd use the u-boot environment (within an MTD partition):
>   compatible = "u-boot,env", "nvmem";
>
> The "user-otp" (or "nvmem") will lead to a NVMEM device, the
> "kontron,sl28-vpd" (or "u-boot,env") will then apply the specific
> layout on top of the NVMEM device.

Thanks,
Miquèl
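P.S.: Here is the rough sketch mentioned above of what an ONIE layout
could look like with this series; the "onie,tlv-layout" compatible and
the offsets are purely hypothetical, nothing is defined yet:

	partition@0 {
		reg = <0x0 0x10000>;
		label = "onie-eeprom";
		compatible = "onie,tlv-layout", "nvmem";
	};

The layout driver would walk the TLV array at runtime and expose each
entry it finds (serial number, base MAC address, ...) as a cell,
without any cell subnode in the device tree.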
On 21/09/2022 10:58, Miquel Raynal wrote:
>
> Srinivas, can you give us an update on what you think about this
> series (not a commitment, just how you feel about it overall)?

Overall this is going in the right direction. There are a few
bindings-related comments; once those are sorted out it should be good
to go.

From the NVMEM side I am happy with this feature, which has been a
long-pending one.

We had a few discussions on the ONIE standard before; layouts would fit
in nicely.

--srini

> Michael, is there a v3 in preparation? I'll try to write something on
> top of your v2 otherwise.
Hi Srinivas,

Thanks for the quick feedback.

srinivas.kandagatla@linaro.org wrote on Thu, 22 Sep 2022 22:22:17 +0100:

> On 21/09/2022 10:58, Miquel Raynal wrote:
> >
> > Srinivas, can you give us an update on what you think about this
> > series (not a commitment, just how you feel about it overall)?
>
> Overall this is going in the right direction. There are a few
> bindings-related comments; once those are sorted out it should be
> good to go.

Ok, let's tackle those.

> From the NVMEM side I am happy with this feature, which has been a
> long-pending one.
>
> We had a few discussions on the ONIE standard before; layouts would
> fit in nicely.

I agree they would.

Thanks,
Miquèl
Hi Michael,

I have a few additional questions regarding the bindings.

michael@walle.cc wrote on Fri, 2 Sep 2022 00:18:37 +0200:

> This is now the third attempt to fetch the MAC addresses from the VPD
> for the Kontron sl28 boards. Previous discussions can be found here:
> https://lore.kernel.org/lkml/20211228142549.1275412-1-michael@walle.cc/
>
> [...]
>
> For now, the layouts are selected by a specific compatible string in a
> device tree. E.g. the VPD on the kontron sl28 does (within a SPI flash
> node):
>   compatible = "kontron,sl28-vpd", "user-otp";
> or if you'd use the u-boot environment (within an MTD partition):
>   compatible = "u-boot,env", "nvmem";
>
> The "user-otp" (or "nvmem") will lead to a NVMEM device, the
> "kontron,sl28-vpd" (or "u-boot,env") will then apply the specific
> layout on top of the NVMEM device.

So if I understand correctly, there should be:
- one DT node defining the storage medium (eeprom/mtd/whatever),
- another DT node defining the nvmem device with two compatibles, the
  "nvmem" (or "user-otp") one and the layout one.

Is this correct? Actually I was a bit surprised because, generally
speaking, DT maintainers (rightfully) do not want to describe how we
use devices; the nvmem abstraction looks like a Linux thing when on top
of mtd devices for instance, so I just wanted to confirm this point.

Then, as we have an nvmem device described in the DT, why not just
point at the nvmem device from the cell consumer, rather than still
having the need to define all the cells that the nvmem device will
produce in the DT?

Maybe an example to show what I mean. Here is the current way:

	nvmem_provider: nvmem-provider {
		properties;

		mycell: my_cell {
			[properties;]
		};
	};

And we point to a cell with:

	nvmem-cells = <&mycell>;

But, as for the tlv tables, there are many cells that will be produced,
and the driver may anyway just ask for the cell by name (eg. performing
a lookup of the "mac-address" cell name), so why bother describing all
the cells in the DT? Instead, something like:

	nvmem-cells-providers = <&nvmem_provider>;

What do you think?
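To picture the consumer side, an ethernet node could perhaps become
something like this (the "nvmem-cells-providers" property is invented
for the sake of the example, nothing like it exists today):

	&eth0 {
		nvmem-cells-providers = <&nvmem_provider>;
		/* the driver would then look up the "mac-address"
		 * cell by name at the provider */
	};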
Maybe for the mac addresses this is a bit limiting as, in practice, we
often have base mac addresses available, and using:

	nvmem-cells = <&mycell INDEX>;

allows us to dynamically create many different mac addresses. But I
wonder if the approach would be interesting for other cell types. Just
an open question.

Thanks,
Miquèl
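P.S.: For reference, the indexed variant would look something like this
(node names and values made up, and assuming the cell advertises one
argument through its "#...-cells" property):

	base_mac: mac-address@0 {
		reg = <0x0 0x6>;
		#nvmem-cell-cells = <1>;
	};

	&eth1 {
		nvmem-cells = <&base_mac 1>;
		nvmem-cell-names = "mac-address";
	};

Here the cell's post processing would add the index to the base mac
address, so every port gets its own address derived from a single
stored cell.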
Hi,

On 2022-09-23 17:47, Miquel Raynal wrote:

> I have a few additional questions regarding the bindings.
>
> michael@walle.cc wrote on Fri, 2 Sep 2022 00:18:37 +0200:
>
>> [...]
>>
>> For now, the layouts are selected by a specific compatible string in
>> a device tree. E.g. the VPD on the kontron sl28 does (within a SPI
>> flash node):
>>   compatible = "kontron,sl28-vpd", "user-otp";
>> or if you'd use the u-boot environment (within an MTD partition):
>>   compatible = "u-boot,env", "nvmem";
>>
>> The "user-otp" (or "nvmem") will lead to a NVMEM device, the
>> "kontron,sl28-vpd" (or "u-boot,env") will then apply the specific
>> layout on top of the NVMEM device.
>
> So if I understand correctly, there should be:
> - one DT node defining the storage medium (eeprom/mtd/whatever),
> - another DT node defining the nvmem device with two compatibles, the
>   "nvmem" (or "user-otp") one and the layout one.
>
> Is this correct? Actually I was a bit surprised because, generally
> speaking, DT maintainers (rightfully) do not want to describe how we
> use devices; the nvmem abstraction looks like a Linux thing when on
> top of mtd devices for instance, so I just wanted to confirm this
> point.

What do you mean by two nodes? Two separate ones, or one being a
subnode of the other? There is only one (storage) node, with the nvmem
cells as subnodes.

The two compatibles aren't strictly needed. But they simplify the
drivers in linux greatly. Otherwise the storage driver would need to
know for which compatibles it has to register a nvmem device. E.g. MTD
devices determine that by the "nvmem" compatible, and the OTP driver
will probe by "{user,factory}-otp". If you'd only have one compatible,
the storage driver would need a list of all the layouts so it can
register a nvmem device. But also from a device tree POV this makes
sense IMHO, because the second compatible is a more specific one.
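For example, within an MTD partition this looks roughly like the
following (offsets and label made up):

	partition@100000 {
		reg = <0x100000 0x10000>;
		label = "u-boot-env";
		compatible = "u-boot,env", "nvmem";
	};

The MTD core registers the nvmem device because of the generic "nvmem"
compatible; the u-boot-env layout is then applied on top because of the
more specific "u-boot,env" one.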
With only the more generic compatible you just get a nvmem device
without any cells - or only the cells described in the device tree.

Regarding "describe how the devices are used": then there shouldn't be
nvmem (cell) bindings at all, because you are actually describing how
you are using the nvmem provider. So IMHO, having for example the
"kontron,sl28-vpd" compatible actually fits better than having a nvmem
provider compatible with cell subnodes.

> Then, as we have an nvmem device described in the DT, why not just
> point at the nvmem device from the cell consumer, rather than still
> having the need to define all the cells that the nvmem device will
> produce in the DT?

See also
https://lore.kernel.org/linux-devicetree/4bf16e18-1591-8bc9-7c46-649391de3761@linaro.org/

> Maybe an example to show what I mean. Here is the current way:
>
> 	nvmem_provider: nvmem-provider {
> 		properties;
>
> 		mycell: my_cell {
> 			[properties;]
> 		};
> 	};
>
> And we point to a cell with:
>
> 	nvmem-cells = <&mycell>;
>
> But, as for the tlv tables, there are many cells that will be
> produced, and the driver may anyway just ask for the cell by name
> (eg. performing a lookup of the "mac-address" cell name), so why
> bother describing all the cells in the DT? Instead, something like:
>
> 	nvmem-cells-providers = <&nvmem_provider>;
>
> What do you think?

Ok, you even go one step further, remove the argument of the phandle,
and propose to use the nvmem-cell-name instead, right? That might work
with simple cells created by a layout. But what if there are two
consumers with different names for the same cell? Consumer bindings
might already be present, e.g. the ethernet bindings will use
"mac-address". What if there is another binding which wants to use that
cell but doesn't name it "mac-address"? IMHO, to reference a nvmem cell
you shouldn't rely on the consumer.

Also keep in mind that the number of arguments is fixed and given by
the "#.*-cells" property found on the target node. Therefore, that
won't work if you have cells where one has an argument and another
doesn't.

> Maybe for the mac addresses this is a bit limiting as, in practice,
> we often have base mac addresses available, and using:
>
> 	nvmem-cells = <&mycell INDEX>;
>
> allows us to dynamically create many different mac addresses. But I
> wonder if the approach would be interesting for other cell types.
> Just an open question.

So how would your idea work with that? Maybe we could support both? But
again, I'm not sure if it is a good idea to mix the consumer with the
provider.

-michael
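P.S.: A contrived example of the naming problem (node names and the
"nvmem-cells-providers" property are made up): with a
lookup-by-consumer-name scheme, the same cell may be requested under
different names by different bindings:

	&eth0 {
		nvmem-cells-providers = <&nvmem_provider>;
		/* this binding asks for "mac-address" */
	};

	&other_device {
		nvmem-cells-providers = <&nvmem_provider>;
		/* this binding asks for "base-mac-address" */
	};

The layout would then have to know every name under which its cells
might be requested, which is why relying on the consumer side for the
cell identity seems fragile.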