[RFC,0/2] Support for TI Page-based Address Translator

Message ID 20190607193523.25700-1-afd@ti.com

Message

Andrew Davis June 7, 2019, 7:35 p.m. UTC
Hello all,

So I've got a new IP on our new SoC I'm looking to make use of and would
like some help figuring out what framework best matches its function. The
IP is called a "Page-based Address Translator" or PAT. A PAT instance
(there are 5 of these things on our J721e device[0]) is basically a
really simple IOMMU sitting on the interconnect between the device bus
and what is effectively our northbridge called
MSMC (DRAM/SRAM/L3-Cache/Coherency controller).

Simplified it looks about like this:

         CPUs
          |
DRAM --- MSMC --- SRAM/L3
          |
        NAVSS - (PATs)
          |
  --- Device Bus ---------
 |      |      |          |
Device  Device  Device   etc..

Each PAT has a window in high memory (around the 0x48_0000_0000 area);
any transaction with an address targeting that window is routed into the
PAT. The PAT then does a simple calculation based on how far into the
window the address is and the current page size, looks up the
translation in an internally held table, then sends the transaction back
out on the interconnect with a new address. Usually this address points
somewhere in DRAM, but it can be some other device or even back into a
PAT (I'm not sure there is a valid use-case for that, it's just a point
of interest).
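
To illustrate, the translation is conceptually something like the sketch
below (a rough illustration only, the real lookup is done in hardware
from a table held in internal SRAM):

  #include <linux/types.h>

  /* Rough sketch of the PAT translation, for illustration only */
  u64 pat_translate(u64 addr, u64 window_base, u32 page_size,
                    const u64 *table)
  {
          u64 offset  = addr - window_base;  /* distance into the PAT window */
          u64 index   = offset / page_size;  /* which translation table entry */
          u64 in_page = offset % page_size;  /* offset within that page */

          /* outgoing address sent back onto the interconnect */
          return table[index] + in_page;
  }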

My gut reaction is that this is an IOMMU which belongs in the IOMMU
subsystem. But there are a couple of oddities that make me less sure it
is really suitable for the IOMMU framework. First, it doesn't sit in
front of any particular devices, it sits in front of *all* devices; this
means we would have every device claim it as an IOMMU parent, even
though many devices also have a traditional IOMMU connected. Second,
there is only a limited window of address space per PAT, so we will get
fragmentation and allocation failures on occasion; in this way it looks
to me more like an AGP GART. Third, the window is in high memory, so
unlike some IOMMU devices which can be used to allow DMA to high-mem
from low-mem-only devices, the PAT can't be used for that. Lastly, it
doesn't provide any isolation: if an access does not target the PAT
window, the PAT is simply not used (that is not to say we don't have
isolation, just that it is taken care of by other parts of the
interconnect).

This means, to me, that PAT has one main purpose: making
physically-contiguous views of scattered pages in system memory for DMA.
But it does that really well: the whole translation table is held in a
PAT-internal SRAM, giving 1-bus-cycle latency at full bus bandwidth.

So what are my options here? Is IOMMU the right way to go or not?

Looking around the kernel I also see the char dev AGP/GART interface,
which looks like a good fit, but it also looks quite dated and is, my
guess, deprecated at this point. Moving right along..

Another thing I saw is that we have upstream support for the DMM
device[1] available in some OMAPx/AM57x SoCs. I'm a little more familiar
with this device. The DMM is a bundle of IPs and in fact one of them is
called "PAT", and it even does basically the same thing this incarnation
of "PAT" does. Its upstream integration design is a bit questionable
unfortunately: the DMM support was integrated into the OMAPDRM display
driver, which does make some sense given its support for rotation (using
the TILER IP contained in the DMM). The issue with this was that the
DMM/TILER/PAT IP was not part of our display IP, but instead sat out at
the end of the shared device bus, inside the external memory controller.
Like this new PAT, this meant that any IP could make use of it, but only
the display framework could actually provide buffers backed by it. This
meant, for instance, that if we wanted to decode some video buffer using
our video decoder we would have to allocate from the DRM framework and
then pass that over to the V4L2 system. This doesn't make much sense and
required user-space to know about this odd situation and allocate from
the right spot, or else use up valuable CMA space or waste memory with
dedicated carveouts.

Another idea would be to have this as a special central allocator
(exposed through DMA-BUF heaps[2] or ION) that would give out normal
system memory as a DMA-BUF but remap it with PAT if a device that only
supports contiguous memory tries to attach/map that DMA-BUF.

One last option would be to allow user-space to choose to make the buffer
contiguous when it needs to. That's what the driver in this series allows.
We expose a remapping device: user-space passes it a non-contiguous
DMA-BUF handle and it passes a contiguous one back. Simple as that.
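
As a rough illustration of the intended user-space flow (the device node
name, ioctl number, and struct below are just placeholders for this
example, see include/uapi/linux/ti-pat.h for the real interface):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* Placeholder uapi, for illustration only */
  struct ti_pat_export_data {
          int in_fd;     /* non-contiguous DMA-BUF from any exporter */
          int out_fd;    /* returned contiguous DMA-BUF in the PAT window */
  };
  #define TI_PAT_IOC_EXPORT _IOWR('P', 0, struct ti_pat_export_data)

  int ti_pat_remap(int in_fd)
  {
          struct ti_pat_export_data req = { .in_fd = in_fd };
          int pat = open("/dev/ti-pat", O_RDWR);   /* placeholder node name */

          if (pat < 0)
                  return -1;
          if (ioctl(pat, TI_PAT_IOC_EXPORT, &req) < 0)
                  req.out_fd = -1;
          close(pat);
          return req.out_fd;
  }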

So how do we use this? Let's take Android for example: we don't know at
allocation time if a rendering buffer will end up going back into the GPU
for further processing, or if it will be consumed directly by the display.
This is a problem for us as our GPU can work with non-contiguous buffers
but our display cannot, so any buffer that could possibly go to the
display at some point currently needs to be allocated as contiguous from
the start, which leads to a lot of unneeded use of carveout/CMA memory.
With this driver, on the other hand, we allocate regular non-contiguous
system memory (again using DMA-BUF heaps, but ION could work here too),
then only when a buffer is about to be sent to the display do we pass the
handle of this DMA-BUF to our driver here and take the handle it gives
back and pass that to the display instead.
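
In code that flow is roughly the following (a sketch that assumes the
DMA-BUF heaps uapi from [2] roughly as eventually merged, i.e.
linux/dma-heap.h, plus the illustrative ti_pat_remap() helper above):

  #include <stddef.h>
  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/dma-heap.h>

  /* Allocate a (likely non-contiguous) buffer from the system heap */
  int alloc_system_heap_buffer(size_t len)
  {
          struct dma_heap_allocation_data alloc = {
                  .len = len,
                  .fd_flags = O_RDWR | O_CLOEXEC,
          };
          int heap, ret = -1;

          heap = open("/dev/dma_heap/system", O_RDWR);
          if (heap < 0)
                  return -1;
          if (ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) == 0)
                  ret = alloc.fd;
          close(heap);
          return ret;
  }

  /*
   * Render into the buffer as usual (the GPU is fine with it being
   * non-contiguous), then only when handing it to the display do the
   * remap, e.g.:
   *
   *   int contig_fd = ti_pat_remap(fd);   // illustrative helper from above
   *
   * and pass contig_fd to the display instead of fd.
   */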

As said, it is probably not the ideal solution but it does work and was
used for some early testing of the IP.

Well, sorry for the wall of text.
Any and all suggestions very welcome and appreciated.

Thanks,
Andrew

[0] http://www.ti.com/lit/pdf/spruil1
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
[2] https://lkml.org/lkml/2019/6/6/1211

Andrew F. Davis (2):
  dt-bindings: soc: ti: Add TI PAT bindings
  soc: ti: Add Support for the TI Page-based Address Translator (PAT)

 .../devicetree/bindings/misc/ti,pat.txt       |  34 ++
 drivers/soc/ti/Kconfig                        |   9 +
 drivers/soc/ti/Makefile                       |   1 +
 drivers/soc/ti/ti-pat.c                       | 569 ++++++++++++++++++
 include/uapi/linux/ti-pat.h                   |  44 ++
 5 files changed, 657 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/misc/ti,pat.txt
 create mode 100644 drivers/soc/ti/ti-pat.c
 create mode 100644 include/uapi/linux/ti-pat.h

Comments

Tero Kristo June 18, 2019, 9:07 a.m. UTC | #1
On 07/06/2019 22:35, Andrew F. Davis wrote:
> This patch adds a driver for the Page-based Address Translator (PAT)
> present on various TI SoCs. A PAT device performs address translation
> using tables stored in an internal SRAM. Each PAT supports a set number
> of pages, each occupying a programmable 4KB, 16KB, 64KB, or 1MB of
> addresses in a window for which an incoming transaction will be
> translated.
> 
> Signed-off-by: Andrew F. Davis <afd@ti.com>
> ---
>   drivers/soc/ti/Kconfig      |   9 +
>   drivers/soc/ti/Makefile     |   1 +
>   drivers/soc/ti/ti-pat.c     | 569 ++++++++++++++++++++++++++++++++++++
>   include/uapi/linux/ti-pat.h |  44 +++
>   4 files changed, 623 insertions(+)
>   create mode 100644 drivers/soc/ti/ti-pat.c
>   create mode 100644 include/uapi/linux/ti-pat.h
> 
> diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
> index f0be35d3dcba..b838ae74d01f 100644
> --- a/drivers/soc/ti/Kconfig
> +++ b/drivers/soc/ti/Kconfig
> @@ -86,4 +86,13 @@ config TI_SCI_INTA_MSI_DOMAIN
>   	help
>   	  Driver to enable Interrupt Aggregator specific MSI Domain.
>   
> +config TI_PAT
> +	tristate "TI PAT DMA-BUF exporter"
> +	select REGMAP

What is the reasoning for using regmap for internal register access? Why 
not just use direct readl/writel for everything? To me it seems this is 
only used during probe time also...

> +	help
> +	  Driver for TI Page-based Address Translator (PAT). This driver
> +	  provides the an API allowing the remapping of a non-contiguous
> +	  DMA-BUF into a contiguous one that is sutable for devices needing
> +	  coniguous memory.

Minor typo: contiguous.

> +
>   endif # SOC_TI
> diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
> index b3868d392d4f..1369642b40c3 100644
> --- a/drivers/soc/ti/Makefile
> +++ b/drivers/soc/ti/Makefile
> @@ -9,3 +9,4 @@ obj-$(CONFIG_AMX3_PM)			+= pm33xx.o
>   obj-$(CONFIG_WKUP_M3_IPC)		+= wkup_m3_ipc.o
>   obj-$(CONFIG_TI_SCI_PM_DOMAINS)		+= ti_sci_pm_domains.o
>   obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN)	+= ti_sci_inta_msi.o
> +obj-$(CONFIG_TI_PAT)			+= ti-pat.o
> diff --git a/drivers/soc/ti/ti-pat.c b/drivers/soc/ti/ti-pat.c
> new file mode 100644
> index 000000000000..7359ea0f7ccf
> --- /dev/null
> +++ b/drivers/soc/ti/ti-pat.c
> @@ -0,0 +1,569 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * TI PAT mapped DMA-BUF memory re-exporter
> + *
> + * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/
> + *	Andrew F. Davis <afd@ti.com>
> + */
> +
> +#include <linux/fs.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/mod_devicetable.h>
> +#include <linux/uaccess.h>
> +#include <linux/miscdevice.h>
> +#include <linux/regmap.h>
> +#include <linux/dma-buf.h>
> +#include <linux/genalloc.h>
> +#include <linux/vmalloc.h>
> +#include <linux/slab.h>
> +
> +#include <linux/ti-pat.h>
> +
> +#define TI_PAT_DRIVER_NAME	"ti-pat"

Why do you have a define for this seeing it is only used in single location?

> +
> +/* TI PAT MMRS registers */
> +#define TI_PAT_MMRS_PID		0x0 /* Revision Register */
> +#define TI_PAT_MMRS_CONFIG	0x4 /* Config Register */
> +#define TI_PAT_MMRS_CONTROL	0x10 /* Control Register */
> +
> +/* TI PAT CONTROL register field values */
> +#define TI_PAT_CONTROL_ARB_MODE_UF	0x0 /* Updates first */
> +#define TI_PAT_CONTROL_ARB_MODE_RR	0x2 /* Round-robin */
> +
> +#define TI_PAT_CONTROL_PAGE_SIZE_4KB	0x0
> +#define TI_PAT_CONTROL_PAGE_SIZE_16KB	0x1
> +#define TI_PAT_CONTROL_PAGE_SIZE_64KB	0x2
> +#define TI_PAT_CONTROL_PAGE_SIZE_1MB	0x3
> +
> +static unsigned int ti_pat_page_sizes[] = {
> +	[TI_PAT_CONTROL_PAGE_SIZE_4KB]  = 4 * 1024,
> +	[TI_PAT_CONTROL_PAGE_SIZE_16KB] = 16 * 1024,
> +	[TI_PAT_CONTROL_PAGE_SIZE_64KB] = 64 * 1024,
> +	[TI_PAT_CONTROL_PAGE_SIZE_1MB]  = 1024 * 1024,
> +};
> +
> +enum ti_pat_mmrs_fields {
> +	/* Revision */
> +	F_PID_MAJOR,
> +	F_PID_MINOR,
> +
> +	/* Controls */
> +	F_CONTROL_ARB_MODE,
> +	F_CONTROL_PAGE_SIZE,
> +	F_CONTROL_REPLACE_OID_EN,
> +	F_CONTROL_EN,
> +
> +	/* sentinel */
> +	F_MAX_FIELDS
> +};
> +
> +static const struct reg_field ti_pat_mmrs_reg_fields[] = {
> +	/* Revision */
> +	[F_PID_MAJOR]			= REG_FIELD(TI_PAT_MMRS_PID, 8, 10),
> +	[F_PID_MINOR]			= REG_FIELD(TI_PAT_MMRS_PID, 0, 5),
> +	/* Controls */
> +	[F_CONTROL_ARB_MODE]		= REG_FIELD(TI_PAT_MMRS_CONTROL, 6, 7),
> +	[F_CONTROL_PAGE_SIZE]		= REG_FIELD(TI_PAT_MMRS_CONTROL, 4, 5),
> +	[F_CONTROL_REPLACE_OID_EN]	= REG_FIELD(TI_PAT_MMRS_CONTROL, 1, 1),
> +	[F_CONTROL_EN]			= REG_FIELD(TI_PAT_MMRS_CONTROL, 0, 0),
> +};
> +
> +/**
> + * struct ti_pat_data - PAT device instance data
> + * @dev: PAT device structure
> + * @mdev: misc device
> + * @mmrs_map: Register map of MMRS region
> + * @table_base: Base address of TABLE region

Please add kerneldoc comments for all fields.

> + */
> +struct ti_pat_data {
> +	struct device *dev;
> +	struct miscdevice mdev;
> +	struct regmap *mmrs_map;
> +	struct regmap_field *mmrs_fields[F_MAX_FIELDS];
> +	void __iomem *table_base;
> +	unsigned int page_count;
> +	unsigned int page_size;
> +	phys_addr_t window_base;
> +	struct gen_pool *pool;
> +};
> +

Kerneldoc comments for below structs would be also useful, especially 
for ti_pat_buffer.

> +struct ti_pat_dma_buf_attachment {
> +	struct device *dev;
> +	struct sg_table *table;
> +	struct ti_pat_buffer *buffer;
> +	struct list_head list;
> +};
> +
> +struct ti_pat_buffer {
> +	struct ti_pat_data *pat;
> +	struct dma_buf *i_dma_buf;
> +	size_t size;
> +	unsigned long offset;
> +	struct dma_buf *e_dma_buf;
> +
> +	struct dma_buf_attachment *attachment;
> +	struct sg_table *sgt;
> +
> +	struct list_head attachments;
> +	int map_count;
> +
> +	struct mutex lock;
> +};
> +
> +static const struct regmap_config ti_pat_regmap_config = {
> +	.reg_bits = 32,
> +	.val_bits = 32,
> +	.reg_stride = 4,
> +};
> +
> +static int ti_pat_dma_buf_attach(struct dma_buf *dmabuf,
> +				 struct dma_buf_attachment *attachment)
> +{
> +	struct ti_pat_dma_buf_attachment *a;
> +	struct ti_pat_buffer *buffer = dmabuf->priv;
> +
> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> +	if (!a)
> +		return -ENOMEM;
> +
> +	a->dev = attachment->dev;
> +	a->buffer = buffer;
> +	INIT_LIST_HEAD(&a->list);
> +
> +	a->table = kzalloc(sizeof(*a->table), GFP_KERNEL);
> +	if (!a->table) {
> +		kfree(a);
> +		return -ENOMEM;
> +	}
> +
> +	if (sg_alloc_table(a->table, 1, GFP_KERNEL)) {
> +		kfree(a->table);
> +		kfree(a);
> +		return -ENOMEM;
> +	}
> +
> +	sg_set_page(a->table->sgl, pfn_to_page(PFN_DOWN(buffer->offset)), buffer->size, 0);
> +
> +	attachment->priv = a;
> +
> +	mutex_lock(&buffer->lock);
> +	/* First time attachment we attach to parent */
> +	if (list_empty(&buffer->attachments)) {
> +		buffer->attachment = dma_buf_attach(buffer->i_dma_buf, buffer->pat->dev);
> +		if (IS_ERR(buffer->attachment)) {
> +			dev_err(buffer->pat->dev, "Unable to attach to parent DMA-BUF\n");
> +			mutex_unlock(&buffer->lock);
> +			kfree(a->table);
> +			kfree(a);
> +			return PTR_ERR(buffer->attachment);
> +		}
> +	}
> +	list_add(&a->list, &buffer->attachments);
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static void ti_pat_dma_buf_detatch(struct dma_buf *dmabuf,
> +				   struct dma_buf_attachment *attachment)

Func name should be ti_pat_dma_buf_detach instead?

Other than that, I can't see anything obvious with my limited experience 
with dma_buf. Is there a simple way to test this driver btw?

-Tero

--
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
Robin Murphy July 2, 2019, 3:49 p.m. UTC | #2
On 07/06/2019 20:35, Andrew F. Davis wrote:
> Hello all,
> 
> So I've got a new IP on our new SoC I'm looking to make use of and would
> like some help figuring out what framework best matches its function. The
> IP is called a "Page-based Address Translator" or PAT. A PAT instance
> (there are 5 of these things on our J721e device[0]) is basically a
> really simple IOMMU sitting on the interconnect between the device bus
> and what is effectively our northbridge called
> MSMC (DRAM/SRAM/L3-Cache/Coherency controller).
> 
> Simplified it looks about like this:
> 
>           CPUs
>            |
> DRAM --- MSMC --- SRAM/L3
>            |
>          NAVSS - (PATs)
>            |
>    --- Device Bus ---------
>   |      |      |          |
> Device  Device  Device   etc..
> 
> Each PAT has a set a window in high memory (about 0x48_0000_0000 area)
> for which any transaction with an address targeting its window will be
> routed into that PAT. The PAT then does a simple calculation based on
> the how far into the window the address is and the current page size,
> does a lookup to an internally held table of translations, then sends the
> transaction back out on the interconnect with a new address. Usually this
> address should be towards somewhere in DRAM, but can be some other device
> or even back into PAT (I'm not sure there is a valid use-case for this
> but just a point of interest).
> 
> My gut reaction is that this is an IOMMU which belongs in the IOMMU
> subsystem. But there are a couple oddities that make me less sure it is
> really suitable for the IOMMU framework. First it doesn't sit in front of
> any devices, it sits in front of *all* devices, this means we would have
> every device claim it as an IOMMU parent, even though many devices also
> have a traditional IOMMU connected. Second, there is only a limited
> window of address space per PAT, this means we will get fragmentation and
> allocation failures on occasion, in this way it looks to me more like AGP
> GART. Third, the window is in high-memory, so unlike some IOMMU devices
> which can be used to allow DMA to high-mem from low-mem only devices, PAT
> can't be used for that. Lastly it doesn't provide any isolation, if the
> access does not target the PAT window it is not used (that is not to say
> we don't have isolation, just that it is taken care of by other parts of
> the interconnect).
> 
> This means, to me, that PAT has one main purpose: making
> physically-contiguous views of scattered pages in system memory for DMA.
> But it does that really well, the whole translation table is held in a
> PAT-internal SRAM giving 1 bus cycle latency and at full bus bandwidth.
> 
> So what are my options here, is IOMMU the right way to go or not?

FWIW, that sounds almost exactly like my (vague) understanding of other 
GARTs, and as such should be pretty well manageable via the IOMMU API - 
we already have tegra-gart, for example. The aperture contention issue 
could certainly be mitigated by letting the firmware claim it's only 
associated with the display and any other devices which really need it.

A further interesting avenue of investigation - now that Christoph's 
recent work has made it much more possible - would be a second set of 
IOMMU DMA ops tailored for "GART-like" domains where force_aperture=0, 
which could behave as dma-direct wherever possible and only use IOMMU 
remaps when absolutely necessary.
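
Very roughly, and purely as a sketch (the aperture-remap helper here is
imaginary), such a map path might look like:

  /* Pseudocode: prefer the direct path, fall back to an aperture remap */
  static dma_addr_t gart_like_map_page(struct device *dev, struct page *page,
                                       unsigned long offset, size_t size)
  {
          dma_addr_t addr = phys_to_dma(dev, page_to_phys(page) + offset);

          if (dma_capable(dev, addr, size))
                  return addr;                     /* plain dma-direct */

          /* imaginary helper: grab aperture space and program the GART */
          return gart_aperture_remap(dev, page, offset, size);
  }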

Robin.

> Looking around the kernel I also see the char dev ARP/GART interface
> which looks like a good fit, but also looks quite dated and my guess
> deprecated at this point. Moving right along..
> 
> Another thing I saw is we have the support upstream of the DMM device[1]
> available in some OMAPx/AM57x SoCs. I'm a little more familiar with this
> device. The DMM is a bundle of IPs and in fact one of them is called
> "PAT" and it even does basically the same thing this incarnation of "PAT"
> does. It's upstream integration design is a bit questionable
> unfortunately, the DMM support was integrated into the OMAPDRM display
> driver, which does make some sense then given its support for rotation
> (using TILER IP contained in DMM). The issue with this was that the
> DMM/TILER/PAT IP was not part of the our display IP, but instead out at
> the end of the shared device bus, inside the external memory controller.
> Like this new PAT this meant that any IP that could make use of it, but
> only the display framework could actually provide buffers backed by it.
> This meant, for instance, if we wanted to decode some video buffer using
> our video decoder we would have to allocate from DRM framework then pass
> that over to the V4L2 system. This doesn't make much sense and required
> the user-space to know about this odd situation and allocate from the
> right spot or else have to use up valuable CMA space or waste memory with
> dedicated carveouts.
> 
> Another idea would be to have this as a special central allocator
> (exposed through DMA-BUF heaps[2] or ION) that would give out normal
> system memory as a DMA-BUF but remap it with PAT if a device that only
> supports contiguous memory tries to attach/map that DMA-BUF.
> 
> One last option would be to allow user-space to choose to make the buffer
> contiguous when it needs. That's what the driver in this series allows.
> We expose a remapping device, user-space passes it a non-contiguous
> DMA-BUF handle and it passes a contiguous one back. Simple as that.
> 
> So how do we use this, lets take Android for example, we don't know at
> allocation time if a rendering buffer will end up going back into the GPU
> for further processing, or if it will be consumed directly by the display.
> This is a problem for us as our GPU can work with non-contiguous buffers
> but our display cannot, so any buffers that could possibly go to the
> display at some point currently needs to be allocated as contiguous from
> the start, this leads to a lot of unneeded use of carveout/CMA memory.
> With this driver on the other hand, we allocate regular non-contiguous
> system memory (again using DMA-BUF heaps, but ION could work here too),
> then only when a buffer is about to be sent to the display we pass the
> handle to this DMA-BUF to our driver here and take the handle it gives
> back and pass that to the display instead.
> 
> As said, it is probably not the ideal solution but it does work and was
> used for some early testing of the IP.
> 
> Well, sorry for the wall of text.
> Any and all suggestions very welcome and appreciated.
> 
> Thanks,
> Andrew
> 
> [0] http://www.ti.com/lit/pdf/spruil1
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
> [2] https://lkml.org/lkml/2019/6/6/1211
> 
> Andrew F. Davis (2):
>    dt-bindings: soc: ti: Add TI PAT bindings
>    soc: ti: Add Support for the TI Page-based Address Translator (PAT)
> 
>   .../devicetree/bindings/misc/ti,pat.txt       |  34 ++
>   drivers/soc/ti/Kconfig                        |   9 +
>   drivers/soc/ti/Makefile                       |   1 +
>   drivers/soc/ti/ti-pat.c                       | 569 ++++++++++++++++++
>   include/uapi/linux/ti-pat.h                   |  44 ++
>   5 files changed, 657 insertions(+)
>   create mode 100644 Documentation/devicetree/bindings/misc/ti,pat.txt
>   create mode 100644 drivers/soc/ti/ti-pat.c
>   create mode 100644 include/uapi/linux/ti-pat.h
>
Andrew Davis July 31, 2019, 3:28 p.m. UTC | #3
On 6/18/19 5:07 AM, Tero Kristo wrote:
> On 07/06/2019 22:35, Andrew F. Davis wrote:
>> This patch adds a driver for the Page-based Address Translator (PAT)
>> present on various TI SoCs. A PAT device performs address translation
>> using tables stored in an internal SRAM. Each PAT supports a set number
>> of pages, each occupying a programmable 4KB, 16KB, 64KB, or 1MB of
>> addresses in a window for which an incoming transaction will be
>> translated.
>>
>> Signed-off-by: Andrew F. Davis <afd@ti.com>
>> ---
>>   drivers/soc/ti/Kconfig      |   9 +
>>   drivers/soc/ti/Makefile     |   1 +
>>   drivers/soc/ti/ti-pat.c     | 569 ++++++++++++++++++++++++++++++++++++
>>   include/uapi/linux/ti-pat.h |  44 +++
>>   4 files changed, 623 insertions(+)
>>   create mode 100644 drivers/soc/ti/ti-pat.c
>>   create mode 100644 include/uapi/linux/ti-pat.h
>>
>> diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
>> index f0be35d3dcba..b838ae74d01f 100644
>> --- a/drivers/soc/ti/Kconfig
>> +++ b/drivers/soc/ti/Kconfig
>> @@ -86,4 +86,13 @@ config TI_SCI_INTA_MSI_DOMAIN
>>       help
>>         Driver to enable Interrupt Aggregator specific MSI Domain.
>>   +config TI_PAT
>> +    tristate "TI PAT DMA-BUF exporter"
>> +    select REGMAP
> 
> What is the reasoning for using regmap for internal register access? Why
> not just use direct readl/writel for everything? To me it seems this is
> only used during probe time also...
> 

There are two register spaces, the configuration space, and the actual
translation table data. I use regmap for all the configuration space.
Direct readl/writel would also work, but I prefer regmap as it lets me
work with field names vs using masks and shifts, even if it adds a
little extra code in tables at the top.
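
For example, setting the page size field with a regmap_field versus
open-coding it looks roughly like this (simplified, using the field
definitions from this patch):

  /* With regmap_field, the REG_FIELD() table carries register and bits */
  regmap_field_write(pat->mmrs_fields[F_CONTROL_PAGE_SIZE],
                     TI_PAT_CONTROL_PAGE_SIZE_4KB);

  /* Open-coded equivalent with readl/writel plus masks and shifts */
  u32 val = readl(base + TI_PAT_MMRS_CONTROL);
  val &= ~GENMASK(5, 4);
  val |= TI_PAT_CONTROL_PAGE_SIZE_4KB << 4;
  writel(val, base + TI_PAT_MMRS_CONTROL);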

>> +    help
>> +      Driver for TI Page-based Address Translator (PAT). This driver
>> +      provides the an API allowing the remapping of a non-contiguous
>> +      DMA-BUF into a contiguous one that is sutable for devices needing
>> +      coniguous memory.
> 
> Minor typo: contiguous.
> 

ACK

>> +
>>   endif # SOC_TI
>> diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
>> index b3868d392d4f..1369642b40c3 100644
>> --- a/drivers/soc/ti/Makefile
>> +++ b/drivers/soc/ti/Makefile
>> @@ -9,3 +9,4 @@ obj-$(CONFIG_AMX3_PM)            += pm33xx.o
>>   obj-$(CONFIG_WKUP_M3_IPC)        += wkup_m3_ipc.o
>>   obj-$(CONFIG_TI_SCI_PM_DOMAINS)        += ti_sci_pm_domains.o
>>   obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN)    += ti_sci_inta_msi.o
>> +obj-$(CONFIG_TI_PAT)            += ti-pat.o
>> diff --git a/drivers/soc/ti/ti-pat.c b/drivers/soc/ti/ti-pat.c
>> new file mode 100644
>> index 000000000000..7359ea0f7ccf
>> --- /dev/null
>> +++ b/drivers/soc/ti/ti-pat.c
>> @@ -0,0 +1,569 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * TI PAT mapped DMA-BUF memory re-exporter
>> + *
>> + * Copyright (C) 2018-2019 Texas Instruments Incorporated -
>> http://www.ti.com/
>> + *    Andrew F. Davis <afd@ti.com>
>> + */
>> +
>> +#include <linux/fs.h>
>> +#include <linux/module.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/mod_devicetable.h>
>> +#include <linux/uaccess.h>
>> +#include <linux/miscdevice.h>
>> +#include <linux/regmap.h>
>> +#include <linux/dma-buf.h>
>> +#include <linux/genalloc.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/slab.h>
>> +
>> +#include <linux/ti-pat.h>
>> +
>> +#define TI_PAT_DRIVER_NAME    "ti-pat"
> 
> Why do you have a define for this seeing it is only used in single
> location?
> 

Just habit when starting a driver, but you are right it is not needed here.

>> +
>> +/* TI PAT MMRS registers */
>> +#define TI_PAT_MMRS_PID        0x0 /* Revision Register */
>> +#define TI_PAT_MMRS_CONFIG    0x4 /* Config Register */
>> +#define TI_PAT_MMRS_CONTROL    0x10 /* Control Register */
>> +
>> +/* TI PAT CONTROL register field values */
>> +#define TI_PAT_CONTROL_ARB_MODE_UF    0x0 /* Updates first */
>> +#define TI_PAT_CONTROL_ARB_MODE_RR    0x2 /* Round-robin */
>> +
>> +#define TI_PAT_CONTROL_PAGE_SIZE_4KB    0x0
>> +#define TI_PAT_CONTROL_PAGE_SIZE_16KB    0x1
>> +#define TI_PAT_CONTROL_PAGE_SIZE_64KB    0x2
>> +#define TI_PAT_CONTROL_PAGE_SIZE_1MB    0x3
>> +
>> +static unsigned int ti_pat_page_sizes[] = {
>> +    [TI_PAT_CONTROL_PAGE_SIZE_4KB]  = 4 * 1024,
>> +    [TI_PAT_CONTROL_PAGE_SIZE_16KB] = 16 * 1024,
>> +    [TI_PAT_CONTROL_PAGE_SIZE_64KB] = 64 * 1024,
>> +    [TI_PAT_CONTROL_PAGE_SIZE_1MB]  = 1024 * 1024,
>> +};
>> +
>> +enum ti_pat_mmrs_fields {
>> +    /* Revision */
>> +    F_PID_MAJOR,
>> +    F_PID_MINOR,
>> +
>> +    /* Controls */
>> +    F_CONTROL_ARB_MODE,
>> +    F_CONTROL_PAGE_SIZE,
>> +    F_CONTROL_REPLACE_OID_EN,
>> +    F_CONTROL_EN,
>> +
>> +    /* sentinel */
>> +    F_MAX_FIELDS
>> +};
>> +
>> +static const struct reg_field ti_pat_mmrs_reg_fields[] = {
>> +    /* Revision */
>> +    [F_PID_MAJOR]            = REG_FIELD(TI_PAT_MMRS_PID, 8, 10),
>> +    [F_PID_MINOR]            = REG_FIELD(TI_PAT_MMRS_PID, 0, 5),
>> +    /* Controls */
>> +    [F_CONTROL_ARB_MODE]        = REG_FIELD(TI_PAT_MMRS_CONTROL, 6, 7),
>> +    [F_CONTROL_PAGE_SIZE]        = REG_FIELD(TI_PAT_MMRS_CONTROL, 4, 5),
>> +    [F_CONTROL_REPLACE_OID_EN]    = REG_FIELD(TI_PAT_MMRS_CONTROL, 1,
>> 1),
>> +    [F_CONTROL_EN]            = REG_FIELD(TI_PAT_MMRS_CONTROL, 0, 0),
>> +};
>> +
>> +/**
>> + * struct ti_pat_data - PAT device instance data
>> + * @dev: PAT device structure
>> + * @mdev: misc device
>> + * @mmrs_map: Register map of MMRS region
>> + * @table_base: Base address of TABLE region
> 
> Please add kerneldoc comments for all fields.
> 

Will add.

>> + */
>> +struct ti_pat_data {
>> +    struct device *dev;
>> +    struct miscdevice mdev;
>> +    struct regmap *mmrs_map;
>> +    struct regmap_field *mmrs_fields[F_MAX_FIELDS];
>> +    void __iomem *table_base;
>> +    unsigned int page_count;
>> +    unsigned int page_size;
>> +    phys_addr_t window_base;
>> +    struct gen_pool *pool;
>> +};
>> +
> 
> Kerneldoc comments for below structs would be also useful, especially
> for ti_pat_buffer.
> 

Will add.

>> +struct ti_pat_dma_buf_attachment {
>> +    struct device *dev;
>> +    struct sg_table *table;
>> +    struct ti_pat_buffer *buffer;
>> +    struct list_head list;
>> +};
>> +
>> +struct ti_pat_buffer {
>> +    struct ti_pat_data *pat;
>> +    struct dma_buf *i_dma_buf;
>> +    size_t size;
>> +    unsigned long offset;
>> +    struct dma_buf *e_dma_buf;
>> +
>> +    struct dma_buf_attachment *attachment;
>> +    struct sg_table *sgt;
>> +
>> +    struct list_head attachments;
>> +    int map_count;
>> +
>> +    struct mutex lock;
>> +};
>> +
>> +static const struct regmap_config ti_pat_regmap_config = {
>> +    .reg_bits = 32,
>> +    .val_bits = 32,
>> +    .reg_stride = 4,
>> +};
>> +
>> +static int ti_pat_dma_buf_attach(struct dma_buf *dmabuf,
>> +                 struct dma_buf_attachment *attachment)
>> +{
>> +    struct ti_pat_dma_buf_attachment *a;
>> +    struct ti_pat_buffer *buffer = dmabuf->priv;
>> +
>> +    a = kzalloc(sizeof(*a), GFP_KERNEL);
>> +    if (!a)
>> +        return -ENOMEM;
>> +
>> +    a->dev = attachment->dev;
>> +    a->buffer = buffer;
>> +    INIT_LIST_HEAD(&a->list);
>> +
>> +    a->table = kzalloc(sizeof(*a->table), GFP_KERNEL);
>> +    if (!a->table) {
>> +        kfree(a);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    if (sg_alloc_table(a->table, 1, GFP_KERNEL)) {
>> +        kfree(a->table);
>> +        kfree(a);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    sg_set_page(a->table->sgl, pfn_to_page(PFN_DOWN(buffer->offset)),
>> buffer->size, 0);
>> +
>> +    attachment->priv = a;
>> +
>> +    mutex_lock(&buffer->lock);
>> +    /* First time attachment we attach to parent */
>> +    if (list_empty(&buffer->attachments)) {
>> +        buffer->attachment = dma_buf_attach(buffer->i_dma_buf,
>> buffer->pat->dev);
>> +        if (IS_ERR(buffer->attachment)) {
>> +            dev_err(buffer->pat->dev, "Unable to attach to parent
>> DMA-BUF\n");
>> +            mutex_unlock(&buffer->lock);
>> +            kfree(a->table);
>> +            kfree(a);
>> +            return PTR_ERR(buffer->attachment);
>> +        }
>> +    }
>> +    list_add(&a->list, &buffer->attachments);
>> +    mutex_unlock(&buffer->lock);
>> +
>> +    return 0;
>> +}
>> +
>> +static void ti_pat_dma_buf_detatch(struct dma_buf *dmabuf,
>> +                   struct dma_buf_attachment *attachment)
> 
> Func name should be ti_pat_dma_buf_detach instead?
> 

Good catch, will fix.

> Other than that, I can't see anything obvious with my limited experience
> with dma_buf. Is there a simple way to test this driver btw?
> 

Simple way? No, not really. What I've been doing is allocating a
non-contiguous buffer (from the system DMA-BUF heap), writing a test
pattern to it, using this driver to convert the buffer, then sending the
new handle to our DSS (display subsystem, which cannot handle
non-contiguous buffers). If all is working, the test pattern is displayed
correctly.

Thanks,
Andrew

> -Tero
> 
> -- 
> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki