
[GIT PULL v2 1/4] ARM: tegra: IOMMU support for v3.19

Message ID 1416993053-3630-1-git-send-email-thierry.reding@gmail.com
State New

Pull-request

git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux.git tags/tegra-for-3.19-iommu-rebased

Message

Thierry Reding Nov. 26, 2014, 9:10 a.m. UTC
Hi ARM SoC maintainers,

The following changes since commit 0690cbd2e55a72a8eae557c389d1a136ed9fa142:

  powerpc/iommu: Rename iommu_[un]map_sg functions (2014-11-18 11:30:01 +0100)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux.git tags/tegra-for-3.19-iommu-rebased

for you to fetch changes up to 37d21f8918f2aa2289036cc778ee188951e53288:

  memory: Add NVIDIA Tegra memory controller support (2014-11-26 09:45:02 +0100)

Here's a version of the pull request rebased on top of Joerg's core
branch from the IOMMU tree. It has the advantage of having resolved
the merge conflicts and the disadvantage of pulling in v3.18-rc3.

Thanks,
Thierry

----------------------------------------------------------------
ARM: tegra: IOMMU support for v3.19

This adds the driver pieces required for IOMMU support on Tegra30,
Tegra114 and Tegra124.

----------------------------------------------------------------
Thierry Reding (6):
      Merge branch 'core' of git://git.kernel.org/.../joro/iommu
      clk: tegra: Implement memory-controller clock
      amba: Add Kconfig file
      ARM: tegra: Move AHB Kconfig to drivers/amba
      of: Add NVIDIA Tegra memory controller binding
      memory: Add NVIDIA Tegra memory controller support

 .../memory-controllers/nvidia,tegra-mc.txt         |   36 +
 arch/arm/Kconfig                                   |    3 -
 arch/arm/mach-tegra/Kconfig                        |    9 +-
 arch/arm64/Kconfig                                 |    3 -
 drivers/Kconfig                                    |    2 +
 drivers/amba/Kconfig                               |   14 +
 drivers/clk/tegra/clk-divider.c                    |   13 +
 drivers/clk/tegra/clk-tegra114.c                   |    7 +-
 drivers/clk/tegra/clk-tegra124.c                   |    7 +-
 drivers/clk/tegra/clk-tegra20.c                    |    8 +-
 drivers/clk/tegra/clk-tegra30.c                    |    7 +-
 drivers/clk/tegra/clk.h                            |    2 +
 drivers/iommu/Kconfig                              |   12 +-
 drivers/iommu/tegra-smmu.c                         | 1604 +++++++-------------
 drivers/memory/Kconfig                             |   12 +-
 drivers/memory/Makefile                            |    3 +-
 drivers/memory/tegra/Kconfig                       |    7 +
 drivers/memory/tegra/Makefile                      |    7 +
 drivers/memory/tegra/mc.c                          |  301 ++++
 drivers/memory/tegra/mc.h                          |   53 +
 drivers/memory/tegra/tegra114.c                    |  948 ++++++++++++
 drivers/memory/tegra/tegra124.c                    |  995 ++++++++++++
 drivers/memory/tegra/tegra30.c                     |  970 ++++++++++++
 drivers/memory/tegra30-mc.c                        |  378 -----
 include/dt-bindings/clock/tegra114-car.h           |    2 +-
 include/dt-bindings/clock/tegra124-car.h           |    2 +-
 include/dt-bindings/clock/tegra20-car.h            |    2 +-
 include/dt-bindings/memory/tegra114-mc.h           |   25 +
 include/dt-bindings/memory/tegra124-mc.h           |   31 +
 include/dt-bindings/memory/tegra30-mc.h            |   24 +
 include/soc/tegra/mc.h                             |  107 ++
 31 files changed, 4089 insertions(+), 1505 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/memory-controllers/nvidia,tegra-mc.txt
 create mode 100644 drivers/amba/Kconfig
 create mode 100644 drivers/memory/tegra/Kconfig
 create mode 100644 drivers/memory/tegra/Makefile
 create mode 100644 drivers/memory/tegra/mc.c
 create mode 100644 drivers/memory/tegra/mc.h
 create mode 100644 drivers/memory/tegra/tegra114.c
 create mode 100644 drivers/memory/tegra/tegra124.c
 create mode 100644 drivers/memory/tegra/tegra30.c
 delete mode 100644 drivers/memory/tegra30-mc.c
 create mode 100644 include/dt-bindings/memory/tegra114-mc.h
 create mode 100644 include/dt-bindings/memory/tegra124-mc.h
 create mode 100644 include/dt-bindings/memory/tegra30-mc.h
 create mode 100644 include/soc/tegra/mc.h

Comments

Arnd Bergmann Nov. 28, 2014, 10:20 p.m. UTC | #1
On Wednesday 26 November 2014, Thierry Reding wrote:
> Hi ARM SoC maintainers,
> 
> The following changes since commit 0690cbd2e55a72a8eae557c389d1a136ed9fa142:
> 
>   powerpc/iommu: Rename iommu_[un]map_sg functions (2014-11-18 11:30:01 +0100)
> 
> are available in the git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux.git tags/tegra-for-3.19-iommu-rebased
> 
> for you to fetch changes up to 37d21f8918f2aa2289036cc778ee188951e53288:
> 
>   memory: Add NVIDIA Tegra memory controller support (2014-11-26 09:45:02 +0100)
> 
> Here's a version of the pull request rebased on top of Joerg's core
> branch from the IOMMU tree. It has the advantage of having resolved
> the merge conflicts and the disadvantage of pulling in v3.18-rc3.
> 

Hi Thierry,

sorry for taking my time on this, I had some concerns when I first
looked at the memory controller driver and binding (after I got your
pull request), and wanted to be sure everything is fine before I merge
it.

Pulling in v3.18-rc3 is not a problem, since the next/soc branch is
already based on -rc3 (you can see that yourself if you check out the
branch from the arm-soc tree), and I'm absolutely fine with merging
either v1 or v2 of this, the conflict seems harmless, and so does 
pulling in the dependency.

The extra attention for the binding is because the base iommu binding
is still very new and gives some options to driver authors, and I want
to make sure we are setting a good example for others to look at.
My problem is mainly lack of understanding for your hardware requirements,
so hopefully you can clarify this all and I can just merge it, or we
find an easy way to change the code if I have stumbled on a problem.

My main question is about the relation between 'swgroup' and 'id'
settings. What do the two things mean respectively, does your driver
define how they get combined, or is each master hardcoded to both?

I generally dislike having SoC-specific lookup tables in drivers for
things that could be fully described in DT, so I really want to understand
why you added those tables.

The .smmu.reg and .smmu.bit settings seem to directly correlate to the
.id value in the table, so I'm assuming you could derive one from the
other if it was advantageous. The name is only used for debugging purposes
and we could also leave it out if we had a way to kill off the other fields
of the table.

In the comment you mention that the latency allowance is set up to the
hardware defaults, so I guess in the current version this is not required,
but the .la.reg/shift/mask fields can't be determined from the other
fields. This means in order to implement the extension for changing the
setting in the future, you'd have to add some other way of communicating
it without the table, right?

Back to the swgroup/id fields: if you need both, would it make sense
to have #iommu-cells = <2> and pass both from the dma master device?

	Arnd
Thierry Reding Dec. 1, 2014, 3:05 p.m. UTC | #2
On Fri, Nov 28, 2014 at 11:20:19PM +0100, Arnd Bergmann wrote:
> On Wednesday 26 November 2014, Thierry Reding wrote:
> > Hi ARM SoC maintainers,
> > 
> > The following changes since commit 0690cbd2e55a72a8eae557c389d1a136ed9fa142:
> > 
> >   powerpc/iommu: Rename iommu_[un]map_sg functions (2014-11-18 11:30:01 +0100)
> > 
> > are available in the git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux.git tags/tegra-for-3.19-iommu-rebased
> > 
> > for you to fetch changes up to 37d21f8918f2aa2289036cc778ee188951e53288:
> > 
> >   memory: Add NVIDIA Tegra memory controller support (2014-11-26 09:45:02 +0100)
> > 
> > Here's a version of the pull request rebased on top of Joerg's core
> > branch from the IOMMU tree. It has the advantage of having resolved
> > the merge conflicts and the disadvantage of pulling in v3.18-rc3.
> > 
> 
> Hi Thierry,
> 
> sorry for taking my time on this, I had some concerns when I first
> looked at the memory controller driver and binding (after I got your
> pull request), and wanted to be sure everything is fine before I merge
> it.
> 
> Pulling in v3.18-rc3 is not a problem, since the next/soc branch is
> already based on -rc3 (you can see that yourself if you check out the
> branch from the arm-soc tree), and I'm absolutely fine with merging
> either v1 or v2 of this, the conflict seems harmless, and so does 
> pulling in the dependency.
> 
> The extra attention for the binding is because the base iommu binding
> is still very new and gives some options to driver authors, and I want
> to make sure we are setting a good example for others to look at.
> My problem is mainly lack of understanding for your hardware requirements,
> so hopefully you can clarify this all and I can just merge it, or we
> find an easy way to change the code if I have stumbled on a problem.
> 
> My main question is about the relation between 'swgroup' and 'id'
> settings. What do the two things mean respectively, does your driver
> define how they get combined, or is each master hardcoded to both?

An ID refers to the client ID. Each client ID represents one requester
and a set of IDs makes up one SWGROUP. For example there are two display
controllers, each being three clients, yet there's only two SWGROUPs for
them:

	SWGROUP dc: - display0a
	            - display0b
	            - display0c

	SWGROUP dcb: - display0ab
	             - display0bb
	             - display0cb

Each SWGROUP can be assigned a separate address space. That is, an
address space for SWGROUP dc will apply for all clients in that group.
However it can be additionally specified for which clients in a group
IOMMU address translation should be enabled. Theoretically one could
enable translation only for display0a and display0b, but not for
display0c. I don't immediately see when that would be desirable, hence
the driver always enables translation for all clients in a group.

> I generally dislike having SoC-specific lookup tables in drivers for
> things that could be fully described in DT, so I really want to understand
> why you added those tables.

What exactly is your concern with these tables? Is it the size? Roughly
these are 3.5 KiB of .rodata per SoC generation. If that's too much I
think I could perhaps get that down to something like half of it using
bitfields (reg could be 11 bits, bit/shift could be 5, so that both fit
into a single u16 instead of 2 u32). That'll probably increase code size
a little to extract these fields, but that may be a suitable tradeoff.
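
For reference, a rough sketch of one entry in such a table, going by the
fields mentioned in this thread (.id, .name, .swgroup, .smmu.reg/.bit and
.la.reg/.shift/.mask); the exact layout in the driver may differ:

	struct tegra_mc_client {
		unsigned int id;		/* memory client ID */
		const char *name;		/* used in error/debug messages */
		unsigned int swgroup;		/* SWGROUP this client belongs to */

		struct {
			struct {
				unsigned int reg;	/* translation enable register */
				unsigned int bit;	/* enable bit for this client */
			} smmu;

			struct {
				unsigned int reg;	/* latency allowance register */
				unsigned int shift;
				unsigned int mask;
			} la;
		} regs;
	};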

> The .smmu.reg and .smmu.bit settings seem to directly correlate to the
> .id value in the table, so I'm assuming you could derive one from the
> other if it was advantageous.

This could be done using something like this:

	unsigned int reg = 0x228 + (id / 32) * 4;
	unsigned int bit = id % 32;

However there are a couple of entries that are special, notably the PTC
and MPCORE clients. Those have a SWGROUP but none of these enable bits,
so I can't immediately think of a way to do that nicely. We'd probably
have to hard-code checks for that, making it somewhat brittle. Having
this all in a simple lookup table makes this really straightforward and
easy to verify for correctness by comparing the table to internal files
that specify these registers.

> The name is only used for debugging purposes and we could also leave
> it out if we had a way to kill off the other fields of the table.

The name is also used in error messages to clarify where an error comes
from. That's been very essential in tracking down various issues in some
drivers (DRM primarily).

Even if we could get rid of the SMMU related fields by special-casing in
code (I haven't measured, but that might actually remove all the benefit
of removing the table entries), we'd still need the latency allowance
registers, so I don't think we can remove the table altogether.

> In the comment you mention that the latency allowance is set up to the
> hardware defaults, so I guess in the current version this is not required,
> but the .la.reg/shift/mask fields can't be determined from the other
> fields. This means in order to implement the extension for changing the
> setting in the future, you'd have to add some other way of communicating
> it without the table, right?

Right, there is no programmatic way to derive the latency allowance data
from the ID. They're more or less randomly spread across the registers.

Note that there are other knobs in the memory controller associated with
the memory clients, though they aren't in use (yet). Having this kind of
table gives us a very nice way of collecting the per-client data in one
location and then use that for register programming.

> Back to the swgroup/id fields: if you need both, would it make sense
> to have #iommu-cells = <2> and pass both from the dma master device?

I think they are really orthogonal settings. Client IDs have nothing to
do with the IOMMU specifically. The IOMMU is primarily concerned with
the SWGROUP. The driver only makes sure that IOVA translation is enabled
for each client in a SWGROUP.

Thierry
Arnd Bergmann Dec. 1, 2014, 5:55 p.m. UTC | #3
On Monday 01 December 2014 16:05:54 Thierry Reding wrote:
> On Fri, Nov 28, 2014 at 11:20:19PM +0100, Arnd Bergmann wrote:
> > On Wednesday 26 November 2014, Thierry Reding wrote:
> > > Hi ARM SoC maintainers,
> > > 
> > > The following changes since commit 0690cbd2e55a72a8eae557c389d1a136ed9fa142:
> > > 
> > >   powerpc/iommu: Rename iommu_[un]map_sg functions (2014-11-18 11:30:01 +0100)
> > > 
> > > are available in the git repository at:
> > > 
> > >   git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux.git tags/tegra-for-3.19-iommu-rebased
> > > 
> > > for you to fetch changes up to 37d21f8918f2aa2289036cc778ee188951e53288:
> > > 
> > >   memory: Add NVIDIA Tegra memory controller support (2014-11-26 09:45:02 +0100)
> > > 
> > > Here's a version of the pull request rebased on top of Joerg's core
> > > branch from the IOMMU tree. It has the advantage of having resolved
> > > the merge conflicts and the disadvantage of pulling in v3.18-rc3.
> > > 
> > 
> > Hi Thierry,
> > 
> > sorry for taking my time on this, I had some concerns when I first
> > looked at the memory controller driver and binding (after I got your
> > pull request), and wanted to be sure everything is fine before I merge
> > it.
> > 
> > Pulling in v3.18-rc3 is not a problem, since the next/soc branch is
> > already based on -rc3 (you can see that yourself if you check out the
> > branch from the arm-soc tree), and I'm absolutely fine with merging
> > either v1 or v2 of this, the conflict seems harmless, and so does 
> > pulling in the dependency.
> > 
> > The extra attention for the binding is because the base iommu binding
> > is still very new and gives some options to driver authors, and I want
> > to make sure we are setting a good example for others to look at.
> > My problem is mainly lack of understanding for your hardware requirements,
> > so hopefully you can clarify this all and I can just merge it, or we
> > find an easy way to change the code if I have stumbled on a problem.
> > 
> > My main question is about the relation between 'swgroup' and 'id'
> > settings. What do the two things mean respectively, does your driver
> > define how they get combined, or is each master hardcoded to both?
> 
> An ID refers to the client ID. Each client ID represents one requester
> and a set of IDs makes up one SWGROUP. For example there are two display
> controllers, each being three clients, yet there's only two SWGROUPs for
> them:
> 
> 	SWGROUP dc: - display0a
> 	            - display0b
> 	            - display0c
> 
> 	SWGROUP dcb: - display0ab
> 	             - display0bb
> 	             - display0cb
> 
> Each SWGROUP can be assigned a separate address space. That is, an
> address space for SWGROUP dc will apply for all clients in that group.
> However it can be additionally specified for which clients in a group
> IOMMU address translation should be enabled. Theoretically one could
> enable translation only for display0a and display0b, but not for
> display0c. I don't immediately see when that would be desirable, hence
> the driver always enables translation for all clients in a group.

Ok, I see. So specifying both SWGROUP and ID in DT would let you do
that, but you don't think anybody ever wants it.

> > I generally dislike having SoC-specific lookup tables in drivers for
> > things that could be fully described in DT, so I really want to understand
> > why you added those tables.
> 
> What exactly is your concern with these tables? Is it the size? Roughly
> these are 3.5 KiB of .rodata per SoC generation. If that's too much I
> think I could perhaps get that down to something like half of it using
> bitfields (reg could be 11 bits, bit/shift could be 5, so that both fit
> into a single u16 instead of 2 u32). That'll probably increase code size
> a little to extract these fields, but that may be a suitable tradeoff.

I'm not too worried about the size. My concern is mainly the fact that
you have to add a new file for each new SoC, which is not necessary if
we can find a way to make the binding generic enough to cover any variant.

It's less of an issue for the particular implementation from NVIDIA,
since you have a relatively low number of SoC designs coming out,
compared to some of the other vendors, and in particular compared to
the ARM SMMU that will be shared across many vendors. I definitely
would not want to see per-SoC files for each chip that contains an
ARM SMMU, and I also would like to see IOMMU drivers in general being
implemented in a similar fashion, and your driver sets an example that
others should better not copy IMHO ;-)

> > The .smmu.reg and .smmu.bit settings seem to directly correlate to the
> > .id value in the table, so I'm assuming you could derive one from the
> > other if it was advantageous.
> 
> This could be done using something like this:
> 
> 	unsigned int reg = 0x228 + (id / 32) * 4;
> 	unsigned int bit = id % 32;
> 
> However there are a couple of entries that are special, notably the PTC
> and MPCORE clients. Those have a SWGROUP but none of these enable bits,
> so I can't immediately think of a way to do that nicely. We'd probably
> have to hard-code checks for that, making it somewhat brittle. Having
> this all in a simple lookup table makes this really straightforward and
> easy to verify for correctness by comparing the table to internal files
> that specify these registers.

I see.

> > The name is only used for debugging purposes and we could also leave
> > it out if we had a way to kill off the other fields of the table.
> 
> The name is also used in error messages to clarify where an error comes
> from. That's been very essential in tracking down various issues in some
> drivers (DRM primarily).

Couldn't you print the DT path of the DMA master to provide the
same information?

> > In the comment you mention that the latency allowance is set up to the
> > hardware defaults, so I guess in the current version this is not required,
> > but the .la.reg/shift/mask fields can't be determined from the other
> > fields. This means in order to implement the extension for changing the
> > setting in the future, you'd have to add some other way of communicating
> > it without the table, right?
> 
> Right, there is no programmatic way to derive the latency allowance data
> from the ID. They're more or less randomly spread across the registers.
> 
> Note that there are other knobs in the memory controller associated with
> the memory clients, though they aren't in use (yet). Having this kind of
> table gives us a very nice way of collecting the per-client data in one
> location and then use that for register programming.

So what does "latency allowance" actually mean, and why do you need to
access this field from Linux? I'm not too familiar with memory controller
concepts, so I'm sorry if this is an RTFM question.

> > Back to the swgroup/id fields: if you need both, would it make sense
> > to have #iommu-cells = <2> and pass both from the dma master device?
> 
> I think they are really orthogonal settings. Client IDs have nothing to
> do with the IOMMU specifically. The IOMMU is primarily concerned with
> the SWGROUP. The driver only makes sure that IOVA translation is enabled
> for each client in a SWGROUP.

Hmm, where are they actually needed both at the same time then? IOW,
why can't the IOMMU driver just deal with the SWGROUP and ignore the
table with the ID and LA values, leaving that to the memory controller
driver?

	Arnd
Thierry Reding Dec. 2, 2014, 11:51 a.m. UTC | #4
On Mon, Dec 01, 2014 at 06:55:01PM +0100, Arnd Bergmann wrote:
> On Monday 01 December 2014 16:05:54 Thierry Reding wrote:
> > On Fri, Nov 28, 2014 at 11:20:19PM +0100, Arnd Bergmann wrote:
> > > On Wednesday 26 November 2014, Thierry Reding wrote:
> > > > Hi ARM SoC maintainers,
> > > > 
> > > > The following changes since commit 0690cbd2e55a72a8eae557c389d1a136ed9fa142:
> > > > 
> > > >   powerpc/iommu: Rename iommu_[un]map_sg functions (2014-11-18 11:30:01 +0100)
> > > > 
> > > > are available in the git repository at:
> > > > 
> > > >   git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux.git tags/tegra-for-3.19-iommu-rebased
> > > > 
> > > > for you to fetch changes up to 37d21f8918f2aa2289036cc778ee188951e53288:
> > > > 
> > > >   memory: Add NVIDIA Tegra memory controller support (2014-11-26 09:45:02 +0100)
> > > > 
> > > > Here's a version of the pull request rebased on top of Joerg's core
> > > > branch from the IOMMU tree. It has the advantage of having resolved
> > > > the merge conflicts and the disadvantage of pulling in v3.18-rc3.
> > > > 
> > > 
> > > Hi Thierry,
> > > 
> > > sorry for taking my time on this, I had some concerns when I first
> > > looked at the memory controller driver and binding (after I got your
> > > pull request), and wanted to be sure everything is fine before I merge
> > > it.
> > > 
> > > Pulling in v3.18-rc3 is not a problem, since the next/soc branch is
> > > already based on -rc3 (you can see that yourself if you check out the
> > > branch from the arm-soc tree), and I'm absolutely fine with merging
> > > either v1 or v2 of this, the conflict seems harmless, and so does 
> > > pulling in the dependency.
> > > 
> > > The extra attention for the binding is because the base iommu binding
> > > is still very new and gives some options to driver authors, and I want
> > > to make sure we are setting a good example for others to look at.
> > > My problem is mainly lack of understanding for your hardware requirements,
> > > so hopefully you can clarify this all and I can just merge it, or we
> > > find an easy way to change the code if I have stumbled on a problem.
> > > 
> > > My main question is about the relation between 'swgroup' and 'id'
> > > settings. What do the two things mean respectively, does your driver
> > > define how they get combined, or is each master hardcoded to both?
> > 
> > An ID refers to the client ID. Each client ID represents one requester
> > and a set of IDs makes up one SWGROUP. For example there are two display
> > controllers, each being three clients, yet there's only two SWGROUPs for
> > them:
> > 
> > 	SWGROUP dc: - display0a
> > 	            - display0b
> > 	            - display0c
> > 
> > 	SWGROUP dcb: - display0ab
> > 	             - display0bb
> > 	             - display0cb
> > 
> > Each SWGROUP can be assigned a separate address space. That is, an
> > address space for SWGROUP dc will apply for all clients in that group.
> > However it can be additionally specified for which clients in a group
> > IOMMU address translation should be enabled. Theoretically one could
> > enable translation only for display0a and display0b, but not for
> > display0c. I don't immediately see when that would be desirable, hence
> > the driver always enables translation for all clients in a group.
> 
> Ok, I see. So specifying both SWGROUP and ID in DT would let you do
> that, but you don't think anybody ever wants it.

Correct. I don't think it makes sense and it'd require significant work
in a driver to know which buffers need translation and which don't. In
fact I don't think you could make that work with the current frameworks
because both the IOMMU and DMA APIs operate at a struct device level. A
client could therefore not be distinguished from another.

> > > I generally dislike having SoC-specific lookup tables in drivers for
> > > things that could be fully described in DT, so I really want to understand
> > > why you added those tables.
> > 
> > What exactly is your concern with these tables? Is it the size? Roughly
> > these are 3.5 KiB of .rodata per SoC generation. If that's too much I
> > think I could perhaps get that down to something like half of it using
> > bitfields (reg could be 11 bits, bit/shift could be 5, so that both fit
> > into a single u16 instead of 2 u32). That'll probably increase code size
> > a little to extract these fields, but that may be a suitable tradeoff.
> 
> I'm not too worried about the size. My concern is mainly the fact that
> you have to add a new file for each new SoC, which is not necessary if
> we can find a way to make the binding generic enough to cover any variant.

There's nothing generic about this. The way you control the various
groups might be the same, but the set of groups and clients varies per
generation. The tables not only allow us to have simple parameterized
code to handle the various generations, they also give us a way to
specify which groups and clients are valid for a given generation and
to sanity-check the DT content.
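
As a small illustration of that sanity-checking point, a SWGROUP number
coming in from the DT could simply be rejected if the per-SoC table doesn't
know about it (hypothetical helper, names made up for illustration):

	static bool tegra_smmu_swgroup_valid(const struct tegra_smmu_soc *soc,
					     unsigned int swgroup)
	{
		unsigned int i;

		/* a SWGROUP is valid only if some client in the table uses it */
		for (i = 0; i < soc->num_clients; i++)
			if (soc->clients[i].swgroup == swgroup)
				return true;

		return false;
	}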

> It's less of an issue for the particular implementation from NVIDIA,
> since you have a relatively low number of SoC designs coming out,
> compared to some of the other vendors, and in particular compared to
> the ARM SMMU that will be shared across many vendors. I definitely
> would not want to see per-SoC files for each chip that contains an
> ARM SMMU, and I also would like to see IOMMU drivers in general being
> implemented in a similar fashion, and your driver sets an example that
> others should better not copy IMHO ;-)

Actually I disagree. I think this sets exactly the right example. Since
none of these associations are configurable or change from board to
board, everything is implied by the compatible property. Adding this
data to DT would therefore be completely redundant.

Moving the data to DT would also mean that we would be adding a table of
registers to the DT. There used to be a time when there was consensus at
least that that was a really bad idea. Has that changed in the last few
years?

I do agree that it makes sense not to have these tables for completely
generic IOMMUs like the ARM SMMU, though, but like you said, those are
generic because they are designed to be used in a variety of SoCs. The
Tegra SMMU is very tightly coupled with each specific SoC generation.

> > > The .smmu.reg and .smmu.bit settings seem to directly correlate to the
> > > .id value in the table, so I'm assuming you could derive one from the
> > > other if it was advantageous.
> > 
> > This could be done using something like this:
> > 
> > 	unsigned int reg = 0x228 + (id / 32) * 4;
> > 	unsigned int bit = id % 32;
> > 
> > However there are a couple of entries that are special, notably the PTC
> > and MPCORE clients. Those have a SWGROUP but none of these enable bits,
> > so I can't immediately think of a way to do that nicely. We'd probably
> > have to hard-code checks for that, making it somewhat brittle. Having
> > this all in a simple lookup table makes this really straightforward and
> > easy to verify for correctness by comparing the table to internal files
> > that specify these registers.
> 
> I see.
> 
> > > The name is only used for debugging purposes and we could also leave
> > > it out if we had a way to kill off the other fields of the table.
> > 
> > The name is also used in error messages to clarify where an error comes
> > from. That's been very essential in tracking down various issues in some
> > drivers (DRM primarily).
> 
> Couldn't you print the DT path of the DMA master to provide the
> same information?

To do that I would need to keep a mapping of struct device_node * to
client ID. And it would only give me a single name even for devices with
multiple clients.

Going back to the DRM example I gave earlier, it's been extremely
valuable for the memory controller to spit out the exact name of the
faulting client, because it immediately indicates whether there's
something wrong with the root window (window A) or one of the overlays
(window B or C).

> > > In the comment you mention that the latency allowance is set up to the
> > > hardware defaults, so I guess in the current version this is not required,
> > > but the .la.reg/shift/mask fields can't be determined from the other
> > > fields. This means in order to implement the extension for changing the
> > > setting in the future, you'd have to add some other way of communicating
> > > it without the table, right?
> > 
> > Right, there is no programmatic way to derive the latency allowance data
> > from the ID. They're more or less randomly spread across the registers.
> > 
> > Note that there are other knobs in the memory controller associated with
> > the memory clients, though they aren't in use (yet). Having this kind of
> > table gives us a very nice way of collecting the per-client data in one
> > location and then use that for register programming.
> 
> So what does "latency allowance" actually mean, and why do you need to
> access this field from Linux? I'm not too familiar with memory controller
> concepts, so I'm sorry if this is an RTFM question.

Essentially this is an input to the memory controller's arbitration unit
and sets an upper bound on the latency of outstanding requests. Under
memory bandwidth pressure this allows the memory controller to give
priority to requests from clients with lower latency tolerance. Display
for example would usually have a fairly low tolerance because stalling
requests for too long will cause the pixel data FIFO to underrun and
cause visual artifacts.

So to make this work the goal is for memory clients to register their
bandwidth needs with some central entity that uses it along with the
memory client's FIFO depth to compute the value for the latency
allowance register. The unit for this value is an internal tick from the
memory controller, in turn based on the EMC frequency. Therefore this is
usually only computed once for a given configuration and can remain the
same across EMC frequency changes.

There are patches in the works to add support for EMC frequency scaling
and also latency allowance programming.
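
Purely as an illustration of that computation (the names and the formula
here are assumptions, not the actual implementation): the allowance is
roughly the time a client's FIFO can cover at its requested bandwidth,
expressed in memory controller ticks.

	static unsigned int la_ticks(unsigned int fifo_depth_bytes,
				     unsigned int bandwidth_mb_per_s,
				     unsigned int tick_ns)
	{
		/* MB/s is roughly bytes per microsecond, so this is the drain time in ns */
		unsigned int drain_ns = (fifo_depth_bytes * 1000) / bandwidth_mb_per_s;

		/* express the latency tolerance in memory controller ticks */
		return drain_ns / tick_ns;
	}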

> > > Back to the swgroup/id fields: if you need both, would it make sense
> > > to have #iommu-cells = <2> and pass both from the dma master device?
> > 
> > I think they are really orthogonal settings. Client IDs have nothing to
> > do with the IOMMU specifically. The IOMMU is primarily concerned with
> > the SWGROUP. The driver only makes sure that IOVA translation is enabled
> > for each client in a SWGROUP.
> 
> Hmm, where are they actually needed both at the same time then? IOW,
> why can't the IOMMU driver just deal with the SWGROUP and ignore the
> table with the ID and LA values, leaving that to the memory controller
> driver?

The IOMMU driver still needs the list of clients so that it can enable
translation for each of the clients pertaining to a given group. That
is, if display controller A requests IOVA translation this maps to
TEGRA_SWGROUP_DC, but internally in order for the translation to
actually happen we also need to enable IOVA translation for each of the
clients in that group.

Note that we do share the memory clients table between the MC and the
IOMMU drivers, so there's not actually any duplication there.
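
For illustration, that per-client enable step could be a simple walk over
the shared client table, along these lines (a sketch assuming the table
layout discussed earlier; structure and accessor names are placeholders
rather than the actual driver code):

	/* enable IOVA translation for every client in the given SWGROUP */
	static void smmu_enable_swgroup(struct tegra_smmu *smmu, unsigned int swgroup)
	{
		unsigned int i;

		for (i = 0; i < smmu->soc->num_clients; i++) {
			const struct tegra_mc_client *client = &smmu->soc->clients[i];
			u32 value;

			if (client->swgroup != swgroup)
				continue;

			value = smmu_readl(smmu, client->regs.smmu.reg);
			value |= BIT(client->regs.smmu.bit);
			smmu_writel(smmu, value, client->regs.smmu.reg);
		}
	}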

Thierry
Arnd Bergmann Dec. 4, 2014, 1:36 p.m. UTC | #5
On Tuesday 02 December 2014 12:51:24 Thierry Reding wrote:
> On Mon, Dec 01, 2014 at 06:55:01PM +0100, Arnd Bergmann wrote:
> > On Monday 01 December 2014 16:05:54 Thierry Reding wrote:
> > > On Fri, Nov 28, 2014 at 11:20:19PM +0100, Arnd Bergmann wrote:
> > > > On Wednesday 26 November 2014, Thierry Reding wrote:
> > > An ID refers to the client ID. Each client ID represents one requester
> > > and a set of IDs makes up one SWGROUP. For example there are two display
> > > controllers, each being three clients, yet there's only two SWGROUPs for
> > > them:
> > > 
> > > 	SWGROUP dc: - display0a
> > > 	            - display0b
> > > 	            - display0c
> > > 
> > > 	SWGROUP dcb: - display0ab
> > > 	             - display0bb
> > > 	             - display0cb
> > > 
> > > Each SWGROUP can be assigned a separate address space. That is, an
> > > address space for SWGROUP dc will apply for all clients in that group.
> > > However it can be additionally specified for which clients in a group
> > > IOMMU address translation should be enabled. Theoretically one could
> > > enable translation only for display0a and display0b, but not for
> > > display0c. I don't immediately see when that would be desirable, hence
> > > the driver always enables translation for all clients in a group.
> > 
> > Ok, I see. So specifying both SWGROUP and ID in DT would let you do
> > that, but you don't think anybody ever wants it.
> 
> Correct. I don't think it makes sense and it'd require significant work
> in a driver to know which buffers need translation and which don't. In
> fact I don't think you could make that work with the current frameworks
> because both the IOMMU and DMA APIs operate at a struct device level. A
> client could therefore not be distinguished from another.

I was assuming that you'd have one 'struct device' per client in all
cases, so you'd have a unique association between a swgroup/id tuple
and the device pointer that you pass into the dma-mapping and IOMMU APIs.

> > It's less of an issue for the particular implementation from NVIDIA,
> > since you have a relatively low number of SoC designs coming out,
> > compared to some of the other vendors, and in particular compared to
> > the ARM SMMU that will be shared across many vendors. I definitely
> > would not want to see per-SoC files for each chip that contains an
> > ARM SMMU, and I also would like to see IOMMU drivers in general being
> > implemented in a similar fashion, and your driver sets an example that
> > others should better not copy IMHO ;-)
> 
> Actually I disagree. I think this sets exactly the right example. Since
> none of these associations are configurable or change from board to
> board, everything is implied by the compatible property. Adding this
> data to DT would therefore be completely redundant.
> 
> Moving the data to DT would also mean that we would be adding a table of
> registers to the DT. There used to be a time when there was consensus at
> least that that was a really bad idea. Has that changed in the last few
> years?

I wasn't thinking of adding the entire table to the IOMMU node, that would
indeed achieve nothing compared to having the table in the driver source.

What I was trying to get at was whether you could make it work without
a table whatsoever, by doing the DT IOMMU reference on the ID level rather
than the more coarse SWGROUP level, and doing the memory controller
settings (latency allowance) separately from that.

> > > > The name is only used for debugging purposes and we could also leave
> > > > it out if we had a way to kill off the other fields of the table.
> > > 
> > > The name is also used in error messages to clarify where an error comes
> > > from. That's been very essential in tracking down various issues in some
> > > drivers (DRM primarily).
> > 
> > Couldn't you print the DT path of the DMA master to provide the
> > same information?
> 
> To do that I would need to keep a mapping of struct device_node * to
> client ID. And it would only give me a single name even for devices with
> multiple clients.

As above, my expectation was to have a separate device node per client.

> Going back to the DRM example I gave earlier, it's been extremely
> valuable for the memory controller to spit out the exact name of the
> faulting client, because it immediately indicates whether there's
> something wrong with the root window (window A) or one of the overlays
> (window B or C).

I'm not entirely buying this. You can clearly print the number and
have the person debugging the driver manually look up the ID in a table,
even if you don't have a separate struct device per ID. Obviously
printing a cleartext identifier is a bit nicer, but you wouldn't
add the table just for this purpose, just like we don't have tables
to map irq/gpio/dma/... numbers to strings in general.

> > > > In the comment you mention that the latency allowance is set up to the
> > > > hardware defaults, so I guess in the current version this is not required,
> > > > but the .la.reg/shift/mask fields can't be determined from the other
> > > > fields. This means in order to implement the extension for changing the
> > > > setting in the future, you'd have to add some other way of communicating
> > > > it without the table, right?
> > > 
> > > Right, there is no programmatic way to derive the latency allowance data
> > > from the ID. They're more or less randomly spread across the registers.
> > > 
> > > Note that there are other knobs in the memory controller associated with
> > > the memory clients, though they aren't in use (yet). Having this kind of
> > > table gives us a very nice way of collecting the per-client data in one
> > > location and then use that for register programming.
> > 
> > So what does "latency allowance" actually mean, and why do you need to
> > access this field from Linux? I'm not too familiar with memory controller
> > concepts, so I'm sorry if this is an RTFM question.
> 
> Essentially this is an input to the memory controller's arbitration unit
> and sets an upper bound on the latency of outstanding requests. Under
> memory bandwidth pressure this allows the memory controller to give
> priority to requests from clients with lower latency tolerance. Display
> for example would usually have a fairly low tolerance because stalling
> requests for too long will cause the pixel data FIFO to underrun and
> cause visual artifacts.
> 
> So to make this work the goal is for memory clients to register their
> bandwidth needs with some central entity that uses it along with the
> memory client's FIFO depth to compute the value for the latency
> allowance register. The unit for this value is an internal tick from the
> memory controller, in turn based on the EMC frequency. Therefore this is
> usually only computed once for a given configuration and can remain the
> same across EMC frequency changes.
> 
> There are patches in the works to add support for EMC frequency scaling
> and also latency allowance programming.

Ok, I see. The part that I'm missing here is how the client driver
knows its number, as you write that we don't have a device node per
client. Do you have a particular binding in mind already?

> > > > Back to the swgroup/id fields: if you need both, would it make sense
> > > > to have #iommu-cells = <2> and pass both from the dma master device?
> > > 
> > > I think they are really orthogonal settings. Client IDs have nothing to
> > > do with the IOMMU specifically. The IOMMU is primarily concerned with
> > > the SWGROUP. The driver only makes sure that IOVA translation is enabled
> > > for each client in a SWGROUP.
> > 
> > Hmm, where are they actually needed both at the same time then? IOW,
> > why can't the IOMMU driver just deal with the SWGROUP and ignore the
> > table with the ID and LA values, leaving that to the memory controller
> > driver?
> 
> The IOMMU driver still needs the list of clients so that it can enable
> translation for each of the clients pertaining to a given group. That
> is, if display controller A requests IOVA translation this maps to
> TEGRA_SWGROUP_DC, but internally in order for the translation to
> actually happen we also need to enable IOVA translation for each of the
> clients in that group.
> 
> Note that we do share the memory clients table between the MC and the
> IOMMU drivers, so there's not actually any duplication there.

This again comes down to the question of whether you have one device
node per client or one device node per group, so let's focus on that
question and then make a decision based on that. The other discussion
points are at this stage not important, so if you can convince me that
doing one node per group is better than one node per id, I'd be happy
to take your patches (with a summary of the discussion added to the
merge commit).

The IOMMU core explicitly understands the concept of IOMMU groups that
share a translation between multiple clients, and this would seem like
the best fit for the hardware you describe, but evidently you came to
a different conclusion, as you have more knowledge of the details behind
it. I can see that most groups have only one ID, but there are a few
that have up to four clients:

$ git show 37d21f8918f2aa2289036cc778ee188951e53288 | grep swgroup | uniq -c  | grep -v reg.= | grep -v 1 | sort | uniq -c
      1       2 +               .swgroup = TEGRA_SWGROUP_A9AVP,
      1       2 +               .swgroup = TEGRA_SWGROUP_EMUCIF,
      2       2 +               .swgroup = TEGRA_SWGROUP_G2,
      1       2 +               .swgroup = TEGRA_SWGROUP_GPU,
      3       2 +               .swgroup = TEGRA_SWGROUP_HC,
      2       2 +               .swgroup = TEGRA_SWGROUP_NV,
      6       2 +               .swgroup = TEGRA_SWGROUP_PPCS,
      2       2 +               .swgroup = TEGRA_SWGROUP_TSEC,
      1       2 +               .swgroup = TEGRA_SWGROUP_VIC,
      2       2 +               .swgroup = TEGRA_SWGROUP_XUSB_DEV,
      2       2 +               .swgroup = TEGRA_SWGROUP_XUSB_HOST,
      2       3 +               .swgroup = TEGRA_SWGROUP_EPP,
      1       3 +               .swgroup = TEGRA_SWGROUP_ISP2,
      1       3 +               .swgroup = TEGRA_SWGROUP_ISP2B,
      1       3 +               .swgroup = TEGRA_SWGROUP_MPE,
      1       4 +               .swgroup = TEGRA_SWGROUP_NV,
      6       4 +               .swgroup = TEGRA_SWGROUP_VDE,
      2       4 +               .swgroup = TEGRA_SWGROUP_VI,

Are all of these devices where you'd naturally describe each
group as a single device node in DT?

	Arnd
Thierry Reding Dec. 4, 2014, 2:43 p.m. UTC | #6
We discussed this on IRC and came to the conclusion that this approach
(encoding the table in the driver) was indeed the best for this
particular type of setup. For the record I'll try to explain the same
here and provide more details.

On Thu, Dec 04, 2014 at 02:36:18PM +0100, Arnd Bergmann wrote:
> On Tuesday 02 December 2014 12:51:24 Thierry Reding wrote:
> > On Mon, Dec 01, 2014 at 06:55:01PM +0100, Arnd Bergmann wrote:
> > > On Monday 01 December 2014 16:05:54 Thierry Reding wrote:
> > > > On Fri, Nov 28, 2014 at 11:20:19PM +0100, Arnd Bergmann wrote:
> > > > > On Wednesday 26 November 2014, Thierry Reding wrote:
> > > > An ID refers to the client ID. Each client ID represents one requester
> > > > and a set of IDs makes up one SWGROUP. For example there are two display
> > > > controllers, each being three clients, yet there's only two SWGROUPs for
> > > > them:
> > > > 
> > > > 	SWGROUP dc: - display0a
> > > > 	            - display0b
> > > > 	            - display0c
> > > > 
> > > > 	SWGROUP dcb: - display0ab
> > > > 	             - display0bb
> > > > 	             - display0cb
> > > > 
> > > > Each SWGROUP can be assigned a separate address space. That is, an
> > > > address space for SWGROUP dc will apply for all clients in that group.
> > > > However it can be additionally specified for which clients in a group
> > > > IOMMU address translation should be enabled. Theoretically one could
> > > > enable translation only for display0a and display0b, but not for
> > > > display0c. I don't immediately see when that would be desirable, hence
> > > > the driver always enables translation for all clients in a group.
> > > 
> > > Ok, I see. So specifying both SWGROUP and ID in DT would let you do
> > > that, but you don't think anybody ever wants it.
> > 
> > Correct. I don't think it makes sense and it'd require significant work
> > in a driver to know which buffers need translation and which don't. In
> > fact I don't think you could make that work with the current frameworks
> > because both the IOMMU and DMA APIs operate at a struct device level. A
> > client could therefore not be distinguished from another.
> 
> I was assuming that you'd have one 'struct device' per client in all
> cases, so you'd have a unique association between a swgroup/id tuple
> and the device pointer that you pass into the dma-mapping and IOMMU APIs.

The majority of devices have two clients: one for read transactions,
another for write transactions. These are typically named <module>r and
<module>w, respectively. But each such module is a single device and
represented by a single device tree node.

The display controllers are somewhat exceptional in that they only read
data, so there are no write clients. But they also have a couple of
clients, one for each display window (or overlay). Like you said, this
looks really like each client is a unidirectional special-purpose DMA
master.

Some examples:

	HDA: 2 clients: hdar and hdaw
	SATA: 2 clients: satar and sataw
	DC: 6 clients: display{0a,0b,0c,hc,t,d}
	DCB: 4 clients: display{0ab,0bb,0cb,hcb}

Each of those is a single IP block, and each has a SWGROUP that contains
the set of all the memory clients.

> > > It's less of an issue for the particular implementation from NVIDIA,
> > > since you have a relatively low number of SoC designs coming out,
> > > compared to some of the other vendors, and in particular compared to
> > > the ARM SMMU that will be shared across many vendors. I definitely
> > > would not want to see per-SoC files for each chip that contains an
> > > ARM SMMU, and I also would like to see IOMMU drivers in general being
> > > implemented in a similar fashion, and your driver sets an example that
> > > others should better not copy IMHO ;-)
> > 
> > Actually I disagree. I think this sets exactly the right example. Since
> > none of these associations are configurable or change from board to
> > board, everything is implied by the compatible property. Adding this
> > data to DT would therefore be completely redundant.
> > 
> > Moving the data to DT would also mean that we would be adding a table of
> > registers to the DT. There used to be a time when there was consensus at
> > least that that was a really bad idea. Has that changed in the last few
> > years?
> 
> I wasn't thinking of adding the entire table to the IOMMU node, that would
> indeed achieve nothing compared to having the table in the driver source.
> 
> What I was trying to get at was whether you could make it work without
> a table whatsoever, by doing the DT IOMMU reference on the ID level rather
> than the more coarse SWGROUP level, and doing the memory controller
> settings (latency allowance) separately from that.

As the examples above show, there is no 1:1 relationship between
clients and devices. Rather one device usually has multiple clients, but
all clients that pertain to one device are arranged in one SWGROUP.

> > > > > In the comment you mention that the latency allowance is set up to the
> > > > > hardware defaults, so I guess in the current version this is not required,
> > > > > but the .la.reg/shift/mask fields can't be determined from the other
> > > > > fields. This means in order to implement the extension for changing the
> > > > > setting in the future, you'd have to add some other way of communicating
> > > > > it without the table, right?
> > > > 
> > > > Right, there is no programmatic way to derive the latency allowance data
> > > > from the ID. They're more or less randomly spread across the registers.
> > > > 
> > > > Note that there are other knobs in the memory controller associated with
> > > > the memory clients, though they aren't in use (yet). Having this kind of
> > > > table gives us a very nice way of collecting the per-client data in one
> > > > location and then use that for register programming.
> > > 
> > > So what does "latency allowance" actually mean, and why do you need to
> > > access this field from Linux? I'm not too familiar with memory controller
> > > concepts, so I'm sorry if this is an RTFM question.
> > 
> > Essentially this is an input to the memory controller's arbitration unit
> > and sets an upper bound on the latency of outstanding requests. Under
> > memory bandwidth pressure this allows the memory controller to give
> > priority to requests from clients with lower latency tolerance. Display
> > for example would usually have a fairly low tolerance because stalling
> > requests for too long will cause the pixel data FIFO to underrun and
> > cause visual artifacts.
> > 
> > So to make this work the goal is for memory clients to register their
> > bandwidth needs with some central entity that uses it along with the
> > memory client's FIFO depth to compute the value for the latency
> > allowance register. The unit for this value is an internal tick from the
> > memory controller, in turn based on the EMC frequency. Therefore this is
> > usually only computed once for a given configuration and can remain the
> > same across EMC frequency changes.
> > 
> > There are patches in the works to add support for EMC frequency scaling
> > and also latency allowance programming.
> 
> Ok, I see. The part that I'm missing here is how the client driver
> knows its number, as you write that we don't have a device node per
> client. Do you have a particular binding in mind already?

I was thinking that each device tree node would get an additional
property, maybe something like the below. I'm not sure if it makes sense
to turn this into a generic binding, given that this is likely to be
implemented fairly differently on other SoCs, or perhaps other SoCs
don't even have an equivalent of it.

	mc: memory-controller@70019000 {
		compatible = "nvidia,tegra124-mc";
		...

		#nvidia,memory-client-cells = <1>;
	};

	dc@54200000 {
		compatible = "nvidia,tegra124-dc";

		...

		nvidia,memory-client = <&mc 1 &mc 3 &mc 5 &mc 16 &mc 90 &mc 115>;
	};

Maybe we'd even need something like nvidia,memory-client-names so that
drivers can determine for which specific clients to set the latency
allowance.
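
On the consumer side, a driver could then walk such a property with the
standard phandle helpers, something like the untested sketch below (note
that nvidia,memory-client is only the proposal above, not an existing
binding):

	struct of_phandle_args args;
	int index = 0;

	/* iterate over the proposed nvidia,memory-client entries */
	while (of_parse_phandle_with_args(dev->of_node, "nvidia,memory-client",
					  "#nvidia,memory-client-cells", index,
					  &args) == 0) {
		unsigned int client = args.args[0];

		/* 'client' is the memory client ID to set the latency allowance for */
		of_node_put(args.np);
		index++;
	}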

> > > > > Back to the swgroup/id fields: if you need both, would it make sense
> > > > > to have #iommu-cells = <2> and pass both from the dma master device?
> > > > 
> > > > I think they are really orthogonal settings. Client IDs have nothing to
> > > > do with the IOMMU specifically. The IOMMU is primarily concerned with
> > > > the SWGROUP. The driver only makes sure that IOVA translation is enabled
> > > > for each client in a SWGROUP.
> > > 
> > > Hmm, where are they actually needed both at the same time then? IOW,
> > > why can't the IOMMU driver just deal with the SWGROUP and ignore the
> > > table with the ID and LA values, leaving that to the memory controller
> > > driver?
> > 
> > The IOMMU driver still needs the list of clients so that it can enable
> > translation for each of the clients pertaining to a given group. That
> > is, if display controller A requests IOVA translation this maps to
> > TEGRA_SWGROUP_DC, but internally in order for the translation to
> > actually happen we also need to enable IOVA translation for each of the
> > clients in that group.
> > 
> > Note that we do share the memory clients table between the MC and the
> > IOMMU drivers, so there's not actually any duplication there.
> 
> This again comes down to the question of whether you have one device
> node per client or one device node per group, so let's focus on that
> question and then make a decision based on that. The other discussion
> points are at this stage not important, so if you can convince me that
> doing one node per group is better than one node per id, I'd be happy
> to take your patches (with a summary of the discussion added to the
> merge commit).
> 
> The IOMMU core explicitly understands the concept of IOMMU groups that
> share a translation between multiple clients, and this would seem like
> the best fit for the hardware you describe, but evidently you came to
> a different conclusion, as you have more knowledge of the details behind
> it. I can see that most groups have only one ID, but there are a few
> that have up to four clients:
> 
> $ git show 37d21f8918f2aa2289036cc778ee188951e53288 | grep swgroup | uniq -c  | grep -v reg.= | grep -v 1 | sort | uniq -c
>       1       2 +               .swgroup = TEGRA_SWGROUP_A9AVP,
>       1       2 +               .swgroup = TEGRA_SWGROUP_EMUCIF,
>       2       2 +               .swgroup = TEGRA_SWGROUP_G2,
>       1       2 +               .swgroup = TEGRA_SWGROUP_GPU,
>       3       2 +               .swgroup = TEGRA_SWGROUP_HC,
>       2       2 +               .swgroup = TEGRA_SWGROUP_NV,
>       6       2 +               .swgroup = TEGRA_SWGROUP_PPCS,
>       2       2 +               .swgroup = TEGRA_SWGROUP_TSEC,
>       1       2 +               .swgroup = TEGRA_SWGROUP_VIC,
>       2       2 +               .swgroup = TEGRA_SWGROUP_XUSB_DEV,
>       2       2 +               .swgroup = TEGRA_SWGROUP_XUSB_HOST,
>       2       3 +               .swgroup = TEGRA_SWGROUP_EPP,
>       1       3 +               .swgroup = TEGRA_SWGROUP_ISP2,
>       1       3 +               .swgroup = TEGRA_SWGROUP_ISP2B,
>       1       3 +               .swgroup = TEGRA_SWGROUP_MPE,
>       1       4 +               .swgroup = TEGRA_SWGROUP_NV,
>       6       4 +               .swgroup = TEGRA_SWGROUP_VDE,
>       2       4 +               .swgroup = TEGRA_SWGROUP_VI,
> 
> Are all of these devices where you'd naturally describe each
> group as a single device node in DT?

I think that listing mixes together the mc_clients and swgroups tables.
Other than that, yes, each of these groups would be a single device tree
node. See also the examples above.

Thierry
Arnd Bergmann Dec. 4, 2014, 3:03 p.m. UTC | #7
On Thursday 04 December 2014 15:43:56 Thierry Reding wrote:
> We discussed this on IRC and came to the conclusion that this approach
> (encoding the table in the driver) was indeed the best for this
> particular type of setup. For the record I'll try to explain the same
> here and provide more details.

Yes, thanks a lot!

> > I was assuming that you'd have one 'struct device' per client in all
> > cases, so you'd have a unique association between a swgroup/id tuple
> > and the device pointer that you pass into the dma-mapping and IOMMU APIs.
> 
> The majority of devices have two clients: one for read transactions,
> another for write transactions. These are typically named <module>r and
> <module>w, respectively. But each such module is a single device and
> represented by a single device tree node.
> 
> The display controllers are somewhat exceptional in that they only read
> data, so there are no write clients. But they also have a couple of
> clients, one for each display window (or overlay). Like you said, this
> looks really like each client is a unidirectional special-purpose DMA
> master.
> 
> Some examples:
> 
> 	HDA: 2 clients: hdar and hdaw
> 	SATA: 2 clients: satar and sataw
> 	DC: 6 clients: display{0a,0b,0c,hc,t,d}
> 	DCB: 4 clients: display{0ab,0bb,0cb,hcb}
> 
> Each of those is a single IP block, and each has a SWGROUP that contains
> the set of all the memory clients.

Yep

> > > There are patches in the works to add support for EMC frequency scaling
> > > and also latency allowance programming.
> > 
> > Ok, I see. The part that I'm missing here is how the client driver
> > knows its number, as you write that we don't have a device node per
> > client. Do you have a particular binding in mind already?
> 
> I was thinking that each device tree node would get an additional
> property, maybe something like the below. I'm not sure if it makes sense
> to turn this into a generic binding, given that this is likely to be
> implemented fairly differently on other SoCs, or perhaps other SoCs
> don't even have an equivalent of it.
> 
> 	mc: memory-controller@70019000 {
> 		compatible = "nvidia,tegra124-mc";
> 		...
> 
> 		#nvidia,memory-client-cells = <1>;
> 	};
> 
> 	dc@54200000 {
> 		compatible = "nvidia,tegra124-dc";
> 
> 		...
> 
> 		nvidia,memory-client = <&mc 1 &mc 3 &mc 5 &mc 16 &mc 90 &mc 115>;
> 	};
> 
> Maybe we'd even need something like nvidia,memory-client-names so that
> drivers can determine for which specific clients to set the latency
> allowance.

Yes. We'd have to discuss the binding with some of the other SoC maintainers
to see if they might have a use for this too, but this certainly makes
sense.

	Arnd