
[v4,1/5] dt-bindings: PCI: brcmstb: brcm,{enable-l1ss,completion-timeout-us} props

Message ID 20230428223500.23337-2-jim2101024@gmail.com
State Not Applicable, archived
Series PCI: brcmstb: Configure appropriate HW CLKREQ# mode

Checks

Context Check Description
robh/checkpatch success
robh/patch-applied success
robh/dtbs-check warning
robh/dt-meta-schema success

Commit Message

Jim Quinlan April 28, 2023, 10:34 p.m. UTC
This commit introduces two new properties:

brcm,enable-l1ss (bool):

  The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
  requires the driver probe() to deliberately place the HW in one of three
  CLKREQ# modes:

  (a) CLKREQ# driven by the RC unconditionally
  (b) CLKREQ# driven by the EP for ASPM L0s, L1
  (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).

  The HW+driver can tell the difference between downstream devices that
  need (a) and (b), but does not know when to configure (c).  All devices
  should work fine when the driver chooses (a) or (b), but (c) may be
  desired to realize the extra power savings that L1SS offers.  So we
  introduce the boolean "brcm,enable-l1ss" property to inform the driver
  that (c) is desired.  Setting this property only makes sense when the
  downstream device is L1SS-capable and the OS is configured to activate
  this mode (e.g. policy==superpowersave).

  This property is already present in the Raspbian version of Linux, but the
  upstream driver implementation that follows adds more details and discerns
  between (a) and (b).

brcm,completion-timeout-us (u32):

  Our HW will cause a CPU abort on any PCIe transaction completion timeout
  abort error.  It therefore makes sense to increase the timeout value for
  this type of error in the hope that the response is merely delayed.
  Further, L1SS-capable devices may have a long L1SS exit time and may
  require a custom timeout value: we've been asked by our customers to make
  this configurable for just this reason.
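  For illustration only, a board DT opting in to mode (c) with a custom
  timeout might look like this (the node name, unit address, and values
  below are hypothetical examples, not taken from this patch):

    pcie@8b20000 {
        compatible = "brcm,bcm7445-pcie";
        /* ... reg, interrupts, ranges, and other required properties ... */
        brcm,enable-l1ss;
        brcm,completion-timeout-us = <3000000>;
    };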

Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
Reviewed-by: Rob Herring <robh@kernel.org>
---
 .../devicetree/bindings/pci/brcm,stb-pcie.yaml   | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

Comments

Bjorn Helgaas April 30, 2023, 7:10 p.m. UTC | #1
On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> This commit introduces two new properties:

Doing two things makes this a candidate for splitting into two
patches, as you've already done for the driver support.  They seem
incidentally related but not indivisible.

> brcm,enable-l1ss (bool):
> 
>   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
>   requires the driver probe() to deliberately place the HW one of three
>   CLKREQ# modes:
> 
>   (a) CLKREQ# driven by the RC unconditionally
>   (b) CLKREQ# driven by the EP for ASPM L0s, L1
>   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> 
>   The HW+driver can tell the difference between downstream devices that
>   need (a) and (b), but does not know when to configure (c).  All devices
>   should work fine when the driver chooses (a) or (b), but (c) may be
>   desired to realize the extra power savings that L1SS offers.  So we
>   introduce the boolean "brcm,enable-l1ss" property to inform the driver
>   that (c) is desired.  Setting this property only makes sense when the
>   downstream device is L1SS-capable and the OS is configured to activate
>   this mode (e.g. policy==superpowersave).

Is this related to the existing generic "supports-clkreq" property?  I
guess not, because supports-clkreq looks like a description of CLKREQ
signal routing, while brcm,enable-l1ss looks like a description of
what kind of downstream device is present?

What bad things would happen if the driver always configured (c)?

Other platforms don't require this, and having to edit the DT based on
what PCIe device is plugged in seems wrong.  If brcmstb does need it,
that suggests a hardware defect.  If we need this to work around a
defect, that's OK, but we should acknowledge the defect so we can stop
using this for future hardware that doesn't need it.

Maybe the name should be more specific to CLKREQ#, since this doesn't
actually *enable* L1SS; apparently it's just one of the pieces needed
to enable L1SS?

>   This property is already present in the Raspian version of Linux, but the
>   upstream driver implementaion that follows adds more details and discerns
>   between (a) and (b).

s/implementaion/implementation/

> brcm,completion-timeout-us (u32):
> 
>   Our HW will cause a CPU abort on any PCI transaction completion abort
>   error.  It makes sense then to increase the timeout value for this type
>   of error in hopes that the response is merely delayed.  Further,
>   L1SS-capable devices may have a long L1SS exit time and may require a
>   custom timeout value: we've been asked by our customers to make this
>   configurable for just this reason.

I asked before whether this should be made generic and not
brcm-specific, since completion timeouts are generic PCIe things.  I
didn't see any discussion, but Rob reviewed this so I guess it's OK
as-is.

Is there something unique about brcm that requires this?  I think it's
common for PCIe Completion Timeouts to cause CPU aborts.

Surely other drivers need to configure the completion timeout, but
pcie-rcar-host.c and pcie-rcar-ep.c are the only ones I could find.
Maybe the brcmstb power-up values are just too small?  Does the
correct value need to be in DT, or could it just be built into the
driver?
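
(If it were built into the driver, the usual pattern would be a driver
default that an optional DT property overrides -- sketch only; np is the
controller's of_node and the final helper is hypothetical:

    u32 timeout_us = 1000000;   /* built-in driver default */

    /* optional DT override; timeout_us is untouched if the property is absent */
    of_property_read_u32(np, "brcm,completion-timeout-us", &timeout_us);
    brcm_pcie_set_completion_timeout(pcie, timeout_us);
)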

This sounds like something dependent on the downstream device
connected, which again sounds hard for users to deal with.  How would
they know what to use here?

> Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
> Reviewed-by: Rob Herring <robh@kernel.org>
> ---
>  .../devicetree/bindings/pci/brcm,stb-pcie.yaml   | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml b/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
> index 7e15aae7d69e..239cc95545bd 100644
> --- a/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
> +++ b/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
> @@ -64,6 +64,22 @@ properties:
>  
>    aspm-no-l0s: true
>  
> +  brcm,enable-l1ss:
> +    description: Indicates that PCIe L1SS power savings
> +      are desired, the downstream device is L1SS-capable, and the
> +      OS has been configured to enable this mode.  For boards
> +      using a mini-card connector, this mode may not meet the
> +      TCRLon maximum time of 400ns, as specified in 3.2.5.2.5
> +      of the PCI Express Mini CEM 2.0 specification.
> +    type: boolean
> +
> +  brcm,completion-timeout-us:
> +    description: Number of microseconds before PCI transaction
> +      completion timeout abort is signalled.
> +    minimum: 16
> +    default: 1000000
> +    maximum: 19884107
> +
>    brcm,scb-sizes:
>      description: u64 giving the 64bit PCIe memory
>        viewport size of a memory controller.  There may be up to
> -- 
> 2.17.1
> 
> 
Jim Quinlan May 3, 2023, 2:38 p.m. UTC | #2
On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > This commit introduces two new properties:
>
> Doing two things makes this a candidate for splitting into two
> patches, as you've already done for the driver support.  They seem
> incidentally related but not indivisible.
>
> > brcm,enable-l1ss (bool):
> >
> >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> >   requires the driver probe() to deliberately place the HW one of three
> >   CLKREQ# modes:
> >
> >   (a) CLKREQ# driven by the RC unconditionally
> >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> >
> >   The HW+driver can tell the difference between downstream devices that
> >   need (a) and (b), but does not know when to configure (c).  All devices
> >   should work fine when the driver chooses (a) or (b), but (c) may be
> >   desired to realize the extra power savings that L1SS offers.  So we
> >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> >   that (c) is desired.  Setting this property only makes sense when the
> >   downstream device is L1SS-capable and the OS is configured to activate
> >   this mode (e.g. policy==superpowersave).
>
> Is this related to the existing generic "supports-clkreq" property?  I
> guess not, because supports-clkreq looks like a description of CLKREQ
> signal routing, while brcm,enable-l1ss looks like a description of
> what kind of downstream device is present?

It is related -- I thought about using it -- but it is not helpful for our
needs.  Both cases (b) and (c) assume "supports-clkreq", and our HW needs
to know the difference between them.  Further, we have a register that
tells us whether the endpoint device has requested CLKREQ#, so we already
have this information.

As an aside, I would think that the "supports-clkreq" property should be in
the port-driver or endpoint node.

>
> What bad things would happen if the driver always configured (c)?
Well, our driver has traditionally only supported (b) and our existing
boards have been designed with this in mind.  I would not want to switch
modes w/o the user/customer/engineer opting in to do so.  Further, the
PCIe HW engineer told me defaulting to (c) was a bad idea and was "asking
for trouble".  Note that the commit's comment has that warning about L1SS
mode not meeting this 400ns spec, and I suspect that many of our existing
designs have bumped into that.

But to answer your question, I haven't found a scenario that did not work
by setting mode (c).  That doesn't mean they are not out there.

>
> Other platforms don't require this, and having to edit the DT based on
> what PCIe device is plugged in seems wrong.  If brcmstb does need it,
> that suggests a hardware defect.  If we need this to work around a
> defect, that's OK, but we should acknowledge the defect so we can stop
> using this for future hardware that doesn't need it.

All devices should work w/o the user having to change the DT.  Only if they
desire L1SS must they add the "brcm,enable-l1ss" property.

Now there is the case where Cyril found a regression, but recent
investigation indicates that this particular failure was due to the RPi
CM4 using a "beta" eeprom version -- after updating, it works fine.

>
> Maybe the name should be more specific to CLKREQ#, since this doesn't
> actually *enable* L1SS; apparently it's just one of the pieces needed
> to enable L1SS?

The other pieces are: (a) policy == POWERSUPERSAVE and (b) an L1SS-capable
device, which seem unrelated and are outside the scope of the driver.

The RPi Raspbian folks have been using "brcm,enable-l1ss" for a while now
and I would prefer to keep that name for compatibility.

>
> >   This property is already present in the Raspian version of Linux, but the
> >   upstream driver implementaion that follows adds more details and discerns
> >   between (a) and (b).
>
> s/implementaion/implementation/
>
> > brcm,completion-timeout-us (u32):
> >
> >   Our HW will cause a CPU abort on any PCI transaction completion abort
> >   error.  It makes sense then to increase the timeout value for this type
> >   of error in hopes that the response is merely delayed.  Further,
> >   L1SS-capable devices may have a long L1SS exit time and may require a
> >   custom timeout value: we've been asked by our customers to make this
> >   configurable for just this reason.
>
> I asked before whether this should be made generic and not
> brcm-specific, since completion timeouts are generic PCIe things.  I
> didn't see any discussion, but Rob reviewed this so I guess it's OK
> as-is.
I am going to drop it; thanks for questioning its purpose, and I
apologize for the noise.

Regards,
Jim Quinlan
Broadcom STB
>
> Is there something unique about brcm that requires this?  I think it's
> common for PCIe Completion Timeouts to cause CPU aborts.
>
> Surely other drivers need to configure the completion timeout, but
> pcie-rcar-host.c and pcie-rcar-ep.c are the only ones I could find.
> Maybe the brcmstb power-up values are just too small?  Does the
> correct value need to be in DT, or could it just be built into the
> driver?
>
> This sounds like something dependent on the downstream device
> connected, which again sounds hard for users to deal with.  How would
> they know what to use here?
>
> > Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
> > Reviewed-by: Rob Herring <robh@kernel.org>
> > ---
> >  .../devicetree/bindings/pci/brcm,stb-pcie.yaml   | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> >
> > diff --git a/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml b/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
> > index 7e15aae7d69e..239cc95545bd 100644
> > --- a/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
> > +++ b/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
> > @@ -64,6 +64,22 @@ properties:
> >
> >    aspm-no-l0s: true
> >
> > +  brcm,enable-l1ss:
> > +    description: Indicates that PCIe L1SS power savings
> > +      are desired, the downstream device is L1SS-capable, and the
> > +      OS has been configured to enable this mode.  For boards
> > +      using a mini-card connector, this mode may not meet the
> > +      TCRLon maximum time of 400ns, as specified in 3.2.5.2.5
> > +      of the PCI Express Mini CEM 2.0 specification.
> > +    type: boolean
> > +
> > +  brcm,completion-timeout-us:
> > +    description: Number of microseconds before PCI transaction
> > +      completion timeout abort is signalled.
> > +    minimum: 16
> > +    default: 1000000
> > +    maximum: 19884107
> > +
> >    brcm,scb-sizes:
> >      description: u64 giving the 64bit PCIe memory
> >        viewport size of a memory controller.  There may be up to
> > --
> > 2.17.1
> >
> >
Bjorn Helgaas May 3, 2023, 6:07 p.m. UTC | #3
On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > brcm,enable-l1ss (bool):
> > >
> > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > >   requires the driver probe() to deliberately place the HW one of three
> > >   CLKREQ# modes:
> > >
> > >   (a) CLKREQ# driven by the RC unconditionally
> > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > >
> > >   The HW+driver can tell the difference between downstream devices that
> > >   need (a) and (b), but does not know when to configure (c).  All devices
> > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > >   desired to realize the extra power savings that L1SS offers.  So we
> > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > >   that (c) is desired.  Setting this property only makes sense when the
> > >   downstream device is L1SS-capable and the OS is configured to activate
> > >   this mode (e.g. policy==superpowersave).
> ...

> > What bad things would happen if the driver always configured (c)?
>
> Well, our driver has traditionally only supported (b) and our
> existing boards have been designed with this in mind.  I would not
> want to switch modes w'o the user/customer/engineer opting-in to do
> so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> bad idea and was "asking for trouble".  Note that the commit's
> comment has that warning about L1SS mode not meeting this 400ns
> spec, and I suspect that many of our existing designs have bumped
> into that.
> 
> But to answer your question, I haven't found a scenario that did not
> work by setting mode (c).  That doesn't mean they are not out there.
> 
> > Other platforms don't require this, and having to edit the DT
> > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > does need it, that suggests a hardware defect.  If we need this to
> > work around a defect, that's OK, but we should acknowledge the
> > defect so we can stop using this for future hardware that doesn't
> > need it.
> 
> All devices should work w/o the user having to change the DT.  Only
> if they desire L1SS must they add the "brcm,enable-l1ss" property.

I thought the DT was supposed to describe properties of the
*hardware*, but this seems more like "use this untested clkreq
configuration," which maybe could be done via a module parameter?
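
(A sketch of that alternative -- the parameter name here is hypothetical,
not an existing knob:

    static bool enable_l1ss;   /* opt in to bidirectional CLKREQ#, mode (c) */
    module_param(enable_l1ss, bool, 0444);
    MODULE_PARM_DESC(enable_l1ss, "Assume downstream device handles CLKREQ# for L1SS");
)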

Whatever the mechanism, it looks like patch 2/5 makes brcmstb
advertise the appropriate ASPM and L1SS stuff in the PCIe and L1SS
Capabilities so the OS will do the right thing without any core
changes.

> > Maybe the name should be more specific to CLKREQ#, since this
> > doesn't actually *enable* L1SS; apparently it's just one of the
> > pieces needed to enable L1SS?
> 
> The other pieces are:  (a) policy == POWERSUPERSAVE and (b) an
> L1SS-capable device, which seem unrelated and are out of the scope
> of the driver.

Right.  Of course, if ASPM and L1SS support are advertised, the OS can
still choose whether to enable them, and that choice can change at
run-time.
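(E.g., the policy can be flipped at run-time via sysfs -- illustrative
session:

    # cat /sys/module/pcie_aspm/parameters/policy
    [default] performance powersave powersupersave
    # echo powersupersave > /sys/module/pcie_aspm/parameters/policy
)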

> The RPi Raspian folks have been using "brcm,enable-l1ss"  for a
> while now and I would prefer to keep that name for compatibility.

BTW, the DT comment in the patch refers to PCIe Mini CEM 2.0 sec
3.2.5.2.5.  I think the correct section is 3.2.5.2.2 (at least in the
r2.1 spec).

There's also a footnote to the effect that T_CRLon is allowed to
exceed 400ns when LTR is supported and enabled.  L1.2 requires LTR, so
if L1.2 is the case where brcmstb exceeds 400ns, that might not be a
problem.

Bjorn
Jim Quinlan May 3, 2023, 9:38 p.m. UTC | #4
On Wed, May 3, 2023 at 2:07 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> > On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > > brcm,enable-l1ss (bool):
> > > >
> > > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > > >   requires the driver probe() to deliberately place the HW one of three
> > > >   CLKREQ# modes:
> > > >
> > > >   (a) CLKREQ# driven by the RC unconditionally
> > > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > > >
> > > >   The HW+driver can tell the difference between downstream devices that
> > > >   need (a) and (b), but does not know when to configure (c).  All devices
> > > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > > >   desired to realize the extra power savings that L1SS offers.  So we
> > > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > > >   that (c) is desired.  Setting this property only makes sense when the
> > > >   downstream device is L1SS-capable and the OS is configured to activate
> > > >   this mode (e.g. policy==superpowersave).
> > ...
>
> > > What bad things would happen if the driver always configured (c)?
> >
> > Well, our driver has traditionally only supported (b) and our
> > existing boards have been designed with this in mind.  I would not
> > want to switch modes w'o the user/customer/engineer opting-in to do
> > so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> > bad idea and was "asking for trouble".  Note that the commit's
> > comment has that warning about L1SS mode not meeting this 400ns
> > spec, and I suspect that many of our existing designs have bumped
> > into that.
> >
> > But to answer your question, I haven't found a scenario that did not
> > work by setting mode (c).  That doesn't mean they are not out there.
> >
> > > Other platforms don't require this, and having to edit the DT
> > > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > > does need it, that suggests a hardware defect.  If we need this to
> > > work around a defect, that's OK, but we should acknowledge the
> > > defect so we can stop using this for future hardware that doesn't
> > > need it.
> >
> > All devices should work w/o the user having to change the DT.  Only
> > if they desire L1SS must they add the "brcm,enable-l1ss" property.
>
> I thought the DT was supposed to describe properties of the
> *hardware*, but this seems more like "use this untested clkreq
> configuration," which maybe could be done via a module parameter?
Electrically, it has been tested, but specifically with L1SS-capable
devices.  What is untested AFAICT are platforms using this mode with
non-L1SS-capable devices.  I was not aware that Raspbian OS was turning
this on by default until the CM4 came out.

As far as "DT describing the HW only" goes, one doesn't have to go far to
find exceptions to the rule.  One example off the top of my head is
"linux,pci-domain" -- all it does is assign an "id" to a controller to
make life easier.  We've gone from not using it, with three controllers
no less, to using it, but the HW was the same all along.

WRT the bootline param
pci=[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]*pci:<vendor>:<device>[:<subvendor>:<subdevice>]:
this does not look compatible with vendor-specific DT options like
"brcm,enable-l1ss".  I observe that pci_dev_str_match_path() is a static
function, and I don't see a single option in pci.c that is vendor
specific.  FWIW, moving something like this to the bootline would not be
popular with our customers; for some reason they really don't like
changes to the bootline.
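
(For comparison, the existing options that take that syntax look like,
e.g., "pci=disable_acs_redir=0000:01:00.0"; the vendor:device form exists
too, but there is no hook there for a vendor-specific property like ours.)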


>
> Whatever the mechanism, it looks like patch 2/5 makes brcmstb
> advertise the appropriate ASPM and L1SS stuff in the PCIe and L1SS
> Capabilities so the OS will do the right thing without any core
> changes.
>
> > > Maybe the name should be more specific to CLKREQ#, since this
> > > doesn't actually *enable* L1SS; apparently it's just one of the
> > > pieces needed to enable L1SS?
> >
> > The other pieces are:  (a) policy == POWERSUPERSAVE and (b) an
> > L1SS-capable device, which seem unrelated and are out of the scope
> > of the driver.
>
> Right.  Of course, if ASPM and L1SS support are advertised, the OS can
> still choose whether to enable them, and that choice can change at
> run-time.
Agree.

Thanks & regards,
Jim Quinlan
Broadcom STB
>
> > The RPi Raspian folks have been using "brcm,enable-l1ss"  for a
> > while now and I would prefer to keep that name for compatibility.
>
> BTW, the DT comment in the patch refers to PCIe Mini CEM .0 sec
> 3.2.5.2.5.  I think the correct section is 3.2.5.2.2 (at least in the
> r2.1 spec).
>
> There's also a footnote to the effect that T_CRLon is allowed to
> exceed 400ns when LTR is supported and enabled.  L1.2 requires LTR, so
> if L1.2 is the case where brcmstb exceeds 400ns, that might not be a
> problem.
>
> Bjorn
Bjorn Helgaas May 3, 2023, 10:18 p.m. UTC | #5
On Wed, May 03, 2023 at 05:38:15PM -0400, Jim Quinlan wrote:
> On Wed, May 3, 2023 at 2:07 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> > > On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > > > brcm,enable-l1ss (bool):
> > > > >
> > > > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > > > >   requires the driver probe() to deliberately place the HW one of three
> > > > >   CLKREQ# modes:
> > > > >
> > > > >   (a) CLKREQ# driven by the RC unconditionally
> > > > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > > > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > > > >
> > > > >   The HW+driver can tell the difference between downstream devices that
> > > > >   need (a) and (b), but does not know when to configure (c).  All devices
> > > > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > > > >   desired to realize the extra power savings that L1SS offers.  So we
> > > > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > > > >   that (c) is desired.  Setting this property only makes sense when the
> > > > >   downstream device is L1SS-capable and the OS is configured to activate
> > > > >   this mode (e.g. policy==superpowersave).
> > > ...
> >
> > > > What bad things would happen if the driver always configured (c)?
> > >
> > > Well, our driver has traditionally only supported (b) and our
> > > existing boards have been designed with this in mind.  I would not
> > > want to switch modes w'o the user/customer/engineer opting-in to do
> > > so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> > > bad idea and was "asking for trouble".  Note that the commit's
> > > comment has that warning about L1SS mode not meeting this 400ns
> > > spec, and I suspect that many of our existing designs have bumped
> > > into that.
> > >
> > > But to answer your question, I haven't found a scenario that did not
> > > work by setting mode (c).  That doesn't mean they are not out there.
> > >
> > > > Other platforms don't require this, and having to edit the DT
> > > > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > > > does need it, that suggests a hardware defect.  If we need this to
> > > > work around a defect, that's OK, but we should acknowledge the
> > > > defect so we can stop using this for future hardware that doesn't
> > > > need it.
> > >
> > > All devices should work w/o the user having to change the DT.  Only
> > > if they desire L1SS must they add the "brcm,enable-l1ss" property.
> >
> > I thought the DT was supposed to describe properties of the
> > *hardware*, but this seems more like "use this untested clkreq
> > configuration," which maybe could be done via a module parameter?
>
> Electrically, it has been tested, but  specifically for L1SS capable
> devices.  What is untested AFAICT are platforms using this mode on
> non-L1SS capable devices.

Non-L1SS behavior is a subset of L1SS, so if you've tested with L1SS
enabled, I would think you'd be covered.

But I'm not a hardware engineer, so maybe there's some subtlety there.
The "asking for trouble" comment from your engineer is definitely
concerning, but I have no idea what's behind that.

And obviously even if we have "brcm,enable-l1ss", the user may decide
to disable L1SS administratively, so even if the Root Port and the
device both support L1SS, it may never be enabled.

> WRT bootline param
> pci=[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]*pci:<vendor>:<device>[:<subvendor>:<subdevice>]:
> this does not look compatible for vendor specific DT options like
> "brcm,enable-l1ss".  I observe that pci_dev_str_match_path() is a
> static function and I don't see a single option in pci.c  that is
> vendor specific.  FWIW, moving something like this to the bootline
> would not be popular with our customers; for some reason they really
> don't like changes to the bootline.

They prefer editing the DT?

I agree the "pci=B:D.F" stuff is a bit ugly.  Do you have multiple
slots such that you would have to apply this parameter to some but not
others?  I guess I was imagining a single-slot system where you
wouldn't need to identify the specific device because there *is* only
one.

Bjorn
Jim Quinlan May 5, 2023, 12:39 p.m. UTC | #6
On Wed, May 3, 2023 at 6:18 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Wed, May 03, 2023 at 05:38:15PM -0400, Jim Quinlan wrote:
> > On Wed, May 3, 2023 at 2:07 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> > > > On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > > > > brcm,enable-l1ss (bool):
> > > > > >
> > > > > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > > > > >   requires the driver probe() to deliberately place the HW one of three
> > > > > >   CLKREQ# modes:
> > > > > >
> > > > > >   (a) CLKREQ# driven by the RC unconditionally
> > > > > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > > > > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > > > > >
> > > > > >   The HW+driver can tell the difference between downstream devices that
> > > > > >   need (a) and (b), but does not know when to configure (c).  All devices
> > > > > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > > > > >   desired to realize the extra power savings that L1SS offers.  So we
> > > > > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > > > > >   that (c) is desired.  Setting this property only makes sense when the
> > > > > >   downstream device is L1SS-capable and the OS is configured to activate
> > > > > >   this mode (e.g. policy==superpowersave).
> > > > ...
> > >
> > > > > What bad things would happen if the driver always configured (c)?
> > > >
> > > > Well, our driver has traditionally only supported (b) and our
> > > > existing boards have been designed with this in mind.  I would not
> > > > want to switch modes w'o the user/customer/engineer opting-in to do
> > > > so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> > > > bad idea and was "asking for trouble".  Note that the commit's
> > > > comment has that warning about L1SS mode not meeting this 400ns
> > > > spec, and I suspect that many of our existing designs have bumped
> > > > into that.
> > > >
> > > > But to answer your question, I haven't found a scenario that did not
> > > > work by setting mode (c).  That doesn't mean they are not out there.
> > > >
> > > > > Other platforms don't require this, and having to edit the DT
> > > > > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > > > > does need it, that suggests a hardware defect.  If we need this to
> > > > > work around a defect, that's OK, but we should acknowledge the
> > > > > defect so we can stop using this for future hardware that doesn't
> > > > > need it.
> > > >
> > > > All devices should work w/o the user having to change the DT.  Only
> > > > if they desire L1SS must they add the "brcm,enable-l1ss" property.
> > >
> > > I thought the DT was supposed to describe properties of the
> > > *hardware*, but this seems more like "use this untested clkreq
> > > configuration," which maybe could be done via a module parameter?
> >
> > Electrically, it has been tested, but  specifically for L1SS capable
> > devices.  What is untested AFAICT are platforms using this mode on
> > non-L1SS capable devices.
>
> Non-L1SS behavior is a subset of L1SS, so if you've tested with L1SS
> enabled, I would think you'd be covered.
>
> But I'm not a hardware engineer, so maybe there's some subtlety there.
> The "asking for trouble" comment from your engineer is definitely
> concerning, but I have no idea what's behind that.
>
> And obviously even if we have "brcm,enable-l1ss", the user may decide
> to disable L1SS administratively, so even if the Root Port and the
> device both support L1SS, it may be never be enabled.
>
> > WRT bootline param
> > pci=[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]*pci:<vendor>:<device>[:<subvendor>:<subdevice>]:
> > this does not look compatible for vendor specific DT options like
> > "brcm,enable-l1ss".  I observe that pci_dev_str_match_path() is a
> > static function and I don't see a single option in pci.c  that is
> > vendor specific.  FWIW, moving something like this to the bootline
> > would not be popular with our customers; for some reason they really
> > don't like changes to the bootline.
>
> They prefer editing the DT?
>
> I agree the "pci=B:D.F" stuff is a bit ugly.  Do you have multiple
> slots such that you would have to apply this parameter to some but not
> others?  I guess I was imagining a single-slot system where you
> wouldn't need to identify the specific device because there *is* only
> one.
Hi Bjorn,

We typically have a single device per controller.  Occasionally, there
is a mismatch in needs, and the customer adds a switch to their board
until we can add another controller to the next rev of the SOC.

Some of our customers have a habit of doing "rmmod, sleep, insmod" on
the RC driver for various uncommon reasons, so "linux,pci-domain" was
quite useful for them to simplify their shell scripts.
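
(Think of scripts along these lines -- the module name is shown only as
an example:

    rmmod pcie_brcmstb
    sleep 1
    modprobe pcie_brcmstb
)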

As far as preferring DT: customers have to modify the DT already*, so
they really don't want to be modifying two separate configurations (DT
and boot params).  Often, the DT blob is stored in a different partition
or medium than the bootline params, and it is a hassle to configure both
and keep them in "sync".

Regards,
Jim Quinlan
Broadcom STB

* We have a tooling system that we and our customers use which takes a
high-level configuration file and generates a custom DT blob and
bootloader for a particular SOC/board(s).  And we provide the default
config, so our customers only have to change a few things.  For example,
adding "-l1ss" to the existing "pcie -n 0" line will do what you'd
expect.  And this is actually not a good example of the tool's power.


>
> Bjorn
Bjorn Helgaas May 5, 2023, 1:34 p.m. UTC | #7
On Fri, May 05, 2023 at 08:39:52AM -0400, Jim Quinlan wrote:
> On Wed, May 3, 2023 at 6:18 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Wed, May 03, 2023 at 05:38:15PM -0400, Jim Quinlan wrote:
> > > On Wed, May 3, 2023 at 2:07 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> > > > > On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > > > > > brcm,enable-l1ss (bool):
> > > > > > >
> > > > > > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > > > > > >   requires the driver probe() to deliberately place the HW one of three
> > > > > > >   CLKREQ# modes:
> > > > > > >
> > > > > > >   (a) CLKREQ# driven by the RC unconditionally
> > > > > > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > > > > > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > > > > > >
> > > > > > >   The HW+driver can tell the difference between downstream devices that
> > > > > > >   need (a) and (b), but does not know when to configure (c).  All devices
> > > > > > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > > > > > >   desired to realize the extra power savings that L1SS offers.  So we
> > > > > > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > > > > > >   that (c) is desired.  Setting this property only makes sense when the
> > > > > > >   downstream device is L1SS-capable and the OS is configured to activate
> > > > > > >   this mode (e.g. policy==superpowersave).

Just noticed that this should be "policy==powersupersave"

> > > > > > What bad things would happen if the driver always configured (c)?
> > > > >
> > > > > Well, our driver has traditionally only supported (b) and our
> > > > > existing boards have been designed with this in mind.  I would not
> > > > > want to switch modes w'o the user/customer/engineer opting-in to do
> > > > > so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> > > > > bad idea and was "asking for trouble".  Note that the commit's
> > > > > comment has that warning about L1SS mode not meeting this 400ns
> > > > > spec, and I suspect that many of our existing designs have bumped
> > > > > into that.
> > > > >
> > > > > But to answer your question, I haven't found a scenario that did not
> > > > > work by setting mode (c).  That doesn't mean they are not out there.
> > > > >
> > > > > > Other platforms don't require this, and having to edit the DT
> > > > > > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > > > > > does need it, that suggests a hardware defect.  If we need this to
> > > > > > work around a defect, that's OK, but we should acknowledge the
> > > > > > defect so we can stop using this for future hardware that doesn't
> > > > > > need it.
> > > > >
> > > > > All devices should work w/o the user having to change the DT.  Only
> > > > > if they desire L1SS must they add the "brcm,enable-l1ss" property.
> > > >
> > > > I thought the DT was supposed to describe properties of the
> > > > *hardware*, but this seems more like "use this untested clkreq
> > > > configuration," which maybe could be done via a module parameter?
> > >
> > > Electrically, it has been tested, but  specifically for L1SS capable
> > > devices.  What is untested AFAICT are platforms using this mode on
> > > non-L1SS capable devices.
> >
> > Non-L1SS behavior is a subset of L1SS, so if you've tested with L1SS
> > enabled, I would think you'd be covered.

I think this point is still worth considering.  Maybe your hardware
folks have an opinion here?

> > But I'm not a hardware engineer, so maybe there's some subtlety there.
> > The "asking for trouble" comment from your engineer is definitely
> > concerning, but I have no idea what's behind that.
> >
> > And obviously even if we have "brcm,enable-l1ss", the user may decide
> > to disable L1SS administratively, so even if the Root Port and the
> > device both support L1SS, it may be never be enabled.
> >
> > > WRT bootline param
> > > pci=[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]*pci:<vendor>:<device>[:<subvendor>:<subdevice>]:
> > > this does not look compatible for vendor specific DT options like
> > > "brcm,enable-l1ss".  I observe that pci_dev_str_match_path() is a
> > > static function and I don't see a single option in pci.c  that is
> > > vendor specific.  FWIW, moving something like this to the bootline
> > > would not be popular with our customers; for some reason they really
> > > don't like changes to the bootline.
> >
> > They prefer editing the DT?
> >
> > I agree the "pci=B:D.F" stuff is a bit ugly.  Do you have multiple
> > slots such that you would have to apply this parameter to some but not
> > others?  I guess I was imagining a single-slot system where you
> > wouldn't need to identify the specific device because there *is* only
> > one.
> 
> We typically have a single device per controller.  Occasionally,
> there is a mismatch in needs, and the customer adds a switch to
> their board until we can add another controller to the next rev of
> the SOC.

If you add a switch, it sounds like there's still only a single link
between the brcmstb controller and the switch.  I'm assuming
"brcm,enable-l1ss" only affects CLKREQ# on that link and it has
nothing to do with links below the switch.

(c) must be the standard PCIe situation because no other systems
require the user to configure CLKREQ# based on the type of card
plugged in.  And we don't know about any actual problems that happen
in (c) with any cards.

That makes me think the ideal end state would be to use (c) by
default so everything just works like every other platform with no
fuss.  If there's some situation that requires (a) or (b), there could
be a property or parameter to select *that* because that would be the
unusual case.

But obviously the comment from the hardware engineer:

> > > > > Further, the PCIe HW engineer told me defaulting to (c) was
> > > > > a bad idea and was "asking for trouble".

would need to be understood before doing that.

Bjorn
Jim Quinlan May 5, 2023, 2:40 p.m. UTC | #8
On Fri, May 5, 2023 at 9:34 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Fri, May 05, 2023 at 08:39:52AM -0400, Jim Quinlan wrote:
> > On Wed, May 3, 2023 at 6:18 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Wed, May 03, 2023 at 05:38:15PM -0400, Jim Quinlan wrote:
> > > > On Wed, May 3, 2023 at 2:07 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> > > > > > On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > > > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > > > > > > brcm,enable-l1ss (bool):
> > > > > > > >
> > > > > > > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > > > > > > >   requires the driver probe() to deliberately place the HW one of three
> > > > > > > >   CLKREQ# modes:
> > > > > > > >
> > > > > > > >   (a) CLKREQ# driven by the RC unconditionally
> > > > > > > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > > > > > > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > > > > > > >
> > > > > > > >   The HW+driver can tell the difference between downstream devices that
> > > > > > > >   need (a) and (b), but does not know when to configure (c).  All devices
> > > > > > > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > > > > > > >   desired to realize the extra power savings that L1SS offers.  So we
> > > > > > > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > > > > > > >   that (c) is desired.  Setting this property only makes sense when the
> > > > > > > >   downstream device is L1SS-capable and the OS is configured to activate
> > > > > > > >   this mode (e.g. policy==superpowersave).
>
> Just noticed that this should be "policy==powersupersave"
>
> > > > > > > What bad things would happen if the driver always configured (c)?
> > > > > >
> > > > > > Well, our driver has traditionally only supported (b) and our
> > > > > > existing boards have been designed with this in mind.  I would not
> > > > > > want to switch modes w'o the user/customer/engineer opting-in to do
> > > > > > so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> > > > > > bad idea and was "asking for trouble".  Note that the commit's
> > > > > > comment has that warning about L1SS mode not meeting this 400ns
> > > > > > spec, and I suspect that many of our existing designs have bumped
> > > > > > into that.
> > > > > >
> > > > > > But to answer your question, I haven't found a scenario that did not
> > > > > > work by setting mode (c).  That doesn't mean they are not out there.
> > > > > >
> > > > > > > Other platforms don't require this, and having to edit the DT
> > > > > > > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > > > > > > does need it, that suggests a hardware defect.  If we need this to
> > > > > > > work around a defect, that's OK, but we should acknowledge the
> > > > > > > defect so we can stop using this for future hardware that doesn't
> > > > > > > need it.
> > > > > >
> > > > > > All devices should work w/o the user having to change the DT.  Only
> > > > > > if they desire L1SS must they add the "brcm,enable-l1ss" property.
> > > > >
> > > > > I thought the DT was supposed to describe properties of the
> > > > > *hardware*, but this seems more like "use this untested clkreq
> > > > > configuration," which maybe could be done via a module parameter?
> > > >
> > > > Electrically, it has been tested, but  specifically for L1SS capable
> > > > devices.  What is untested AFAICT are platforms using this mode on
> > > > non-L1SS capable devices.
> > >
> > > Non-L1SS behavior is a subset of L1SS, so if you've tested with L1SS
> > > enabled, I would think you'd be covered.
>
> I think this point is still worth considering.  Maybe your hardware
> folks have an opinion here?
See below.
>
> > > But I'm not a hardware engineer, so maybe there's some subtlety there.
> > > The "asking for trouble" comment from your engineer is definitely
> > > concerning, but I have no idea what's behind that.
> > >
> > > And obviously even if we have "brcm,enable-l1ss", the user may decide
> > > to disable L1SS administratively, so even if the Root Port and the
> > > device both support L1SS, it may be never be enabled.
> > >
> > > > WRT bootline param
> > > > pci=[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]*pci:<vendor>:<device>[:<subvendor>:<subdevice>]:
> > > > this does not look compatible for vendor specific DT options like
> > > > "brcm,enable-l1ss".  I observe that pci_dev_str_match_path() is a
> > > > static function and I don't see a single option in pci.c  that is
> > > > vendor specific.  FWIW, moving something like this to the bootline
> > > > would not be popular with our customers; for some reason they really
> > > > don't like changes to the bootline.
> > >
> > > They prefer editing the DT?
> > >
> > > I agree the "pci=B:D.F" stuff is a bit ugly.  Do you have multiple
> > > slots such that you would have to apply this parameter to some but not
> > > others?  I guess I was imagining a single-slot system where you
> > > wouldn't need to identify the specific device because there *is* only
> > > one.
> >
> > We typically have a single device per controller.  Occasionally,
> > there is a mismatch in needs, and the customer adds a switch to
> > their board until we can add another controller to the next rev of
> > the SOC.
>
> If you add a switch, it sounds like there's still only a single link
> between the brcmstb controller and the switch.  I'm assuming
> "brcm,enable-l1ss" only affects CLKREQ# on that link and it has
> nothing to do with links below the switch.
>
> (c) must be the standard PCIe situation because no other systems
> require the user to configure CLKREQ# based on the type of card
> plugged in.  And we don't know about any actual problems that happen
> in (c) with any cards.
>
> That makes me think the ideal end state would be to use (c) by
> default so everything just works like every other platform with no
> fuss.  If there's some situation that requires (a) or (b), there could
> be a property or parameter to select *that* because that would be the
> unusual case.
>
> But obviously the comment from the hardware engineer:
>
> > > > > > Further, the PCIe HW engineer told me defaulting to (c) was
> > > > > > a bad idea and was "asking for trouble".
>
> would need to be understood before doing that.

Keep in mind that our controller is already unusual in that it requires
this manual mode setting, whereas other controllers don't seem to have
this issue.  As far as discussing this with the HW person goes, either I
am not understanding the reason(s) or he is not explaining them well.
We've tried a couple of times, FWIW.  At any rate, one thing he has
repeated with emphasis is that only L1SS-capable devices should be using
our L1SS mode.  For me, this feedback trumps all other choices.

Finally, experience has made me quite wary of silently changing a default
for all of our existing STB/CM customers, regardless of the fact that the
CM4 Raspbian folks have been using L1SS mode (most likely as a workaround
to boot with cards lacking a functional CLKREQ# pin).

Regards,
Jim Quinlan
Broadcom STB

>
> Bjorn
Bjorn Helgaas May 5, 2023, 2:54 p.m. UTC | #9
On Fri, May 05, 2023 at 10:40:20AM -0400, Jim Quinlan wrote:
> On Fri, May 5, 2023 at 9:34 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Fri, May 05, 2023 at 08:39:52AM -0400, Jim Quinlan wrote:
> > > On Wed, May 3, 2023 at 6:18 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Wed, May 03, 2023 at 05:38:15PM -0400, Jim Quinlan wrote:
> > > > > On Wed, May 3, 2023 at 2:07 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > > On Wed, May 03, 2023 at 10:38:57AM -0400, Jim Quinlan wrote:
> > > > > > > On Sun, Apr 30, 2023 at 3:10 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > > > > On Fri, Apr 28, 2023 at 06:34:55PM -0400, Jim Quinlan wrote:
> > > > > > > > > brcm,enable-l1ss (bool):
> > > > > > > > >
> > > > > > > > >   The Broadcom STB/CM PCIe HW -- a core that is also used by RPi SOCs --
> > > > > > > > >   requires the driver probe() to deliberately place the HW one of three
> > > > > > > > >   CLKREQ# modes:
> > > > > > > > >
> > > > > > > > >   (a) CLKREQ# driven by the RC unconditionally
> > > > > > > > >   (b) CLKREQ# driven by the EP for ASPM L0s, L1
> > > > > > > > >   (c) Bidirectional CLKREQ#, as used for L1 Substates (L1SS).
> > > > > > > > >
> > > > > > > > >   The HW+driver can tell the difference between downstream devices that
> > > > > > > > >   need (a) and (b), but does not know when to configure (c).  All devices
> > > > > > > > >   should work fine when the driver chooses (a) or (b), but (c) may be
> > > > > > > > >   desired to realize the extra power savings that L1SS offers.  So we
> > > > > > > > >   introduce the boolean "brcm,enable-l1ss" property to inform the driver
> > > > > > > > >   that (c) is desired.  Setting this property only makes sense when the
> > > > > > > > >   downstream device is L1SS-capable and the OS is configured to activate
> > > > > > > > >   this mode (e.g. policy==superpowersave).
> >
> > Just noticed that this should be "policy==powersupersave"
> >
> > > > > > > > What bad things would happen if the driver always configured (c)?
> > > > > > >
> > > > > > > Well, our driver has traditionally only supported (b) and our
> > > > > > > existing boards have been designed with this in mind.  I would not
> > > > > > > want to switch modes w'o the user/customer/engineer opting-in to do
> > > > > > > so.  Further, the PCIe HW engineer told me defaulting to (c) was a
> > > > > > > bad idea and was "asking for trouble".  Note that the commit's
> > > > > > > comment has that warning about L1SS mode not meeting this 400ns
> > > > > > > spec, and I suspect that many of our existing designs have bumped
> > > > > > > into that.
> > > > > > >
> > > > > > > But to answer your question, I haven't found a scenario that did not
> > > > > > > work by setting mode (c).  That doesn't mean they are not out there.
> > > > > > >
> > > > > > > > Other platforms don't require this, and having to edit the DT
> > > > > > > > based on what PCIe device is plugged in seems wrong.  If brcmstb
> > > > > > > > does need it, that suggests a hardware defect.  If we need this to
> > > > > > > > work around a defect, that's OK, but we should acknowledge the
> > > > > > > > defect so we can stop using this for future hardware that doesn't
> > > > > > > > need it.
> > > > > > >
> > > > > > > All devices should work w/o the user having to change the DT.  Only
> > > > > > > if they desire L1SS must they add the "brcm,enable-l1ss" property.
> > > > > >
> > > > > > I thought the DT was supposed to describe properties of the
> > > > > > *hardware*, but this seems more like "use this untested clkreq
> > > > > > configuration," which maybe could be done via a module parameter?
> > > > >
> > > > > Electrically, it has been tested, but  specifically for L1SS capable
> > > > > devices.  What is untested AFAICT are platforms using this mode on
> > > > > non-L1SS capable devices.
> > > >
> > > > Non-L1SS behavior is a subset of L1SS, so if you've tested with L1SS
> > > > enabled, I would think you'd be covered.
> >
> > I think this point is still worth considering.  Maybe your hardware
> > folks have an opinion here?
> See below.
> >
> > > > But I'm not a hardware engineer, so maybe there's some subtlety there.
> > > > The "asking for trouble" comment from your engineer is definitely
> > > > concerning, but I have no idea what's behind that.
> > > >
> > > > And obviously even if we have "brcm,enable-l1ss", the user may decide
> > > > to disable L1SS administratively, so even if the Root Port and the
> > > > device both support L1SS, it may be never be enabled.
> > > >
> > > > > WRT bootline param
> > > > > pci=[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]*pci:<vendor>:<device>[:<subvendor>:<subdevice>]:
> > > > > this does not look compatible for vendor specific DT options like
> > > > > "brcm,enable-l1ss".  I observe that pci_dev_str_match_path() is a
> > > > > static function and I don't see a single option in pci.c  that is
> > > > > vendor specific.  FWIW, moving something like this to the bootline
> > > > > would not be popular with our customers; for some reason they really
> > > > > don't like changes to the bootline.
> > > >
> > > > They prefer editing the DT?
> > > >
> > > > I agree the "pci=B:D.F" stuff is a bit ugly.  Do you have multiple
> > > > slots such that you would have to apply this parameter to some but not
> > > > others?  I guess I was imagining a single-slot system where you
> > > > wouldn't need to identify the specific device because there *is* only
> > > > one.
> > >
> > > We typically have a single device per controller.  Occasionally,
> > > there is a mismatch in needs, and the customer adds a switch to
> > > their board until we can add another controller to the next rev of
> > > the SOC.
> >
> > If you add a switch, it sounds like there's still only a single link
> > between the brcmstb controller and the switch.  I'm assuming
> > "brcm,enable-l1ss" only affects CLKREQ# on that link and it has
> > nothing to do with links below the switch.
> >
> > (c) must be the standard PCIe situation because no other systems
> > require the user to configure CLKREQ# based on the type of card
> > plugged in.  And we don't know about any actual problems that happen
> > in (c) with any cards.
> >
> > That makes me think the ideal end state would be to use (c) by
> > default so everything just works like every other platform with no
> > fuss.  If there's some situation that requires (a) or (b), there could
> > be a property or parameter to select *that* because that would be the
> > unusual case.
> >
> > But obviously the comment from the hardware engineer:
> >
> > > > > > > Further, the PCIe HW engineer told me defaulting to (c) was
> > > > > > > a bad idea and was "asking for trouble".
> >
> > would need to be understood before doing that.
> 
> Keep in mind that our controller is already unusual in that it
> requires this manual mode setting whereas
> other controllers don't seem to have this issue.  As far as discussing
> this with the HW person, either I am not understanding the reason(s)
> or he is not explaining them well.  We've tried a couple of times,
> FWIW.   At any rate,  one thing he has repeated with emphasis  is that
> only l1ss capable devices should be using our l1ss mode.
> For me, this feedback  trumps all other choices.
> 
> Finally, experience has made me quite wary of silently changing a
> default for all of our STB/CM existing customers, regardless of the
> fact that the CM4 Raspian folks have been using l1ss-mode (most likely
> as a workaround to boot with cards lacking a functional CLKREQ# pin).

OK.  This seems like a pretty significant hardware deficiency, but
sounds like there's no good way around it.

Hopefully future designs will not have this issue because it really
breaks the compatibility story of PCI.

Bjorn

Patch

diff --git a/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml b/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
index 7e15aae7d69e..239cc95545bd 100644
--- a/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
+++ b/Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
@@ -64,6 +64,22 @@  properties:
 
   aspm-no-l0s: true
 
+  brcm,enable-l1ss:
+    description: Indicates that PCIe L1SS power savings
+      are desired, the downstream device is L1SS-capable, and the
+      OS has been configured to enable this mode.  For boards
+      using a mini-card connector, this mode may not meet the
+      TCRLon maximum time of 400ns, as specified in 3.2.5.2.5
+      of the PCI Express Mini CEM 2.0 specification.
+    type: boolean
+
+  brcm,completion-timeout-us:
+    description: Number of microseconds before PCI transaction
+      completion timeout abort is signalled.
+    minimum: 16
+    default: 1000000
+    maximum: 19884107
+
   brcm,scb-sizes:
     description: u64 giving the 64bit PCIe memory
       viewport size of a memory controller.  There may be up to