| Message ID | 1667451512-9655-2-git-send-email-quic_sibis@quicinc.com |
|---|---|
| State | Changes Requested, archived |
| Series | Add support for SCMI QTI Memlat Vendor Protocol |

| Context | Check | Description |
|---|---|---|
| robh/checkpatch | success | |
| robh/patch-applied | success | |
| robh/dt-meta-schema | fail | build log |
On Thu, Nov 03, 2022 at 10:28:31AM +0530, Sibi Sankar wrote:
> Add bindings support for the SCMI QTI memlat (memory latency) vendor
> protocol. The memlat vendor protocol enables the frequency scaling of
> various buses (L3/LLCC/DDR) based on the memory latency governor
> running on the CPUSS Control Processor.
>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---
>  .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
>  1 file changed, 164 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> index 1c0388da6721..efc8a5a8bffe 100644
> --- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> +++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> @@ -189,6 +189,47 @@ properties:
>          reg:
>            const: 0x18
>
> +  protocol@80:
> +    type: object
> +    properties:
> +      reg:
> +        const: 0x80
> +
> +      qcom,bus-type:
> +        $ref: /schemas/types.yaml#/definitions/uint32-array
> +        items:
> +          minItems: 1
> +        description:
> +          Identifier of the bus type to be scaled by the memlat protocol.
> +

Why is this part of the provider of the service?

> +      cpu-map:
> +        type: object
> +        description:
> +          The list of all cpu cluster configurations to be tracked by the memlat protocol
> +
> +        patternProperties:
> +          '^cluster[0-9]':
> +            type: object
> +            description:
> +              Each cluster node describes the frequency domain associated with the
> +              CPUFREQ HW engine and bandwidth requirements of the buses to be scaled.
> +
> +            properties:
> +              operating-points-v2: true
> +
> +              qcom,freq-domain:
> +                $ref: /schemas/types.yaml#/definitions/phandle-array
> +                description:
> +                  Reference to the frequency domain of the CPUFREQ HW engine
> +                items:
> +                  - items:
> +                      - description: phandle to CPUFREQ HW engine
> +                      - description: frequency domain associated with the cluster
> +
> +            required:
> +              - qcom,freq-domain
> +              - operating-points-v2
> +

I would avoid all these here as part of provider node. It should be part
of the consumer to have all these details and do what it needs to do with
any such information.
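For context on the provider/consumer split being asked about here: with the standard SCMI protocols, the protocol node under the `scmi` firmware node stays minimal, and whatever details a device needs live in that device's own (consumer) node. A generic sketch, not taken from this patch — the consumer node, its compatible, and the clock index are illustrative:

```dts
firmware {
    scmi {
        compatible = "arm,scmi";

        /* provider side: the clock protocol node only declares itself */
        scmi_clk: protocol@14 {
            reg = <0x14>;
            #clock-cells = <1>;
        };
    };
};

/* consumer side: the device that needs SCMI clock 4 says so itself */
serial@1000 {
    compatible = "vendor,example-uart";  /* illustrative */
    clocks = <&scmi_clk 4>;
};
```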
On Thu, 03 Nov 2022 10:28:31 +0530, Sibi Sankar wrote:
> Add bindings support for the SCMI QTI memlat (memory latency) vendor
> protocol. The memlat vendor protocol enables the frequency scaling of
> various buses (L3/LLCC/DDR) based on the memory latency governor
> running on the CPUSS Control Processor.
>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---
>  .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
>  1 file changed, 164 insertions(+)
>

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:

dtschema/dtc warnings/errors:
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/firmware/arm,scmi.example.dtb: scmi: mbox-names: ['tx'] is too short
	From schema: /builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
Documentation/devicetree/bindings/firmware/arm,scmi.example.dtb:0:0: /example-3/soc/mailbox@17400000: failed to match any schema with compatible: ['qcom,cpucp-mbox']

doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/patch/

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.
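The two dtc errors are separate problems. The first is that the new example declares only a 'tx' mailbox while the schema as it stood required the 'tx'/'rx' pair; the second is that 'qcom,cpucp-mbox' has no binding document of its own yet, so nothing can validate it. A minimal sketch of how the first could be silenced, assuming the mailbox controller were changed to #mbox-cells = <1> so two channels can be referenced (the channel numbers here are hypothetical):

```dts
scmi {
    compatible = "arm,scmi";

    /* name both channels so mbox-names is no longer "too short" */
    mboxes = <&cpucp_mbox 0>, <&cpucp_mbox 1>;
    mbox-names = "tx", "rx";
    shmem = <&cpu_scp_lpri>;
};
```

The second error would go away once a dedicated schema for qcom,cpucp-mbox is posted alongside this series.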
On Thu, Nov 03, 2022 at 10:28:31AM +0530, Sibi Sankar wrote:
> Add bindings support for the SCMI QTI memlat (memory latency) vendor
> protocol. The memlat vendor protocol enables the frequency scaling of
> various buses (L3/LLCC/DDR) based on the memory latency governor
> running on the CPUSS Control Processor.

I thought the interconnect binding was what provided details for bus
scaling.

>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---
>  .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
>  1 file changed, 164 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> index 1c0388da6721..efc8a5a8bffe 100644
> --- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> +++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
> @@ -189,6 +189,47 @@ properties:
>          reg:
>            const: 0x18
>
> +  protocol@80:
> +    type: object
> +    properties:
> +      reg:
> +        const: 0x80
> +
> +      qcom,bus-type:
> +        $ref: /schemas/types.yaml#/definitions/uint32-array
> +        items:
> +          minItems: 1
> +        description:
> +          Identifier of the bus type to be scaled by the memlat protocol.
> +
> +      cpu-map:

cpu-map only goes under /cpus node.

> +        type: object
> +        description:
> +          The list of all cpu cluster configurations to be tracked by the memlat protocol
> +
> +        patternProperties:
> +          '^cluster[0-9]':
> +            type: object
> +            description:
> +              Each cluster node describes the frequency domain associated with the
> +              CPUFREQ HW engine and bandwidth requirements of the buses to be scaled.
> +
> +            properties:

cpu-map nodes don't have properties.

> +              operating-points-v2: true
> +
> +              qcom,freq-domain:

Please don't add new users of this. Use the performance-domains binding
instead.

> +                $ref: /schemas/types.yaml#/definitions/phandle-array
> +                description:
> +                  Reference to the frequency domain of the CPUFREQ HW engine
> +                items:
> +                  - items:
> +                      - description: phandle to CPUFREQ HW engine
> +                      - description: frequency domain associated with the cluster
> +
> +            required:
> +              - qcom,freq-domain
> +              - operating-points-v2
> +
>  additionalProperties: false
>
>  patternProperties:
> @@ -429,4 +470,127 @@ examples:
>        };
>      };
>
> +  - |
> +    #include <dt-bindings/interrupt-controller/arm-gic.h>
> +
> +    firmware {
> +        scmi {
> +            compatible = "arm,scmi";
> +
> +            #address-cells = <1>;
> +            #size-cells = <0>;
> +
> +            mboxes = <&cpucp_mbox>;
> +            mbox-names = "tx";
> +            shmem = <&cpu_scp_lpri>;
> +
> +            scmi_memlat: protocol@80 {
> +                reg = <0x80>;
> +                qcom,bus-type = <0x2>;
> +
> +                cpu-map {
> +                    cluster0 {
> +                        qcom,freq-domain = <&cpufreq_hw 0>;
> +                        operating-points-v2 = <&cpu0_opp_table>;
> +                    };
> +
> +                    cluster1 {
> +                        qcom,freq-domain = <&cpufreq_hw 1>;
> +                        operating-points-v2 = <&cpu4_opp_table>;
> +                    };
> +
> +                    cluster2 {
> +                        qcom,freq-domain = <&cpufreq_hw 2>;
> +                        operating-points-v2 = <&cpu7_opp_table>;
> +                    };
> +                };
> +            };
> +        };
> +
> +        cpu0_opp_table: opp-table-cpu0 {
> +            compatible = "operating-points-v2";
> +
> +            cpu0_opp_300mhz: opp-300000000 {
> +                opp-hz = /bits/ 64 <300000000>;
> +                opp-peak-kBps = <9600000>;
> +            };
> +
> +            cpu0_opp_1325mhz: opp-1324800000 {
> +                opp-hz = /bits/ 64 <1324800000>;
> +                opp-peak-kBps = <33792000>;
> +            };
> +
> +            cpu0_opp_2016mhz: opp-2016000000 {
> +                opp-hz = /bits/ 64 <2016000000>;
> +                opp-peak-kBps = <48537600>;
> +            };
> +        };
> +
> +        cpu4_opp_table: opp-table-cpu4 {
> +            compatible = "operating-points-v2";
> +
> +            cpu4_opp_691mhz: opp-691200000 {
> +                opp-hz = /bits/ 64 <691200000>;
> +                opp-peak-kBps = <9600000>;
> +            };
> +
> +            cpu4_opp_941mhz: opp-940800000 {
> +                opp-hz = /bits/ 64 <940800000>;
> +                opp-peak-kBps = <17817600>;
> +            };
> +
> +            cpu4_opp_2611mhz: opp-2611200000 {
> +                opp-hz = /bits/ 64 <2611200000>;
> +                opp-peak-kBps = <48537600>;
> +            };
> +        };
> +
> +        cpu7_opp_table: opp-table-cpu7 {
> +            compatible = "operating-points-v2";
> +
> +            cpu7_opp_806mhz: opp-806400000 {
> +                opp-hz = /bits/ 64 <806400000>;
> +                opp-peak-kBps = <9600000>;
> +            };
> +
> +            cpu7_opp_2381mhz: opp-2380800000 {
> +                opp-hz = /bits/ 64 <2380800000>;
> +                opp-peak-kBps = <44851200>;
> +            };
> +
> +            cpu7_opp_2515mhz: opp-2515200000 {
> +                opp-hz = /bits/ 64 <2515200000>;
> +                opp-peak-kBps = <48537600>;
> +            };
> +        };
> +    };
> +
> +
> +    soc {
> +        #address-cells = <2>;
> +        #size-cells = <2>;
> +
> +        cpucp_mbox: mailbox@17400000 {
> +            compatible = "qcom,cpucp-mbox";
> +            reg = <0x0 0x17c00000 0x0 0x10>, <0x0 0x18590300 0x0 0x700>;
> +            interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
> +            #mbox-cells = <0>;
> +        };
> +
> +        sram@18509400 {
> +            compatible = "mmio-sram";
> +            reg = <0x0 0x18509400 0x0 0x400>;
> +            no-memory-wc;
> +
> +            #address-cells = <1>;
> +            #size-cells = <1>;
> +            ranges = <0x0 0x0 0x18509400 0x400>;
> +
> +            cpu_scp_lpri: scp-sram-section@0 {
> +                compatible = "arm,scmi-shmem";
> +                reg = <0x0 0x80>;
> +            };
> +        };
> +    };
> +
>  ...
> --
> 2.7.4
>
>
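For reference, a minimal sketch of the performance-domains alternative Rob points to, per the generic performance-domain binding: the provider gains a #performance-domain-cells property and each cluster references it in place of qcom,freq-domain. The cpufreq node's compatible and unit address are illustrative; the domain numbers are carried over from the example:

```dts
/* provider side: the cpufreq hardware declares itself a
 * performance-domain provider */
cpufreq_hw: cpufreq@18591000 {
    compatible = "qcom,cpufreq-epss";   /* illustrative */
    #performance-domain-cells = <1>;
    /* ... */
};

/* consumer side: replaces qcom,freq-domain in each cluster node */
cluster0 {
    performance-domains = <&cpufreq_hw 0>;
    operating-points-v2 = <&cpu0_opp_table>;
};
```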
Hey Rob,

Thanks for taking the time to review the series.

On 11/4/22 23:33, Rob Herring wrote:
> On Thu, Nov 03, 2022 at 10:28:31AM +0530, Sibi Sankar wrote:
>> Add bindings support for the SCMI QTI memlat (memory latency) vendor
>> protocol. The memlat vendor protocol enables the frequency scaling of
>> various buses (L3/LLCC/DDR) based on the memory latency governor
>> running on the CPUSS Control Processor.
>
> I thought the interconnect binding was what provided details for bus
> scaling.

The bus scaling in this particular case is done by SCP FW and not from
any kernel client. The SCMI vendor protocol would be used to pass on the
bandwidth requirements during initialization and SCP FW would vote on it
independently after it is initialized.

>
>>
>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>> ---
>>  .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
>>  1 file changed, 164 insertions(+)
>>
>> diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
>> index 1c0388da6721..efc8a5a8bffe 100644
>> --- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
>> +++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
>> @@ -189,6 +189,47 @@ properties:
>>          reg:
>>            const: 0x18
>>
>> +  protocol@80:
>> +    type: object
>> +    properties:
>> +      reg:
>> +        const: 0x80
>> +
>> +      qcom,bus-type:
>> +        $ref: /schemas/types.yaml#/definitions/uint32-array
>> +        items:
>> +          minItems: 1
>> +        description:
>> +          Identifier of the bus type to be scaled by the memlat protocol.
>> +
>> +      cpu-map:
>
> cpu-map only goes under /cpus node.

Sure, will use a qcom-specific node instead.

>
>> +        type: object
>> +        description:
>> +          The list of all cpu cluster configurations to be tracked by the memlat protocol
>> +
>> +        patternProperties:
>> +          '^cluster[0-9]':
>> +            type: object
>> +            description:
>> +              Each cluster node describes the frequency domain associated with the
>> +              CPUFREQ HW engine and bandwidth requirements of the buses to be scaled.
>> +
>> +            properties:
>
> cpu-map nodes don't have properties.

ack

>
>> +              operating-points-v2: true
>> +
>> +              qcom,freq-domain:
>
> Please don't add new users of this. Use the performance-domains binding
> instead.

The plan was to re-use the ^^ to determine the frequency domain of the
cpus since they are already present in the dts. I guess using the
performance-domains binding would require a corresponding change in the
qcom-cpufreq-hw driver as well. Ack.

>
>> +                $ref: /schemas/types.yaml#/definitions/phandle-array
>> +                description:
>> +                  Reference to the frequency domain of the CPUFREQ HW engine
>> +                items:
>> +                  - items:
>> +                      - description: phandle to CPUFREQ HW engine
>> +                      - description: frequency domain associated with the cluster
>> +
>> +            required:
>> +              - qcom,freq-domain
>> +              - operating-points-v2
>> +
>>  additionalProperties: false
>>
>>  patternProperties:
>> @@ -429,4 +470,127 @@ examples:
>>        };
>>      };
>>
>> +  - |
>> +    #include <dt-bindings/interrupt-controller/arm-gic.h>
>> +
>> +    firmware {
>> +        scmi {
>> +            compatible = "arm,scmi";
>> +
>> +            #address-cells = <1>;
>> +            #size-cells = <0>;
>> +
>> +            mboxes = <&cpucp_mbox>;
>> +            mbox-names = "tx";
>> +            shmem = <&cpu_scp_lpri>;
>> +
>> +            scmi_memlat: protocol@80 {
>> +                reg = <0x80>;
>> +                qcom,bus-type = <0x2>;
>> +
>> +                cpu-map {
>> +                    cluster0 {
>> +                        qcom,freq-domain = <&cpufreq_hw 0>;
>> +                        operating-points-v2 = <&cpu0_opp_table>;
>> +                    };
>> +
>> +                    cluster1 {
>> +                        qcom,freq-domain = <&cpufreq_hw 1>;
>> +                        operating-points-v2 = <&cpu4_opp_table>;
>> +                    };
>> +
>> +                    cluster2 {
>> +                        qcom,freq-domain = <&cpufreq_hw 2>;
>> +                        operating-points-v2 = <&cpu7_opp_table>;
>> +                    };
>> +                };
>> +            };
>> +        };
>> +
>> +        cpu0_opp_table: opp-table-cpu0 {
>> +            compatible = "operating-points-v2";
>> +
>> +            cpu0_opp_300mhz: opp-300000000 {
>> +                opp-hz = /bits/ 64 <300000000>;
>> +                opp-peak-kBps = <9600000>;
>> +            };
>> +
>> +            cpu0_opp_1325mhz: opp-1324800000 {
>> +                opp-hz = /bits/ 64 <1324800000>;
>> +                opp-peak-kBps = <33792000>;
>> +            };
>> +
>> +            cpu0_opp_2016mhz: opp-2016000000 {
>> +                opp-hz = /bits/ 64 <2016000000>;
>> +                opp-peak-kBps = <48537600>;
>> +            };
>> +        };
>> +
>> +        cpu4_opp_table: opp-table-cpu4 {
>> +            compatible = "operating-points-v2";
>> +
>> +            cpu4_opp_691mhz: opp-691200000 {
>> +                opp-hz = /bits/ 64 <691200000>;
>> +                opp-peak-kBps = <9600000>;
>> +            };
>> +
>> +            cpu4_opp_941mhz: opp-940800000 {
>> +                opp-hz = /bits/ 64 <940800000>;
>> +                opp-peak-kBps = <17817600>;
>> +            };
>> +
>> +            cpu4_opp_2611mhz: opp-2611200000 {
>> +                opp-hz = /bits/ 64 <2611200000>;
>> +                opp-peak-kBps = <48537600>;
>> +            };
>> +        };
>> +
>> +        cpu7_opp_table: opp-table-cpu7 {
>> +            compatible = "operating-points-v2";
>> +
>> +            cpu7_opp_806mhz: opp-806400000 {
>> +                opp-hz = /bits/ 64 <806400000>;
>> +                opp-peak-kBps = <9600000>;
>> +            };
>> +
>> +            cpu7_opp_2381mhz: opp-2380800000 {
>> +                opp-hz = /bits/ 64 <2380800000>;
>> +                opp-peak-kBps = <44851200>;
>> +            };
>> +
>> +            cpu7_opp_2515mhz: opp-2515200000 {
>> +                opp-hz = /bits/ 64 <2515200000>;
>> +                opp-peak-kBps = <48537600>;
>> +            };
>> +        };
>> +    };
>> +
>> +
>> +    soc {
>> +        #address-cells = <2>;
>> +        #size-cells = <2>;
>> +
>> +        cpucp_mbox: mailbox@17400000 {
>> +            compatible = "qcom,cpucp-mbox";
>> +            reg = <0x0 0x17c00000 0x0 0x10>, <0x0 0x18590300 0x0 0x700>;
>> +            interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
>> +            #mbox-cells = <0>;
>> +        };
>> +
>> +        sram@18509400 {
>> +            compatible = "mmio-sram";
>> +            reg = <0x0 0x18509400 0x0 0x400>;
>> +            no-memory-wc;
>> +
>> +            #address-cells = <1>;
>> +            #size-cells = <1>;
>> +            ranges = <0x0 0x0 0x18509400 0x400>;
>> +
>> +            cpu_scp_lpri: scp-sram-section@0 {
>> +                compatible = "arm,scmi-shmem";
>> +                reg = <0x0 0x80>;
>> +            };
>> +        };
>> +    };
>> +
>>  ...
>> --
>> 2.7.4
>>
>>
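Putting Sibi's acks together, the cluster description in a later revision might plausibly end up as the sketch below: the cpu-map wrapper renamed to something qcom-specific (the "qcom,cpu-map" name here is hypothetical), and performance-domains replacing qcom,freq-domain:

```dts
scmi_memlat: protocol@80 {
    reg = <0x80>;
    qcom,bus-type = <0x2>;

    /* hypothetical name, avoiding the reserved cpu-map node */
    qcom,cpu-map {
        cluster0 {
            performance-domains = <&cpufreq_hw 0>;
            operating-points-v2 = <&cpu0_opp_table>;
        };
    };
};
```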
diff --git a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
index 1c0388da6721..efc8a5a8bffe 100644
--- a/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
+++ b/Documentation/devicetree/bindings/firmware/arm,scmi.yaml
@@ -189,6 +189,47 @@ properties:
       reg:
         const: 0x18
 
+  protocol@80:
+    type: object
+    properties:
+      reg:
+        const: 0x80
+
+      qcom,bus-type:
+        $ref: /schemas/types.yaml#/definitions/uint32-array
+        items:
+          minItems: 1
+        description:
+          Identifier of the bus type to be scaled by the memlat protocol.
+
+      cpu-map:
+        type: object
+        description:
+          The list of all cpu cluster configurations to be tracked by the memlat protocol
+
+        patternProperties:
+          '^cluster[0-9]':
+            type: object
+            description:
+              Each cluster node describes the frequency domain associated with the
+              CPUFREQ HW engine and bandwidth requirements of the buses to be scaled.
+
+            properties:
+              operating-points-v2: true
+
+              qcom,freq-domain:
+                $ref: /schemas/types.yaml#/definitions/phandle-array
+                description:
+                  Reference to the frequency domain of the CPUFREQ HW engine
+                items:
+                  - items:
+                      - description: phandle to CPUFREQ HW engine
+                      - description: frequency domain associated with the cluster
+
+            required:
+              - qcom,freq-domain
+              - operating-points-v2
+
 additionalProperties: false
 
 patternProperties:
@@ -429,4 +470,127 @@ examples:
       };
     };
 
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    firmware {
+        scmi {
+            compatible = "arm,scmi";
+
+            #address-cells = <1>;
+            #size-cells = <0>;
+
+            mboxes = <&cpucp_mbox>;
+            mbox-names = "tx";
+            shmem = <&cpu_scp_lpri>;
+
+            scmi_memlat: protocol@80 {
+                reg = <0x80>;
+                qcom,bus-type = <0x2>;
+
+                cpu-map {
+                    cluster0 {
+                        qcom,freq-domain = <&cpufreq_hw 0>;
+                        operating-points-v2 = <&cpu0_opp_table>;
+                    };
+
+                    cluster1 {
+                        qcom,freq-domain = <&cpufreq_hw 1>;
+                        operating-points-v2 = <&cpu4_opp_table>;
+                    };
+
+                    cluster2 {
+                        qcom,freq-domain = <&cpufreq_hw 2>;
+                        operating-points-v2 = <&cpu7_opp_table>;
+                    };
+                };
+            };
+        };
+
+        cpu0_opp_table: opp-table-cpu0 {
+            compatible = "operating-points-v2";
+
+            cpu0_opp_300mhz: opp-300000000 {
+                opp-hz = /bits/ 64 <300000000>;
+                opp-peak-kBps = <9600000>;
+            };
+
+            cpu0_opp_1325mhz: opp-1324800000 {
+                opp-hz = /bits/ 64 <1324800000>;
+                opp-peak-kBps = <33792000>;
+            };
+
+            cpu0_opp_2016mhz: opp-2016000000 {
+                opp-hz = /bits/ 64 <2016000000>;
+                opp-peak-kBps = <48537600>;
+            };
+        };
+
+        cpu4_opp_table: opp-table-cpu4 {
+            compatible = "operating-points-v2";
+
+            cpu4_opp_691mhz: opp-691200000 {
+                opp-hz = /bits/ 64 <691200000>;
+                opp-peak-kBps = <9600000>;
+            };
+
+            cpu4_opp_941mhz: opp-940800000 {
+                opp-hz = /bits/ 64 <940800000>;
+                opp-peak-kBps = <17817600>;
+            };
+
+            cpu4_opp_2611mhz: opp-2611200000 {
+                opp-hz = /bits/ 64 <2611200000>;
+                opp-peak-kBps = <48537600>;
+            };
+        };
+
+        cpu7_opp_table: opp-table-cpu7 {
+            compatible = "operating-points-v2";
+
+            cpu7_opp_806mhz: opp-806400000 {
+                opp-hz = /bits/ 64 <806400000>;
+                opp-peak-kBps = <9600000>;
+            };
+
+            cpu7_opp_2381mhz: opp-2380800000 {
+                opp-hz = /bits/ 64 <2380800000>;
+                opp-peak-kBps = <44851200>;
+            };
+
+            cpu7_opp_2515mhz: opp-2515200000 {
+                opp-hz = /bits/ 64 <2515200000>;
+                opp-peak-kBps = <48537600>;
+            };
+        };
+    };
+
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        cpucp_mbox: mailbox@17400000 {
+            compatible = "qcom,cpucp-mbox";
+            reg = <0x0 0x17c00000 0x0 0x10>, <0x0 0x18590300 0x0 0x700>;
+            interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
+            #mbox-cells = <0>;
+        };
+
+        sram@18509400 {
+            compatible = "mmio-sram";
+            reg = <0x0 0x18509400 0x0 0x400>;
+            no-memory-wc;
+
+            #address-cells = <1>;
+            #size-cells = <1>;
+            ranges = <0x0 0x0 0x18509400 0x400>;
+
+            cpu_scp_lpri: scp-sram-section@0 {
+                compatible = "arm,scmi-shmem";
+                reg = <0x0 0x80>;
+            };
+        };
+    };
+
 ...
Add bindings support for the SCMI QTI memlat (memory latency) vendor
protocol. The memlat vendor protocol enables the frequency scaling of
various buses (L3/LLCC/DDR) based on the memory latency governor
running on the CPUSS Control Processor.

Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
---
 .../devicetree/bindings/firmware/arm,scmi.yaml | 164 +++++++++++++++++++++
 1 file changed, 164 insertions(+)