Message ID: 20211106121058.1843173-1-kai.heng.feng@canonical.com
Series: Let NVMe with HMB use native power control again
Acked-by: Tim Gardner <tim.gardner@canonical.com>

On 11/6/21 6:10 AM, Kai-Heng Feng wrote:
> BugLink: https://bugs.launchpad.net/bugs/1950042
>
> [Impact]
> NVMe devices with an HMB may still do DMA during suspend, so there was
> a commit that put the NVMe device into PCI D3 during suspend to prevent
> DMA activity. However, this makes such devices consume much more power,
> because modern NVMe devices need to stay in PCI D0 for their native
> power control to work.
>
> [Fix]
> Instead of putting the NVMe device into PCI D3 and resetting it
> afterward, simply disable the HMB on suspend and re-enable it on
> resume.
>
> [Test]
> On an affected system, the Intel SoC can only reach PC3 during suspend.
> With the SRU applied, the Intel SoC can reach PC10 and SLP_S0, and uses
> significantly less power.
>
> [Where problems could occur]
> The original approach, i.e. disabling the NVMe device and putting it
> into PCI D3 to prevent DMA activity, was just a precaution; there was
> no known case of it happening in practice.
>
> This is a different approach to the same, theoretical, problem.
>
> Keith Busch (4):
>   nvme-pci: use attribute group for cmb sysfs
>   nvme-pci: cmb sysfs: one file, one value
>   nvme-pci: disable hmb on idle suspend
>   nvme: allow user toggling hmb usage
>
>  drivers/nvme/host/pci.c | 165 +++++++++++++++++++++++++++++++---------
>  1 file changed, 131 insertions(+), 34 deletions(-)
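The [Fix] described above comes down to two small changes in the driver's suspend/resume path. Below is a minimal sketch of the idea, assuming the helper names used in drivers/nvme/host/pci.c (nvme_set_host_mem() and an ndev->hmb flag recording whether a buffer was described to the controller); the actual patches differ in error handling and in how they integrate with the existing idle-suspend logic:

/*
 * Illustrative only: modelled on drivers/nvme/host/pci.c, where
 * struct nvme_dev is the driver-private controller state and
 * nvme_set_host_mem() issues a Set Features (Host Memory Buffer)
 * command with the given enable bits.
 */
#include <linux/pci.h>
#include <linux/nvme.h>

static int nvme_suspend_sketch(struct device *dev)
{
	struct nvme_dev *ndev = pci_get_drvdata(to_pci_dev(dev));
	int ret;

	/*
	 * Ask the controller to stop using the host memory buffer
	 * (EHM=0) so it cannot DMA into host RAM while the system
	 * sleeps.
	 */
	if (ndev->hmb) {
		ret = nvme_set_host_mem(ndev, 0);
		if (ret < 0)
			return ret;
	}

	/*
	 * The device stays in PCI D0, so its native power states
	 * remain available and no controller reset is needed on
	 * resume.
	 */
	return 0;
}

static int nvme_resume_sketch(struct device *dev)
{
	struct nvme_dev *ndev = pci_get_drvdata(to_pci_dev(dev));

	/* Hand the previously described buffer back (EHM=1). */
	if (ndev->hmb)
		return nvme_set_host_mem(ndev, NVME_HOST_MEM_ENABLE);
	return 0;
}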
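The last patch in the series, "nvme: allow user toggling hmb usage", exposes the same on/off switch to userspace as a sysfs attribute. A hedged sketch of what such an attribute can look like (names and details are illustrative, not the exact patch; nvme_set_host_mem() is assumed to update ndev->hmb on success):

/* Illustrative sysfs attribute for toggling the HMB at runtime. */
static ssize_t hmb_show(struct device *dev,
			struct device_attribute *attr, char *buf)
{
	struct nvme_dev *ndev = to_nvme_dev(dev_get_drvdata(dev));

	return sysfs_emit(buf, "%d\n", ndev->hmb);
}

static ssize_t hmb_store(struct device *dev,
			 struct device_attribute *attr,
			 const char *buf, size_t count)
{
	struct nvme_dev *ndev = to_nvme_dev(dev_get_drvdata(dev));
	bool enable;
	int ret;

	if (kstrtobool(buf, &enable))
		return -EINVAL;

	if (enable == !!ndev->hmb)
		return count;	/* already in the requested state */

	/* EHM=1 re-enables the buffer, EHM=0 releases it. */
	ret = nvme_set_host_mem(ndev, enable ? NVME_HOST_MEM_ENABLE : 0);
	return ret < 0 ? ret : count;
}
static DEVICE_ATTR_RW(hmb);

With something like this in place, reading the attribute (e.g. under the controller's sysfs directory, /sys/class/nvme/nvme0/hmb on a typical system; path assumed, not confirmed here) reports whether the buffer is active, and writing 0 or 1 releases or re-enables it, which also makes it easy to exercise the suspend behaviour by hand.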
On 6.11.2021 14.10, Kai-Heng Feng wrote:
> BugLink: https://bugs.launchpad.net/bugs/1950042
> [...]

applied to oem-5.14, thanks
On 6.11.2021 14.10, Kai-Heng Feng wrote:
> BugLink: https://bugs.launchpad.net/bugs/1950042
> [...]

applied to oem-5.13, thanks
On 06.11.21 13:10, Kai-Heng Feng wrote:
> BugLink: https://bugs.launchpad.net/bugs/1950042
> [...]

Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Tim Gardner <tim.gardner@canonical.com>

Clean cherry-picks, good test results.

On 11/6/21 6:10 AM, Kai-Heng Feng wrote:
> BugLink: https://bugs.launchpad.net/bugs/1950042
> [...]
On 06.11.21 13:10, Kai-Heng Feng wrote:
> BugLink: https://bugs.launchpad.net/bugs/1950042
> [...]

Applied to hirsute,impish:linux/master-next. Thanks.

-Stefan
applied to oem-5.10, thanks