Message ID: 1462792417-3232-1-git-send-email-nikunj@linux.vnet.ibm.com
State:      Accepted
On 09.05.2016 13:13, Nikunj A Dadhania wrote:
> As this was done at byte granularity, erasing the complete NVRAM (64K by
> default) took a lot of time. To reduce the number of RTAS calls (one per
> byte written), which is expensive, the erase is now done in one shot,
> using the nvram_buffer that is set up during the nvram_init call for
> RTAS_NVRAM.
>
> After this patch there is a ~450 msec improvement during boot. A default
> QEMU boot does not provide a file-backed NVRAM, so every boot would do a
> full erase of the 64K.
...
> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
> ---
>  lib/libnvram/nvram.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/lib/libnvram/nvram.c b/lib/libnvram/nvram.c
> index 473814e..99deb2a 100644
> --- a/lib/libnvram/nvram.c
> +++ b/lib/libnvram/nvram.c
> @@ -373,6 +373,17 @@ void erase_nvram(int offset, int len)
>  {
>  	int i;
>
> +#ifdef RTAS_NVRAM
> +	char *erase_buf = get_nvram_buffer(len);
> +	if (erase_buf) {
> +		/* Speed up by erasing all memory at once */
> +		memset(erase_buf, 0, len);
> +		nvram_store(offset, erase_buf, len);
> +		free_nvram_buffer(erase_buf);
> +		return;
> +	}
> +	/* If get_nvram_buffer failed, fall through to default code */
> +#endif
>  	for (i=offset; i<offset+len; i++)
>  		nvram_write_byte(i, 0);
>  }
>

Reviewed-by: Thomas Huth <thuth@redhat.com>
On 09/05/16 21:13, Nikunj A Dadhania wrote:
> As this was done at byte granularity, erasing the complete NVRAM (64K by
> default) took a lot of time. To reduce the number of RTAS calls (one per
> byte written), which is expensive, the erase is now done in one shot,
> using the nvram_buffer that is set up during the nvram_init call for
> RTAS_NVRAM.
>
> After this patch there is a ~450 msec improvement during boot. A default
> QEMU boot does not provide a file-backed NVRAM, so every boot would do a
> full erase of the 64K.
>
> Before this patch:
>
> real	0m2.214s
> user	0m0.015s
> sys	0m0.006s
>
> real	0m2.222s
> user	0m0.014s
> sys	0m0.005s
>
> real	0m2.201s
> user	0m0.010s
> sys	0m0.005s
>
> After this patch:
>
> real	0m1.762s
> user	0m0.014s
> sys	0m0.006s
>
> real	0m1.773s
> user	0m0.011s
> sys	0m0.004s
>
> real	0m1.754s
> user	0m0.013s
> sys	0m0.005s

Thanks, applied.

btw have you received a mail from the patchworks that the patch status has
changed? If so, I would prefer not to generate more noise with these
"applied" messages in the maillist if possible.

> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
> ---
>  lib/libnvram/nvram.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/lib/libnvram/nvram.c b/lib/libnvram/nvram.c
> index 473814e..99deb2a 100644
> --- a/lib/libnvram/nvram.c
> +++ b/lib/libnvram/nvram.c
> @@ -373,6 +373,17 @@ void erase_nvram(int offset, int len)
>  {
>  	int i;
>
> +#ifdef RTAS_NVRAM
> +	char *erase_buf = get_nvram_buffer(len);
> +	if (erase_buf) {
> +		/* Speed up by erasing all memory at once */
> +		memset(erase_buf, 0, len);
> +		nvram_store(offset, erase_buf, len);
> +		free_nvram_buffer(erase_buf);
> +		return;
> +	}
> +	/* If get_nvram_buffer failed, fall through to default code */
> +#endif
>  	for (i=offset; i<offset+len; i++)
>  		nvram_write_byte(i, 0);
>  }
>
Alexey Kardashevskiy <aik@ozlabs.ru> writes:

> On 09/05/16 21:13, Nikunj A Dadhania wrote:
>> As this was done at byte granularity, erasing the complete NVRAM (64K by
>> default) took a lot of time. To reduce the number of RTAS calls (one per
>> byte written), which is expensive, the erase is now done in one shot,
>> using the nvram_buffer that is set up during the nvram_init call for
>> RTAS_NVRAM.
>>
>> After this patch there is a ~450 msec improvement during boot. A default
>> QEMU boot does not provide a file-backed NVRAM, so every boot would do a
>> full erase of the 64K.
>>
>> Before this patch:
>>
>> real	0m2.214s
>> user	0m0.015s
>> sys	0m0.006s
>>
>> real	0m2.222s
>> user	0m0.014s
>> sys	0m0.005s
>>
>> real	0m2.201s
>> user	0m0.010s
>> sys	0m0.005s
>>
>> After this patch:
>>
>> real	0m1.762s
>> user	0m0.014s
>> sys	0m0.006s
>>
>> real	0m1.773s
>> user	0m0.011s
>> sys	0m0.004s
>>
>> real	0m1.754s
>> user	0m0.013s
>> sys	0m0.005s
>
> Thanks, applied.
>
> btw have you received a mail from the patchworks that the patch status has
> changed? If so, I would prefer not to generate more noise with these
> "applied" messages in the maillist if possible.

No, I haven't received any message.

Regards,
Nikunj
diff --git a/lib/libnvram/nvram.c b/lib/libnvram/nvram.c
index 473814e..99deb2a 100644
--- a/lib/libnvram/nvram.c
+++ b/lib/libnvram/nvram.c
@@ -373,6 +373,17 @@ void erase_nvram(int offset, int len)
 {
 	int i;
 
+#ifdef RTAS_NVRAM
+	char *erase_buf = get_nvram_buffer(len);
+	if (erase_buf) {
+		/* Speed up by erasing all memory at once */
+		memset(erase_buf, 0, len);
+		nvram_store(offset, erase_buf, len);
+		free_nvram_buffer(erase_buf);
+		return;
+	}
+	/* If get_nvram_buffer failed, fall through to default code */
+#endif
 	for (i=offset; i<offset+len; i++)
 		nvram_write_byte(i, 0);
 }
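The patch above relies on get_nvram_buffer()/free_nvram_buffer(), whose implementation is not shown in this thread; the commit message only says the buffer is set up during nvram_init() for RTAS_NVRAM. The following is a minimal sketch of how such helpers could look, assuming a single shared RTAS-addressable buffer. The variable names (nvram_buffer, nvram_buffer_len, nvram_buffer_busy) and the exact semantics are assumptions, not the actual SLOF code:

/* Hypothetical sketch, not the actual SLOF implementation.
 * Assumes nvram_init() for RTAS_NVRAM sets aside one RTAS-addressable
 * bounce buffer that covers the whole NVRAM image. */
static char *nvram_buffer;	/* filled in by nvram_init() */
static int nvram_buffer_len;	/* e.g. 64K for the default image */
static int nvram_buffer_busy;

char *get_nvram_buffer(int len)
{
	/* Hand out the shared buffer only if the request fits and it is
	 * free; callers such as erase_nvram() fall back to the per-byte
	 * path when this returns NULL. */
	if (len > nvram_buffer_len || nvram_buffer_busy)
		return NULL;

	nvram_buffer_busy = 1;
	return nvram_buffer;
}

void free_nvram_buffer(char *buf)
{
	if (buf == nvram_buffer)
		nvram_buffer_busy = 0;
}

Returning NULL on contention or oversized requests matches the fallback comment in the patch ("If get_nvram_buffer failed, fall through to default code"), but the real allocation policy may differ.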
As this was done at byte granularity, erasing the complete NVRAM (64K by
default) took a lot of time. To reduce the number of RTAS calls (one per
byte written), which is expensive, the erase is now done in one shot,
using the nvram_buffer that is set up during the nvram_init call for
RTAS_NVRAM.

After this patch there is a ~450 msec improvement during boot. A default
QEMU boot does not provide a file-backed NVRAM, so every boot would do a
full erase of the 64K.

Before this patch:

real	0m2.214s
user	0m0.015s
sys	0m0.006s

real	0m2.222s
user	0m0.014s
sys	0m0.005s

real	0m2.201s
user	0m0.010s
sys	0m0.005s

After this patch:

real	0m1.762s
user	0m0.014s
sys	0m0.006s

real	0m1.773s
user	0m0.011s
sys	0m0.004s

real	0m1.754s
user	0m0.013s
sys	0m0.005s

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
---
 lib/libnvram/nvram.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
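For context on where the ~450 msec comes from: per the commit message, each byte written to NVRAM costs one RTAS nvram-store call, so erasing the default 64K image issues 65536 calls, while the buffered path issues a single call for the whole range. The sketch below illustrates that relationship; it assumes nvram_write_byte() is effectively a one-byte nvram_store() and uses an assumed prototype for nvram_store(), which may not match the exact SLOF internals:

#include <string.h>

/* Assumed prototype; the real RTAS-backed nvram_store() in
 * lib/libnvram/nvram.c may differ. */
extern int nvram_store(int offset, void *buf, int len);

/* One RTAS nvram-store call per byte: the slow path the patch avoids. */
static void erase_bytewise(int offset, int len)
{
	int i;
	char zero = 0;

	for (i = offset; i < offset + len; i++)
		nvram_store(i, &zero, 1);	/* 65536 calls for a 64K image */
}

/* One RTAS nvram-store call covering the whole range: the fast path. */
static void erase_oneshot(int offset, int len, char *erase_buf)
{
	memset(erase_buf, 0, len);
	nvram_store(offset, erase_buf, len);
}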