Message ID | 1487803956-24668-1-git-send-email-angelo.compagnucci@gmail.com |
---|---|
State | Accepted |
Hello,

On Wed, 22 Feb 2017 23:52:36 +0100, Angelo Compagnucci wrote:

> Currently the buildroot website is not leveraging common
> techniques to have a better loading speed (browser caching,
> gzip compression, deflating).
> This commit provides an .htaccess files with all the needed to
> enable mentioned features. This .htaccess only works if the
> webserver has deflate, expires and headers modules enabled.
>
> Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>

Thanks for your proposal. Since this needs to work with the Apache
configuration on the server side, I've mailed the OSUOSL folks to ask
them if this is going to work. You are in Cc to this e-mail.

Regarding your other patch crunching and resizing the PNG files, it
didn't make it to the list because of the size of the patch. However,
while I understand the crunching, I do not understand why you
arbitrarily reduced the size of the pictures by 60%. Could you explain
the reasoning behind this?

Thanks!

Thomas
Dear Thomas Petazzoni,

2017-02-23 21:54 GMT+01:00 Thomas Petazzoni <thomas.petazzoni@free-electrons.com>:
> Hello,
>
> On Wed, 22 Feb 2017 23:52:36 +0100, Angelo Compagnucci wrote:
>> Currently the buildroot website is not leveraging common
>> techniques to have a better loading speed (browser caching,
>> gzip compression, deflating).
>> This commit provides an .htaccess files with all the needed to
>> enable mentioned features. This .htaccess only works if the
>> webserver has deflate, expires and headers modules enabled.
>>
>> Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
>
> Thanks for your proposal. Since this needs to work with the Apache
> configuration on the server side, I've mailed the OSUOSL folks to ask
> them if this is going to work. You are in Cc to this e-mail.

My patch doesn't necessarily require a server-side configuration
change; the modules I used in the .htaccess are usually enabled in a
common Apache configuration. However, the use of those modules by a
virtualhost is not enabled by default, and the .htaccess simply
enables it. On a shared webserver with hundreds of websites, for
example, changing the cache-control settings doesn't require the
server sysadmin's intervention, because the .htaccess is used to
accommodate user preferences on a per-site basis.

Buildroot is probably served by a shared webserver with the required
modules enabled, but they are not correctly configured for the
buildroot.org virtualhost. So simply dropping the .htaccess in the
buildroot virtualhost root directory should do the trick and enable
that configuration.

> Regarding your other patch crunching and resizing the PNG files, it
> didn't make it to the list because of the size of the patch. However,
> while I understand the crunching, I do not understand why you
> arbitrarily reduced the size of the pictures by 60%. Could you
> explain the reasoning behind this?

Pngcrush is a tool for reducing the size of PNG images without
affecting quality. That means we use less bandwidth to serve the same
content as before.

Sincerely, Angelo

> Thanks!
>
> Thomas
> --
> Thomas Petazzoni, CTO, Free Electrons
> Embedded Linux and Kernel engineering
> http://free-electrons.com
Hello,

On Thu, 23 Feb 2017 22:08:05 +0100, Angelo Compagnucci wrote:

> My patch doesn't necessarily require a server-side configuration
> change; the modules I used in the .htaccess are usually enabled in a
> common Apache configuration. However, the use of those modules by a
> virtualhost is not enabled by default, and the .htaccess simply
> enables it.
> On a shared webserver with hundreds of websites, for example,
> changing the cache-control settings doesn't require the server
> sysadmin's intervention, because the .htaccess is used to accommodate
> user preferences on a per-site basis.
> Buildroot is probably served by a shared webserver with the required
> modules enabled, but they are not correctly configured for the
> buildroot.org virtualhost.
> So simply dropping the .htaccess in the buildroot virtualhost root
> directory should do the trick and enable that configuration.

Right. I want to see what the admins of the webserver have to say
though.

> Pngcrush is a tool for reducing the size of PNG images without
> affecting quality. That means we use less bandwidth to serve the same
> content as before.

I'm OK with what pngcrush does, but in your commit log, you said "The
images resized to ~60% of the original size.". To me it means that you
have reduced the size (in pixels) of the images to make them smaller
in size (in bytes).

Or is the 60% number just the saving (in bytes) thanks to using
pngcrush?

Thanks,

Thomas
Dear Thomas Petazzoni,

2017-02-23 22:15 GMT+01:00 Thomas Petazzoni <thomas.petazzoni@free-electrons.com>:
> Hello,
>
> On Thu, 23 Feb 2017 22:08:05 +0100, Angelo Compagnucci wrote:
>
>> My patch doesn't necessarily require a server-side configuration
>> change; the modules I used in the .htaccess are usually enabled in a
>> common Apache configuration. However, the use of those modules by a
>> virtualhost is not enabled by default, and the .htaccess simply
>> enables it.
>> On a shared webserver with hundreds of websites, for example,
>> changing the cache-control settings doesn't require the server
>> sysadmin's intervention, because the .htaccess is used to
>> accommodate user preferences on a per-site basis.
>> Buildroot is probably served by a shared webserver with the required
>> modules enabled, but they are not correctly configured for the
>> buildroot.org virtualhost.
>> So simply dropping the .htaccess in the buildroot virtualhost root
>> directory should do the trick and enable that configuration.
>
> Right. I want to see what the admins of the webserver have to say
> though.
>
>> Pngcrush is a tool for reducing the size of PNG images without
>> affecting quality. That means we use less bandwidth to serve the
>> same content as before.
>
> I'm OK with what pngcrush does, but in your commit log, you said "The
> images resized to ~60% of the original size.". To me it means that
> you have reduced the size (in pixels) of the images to make them
> smaller in size (in bytes).
>
> Or is the 60% number just the saving (in bytes) thanks to using
> pngcrush?

Yes, of course. I saved ~40% in byte size from the original files. I'm
resending a version 2 directly to Peter; using another tool (pngquant)
I obtained a better compression of the PNGs. In that commit I'll
reword the message to be more explicit about the byte savings rather
than the resizing.

Sincerely, Angelo.

> Thanks,
>
> Thomas
> --
> Thomas Petazzoni, CTO, Free Electrons
> Embedded Linux and Kernel engineering
> http://free-electrons.com
Hello,

On Thu, 23 Feb 2017 22:20:22 +0100, Angelo Compagnucci wrote:

> > Or is the 60% number just the saving (in bytes) thanks to using
> > pngcrush?
>
> Yes, of course.

Ah, OK, that wasn't clear to me. Then we're all good.

> I saved ~40% in byte size from the original files. I'm resending a
> version 2 directly to Peter; using another tool (pngquant) I obtained
> a better compression of the PNGs. In that commit I'll reword the
> message to be more explicit about the byte savings rather than the
> resizing.

Good, thanks!

Thomas
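The two readings of "~60%" that caused the confusion above differ only in framing: "reduced to ~60% of the original byte size" and "saved ~40%" describe the same files. A quick sketch with made-up numbers (not the actual sizes of the website's PNGs):

```python
# Illustrative numbers only -- not the real sizes of the buildroot.org images.
original_bytes = 100_000          # hypothetical size before crunching
crushed_bytes = 60_000            # hypothetical size after crunching

remaining = crushed_bytes / original_bytes   # "resized to ~60% of the original size"
saving = 1 - remaining                       # equivalently, "saved ~40%"

print(f"remaining: {remaining:.0%}, saving: {saving:.0%}")
```
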
On 23-02-17 22:15, Thomas Petazzoni wrote:
> On Thu, 23 Feb 2017 22:08:05 +0100, Angelo Compagnucci wrote:
>
>> My patch doesn't necessarily require a server-side configuration
>> change; the modules I used in the .htaccess are usually enabled in a
>> common Apache configuration. However, the use of those modules by a
>> virtualhost is not enabled by default, and the .htaccess simply
>> enables it.
>> On a shared webserver with hundreds of websites, for example,
>> changing the cache-control settings doesn't require the server
>> sysadmin's intervention, because the .htaccess is used to
>> accommodate user preferences on a per-site basis.
>> Buildroot is probably served by a shared webserver with the required
>> modules enabled, but they are not correctly configured for the
>> buildroot.org virtualhost.
>> So simply dropping the .htaccess in the buildroot virtualhost root
>> directory should do the trick and enable that configuration.
>
> Right. I want to see what the admins of the webserver have to say
> though.

Angelo is right though: this .htaccess file basically checks what the
server supports and enables features in the webserver if they are
available. So even if the osuosl webserver currently doesn't have one
of these modules installed, it still makes sense to have it in the
.htaccess file so it will be used when they do install the module.

Regards,
Arnout
Hi Angelo,

On 22-02-17 23:52, Angelo Compagnucci wrote:
> Currently the buildroot website is not leveraging common
> techniques to have a better loading speed (browser caching,
> gzip compression, deflating).

Sounds like a great idea. I had my doubts about the usefulness of
deflate/compress - most of our pages are relatively small, and the big
ones (e.g. bootstrap) come from a CDN. However, the news page for
example is more than 100K and that compresses nicely.

> This commit provides an .htaccess files with all the needed to
> enable mentioned features. This .htaccess only works if the
> webserver has deflate, expires and headers modules enabled.

This is not phrased very well. It should be:

    This .htaccess file checks the modules that are available on
    the webserver, and configures them appropriately if they are.

The best way to test this, IMHO, is to just deploy it and run some
timing experiments on the server. Perhaps it would be nice if you
could do a measurement now, and then one after deployment, to see the
improvement?

>
> Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
> ---
>  docs/website/.htaccess | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
>  create mode 100644 docs/website/.htaccess
>
> diff --git a/docs/website/.htaccess b/docs/website/.htaccess
> new file mode 100644
> index 0000000..b02beb8
> --- /dev/null
> +++ b/docs/website/.htaccess
> @@ -0,0 +1,62 @@
> +# BEGIN Compress text files
> +<ifModule mod_deflate.c>
> +  <filesMatch ".(css|js|x?html?|php)$">
> +    SetOutputFilter DEFLATE
> +  </filesMatch>
> +</ifModule>
> +# END Compress text files

What happens when the server has both deflate and gzip enabled?

> +
> +# BEGIN Expire headers
> +<ifModule mod_expires.c>
> +  ExpiresActive On
> +  ExpiresDefault "access plus 1 seconds"
> +  ExpiresByType image/x-icon "access plus 2592000 seconds"

Please use "1 week" instead of "2592000 seconds". Or is the idea to
use the same numbers as in the Cache-Control headers?

> +  ExpiresByType image/jpeg "access plus 2592000 seconds"
> +  ExpiresByType image/png "access plus 2592000 seconds"
> +  ExpiresByType image/gif "access plus 2592000 seconds"

So we have to make sure, if we ever update an image, that we give it a
different name. Which is a good idea anyway :-)

> +  ExpiresByType application/x-shockwave-flash "access plus 2592000 seconds"
> +  ExpiresByType text/css "access plus 604800 seconds"
> +  ExpiresByType text/javascript "access plus 216000 seconds"
> +  ExpiresByType application/javascript "access plus 216000 seconds"
> +  ExpiresByType application/x-javascript "access plus 216000 seconds"

Hm, the only javascript we serve is js/buildroot.js, which is only
3507 bytes. We would also have to change its name if we ever modify
it, otherwise clients are going to see inconsistent results for up to
two and a half days... I think it's better to align this with the html
time. Same goes for the css, we only have a total of 7KB of css.

> +  ExpiresByType text/html "access plus 600 seconds"
> +  ExpiresByType application/xhtml+xml "access plus 600 seconds"

So, if the news page gets updated, it can take up to 10 minutes before
clients see it. Seems OK.

> +</ifModule>
> +# END Expire headers
> +
> +# BEGIN Cache-Control Headers
> +<ifModule mod_headers.c>
> +  <filesMatch ".(ico|jpe?g|png|gif|swf)$">
> +    Header set Cache-Control "max-age=2592000, public"
> +  </filesMatch>
> +  <filesMatch ".(css)$">
> +    Header set Cache-Control "max-age=604800, public"
> +  </filesMatch>
> +  <filesMatch ".(js)$">
> +    Header set Cache-Control "max-age=216000, private"
> +  </filesMatch>
> +  <filesMatch ".(x?html?|php)$">
> +    Header set Cache-Control "max-age=600, private, must-revalidate"
> +  </filesMatch>
> +</ifModule>
> +# END Cache-Control Headers
> +
> +# BEGIN Turn ETags Off
> +<ifModule mod_headers.c>
> +  Header unset ETag

Why turn etags off? Anyway, it is currently not turned on so this
makes no difference. But the way that we use our web pages, the best
option would be to use the Last-Modified header. It's currently not
setting that. Do you know how to enable it?

Regards,
Arnout

> +</ifModule>
> +FileETag None
> +# END Turn ETags Off
> +
> +# BEGIN gzip
> +<ifModule mod_gzip.c>
> +mod_gzip_on Yes
> +mod_gzip_dechunk Yes
> +mod_gzip_item_include file .(html?|txt|css|js)$
> +mod_gzip_item_include handler ^cgi-script$
> +mod_gzip_item_include mime ^text/.*
> +mod_gzip_item_include mime ^application/x-javascript.*
> +mod_gzip_item_exclude mime ^image/.*
> +mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
> +</ifModule>
> +# END gzip
>
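On the readability point raised in the review, the magic numbers in the patch convert as follows (note that 2592000 s works out to 30 days, 604800 s is exactly one week, and 216000 s is two and a half days); a quick sketch:

```python
from datetime import timedelta

# Expiry values from the .htaccess, converted to human-readable durations.
for label, seconds in [
    ("images/flash", 2592000),
    ("css", 604800),
    ("javascript", 216000),
    ("html", 600),
]:
    print(f"{label:>12}: {seconds:>7} s = {timedelta(seconds=seconds)}")
```
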
Dear Arnout Vandecappelle,

2017-02-24 18:15 GMT+01:00 Arnout Vandecappelle <arnout@mind.be>:
> Hi Angelo,
>
> On 22-02-17 23:52, Angelo Compagnucci wrote:
>> Currently the buildroot website is not leveraging common
>> techniques to have a better loading speed (browser caching,
>> gzip compression, deflating).
>
> Sounds like a great idea. I had my doubts about the usefulness of
> deflate/compress - most of our pages are relatively small, and the
> big ones (e.g. bootstrap) come from a CDN. However, the news page for
> example is more than 100K and that compresses nicely.
>
>> This commit provides an .htaccess files with all the needed to
>> enable mentioned features. This .htaccess only works if the
>> webserver has deflate, expires and headers modules enabled.
>
> This is not phrased very well. It should be:
>
>     This .htaccess file checks the modules that are available on
>     the webserver, and configures them appropriately if they are.

Will rephrase and submit a v2.

> The best way to test this, IMHO, is to just deploy it and run some
> timing experiments on the server. Perhaps it would be nice if you
> could do a measurement now, and then one after deployment, to see the
> improvement?

There is a tool for that, and it is what I'm currently using [1].

>> Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
>> ---
>>  docs/website/.htaccess | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 62 insertions(+)
>>  create mode 100644 docs/website/.htaccess
>>
>> diff --git a/docs/website/.htaccess b/docs/website/.htaccess
>> new file mode 100644
>> index 0000000..b02beb8
>> --- /dev/null
>> +++ b/docs/website/.htaccess
>> @@ -0,0 +1,62 @@
>> +# BEGIN Compress text files
>> +<ifModule mod_deflate.c>
>> +  <filesMatch ".(css|js|x?html?|php)$">
>> +    SetOutputFilter DEFLATE
>> +  </filesMatch>
>> +</ifModule>
>> +# END Compress text files
>
> What happens when the server has both deflate and gzip enabled?
>
>> +
>> +# BEGIN Expire headers
>> +<ifModule mod_expires.c>
>> +  ExpiresActive On
>> +  ExpiresDefault "access plus 1 seconds"
>> +  ExpiresByType image/x-icon "access plus 2592000 seconds"
>
> Please use "1 week" instead of "2592000 seconds". Or is the idea to
> use the same numbers as in the Cache-Control headers?
>
>> +  ExpiresByType image/jpeg "access plus 2592000 seconds"
>> +  ExpiresByType image/png "access plus 2592000 seconds"
>> +  ExpiresByType image/gif "access plus 2592000 seconds"
>
> So we have to make sure, if we ever update an image, that we give it
> a different name. Which is a good idea anyway :-)
>
>> +  ExpiresByType application/x-shockwave-flash "access plus 2592000 seconds"
>> +  ExpiresByType text/css "access plus 604800 seconds"
>> +  ExpiresByType text/javascript "access plus 216000 seconds"
>> +  ExpiresByType application/javascript "access plus 216000 seconds"
>> +  ExpiresByType application/x-javascript "access plus 216000 seconds"
>
> Hm, the only javascript we serve is js/buildroot.js, which is only
> 3507 bytes. We would also have to change its name if we ever modify
> it, otherwise clients are going to see inconsistent results for up to
> two and a half days... I think it's better to align this with the
> html time. Same goes for the css, we only have a total of 7KB of css.
>
>> +  ExpiresByType text/html "access plus 600 seconds"
>> +  ExpiresByType application/xhtml+xml "access plus 600 seconds"
>
> So, if the news page gets updated, it can take up to 10 minutes
> before clients see it. Seems OK.
>
>> +</ifModule>
>> +# END Expire headers
>> +
>> +# BEGIN Cache-Control Headers
>> +<ifModule mod_headers.c>
>> +  <filesMatch ".(ico|jpe?g|png|gif|swf)$">
>> +    Header set Cache-Control "max-age=2592000, public"
>> +  </filesMatch>
>> +  <filesMatch ".(css)$">
>> +    Header set Cache-Control "max-age=604800, public"
>> +  </filesMatch>
>> +  <filesMatch ".(js)$">
>> +    Header set Cache-Control "max-age=216000, private"
>> +  </filesMatch>
>> +  <filesMatch ".(x?html?|php)$">
>> +    Header set Cache-Control "max-age=600, private, must-revalidate"
>> +  </filesMatch>
>> +</ifModule>
>> +# END Cache-Control Headers
>> +
>> +# BEGIN Turn ETags Off
>> +<ifModule mod_headers.c>
>> +  Header unset ETag
>
> Why turn etags off? Anyway, it is currently not turned on so this
> makes no difference. But the way that we use our web pages, the best
> option would be to use the Last-Modified header. It's currently not
> setting that. Do you know how to enable it?

You can read the explanation here [2] and related resources.

> Regards,
> Arnout
>
>> +</ifModule>
>> +FileETag None
>> +# END Turn ETags Off
>> +
>> +# BEGIN gzip
>> +<ifModule mod_gzip.c>
>> +mod_gzip_on Yes
>> +mod_gzip_dechunk Yes
>> +mod_gzip_item_include file .(html?|txt|css|js)$
>> +mod_gzip_item_include handler ^cgi-script$
>> +mod_gzip_item_include mime ^text/.*
>> +mod_gzip_item_include mime ^application/x-javascript.*
>> +mod_gzip_item_exclude mime ^image/.*
>> +mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
>> +</ifModule>
>> +# END gzip

I think there is a bit of confusion on browser caching here. Browser
caching is always client side, and what we are setting here are only
suggestions to the browser. When a browser requests a content and the
content has changed, it gets the new content. If the server gives
enough information, the browser can decide not to download the
resource because it matches the cache. Without enough information, the
browser follows the safe behavior of always fetching the full resource
from the web.

Caching behavior can easily be circumvented using CTRL+F5, because
it's entirely a client-side decision to get the full content from the
server or from the cache. So having an .htaccess gives the browser the
needed hints on how to handle the cache and decide when to download
the full content. Obviously, if an image file has changed, the user
will get the new image. I think that if this were not the default
behavior, the web would have imploded quite some time ago!

Honestly, I don't want to fine-tune each of these settings. They are
based on proven standards (e.g. [2]); there are tons of examples
online. We serve only static content, and 95% of the website hasn't
changed for at least a year, so suggesting to the browser to cache
most of it seems sensible to me. If a user really wants to be sure to
see the latest content, he can always hit CTRL+F5 or clear the browser
cache.

On the Google PageSpeed Insights page you can find tons of suggestions
and good practices to get the most from your website.

For the deflate/gzip question: enabling both means that the browser
can choose what to request and how. The rationale here is that CPU
cycles are free on a 3G/4G connected device; bytes or megabytes of
traffic are not.

If you'd like cosmetic changes (like "1 week" instead of seconds), I
can do that.

Sincerely, Angelo.

[1] https://developers.google.com/speed/pagespeed/insights/?hl=IT&url=https%3A%2F%2Fbuildroot.org
[2] https://htaccessbook.com/disable-etags/
[3] http://www.seomix.fr/guide-htaccess-performances-et-temps-de-chargement/

> --
> Arnout Vandecappelle                          arnout at mind be
> Senior Embedded Software Architect            +32-16-286500
> Essensium/Mind                                http://www.mind.be
> G.Geenslaan 9, 3001 Leuven, Belgium           BE 872 984 063 RPR Leuven
> LinkedIn profile: http://www.linkedin.com/in/arnoutvandecappelle
> GPG fingerprint:  7493 020B C7E3 8618 8DEC  222C 82EB F404 F9AC 0DDF
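The bandwidth argument above is easy to illustrate locally: Python's zlib implements the same DEFLATE algorithm used by both mod_deflate and gzip, and repetitive HTML compresses dramatically. A sketch with synthetic data (made-up markup, not the real news page):

```python
import zlib

# A synthetic stand-in for a long, repetitive HTML page such as news.html
# (hypothetical markup, not the actual buildroot.org content).
html = ("<li><a href='news.html'>Buildroot 2017.02 released</a></li>\n" * 2000).encode()

# zlib is the algorithm behind both mod_deflate and gzip output.
compressed = zlib.compress(html, 6)

ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.1%} of the original)")
```
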
On 24-02-17 19:24, Angelo Compagnucci wrote:
> Dear Arnout Vandecappelle,
>
> 2017-02-24 18:15 GMT+01:00 Arnout Vandecappelle <arnout@mind.be>:
>> Hi Angelo,
>>
>> On 22-02-17 23:52, Angelo Compagnucci wrote:
>>> Currently the buildroot website is not leveraging common
>>> techniques to have a better loading speed (browser caching,
>>> gzip compression, deflating).

[snip]

>> The best way to test this, IMHO, is to just deploy it and run some
>> timing experiments on the server. Perhaps it would be nice if you
>> could do a measurement now, and then one after deployment, to see
>> the improvement?
>
> There is a tool for that, and it is what I'm currently using [1].

For those who don't speak Italian: [4] :-)

But it doesn't report any actual speeds, does it? It just says
"minifying CSS can save 1008 bytes". So for comparing performance
before/after it doesn't do much.

[snip]

>>> +# BEGIN Turn ETags Off
>>> +<ifModule mod_headers.c>
>>> +  Header unset ETag
>>
>> Why turn etags off? Anyway, it is currently not turned on so this
>> makes no difference. But the way that we use our web pages, the best
>> option would be to use the Last-Modified header. It's currently not
>> setting that. Do you know how to enable it?
>
> You can read the explanation here [2] and related resources.

That site refers to two others (Google and Yahoo) that both say to
enable ETags if appropriate... Google just says to enable ETags, Yahoo
says to disable it when the same site is provided by multiple servers
- which is not the case for us AFAIK.

In addition, your [1] says to "Leverage browser caching", referring to
[5] that says to enable ETags again (both are Google so that makes
sense).

[snip]

> I think there is a bit of confusion on browser caching here. Browser
> caching is always client side, and what we are setting here are only
> suggestions to the browser. When a browser requests a content and the
> content has changed, it gets the new content. If the server gives
> enough information, the browser can decide not to download the
> resource because it matches the cache. Without enough information,
> the browser follows the safe behavior of always fetching the full
> resource from the web.
> Caching behavior can easily be circumvented using CTRL+F5, because
> it's entirely a client-side decision to get the full content from the
> server or from the cache.
> So having an .htaccess gives the browser the needed hints on how to
> handle the cache and decide when to download the full content.
> Obviously, if an image file has changed, the user will get the new
> image. I think that if this were not the default behavior, the web
> would have imploded quite some time ago!

No, that's not true. The browser can't know that the image has
changed, except by downloading it again... The Last-Modified-Time and
ETags headers are supposed to work around that, by allowing the
browser to just issue the request and not wait for the full download.

The Cache-Control is of course better, because then the browser
doesn't have to make any request. But the problem is that the
webserver can't accurately predict when the content will change. So
you just give a hint of how long it's going to stay valid.

> Honestly, I don't want to fine-tune each of these settings. They are
> based on proven standards (e.g. [2]); there are tons of examples
> online.
> We serve only static content, and 95% of the website hasn't changed
> for at least a year, so suggesting to the browser to cache most of it
> seems sensible to me.
> If a user really wants to be sure to see the latest content, he can
> always hit CTRL+F5 or clear the browser cache.

Sure, that's why I said the 10 minutes for the html is OK. However, I
am concerned about the two css files and the javascript file. It is
likely that they are updated together with the html. E.g., a
javascript function would be added in buildroot.js and used in
news.html. If the two files have a different expiry time, then the
browser will use the new HTML file with the old javascript file, with
who knows what kind of strange results.

Note that I'm concerned about this out of a bad experience. Guess how
we solved it? With ETags :-) Not that I'm sure that it was the best
solution though.

"Professional" websites solve this by versioning the resources.
Basically, when deploying, buildroot.js would be renamed to
buildroot.42.js and all references to it in html files are updated as
well. This way, if the js file changes, its name changes as well so it
is automatically uncached. Also, if a cached HTML file is used, it
will still point to the old version of the js file so it's consistent.
However, I don't think that we want to go there - it requires
complicated deployment, for limited gain. What could be done, on the
other hand, is to use server-side includes for the css and js. But
that's a separate patch of course.

> On the Google PageSpeed Insights page you can find tons of
> suggestions and good practices to get the most from your website.

Note that you didn't do anything about what Google identifies as the
most critical issue.

> For the deflate/gzip question: enabling both means that the browser
> can choose what to request and how. The rationale here is that CPU
> cycles are free on a 3G/4G connected device; bytes or megabytes of
> traffic are not.
>
> If you'd like cosmetic changes (like "1 week" instead of seconds), I
> can do that.

Unless you say that you want to keep the number the same in expires
and Cache-Control, which does make sense.

Either way, you can add my

Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>

Even if it's not perfect yet, it's certainly an improvement.

Regards,
Arnout

> [1] https://developers.google.com/speed/pagespeed/insights/?hl=IT&url=https%3A%2F%2Fbuildroot.org
> [2] https://htaccessbook.com/disable-etags/
> [3] http://www.seomix.fr/guide-htaccess-performances-et-temps-de-chargement/

[4] https://developers.google.com/speed/pagespeed/insights/?hl=EN&url=https%3A%2F%2Fbuildroot.org
[5] https://developers.google.com/speed/docs/insights/LeverageBrowserCaching
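The ETag revalidation mechanism debated above can be sketched in a few lines. The function names here are illustrative, not any real server API: the point is that a matching If-None-Match validator lets the server answer 304 with an empty body, so only headers cross the wire:

```python
import hashlib
from typing import Optional

def make_etag(body: bytes) -> str:
    """Derive a strong validator from the content (one common approach)."""
    return '"' + hashlib.sha1(body).hexdigest()[:16] + '"'

def handle_get(body: bytes, if_none_match: Optional[str]):
    """Return (status, payload); 304 with an empty body if the client's copy is current."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""   # revalidated: only headers cross the wire
    return 200, body      # no cached copy, or a stale validator: full download

page = b"<html>...news...</html>"
status, payload = handle_get(page, None)      # first visit: full body
etag = make_etag(page)
status2, payload2 = handle_get(page, etag)    # revisit with If-None-Match: 304
```

If the content changes, the recomputed ETag no longer matches the client's validator and the full body is sent again, which is exactly the "issue the request but skip the download" behavior described in the thread.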
Hello,

On Wed, 22 Feb 2017 23:52:36 +0100, Angelo Compagnucci wrote:

> Currently the buildroot website is not leveraging common
> techniques to have a better loading speed (browser caching,
> gzip compression, deflating).
> This commit provides an .htaccess files with all the needed to
> enable mentioned features. This .htaccess only works if the
> webserver has deflate, expires and headers modules enabled.
>
> Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
> ---
>  docs/website/.htaccess | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
>  create mode 100644 docs/website/.htaccess

Applied to master, thanks. Let's see if this actually works, and gives
useful improvements :)

Thanks!

Thomas
diff --git a/docs/website/.htaccess b/docs/website/.htaccess
new file mode 100644
index 0000000..b02beb8
--- /dev/null
+++ b/docs/website/.htaccess
@@ -0,0 +1,62 @@
+# BEGIN Compress text files
+<ifModule mod_deflate.c>
+  <filesMatch ".(css|js|x?html?|php)$">
+    SetOutputFilter DEFLATE
+  </filesMatch>
+</ifModule>
+# END Compress text files
+
+# BEGIN Expire headers
+<ifModule mod_expires.c>
+  ExpiresActive On
+  ExpiresDefault "access plus 1 seconds"
+  ExpiresByType image/x-icon "access plus 2592000 seconds"
+  ExpiresByType image/jpeg "access plus 2592000 seconds"
+  ExpiresByType image/png "access plus 2592000 seconds"
+  ExpiresByType image/gif "access plus 2592000 seconds"
+  ExpiresByType application/x-shockwave-flash "access plus 2592000 seconds"
+  ExpiresByType text/css "access plus 604800 seconds"
+  ExpiresByType text/javascript "access plus 216000 seconds"
+  ExpiresByType application/javascript "access plus 216000 seconds"
+  ExpiresByType application/x-javascript "access plus 216000 seconds"
+  ExpiresByType text/html "access plus 600 seconds"
+  ExpiresByType application/xhtml+xml "access plus 600 seconds"
+</ifModule>
+# END Expire headers
+
+# BEGIN Cache-Control Headers
+<ifModule mod_headers.c>
+  <filesMatch ".(ico|jpe?g|png|gif|swf)$">
+    Header set Cache-Control "max-age=2592000, public"
+  </filesMatch>
+  <filesMatch ".(css)$">
+    Header set Cache-Control "max-age=604800, public"
+  </filesMatch>
+  <filesMatch ".(js)$">
+    Header set Cache-Control "max-age=216000, private"
+  </filesMatch>
+  <filesMatch ".(x?html?|php)$">
+    Header set Cache-Control "max-age=600, private, must-revalidate"
+  </filesMatch>
+</ifModule>
+# END Cache-Control Headers
+
+# BEGIN Turn ETags Off
+<ifModule mod_headers.c>
+  Header unset ETag
+</ifModule>
+FileETag None
+# END Turn ETags Off
+
+# BEGIN gzip
+<ifModule mod_gzip.c>
+mod_gzip_on Yes
+mod_gzip_dechunk Yes
+mod_gzip_item_include file .(html?|txt|css|js)$
+mod_gzip_item_include handler ^cgi-script$
+mod_gzip_item_include mime ^text/.*
+mod_gzip_item_include mime ^application/x-javascript.*
+mod_gzip_item_exclude mime ^image/.*
+mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
+</ifModule>
+# END gzip
Currently the buildroot website is not leveraging common
techniques to have a better loading speed (browser caching,
gzip compression, deflating).
This commit provides an .htaccess files with all the needed to
enable mentioned features. This .htaccess only works if the
webserver has deflate, expires and headers modules enabled.

Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
---
 docs/website/.htaccess | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)
 create mode 100644 docs/website/.htaccess