Quickstart guide
This is a quickstart guide for deploying LibResilient on a website. This guide makes a few assumptions:
- the website in question is a static site;
- the administrators of the website have shell access on the hosting server, and the ability to install software there;
- LibResilient is going to be deployed for the whole site.
These assumptions are made to simplify this quickstart guide and are not necessary for LibResilient to work.
The website
We are going to assume a simple website, consisting of:
- `index.html`
- `favicon.ico`
- an `/assets/` directory, containing: `style.css`, `logo.png`, `font.woff`
- a `/blog/` directory, containing two blog posts: `01-first.html` and `02-second.html`
In fact, this hypothetical website is very similar to (and only a bit simpler than) Resilient.Is, the homepage of this project.
First steps
We shall start with a completely minimal (but not really useful) deployment of LibResilient, and then gradually add functionality.
To start, we need:
- `libresilient.js`
  This script is responsible for loading the service worker script. It can be included using a `<script>` tag or copy-pasted into the HTML. We'll go with the `<script>` tag. The `libresilient.js` script should be located in the same directory as the `service-worker.js` script.
- `service-worker.js`
  This is the heart of LibResilient. Once loaded, it will use the supplied configuration (in `config.json`) to load and configure plugins. Plugins in turn will perform actual requests and other tasks.
- the `fetch` plugin
  This LibResilient plugin uses basic HTTP Fetch to retrieve content. LibResilient expects plugins in the `plugins/` subdirectory of the directory where the `service-worker.js` script is located, so this file should be saved as `/plugins/fetch.js` for our hypothetical website.
- `config.json`
  This is the config file, and it should also reside in the same directory as `service-worker.js`. We will write it from scratch, although an example is available here.
Our `config.json` has to be a valid JSON file; for now it should only contain this:

```json
{
  "plugins": [{
    "name": "fetch"
  }],
  "loggedComponents": ["service-worker", "fetch"]
}
```
Let's unpack this:
- The `plugins` key contains an array of objects.
  Each object defines the configuration for a plugin. For the simplest plugins, the minimal configuration is just the name of the plugin. Based on the name, `service-worker.js` establishes which file to load for a given plugin; in this case, it will be `./plugins/fetch.js` (relative to where `service-worker.js` is).
- The `loggedComponents` key is an array of strings.
  It lists the components whose logs should be visible in the developer console in the browser. The `service-worker.js` script logs messages as the "service-worker" component, the `fetch` plugin as (you guessed it!) "fetch". We want the log messages visible for both of them, just so that we know what's going on. In a production environment we would perhaps want to limit the log messages to only some components.
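The name-to-file mapping can be sketched like this (an assumed helper, `pluginScriptPath`, written for illustration; not LibResilient's actual loader code):

```javascript
// Sketch of how a plugin name could map to its script file, relative
// to the directory containing service-worker.js.
function pluginScriptPath(name) {
  return './plugins/' + name + '.js';
}

console.log(pluginScriptPath('fetch')); // './plugins/fetch.js'
```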
With all this, our website structure now looks like this:

```
index.html
favicon.ico
/assets/
    style.css
    logo.png
    font.woff
/blog/
    01-first.html
    02-second.html
config.json
libresilient.js
service-worker.js
/plugins/
    fetch.js
```
We also need to add this to the `<head>` section of our `index.html`, and of the HTML files in the `/blog/` directory:

```html
<script defer src="/libresilient.js"></script>
```
Once we deploy these changes, our HTML files will load `libresilient.js` for each visitor, which in turn will register `service-worker.js`. That code will then load `config.json`, and based on it, will load `/plugins/fetch.js`.
Each user of our website, after visiting any of the HTML pages, will now have their browser load and register the LibResilient service worker, as configured. From that point on, all requests initiated in the context of our website will always be handled by LibResilient, and in this particular configuration, by the `fetch` plugin.
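The plugin fallback behavior can be sketched roughly like this (a simplified model with assumed names such as `runPlugins`; not LibResilient's actual implementation):

```javascript
// Simplified model of plugin fallback: try each configured plugin in
// order until one returns a response.
async function runPlugins(plugins, request) {
  for (const plugin of plugins) {
    try {
      return await plugin.fetch(request);
    } catch (e) {
      // This plugin failed or timed out; fall through to the next one.
    }
  }
  throw new Error('all plugins failed for ' + request);
}

// Hypothetical usage with stub plugins:
const offline = { fetch: async () => { throw new Error('network down'); } };
const working = { fetch: async (req) => 'response for ' + req };
runPlugins([offline, working], '/index.html').then(console.log);
// prints 'response for /index.html'
```

With only the `fetch` plugin configured, the loop has a single step; the value of this design shows up once more plugins are added below.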
This doesn't yet provide any interesting functionality, though. So let's add some next.
Adding cache
The bare minimum would be to add an offline cache to our website. This would at least allow our visitors to continue to browse content they have already loaded once, even if they are offline or if our site is down for whatever reason.
This is now easy to do. We need just two things:
- the `cache` plugin
  This LibResilient plugin makes use of the Cache API to store and retrieve content offline. As with the `fetch` plugin before, we need it in the `/plugins/` subdirectory of our website.
- a small modification of our `config.json` to enable the `cache` plugin.
Our website structure is now:

```
index.html
favicon.ico
/assets/
    style.css
    logo.png
    font.woff
/blog/
    01-first.html
    02-second.html
config.json
libresilient.js
service-worker.js
/plugins/
    fetch.js
    cache.js
```
Our `config.json` should now look like this:

```json
{
  "plugins": [{
    "name": "fetch"
  },{
    "name": "cache"
  }],
  "loggedComponents": ["service-worker", "fetch", "cache"],
  "defaultPluginTimeout": 1000
}
```
Note the addition of the `cache` plugin config, and of a "cache" component in `loggedComponents`. The `cache` plugin does not require any other configuration to work, so everything remains nice and simple.
You will also notice the additional key in the config file: `defaultPluginTimeout`. This defines how long (in milliseconds; `1000` there means "1 second") LibResilient waits for a response from a plugin before it decides that it is not going to work, and moves on to the next plugin. By default this is set to `10000` (so, 10 seconds), which is almost certainly too long for a website as simple as in our example. One second seems reasonable.
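Conceptually, a per-plugin timeout amounts to racing the plugin's response against a timer. Here is a sketch (assumed shape with a hypothetical `withTimeout` helper; not LibResilient's actual code):

```javascript
// Race a plugin's promise against a timeout, as defaultPluginTimeout
// implies: whichever settles first wins.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('plugin timed out')), ms);
  });
  // Clear the timer once the race settles, either way.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Hypothetical usage: a "plugin" that responds after 50 ms, raced
// against a 1000 ms timeout (our defaultPluginTimeout value).
const slowPlugin = new Promise((res) => setTimeout(() => res('content'), 50));
withTimeout(slowPlugin, 1000).then((r) => console.log(r)); // 'content'
```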
What this gives us is that any content successfully retrieved by `fetch` will now be cached for offline use. If the website goes down for whatever reason (and the `fetch` plugin starts returning errors or just times out), users who had visited before will continue to have access to the content they had already accessed.
Note on plugin types
The `cache` plugin is a "stashing" plugin in LibResilient nomenclature. Such plugins have no way of accessing remote content; they are only good at saving such content locally for later, offline use. Currently there are no other stashing plugins, but anything that can save data locally and is available in Service Workers could be used to write new ones.
Other types of plugins are:
- "transport" plugins
  These are the plugins that are able to access content remotely, by whatever means; the `fetch` plugin is an example of a transport plugin. There are others.
- "wrapper" plugins
  These are plugins that wrap other plugins to add functionality. To function, wrapping plugins need other plugins to "wrap". We will cover this later.
Cache-first?
What if we do it the other way around, and configure the `cache` plugin before the `fetch` plugin? In that case we end up with a so-called "cache-first" strategy.
In the case of LibResilient this means that the first time a visitor loads our example website, as their cache is empty, the `cache` plugin will fail to return content. This will lead LibResilient to try the next configured plugin, which in this case is `fetch`. Content will get fetched by it, and then stashed locally by the `cache` plugin.
The next time that same visitor loads that particular resource, it will be served from cache, so the response will be instantaneous. In the background, however, LibResilient will still use the `fetch` plugin to try to retrieve a newer version of that content. If it is retrieved and is indeed newer, it will be stashed by the `cache` plugin.
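Assuming the config format shown earlier, a cache-first setup would simply swap the order of the entries in the `plugins` array:

```json
{
  "plugins": [{
    "name": "cache"
  },{
    "name": "fetch"
  }],
  "loggedComponents": ["service-worker", "fetch", "cache"],
  "defaultPluginTimeout": 1000
}
```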
Note on stashing in LibResilient
LibResilient treats stashing plugins in a special way. If there are multiple plugins configured and a stashing plugin (like the `cache` plugin) is among them, then:
- when content is retrieved by a transport plugin (like `fetch`) configured before the stashing plugin, that content is then stashed by the stashing plugin for later offline use;
- if all transport plugins configured before the stashing plugin fail, and stashed content exists and is returned, LibResilient will then run any transport plugins configured after the stashing plugin in the background, to try to retrieve a fresh version of the content; if any of these succeeds, the response will be stashed by the stashing plugin.