Our `config.json` has to be a [valid JSON file](https://jsonlint.com/); for now, something like this will suffice:
```json
{
    "plugins": [{
        "name": "fetch"
    }],
    "loggedComponents": ["service-worker", "fetch"]
}
```
Let's unpack this:
We also need to add this to the `<head>` section of our `index.html`, and the HTML files of our blog posts:
```html
<script defer src="/libresilient.js"></script>
```
Once we deploy these changes, our HTML files will load `libresilient.js` for each visitor, which in turn will register `service-worker.js`. That code in turn will load `config.json`, and based on it, will load `/plugins/fetch.js`.
Each user of our website, after visiting any of the HTML pages, will now have their browser load and register the LibResilient service worker, as configured. From that point on, all requests initiated in the context of our website will always be handled by LibResilient, which in this particular configuration means the `fetch` plugin.
This doesn't yet provide any interesting functionality, though. So how about we add some?
## Adding cache
The bare minimum would be to add an offline cache to our website. This would at least allow our visitors to continue browsing content they have already loaded once, even if they are offline or if our site is down for whatever reason.
This is now easy to do. We need just two things:
- the [`cache` plugin](https://gitlab.com/rysiekpl/libresilient/-/blob/master/plugins/cache.js)\
This LibResilient plugin makes use of the [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache) to store and retrieve content offline.\
As with the `fetch` plugin before, we need it in the `/plugins/` subdirectory of our website.
- a small modification of our `config.json` to enable the `cache` plugin.
Our website structure is now:
- `index.html`
- `favicon.ico`
- `/assets/`
  - `style.css`
  - `logo.png`
  - `font.woff`
- `/blog/`
  - `01-first.html`
  - `02-second.html`
- `config.json`
- `libresilient.js`
- `service-worker.js`
- `/plugins/`
  - `fetch.js`
  - **`cache.js`**
Our `config.json` should now look like this:
```json
{
    "plugins": [{
        "name": "fetch"
    },{
        "name": "cache"
    }],
    "loggedComponents": ["service-worker", "fetch", "cache"],
    "defaultPluginTimeout": 1000
}
```
Note the addition of the `cache` plugin config, and a "cache" component in `loggedComponents`. The `cache` plugin does not require any other configuration to work, so everything remains nice and simple.
You will also note an additional key in the config file: `defaultPluginTimeout`. This defines how long (in milliseconds; `1000` here means 1 second) LibResilient waits for a response from a plugin before deciding that it is not going to work and moving on to the next plugin. By default this is set to `10000` (so, 10 seconds), which is almost certainly too long for a website as simple as the one in our example. One second seems reasonable.
What this gives us is that any content successfully retrieved by the `fetch` plugin will now be cached for offline use. If the website goes down for whatever reason (and the `fetch` plugin starts returning errors or simply times out), visitors who have been to the site before will still have access to the content they had already viewed.
> ### Note on plugin types
>
> The `cache` plugin is a "stashing" plugin in LibResilient nomenclature. Such plugins have no way of accessing remote content, they are only good at saving such content locally for later, offline use. Currently there are no other stashing plugins, but anything that can save data locally and is available in Service Workers could be used to write new ones.
>
> Other types of plugins are:
>
> - **"transport"** plugins\
> These are the plugins that are able to access content remotely, by whatever means; the `fetch` plugin is an example of a transport plugin. There are others.
>
> - **"wrapper"** plugins\
> These are plugins that wrap other plugins to add functionality. To function, wrapping plugins need other plugins to "wrap". We will cover this later.
### Cache-first?
What if we do it the other way around, and configure the `cache` plugin before the `fetch` plugin? In that case we end up with a so-called ["cache-first"](https://apiumhub.com/tech-blog-barcelona/service-worker-caching/#Cache_first) strategy.
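To make this concrete, here is a sketch of a cache-first `config.json`; the only change from the configuration above is the order of the plugins:
```json
{
    "plugins": [{
        "name": "cache"
    },{
        "name": "fetch"
    }],
    "loggedComponents": ["service-worker", "fetch", "cache"],
    "defaultPluginTimeout": 1000
}
```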
In the case of LibResilient this means that the first time a visitor loads our example website, as their cache is empty, the `cache` plugin will fail to return content. This will lead LibResilient to try the next configured plugin, which in this case is `fetch`. Content will get fetched by it, and then stashed locally by the `cache` plugin.
The next time that same visitor loads that particular resource, it will be served from cache, so the response will be instantaneous. In the background, however, LibResilient will still use the `fetch` plugin to try to retrieve a newer version of that content. If it is retrieved and is indeed newer, it will be stashed by the `cache` plugin.
> ### Note on stashing in LibResilient
>
> LibResilient treats stashing plugins in a special way. If there are multiple plugins configured and a stashing plugin (like the `cache` plugin) is among them, then:
> - when content is retrieved by a transport plugin (like `fetch`) configured *before* a stashing plugin, that content is then stashed by the stashing plugin for later offline use.
> - if all transport plugins configured *before* a stashing plugin fail, and stashed content exists and is returned, LibResilient will then run any transport plugins configured *after* the stashing plugin in the background, to try to retrieve a fresh version of the content; if any of these succeeds, the response will be stashed by the stashing plugin.
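>
> As a purely illustrative sketch of both rules, assume a second transport plugin exists; the name `hypothetical-transport` below is just a placeholder, not a real LibResilient plugin:
> ```json
> {
>     "plugins": [{
>         "name": "fetch"
>     },{
>         "name": "cache"
>     },{
>         "name": "hypothetical-transport"
>     }],
>     "loggedComponents": ["service-worker", "fetch", "cache"],
>     "defaultPluginTimeout": 1000
> }
> ```
> With this ordering, content retrieved by `fetch` is stashed by `cache`; if `fetch` fails and `cache` returns stashed content, the hypothetical transport configured *after* `cache` is tried in the background, and any fresh content it retrieves is stashed again.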