This is page 20 of 25. Use http://codebase.md/id/docs/get_started/create/presentation_layout.html?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .ci
│   ├── check-markdownfmt.sh
│   ├── check-metadata.sh
│   ├── check-pr-no-readme.sh
│   ├── check-required-files.sh
│   ├── check-short.sh
│   ├── check-ymlfmt.sh
│   └── get-markdownfmt.sh
├── .common-templates
│   ├── maintainer-community.md
│   ├── maintainer-docker.md
│   ├── maintainer-hashicorp.md
│   └── maintainer-influxdata.md
├── .dockerignore
├── .github
│   └── workflows
│       └── ci.yml
├── .template-helpers
│   ├── arches.sh
│   ├── autogenerated-warning.md
│   ├── compose.md
│   ├── generate-dockerfile-links-partial.sh
│   ├── generate-dockerfile-links-partial.tmpl
│   ├── get-help.md
│   ├── issues.md
│   ├── license-common.md
│   ├── template.md
│   ├── variant-alpine.md
│   ├── variant-default-buildpack-deps.md
│   ├── variant-default-debian.md
│   ├── variant-default-ubuntu.md
│   ├── variant-onbuild.md
│   ├── variant-slim.md
│   ├── variant-windowsservercore.md
│   ├── variant.md
│   └── variant.sh
├── adminer
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── aerospike
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── almalinux
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── alpine
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── alt
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── amazoncorretto
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── amazonlinux
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── api-firewall
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── arangodb
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── archlinux
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── backdrop
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── bash
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── bonita
│   ├── compose.yaml
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── buildpack-deps
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── busybox
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-glibc.md
│   ├── variant-musl.md
│   ├── variant-uclibc.md
│   └── variant.md
├── caddy
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo-120.png
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── cassandra
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── chronograf
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── cirros
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── clearlinux
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── clefos
│   ├── content.md
│   ├── deprecated.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── clickhouse
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── clojure
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── composer
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── convertigo
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── couchbase
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── couchdb
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── crate
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── dart
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── debian
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-slim.md
│   └── variant.md
├── docker
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-rootless.md
│   └── variant-windowsservercore.md
├── Dockerfile
├── drupal
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-fpm.md
├── eclipse-mosquitto
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── eclipse-temurin
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── eggdrop
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── elasticsearch
│   ├── compose.yaml
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-alpine.md
├── elixir
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── emqx
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── erlang
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── fedora
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── flink
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── fluentd
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── friendica
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── gazebo
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── gcc
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── generate-repo-stub-readme.sh
├── geonetwork
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-postgres.md
│   └── variant.md
├── get-categories.sh
├── ghost
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── golang
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-alpine.md
│   └── variant-tip.md
├── gradle
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── groovy
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── haproxy
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── haskell
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-slim.md
├── haxe
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── hello-world
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── update.sh
├── hitch
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── httpd
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── hylang
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── ibm-semeru-runtimes
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── ibmjava
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── influxdb
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-data.md
│   └── variant-meta.md
├── irssi
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── jetty
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── joomla
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── jruby
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── julia
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── kapacitor
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── kibana
│   ├── compose.yaml
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── kong
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── krakend
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo-120.png
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── LICENSE
├── lightstreamer
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── liquibase
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── logstash
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-alpine.md
├── mageia
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── mariadb
│   ├── compose.yaml
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── markdownfmt.sh
├── matomo
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── maven
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── mediawiki
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── memcached
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── metadata.json
├── metadata.sh
├── mongo
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── mongo-express
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── monica
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── mono
│   ├── content.md
│   ├── deprecated.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── mysql
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── nats
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── neo4j
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── neurodebian
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── nextcloud
│   ├── content.md
│   ├── deprecated.md
│   ├── github-repo
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── nginx
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-perl.md
├── node
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── notary
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── odoo
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── open-liberty
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── openjdk
│   ├── content.md
│   ├── deprecated.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-alpine.md
│   ├── variant-oracle.md
│   └── variant-slim.md
├── oraclelinux
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-slim.md
├── orientdb
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── parallel-update.sh
├── percona
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── perl
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── photon
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── php
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-apache.md
│   ├── variant-cli.md
│   ├── variant-fpm.md
│   └── variant.md
├── php-zendserver
│   ├── content.md
│   ├── deprecated.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── phpmyadmin
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── plone
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── postfixadmin
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-apache.md
│   ├── variant-fpm-alpine.md
│   └── variant-fpm.md
├── postgres
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── push.pl
├── push.sh
├── pypy
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── python
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-slim.md
├── r-base
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── rabbitmq
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── rakudo-star
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── README.md
├── redis
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── redmine
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── registry
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── rethinkdb
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── rocket.chat
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── rockylinux
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── ros
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── ruby
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── rust
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── sapmachine
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── satosa
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── scratch
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── silverpeas
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── solr
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── sonarqube
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── spark
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── spiped
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── storm
│   ├── compose.yaml
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── swift
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── swipl
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── teamspeak
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── telegraf
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── tomcat
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── tomee
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── traefik
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-alpine.md
├── ubuntu
│   ├── content.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── unit
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── update.sh
├── varnish
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── websphere-liberty
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── wordpress
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   ├── variant-cli.md
│   └── variant-fpm.md
├── xwiki
│   ├── content.md
│   ├── get-help.md
│   ├── github-repo
│   ├── issues.md
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   └── README.md
├── ymlfmt.sh
├── yourls
│   ├── compose.yaml
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.svg
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-fpm.md
├── znc
│   ├── content.md
│   ├── github-repo
│   ├── license.md
│   ├── logo.png
│   ├── maintainer.md
│   ├── metadata.json
│   ├── README-short.txt
│   ├── README.md
│   └── variant-slim.md
└── zookeeper
    ├── compose.yaml
    ├── content.md
    ├── github-repo
    ├── license.md
    ├── logo.png
    ├── maintainer.md
    ├── metadata.json
    ├── README-short.txt
    └── README.md
```

# Files

--------------------------------------------------------------------------------
/notary/content.md:
--------------------------------------------------------------------------------

```markdown
 1 | # How to use this repository
 2 | 
 3 | The Notary repository contains two distinct applications: Notary Server and Notary Signer. The images for these applications are tagged "server-\*" and "signer-\*" respectively. While the server can be configured to run entirely in memory, this configuration is not appropriate for a production deployment, so you should expect to run both a server *and* a signer.
 4 | 
 5 | Ensure that the images you are running have matching version tags. That is, if you are running the server-0.2.0 tag, you should also be running the matching signer-0.2.0 tag. Running different versions of the server and signer is not a supported configuration.
 6 | 
 7 | # Notary Server
 8 | 
 9 | The Notary server manages JSON formatted TUF (The Update Framework) metadata for Notary clients and the docker command line tool's Docker Content Trust features. It requires a companion Notary signer instance and a MySQL (or MariaDB) database.
10 | 
11 | ## How to use this image
12 | 
13 | The following sample configuration is included in the image:
14 | 
15 | 	{
16 | 	    "server": {
17 | 	        "http_addr": ":4443",
18 | 	        "tls_key_file": "/certs/notary-server.key",
19 | 	        "tls_cert_file": "/certs/notary-server.crt"
20 | 	    },
21 | 	    "trust_service": {
22 | 	      "type": "remote",
23 | 	      "hostname": "notarysigner",
24 | 	      "port": "7899",
25 | 	      "tls_ca_file": "/certs/root-ca.crt",
26 | 	      "key_algorithm": "ecdsa",
27 | 	      "tls_client_cert": "/certs/notary-server.crt",
28 | 	      "tls_client_key": "/certs/notary-server.key"
29 | 	    },
30 | 	    "logging": {
31 | 	        "level": "info"
32 | 	    },
33 | 	    "storage": {
34 | 	        "backend": "mysql",
35 | 	        "db_url": "server@tcp(mysql:3306)/notaryserver?parseTime=True"
36 | 	    }
37 | 	}
38 | 
39 | The components you *must* provide are the certificates and keys, and the links for the `notarysigner` and `mysql` hostnames. The `root-ca.crt` file enables the Notary server to identify valid signers, which it communicates with over mutual TLS using a gRPC interface. The `notary-server.crt` and `notary-server.key` are used to identify this service to both external clients and signer instances. All the certificate and key files must be readable by the `notary` user, which is created inside the container and owns the notary-server process.
40 | 
41 | If you require a different configuration, you should wrap this image with your own Dockerfile.
42 | 
43 | For more details on how to configure your Notary server, please read the [docs](https://github.com/theupdateframework/notary/blob/master/docs/reference/server-config.md).
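
A deployment therefore typically runs three containers on a shared network, so that the `notarysigner` and `mysql` hostnames from the sample configuration resolve. The compose-style sketch below is illustrative only: the image tags, certificate directory, and database setup are assumptions for this example, not defaults shipped with the image.

	services:
	  mysql:
	    image: mariadb:10.4          # example; must host the notaryserver/notarysigner databases
	  notarysigner:
	    image: notary:signer-0.6.1   # example tag; keep it matched with the server tag
	    volumes:
	      - ./certs:/certs:ro        # notary-signer.crt/.key and notary-server.crt
	  notaryserver:
	    image: notary:server-0.6.1   # example tag; keep it matched with the signer tag
	    ports:
	      - "4443:4443"              # http_addr from the sample configuration
	    volumes:
	      - ./certs:/certs:ro        # notary-server.crt/.key and root-ca.crt
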
44 | 
45 | # Notary Signer
46 | 
47 | The Notary signer is a support service for the Notary server. It manages private keys and performs all signing operations. It requires a MySQL (or MariaDB) database.
48 | 
49 | ## How to use this image
50 | 
51 | The following sample configuration is included in the image:
52 | 
53 | 	{
54 | 	    "server": {
55 | 	        "http_addr": ":4444",
56 | 	        "grpc_addr": ":7899",
57 | 	        "tls_cert_file": "/certs/notary-signer.crt",
58 | 	        "tls_key_file": "/certs/notary-signer.key",
59 | 	        "client_ca_file": "/certs/notary-server.crt"
60 | 	    },
61 | 	    "logging": {
62 | 	        "level": "info"
63 | 	    },
64 | 	    "storage": {
65 | 	        "backend": "mysql",
66 | 	        "db_url": "signer@tcp(mysql:3306)/notarysigner?parseTime=True"
67 | 	    }
68 | 	}
69 | 
70 | The components you *must* provide are the certificates and keys, and the link for the `mysql` hostname. The `notary-server.crt` file enables the Notary signer to identify valid servers, which it communicates with over mutual TLS using a gRPC interface. The `notary-signer.crt` and `notary-signer.key` are used to identify this service to server instances. All of the certificate and key files must be readable by the `notary` user, which is created inside the container and owns the notary-signer process.
71 | 
72 | If you require a different configuration, you should wrap this image with your own Dockerfile.
73 | 
74 | For more details on how to configure your Notary signer, please read the [docs](https://github.com/theupdateframework/notary/blob/master/docs/reference/signer-config.md).
75 | 
76 | ## Database Migrations
77 | 
78 | Notary server and signer both use the [migrate tool](https://github.com/golang-migrate/migrate) to manage database updates. The migration files can be found [here](https://github.com/theupdateframework/notary/tree/master/migrations/) and are an ordered list of plain SQL files. The migrate tool manages schema versions to ensure that migrations start and end at the correct point.
79 | 
80 | We strongly recommend you create separate databases and users with restricted permissions such that the server cannot access the signer's database and vice versa.
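As an illustrative sketch of that separation (the database and user names match the sample `db_url` values above, but the grants and host patterns are assumptions to adapt to your environment):

```sql
CREATE DATABASE notaryserver;
CREATE DATABASE notarysigner;
CREATE USER 'server'@'%';
CREATE USER 'signer'@'%';
-- each user can reach only its own database
GRANT ALL PRIVILEGES ON notaryserver.* TO 'server'@'%';
GRANT ALL PRIVILEGES ON notarysigner.* TO 'signer'@'%';
```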
81 | 
```

--------------------------------------------------------------------------------
/debian/content.md:
--------------------------------------------------------------------------------

```markdown
 1 | # What is Debian?
 2 | 
 3 | Debian is an operating system which is composed primarily of free and open-source software, most of which is under the GNU General Public License, and developed by a group of individuals known as the Debian project. Debian is one of the most popular Linux distributions for personal computers and network servers, and has been used as a base for several other Linux distributions.
 4 | 
 5 | > [wikipedia.org/wiki/Debian](https://en.wikipedia.org/wiki/Debian)
 6 | 
 7 | %%LOGO%%
 8 | 
 9 | # About this image
10 | 
11 | The images in this repository are intended to be as minimal as possible (because of the immutable/layered nature of containers, it's much easier to add than it is to remove). More specifically, they're built from [the "minbase" variant](https://manpages.debian.org/stable/debootstrap/debootstrap.8.en.html#variant=minbase_buildd_fakechroot), which only installs "required" packages, and thus creates the smallest possible footprint that is still "Debian" (as defined/managed by [the Release and FTP teams](https://www.debian.org/intro/organization#distribution) within the project).
12 | 
13 | The `%%IMAGE%%:latest` tag will always point to the latest stable release. Stable releases are also tagged with their version (ie, `%%IMAGE%%:11` is an alias for `%%IMAGE%%:bullseye`, `%%IMAGE%%:10` is an alias for `%%IMAGE%%:buster`, etc).
14 | 
15 | The rolling tags (`%%IMAGE%%:stable`, `%%IMAGE%%:testing`, etc) use the rolling suite names in their `/etc/apt/sources.list` file (ie, `deb http://deb.debian.org/debian testing main`).
16 | 
17 | The mirror of choice for these images is [the deb.debian.org CDN pointer/redirector](https://deb.debian.org) so that it's as reliable as possible for the largest subset of users (and is also the default mirror for `debootstrap` as of [2016-10-20](https://anonscm.debian.org/cgit/d-i/debootstrap.git/commit/?id=9e8bc60ad1ccf3a25ce7890526b70059f3e770de)). See the [deb.debian.org homepage](https://deb.debian.org) for more information.
18 | 
19 | If you find yourself needing a Debian release which is EOL (and thus only available from [archive.debian.org](http://archive.debian.org)), you should check out [the `debian/eol` image](https://hub.docker.com/r/debian/eol/), which includes tags for Debian releases as far back as Potato (Debian 2.2), the first release to fully utilize APT.
20 | 
21 | ## Locales
22 | 
23 | Given that it is a faithful "minbase" install of Debian, this image only includes the `C`, `C.UTF-8`, and `POSIX` locales by default. For most uses requiring a UTF-8 locale, `C.UTF-8` is likely sufficient (`-e LANG=C.UTF-8` or `ENV LANG C.UTF-8`).
24 | 
25 | For uses where that is not sufficient, other locales can be installed/generated via the `locales` package. [PostgreSQL has a good example of doing so](https://github.com/docker-library/postgres/blob/69bc540ecfffecce72d49fa7e4a46680350037f9/9.6/Dockerfile#L21-L24), copied below:
26 | 
27 | ```dockerfile
28 | RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \
29 | 	&& localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
30 | ENV LANG en_US.utf8
31 | ```
32 | 
33 | ## How It's Made
34 | 
35 | The rootfs tarballs for this image are built using [the reproducible-Debian-rootfs tool, `debuerreotype`](https://github.com/debuerreotype/debuerreotype), with an explicit goal being that they are transparent and reproducible. Using the same toolchain, it should be possible to regenerate (clean-room!) the same tarballs used for building the official Debian images. [The `examples/debian.sh` script in that debuerreotype repository](https://github.com/debuerreotype/debuerreotype/blob/master/examples/debian.sh) (and the `debian-all.sh` companion/wrapper) is the canonical entrypoint used for creating the artifacts published in this image (via a process similar to the `docker-run.sh` included in the root of that repository).
36 | 
37 | Additionally, the scripts in [%%GITHUB-REPO%%](%%GITHUB-REPO%%) are used to create each tag's `Dockerfile` and collect architecture-specific tarballs into [`dist-ARCH` branches on the same repository](%%GITHUB-REPO%%/branches), which also contain extra metadata about the artifacts included in each build, such as explicit package versions included in the base image (`rootfs.manifest`), the exact snapshot.debian.org timestamp used for `debuerreotype` invocation (`rootfs.debuerreotype-epoch`), the `sources.list` found in the image (`rootfs.sources-list`) and the one used during image creation (`rootfs.sources-list-snapshot`), etc.
38 | 
39 | For convenience, the SHA256 checksum (and full build command) for each of the primary `rootfs.tar.xz` artifacts are also published at [docker.debian.net](https://docker.debian.net/).
40 | 
```

--------------------------------------------------------------------------------
/yourls/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is YOURLS?
  2 | 
  3 | YOURLS is a set of PHP scripts that will allow you to run Your Own URL Shortener. You'll have full control over your data, detailed stats, analytics, plugins, and more. It's free.
  4 | 
  5 | > [github.com/YOURLS/YOURLS](https://github.com/YOURLS/YOURLS)
  6 | 
  7 | %%LOGO%%
  8 | 
  9 | # How to use this image
 10 | 
 11 | ## Start a `%%IMAGE%%` server instance
 12 | 
 13 | ```console
 14 | $ docker run --name some-%%REPO%% --link some-mysql:mysql \
 15 |     -e YOURLS_SITE="https://example.com" \
 16 |     -e YOURLS_USER="example_username" \
 17 |     -e YOURLS_PASS="example_password" \
 18 |     -d %%IMAGE%%
 19 | ```
 20 | 
21 | The YOURLS instance accepts a number of environment variables for configuration; see the *Environment Variables* section below.
 22 | 
 23 | If you'd like to use an external database instead of a linked `mysql` container, specify the hostname and port with `YOURLS_DB_HOST` along with the password in `YOURLS_DB_PASS` and the username in `YOURLS_DB_USER` (if it is something other than `root`):
 24 | 
 25 | ```console
26 | $ docker run --name some-%%REPO%% -e YOURLS_DB_HOST=10.1.2.3:3306 \
 27 |     -e YOURLS_DB_USER=... -e YOURLS_DB_PASS=... -d %%IMAGE%%
 28 | ```
 29 | 
 30 | ## Connect to the YOURLS administration interface
 31 | 
 32 | If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 33 | 
 34 | ```console
 35 | $ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%IMAGE%%
 36 | ```
 37 | 
 38 | Then, access it via `http://localhost:8080/admin/` or `http://<host-ip>:8080/admin/` in a browser.
 39 | 
 40 | **Note:** On first instantiation, reaching the root folder will generate an error. Access the YOURLS administration interface via the path `/admin/`.
 41 | 
 42 | ## Environment Variables
 43 | 
 44 | When you start the `yourls` image, you can adjust the configuration of the YOURLS instance by passing one or more environment variables on the `docker run` command line.  
 45 | The YOURLS instance accepts [a number of environment variables for configuration](https://yourls.org/docs/guide/essentials/configuration).  
 46 | A few notable/important examples for using this Docker image include the following.
 47 | 
 48 | ### `YOURLS_SITE`
 49 | 
 50 | **Required.**  
 51 | YOURLS instance URL, no trailing slash, lowercase.
 52 | 
 53 | Example: `YOURLS_SITE="https://example.com"`
 54 | 
 55 | ### `YOURLS_USER`
 56 | 
 57 | **Required.**  
 58 | YOURLS instance username.
 59 | 
 60 | Example: `YOURLS_USER="example_username"`
 61 | 
 62 | ### `YOURLS_PASS`
 63 | 
 64 | **Required.**  
 65 | YOURLS instance password.
 66 | 
 67 | Example: `YOURLS_PASS="example_password"`
 68 | 
 69 | ### `YOURLS_DB_HOST`, `YOURLS_DB_USER`, `YOURLS_DB_PASS`
 70 | 
71 | **Optional if using a linked `mysql` container.**
 72 | 
 73 | Host, user (defaults to `root`) and password for the database.
 74 | 
 75 | ### `YOURLS_DB_NAME`
 76 | 
 77 | **Optional.**  
 78 | Database name, defaults to `yourls`. The database must have been created before installing YOURLS.
 79 | 
 80 | ### `YOURLS_DB_PREFIX`
 81 | 
 82 | **Optional.**  
 83 | Database tables prefix, defaults to `yourls_`. Only set this when you need to override the default table prefix.
 84 | 
 85 | ## Docker Secrets
 86 | 
 87 | As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
 88 | 
 89 | ```console
 90 | $ docker run --name some-%%REPO%% -e YOURLS_DB_PASS_FILE=/run/secrets/mysql-root ... -d %%IMAGE%%:tag
 91 | ```
 92 | 
 93 | Currently, this is supported for `YOURLS_DB_HOST`, `YOURLS_DB_USER`, `YOURLS_DB_PASS`, `YOURLS_DB_NAME`, `YOURLS_DB_PREFIX`, `YOURLS_SITE`, `YOURLS_USER`, and `YOURLS_PASS`.
 94 | 
 95 | ## %%COMPOSE%%
 96 | 
 97 | Run `docker compose up`, wait for it to initialize completely, and visit `http://localhost:8080/admin/` or `http://<host-ip>:8080/admin/` (as appropriate).
 98 | 
 99 | ## Adding additional libraries / extensions
100 | 
101 | This image does not provide any additional PHP extensions or other libraries, even if they are required by popular plugins. There are an infinite number of possible plugins, and they potentially require any extension PHP supports. Including every PHP extension that exists would dramatically increase the image size.
102 | 
103 | If you need additional PHP extensions, you'll need to create your own image `FROM` this one. The [documentation of the `php` image](https://github.com/docker-library/docs/blob/master/php/README.md#how-to-install-more-php-extensions) explains how to compile additional extensions.
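For example, a derived image might add an extension using the helper scripts provided by the underlying `php` image; the `bcmath` extension here is just an illustrative choice:

```dockerfile
FROM %%IMAGE%%

# docker-php-ext-install is provided by the underlying php base image
RUN docker-php-ext-install bcmath
```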
104 | 
105 | The following Docker Hub features can help with the task of keeping your dependent images up-to-date:
106 | 
107 | -	[Automated Builds](https://docs.docker.com/docker-hub/builds/) let Docker Hub automatically build your Dockerfile each time you push changes to it.
108 | 
```

--------------------------------------------------------------------------------
/satosa/content.md:
--------------------------------------------------------------------------------

```markdown
 1 | # What is SATOSA?
 2 | 
 3 | SATOSA is a configurable proxy for translating between different authentication protocols such as SAML2, OpenID Connect, and OAuth2.
 4 | 
 5 | %%LOGO%%
 6 | 
 7 | # How to use this image
 8 | 
 9 | ## To start a SATOSA instance
10 | 
11 | The basic pattern for starting a `%%REPO%%` instance is:
12 | 
13 | ```sh
14 | docker run --name some-%%REPO%% -d %%IMAGE%%
15 | ```
16 | 
17 | To access the instance from the host without the container's IP, use port mappings:
18 | 
19 | ```sh
20 | docker run --name some-%%REPO%% -p 80:8080 -d %%IMAGE%%
21 | ```
22 | 
23 | The entrypoint script outputs SAML2 metadata to the container log at start time. This metadata refers to the instance's base URL, e.g., `https://example.com`. Browsers must be able to access the instance over HTTPS.
24 | 
25 | # How to extend this image
26 | 
27 | ## Configuration files
28 | 
29 | The `%%REPO%%` image stores its configuration in `/etc/satosa`. This configuration must persist across instances, particularly the SAML2 entity ID (derived from the proxy's base URL by default) and related keying material. [Use volumes, bind mounts, or custom images](https://docs.docker.com/storage/) to maintain this configuration.
30 | 
31 | ## Entrypoint script
32 | 
33 | The `%%REPO%%` image's entrypoint script runs [Gunicorn](https://gunicorn.org/) by default if the first argument looks like a command-line flag. For example, the following will use a bind mount to provide an X.509 certificate and corresponding private key to the instance, and it will run Gunicorn with HTTPS enabled:
34 | 
35 | ```sh
36 | docker run --name some-%%REPO%% -p 443:8443 \
37 |     -v /etc/letsencrypt/live/some-%%REPO%%/fullchain.pem:/etc/https.crt \
38 |     -v /etc/letsencrypt/live/some-%%REPO%%/privkey.pem:/etc/https.key \
39 |     -d %%IMAGE%% \
40 |     -b0.0.0.0:8443 --certfile /etc/https.crt --keyfile /etc/https.key satosa.wsgi:app
41 | ```
42 | 
43 | If the first argument looks like a command instead of a flag, the entrypoint script will run that instead of Gunicorn. For example, the following will start an interactive, unprivileged shell inside the container:
44 | 
45 | ```sh
46 | docker run -it --name some-%%REPO%% %%IMAGE%% bash
47 | ```
48 | 
49 | ## Environment variables
50 | 
51 | The entrypoint script uses environment variables to generate the initial configuration, which sets SATOSA up as a SAML2 proxy between the free [SAMLtest.ID](https://samltest.id/) test service provider and test identity provider. All of the environment variables are optional.
52 | 
53 | The environment variables' values can be read from [Docker secrets](https://docs.docker.com/engine/swarm/secrets/). Append `_FILE` to the variable name (e.g., `STATE_ENCRYPTION_KEY_FILE`), and set it to the pathname of the corresponding secret (e.g., `/run/secrets/state_encryption_key`).
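For example, in a swarm service this might look like the following (the secret name and service setup are illustrative):

```sh
docker service create --name some-%%REPO%% \
    --secret state_encryption_key \
    -e STATE_ENCRYPTION_KEY_FILE=/run/secrets/state_encryption_key \
    %%IMAGE%%
```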
54 | 
55 | ### `BASE_URL`
56 | 
57 | SATOSA must be hosted at the root of the website. This environment variable optionally specifies the website's base URL, which defaults to `http://example.com`. If set, the base URL *must* be a scheme plus a hostname without any trailing slash or path components, e.g., `https://idproxy.example.com`, not `https://idproxy.example.com/` nor `https://idproxy.example.com/satosa`.
58 | 
59 | ### `STATE_ENCRYPTION_KEY`
60 | 
61 | SATOSA uses encrypted cookies to track the progress of an authentication flow. This environment variable optionally sets the state cookies' encryption key. If set, the state encryption key *must* be an alphanumeric value, e.g., `12345SameAsMyLuggage`. If not specified, a new random 32-character key will be generated.
62 | 
63 | ### `SAML2_BACKEND_DISCO_SRV`
64 | 
65 | When part of a SAML2 multilateral federation, SATOSA will ask the user to choose an identity provider using a SAML discovery service. This environment variable optionally sets the discovery service URL, which defaults to [SeamlessAccess](https://seamlessaccess.org/).
66 | 
67 | ### `SAML2_BACKEND_CERT` and `SAML2_BACKEND_KEY`
68 | 
69 | SATOSA's SAML2 backend acts like a service provider (relying party), requesting authentication by and attributes from the user's identity provider. It uses public key cryptography to sign authentication requests and decrypt responses. These optional environment variables hold the backend's paired public and private keys in [the PEM format](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail). If not specified, a new 2048-bit RSA key-pair will be generated using the hostname part of `BASE_URL`.
70 | 
71 | ### `SAML2_FRONTEND_CERT` and `SAML2_FRONTEND_KEY`
72 | 
73 | SATOSA's SAML2 frontend acts like an identity provider (credential service provider), processing authentication requests from and returning user attributes to trusted websites. It uses public key cryptography to sign authentication responses. These optional environment variables hold the frontend's paired public and private keys, also in the PEM format. If not specified, a new 2048-bit RSA key-pair will be generated using the hostname part of `BASE_URL`.
74 | 
```

--------------------------------------------------------------------------------
/influxdb/variant-meta.md:
--------------------------------------------------------------------------------

```markdown
  1 | ## `%%IMAGE%%:meta`
  2 | 
  3 | *This image requires a valid license key from InfluxData.* Please visit our [products page](https://www.influxdata.com/products/) to learn more.
  4 | 
  5 | This image contains the enterprise meta node package for clustering. It is meant to be used in conjunction with the `influxdb:data` package of the same version.
  6 | 
  7 | ### Using this Image
  8 | 
  9 | #### Specifying the license key
 10 | 
 11 | The license key can be specified using either an environment variable or by overriding the configuration file. If you specify the license key directly, the container needs to be able to access the InfluxData portal.
 12 | 
 13 | ```console
 14 | docker run -p 8089:8089 -p 8091:8091 \
 15 |       -e INFLUXDB_ENTERPRISE_LICENSE_KEY=<license-key>
 16 |       %%IMAGE%%:meta
 17 | ```
 18 | 
 19 | #### Running the container
 20 | 
21 | The examples below will use Docker's built-in networking capability. If you use the port exposing feature, the host port and the container port need to be the same.
 22 | 
 23 | First, create a docker network:
 24 | 
 25 | ```console
 26 | docker network create influxdb
 27 | ```
 28 | 
29 | Start three meta nodes. This is the suggested number; we do not recommend running more or fewer, but if you do, be sure that the number of meta nodes is odd. The hostname must be set on each container to the address that will be used to access the meta node. When using Docker networks, the hostname should be the same as the name of the container.
 30 | 
 31 | ```console
 32 | docker run -d --name=influxdb-meta-0 --network=influxdb \
 33 |       -h influxdb-meta-0 \
 34 |       -e INFLUXDB_ENTERPRISE_LICENSE_KEY=<license-key> \
 35 |       %%IMAGE%%:meta
 36 | docker run -d --name=influxdb-meta-1 --network=influxdb \
 37 |       -h influxdb-meta-1 \
 38 |       -e INFLUXDB_ENTERPRISE_LICENSE_KEY=<license-key> \
 39 |       %%IMAGE%%:meta
 40 | docker run -d --name=influxdb-meta-2 --network=influxdb \
 41 |       -h influxdb-meta-2 \
 42 |       -e INFLUXDB_ENTERPRISE_LICENSE_KEY=<license-key> \
 43 |       %%IMAGE%%:meta
 44 | ```
 45 | 
 46 | When setting the hostname, you can use `-h <hostname>` or you can directly set the environment variable using `-e INFLUXDB_HOSTNAME=<hostname>`.
 47 | 
 48 | After starting the meta nodes, you need to tell them about each other. Choose one of the meta nodes and run `influxd-ctl` in the container.
 49 | 
 50 | ```console
 51 | docker exec influxdb-meta-0 \
 52 |       influxd-ctl add-meta influxdb-meta-1:8091
 53 | docker exec influxdb-meta-0 \
 54 |       influxd-ctl add-meta influxdb-meta-2:8091
 55 | ```
 56 | 
57 | Alternatively, you can start just a single meta node. If you set up a single meta node, you do not need to use `influxd-ctl add-meta`.
 58 | 
 59 | ```console
 60 | docker run -d --name=influxdb-meta --network=influxdb \
 61 |       -h influxdb-meta \
 62 |       -e INFLUXDB_ENTERPRISE_LICENSE_KEY=<license-key> \
 63 |       %%IMAGE%%:meta -single-server
 64 | ```
 65 | 
 66 | #### Connecting the data nodes
 67 | 
 68 | Start the data nodes using `%%IMAGE%%:data` with similar command line arguments to the meta nodes. You can start as many data nodes as are allowed by your license.
 69 | 
 70 | ```console
 71 | docker run -d --name=influxdb-data-0 --network=influxdb \
 72 |       -h influxdb-data-0 \
 73 |       -e INFLUXDB_LICENSE_KEY=<license-key> \
 74 |       %%IMAGE%%:data
 75 | ```
 76 | 
 77 | You can add `-p 8086:8086` to expose the http port to the host machine. After starting the container, choose one of the meta nodes and add the data node to it.
 78 | 
 79 | ```console
 80 | docker exec influxdb-meta-0 \
 81 |       influxd-ctl add-data influxdb-data-0:8088
 82 | ```
 83 | 
 84 | Perform these same steps for any other data nodes that you want to add.
 85 | 
 86 | You can now connect to any of the running data nodes to use your cluster.
 87 | 
 88 | See the [influxdb](https://hub.docker.com/_/influxdb/) image documentation for more details on how to use the data node images.
 89 | 
 90 | #### Configuration
 91 | 
92 | InfluxDB Meta can be configured either from a config file or using environment variables. To mount a configuration file and use it with the server, follow these steps:
 93 | 
 94 | Generate the default configuration file:
 95 | 
 96 | ```console
 97 | docker run --rm %%IMAGE%%:meta influxd-meta config > influxdb-meta.conf
 98 | ```
 99 | 
100 | Modify the default configuration, which will now be available under `$PWD`. Then start the InfluxDB Meta container.
101 | 
102 | ```console
103 | docker run \
104 |       -v $PWD/influxdb-meta.conf:/etc/influxdb/influxdb-meta.conf:ro \
105 |       %%IMAGE%% -config /etc/influxdb/influxdb-meta.conf
106 | ```
107 | 
108 | Replace `$PWD` with the directory where you want to store the configuration file.
109 | 
110 | For environment variables, the format is `INFLUXDB_$SECTION_$NAME`. All dashes (`-`) are replaced with underscores (`_`). If the variable isn't in a section, then omit that part.
111 | 
112 | Examples:
113 | 
114 | ```console
115 | INFLUXDB_REPORTING_DISABLED=true
116 | INFLUXDB_META_DIR=/path/to/metadir
117 | INFLUXDB_ENTERPRISE_REGISTRATION_ENABLED=true
118 | ```
119 | 
120 | For more information, see how to [Install InfluxDB Enterprise meta nodes](https://docs.influxdata.com/enterprise_influxdb/v1/introduction/installation/meta_node_installation/).
121 | 
```

--------------------------------------------------------------------------------
/clefos/content.md:
--------------------------------------------------------------------------------

```markdown
 1 | # ClefOS
 2 | 
 3 | ClefOS Linux is a community-supported distribution for IBM Z (aka "mainframe") derived from sources freely provided to the public by [CentOS](http://vault.clefos.org/) which in turn is derived from the [Red Hat](ftp://ftp.redhat.com/pub/redhat/linux/enterprise/) sources for Red Hat Enterprise Linux (RHEL). As such, ClefOS Linux aims to be functionally compatible with CentOS and RHEL. The ClefOS Project mainly changes packages to remove upstream vendor branding and artwork. ClefOS Linux is no-cost and free to redistribute. Each ClefOS Linux version is maintained and released according to the CentOS schedule.
 4 | 
 5 | %%LOGO%%
 6 | 
 7 | ## ClefOS image documentation
 8 | 
 9 | The `%%IMAGE%%:latest` tag is always the most recent version currently available.
10 | 
11 | ### Building the Base Image
12 | 
13 | The image is built via the `make` command which will create the tarball and build the image.
14 | 
15 | The `createBase.sh` script is used to create the tarball for the docker build command. The script uses the yum command with the `tsflags=nodocs` option set to reduce the size of the image. In addition, many of the locale files are removed from the image.
16 | 
17 | The `VERSION` file contains the id of the current ClefOS version and will be added as a label within the image.
18 | 
19 | ### Rolling builds
20 | 
21 | The ClefOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number and minor tags as well. For example, if 7.4.1708 is the most current then the build will result in `%%IMAGE%%:7` and `%%IMAGE%%:7.4.1708`. When the next minor level is available then `%%IMAGE%%:7` and `%%IMAGE%%:7.x.yymm` will be identical.
22 | 
23 | ### Overlayfs and yum
24 | 
25 | Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards. On ClefOS 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent ClefOS images, make sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failure - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details.
26 | 
27 | ## Package documentation
28 | 
29 | By default, the ClefOS containers are built using yum's `nodocs` option, which helps reduce the size of the image. If you install a package and discover files missing, please comment out the line `tsflags=nodocs` in `/etc/yum.conf` and reinstall your package.
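For example, inside a running container that could look like the following sketch (`man-pages` is just an illustrative package to reinstall):

```console
$ sed -i 's/^tsflags=nodocs/# tsflags=nodocs/' /etc/yum.conf
$ yum reinstall -y man-pages
```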
30 | 
31 | ## Systemd integration
32 | 
33 | Systemd is not included in either the `%%IMAGE%%:7` or the `%%IMAGE%%:latest` base containers, but a systemd-enabled image can be created from these bases:
34 | 
35 | ### Dockerfile for systemd base image
36 | 
37 | ```dockerfile
38 | FROM 		%%IMAGE%%:7
39 | 
40 | ENV 		container docker
41 | 
42 | RUN		yum install -y --setopt=tsflags=nodocs systemd && \
43 | 		yum clean all && \
44 | 		rm -rf /var/cache/yum/* /tmp/* /var/log/yum.log
45 | 
46 | RUN 		(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
47 | 		rm -f /lib/systemd/system/multi-user.target.wants/*;\
48 | 		rm -f /etc/systemd/system/*.wants/*;\
49 | 		rm -f /lib/systemd/system/local-fs.target.wants/*; \
50 | 		rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
51 | 		rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
52 | 		rm -f /lib/systemd/system/basic.target.wants/*;\
53 | 		rm -f /lib/systemd/system/anaconda.target.wants/*;
54 | 
55 | VOLUME 		["/sys/fs/cgroup"]
56 | 
57 | CMD 		["/usr/sbin/init"]
58 | ```
59 | 
60 | This `Dockerfile` deletes a number of unit files which might cause issues. From here, you are ready to build your base image.
61 | 
62 | ```console
63 | $ docker build --rm -t local/c7-systemd .
64 | ```
65 | 
66 | ### Example systemd enabled app container
67 | 
68 | In order to use the systemd enabled base container created above, you will need to create your `Dockerfile` similar to the one below.
69 | 
70 | ```dockerfile
71 | FROM local/c7-systemd
72 | RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
73 | EXPOSE 80
74 | CMD ["/usr/sbin/init"]
75 | ```
76 | 
77 | Build this image:
78 | 
79 | ```console
80 | $ docker build --rm -t local/c7-systemd-httpd .
81 | ```
82 | 
83 | ### Running a systemd enabled app container
84 | 
85 | In order to run a container with systemd, you will need to mount the cgroups volumes from the host. Below is an example command that will run the systemd enabled httpd container created earlier.
86 | 
87 | ```console
88 | $ docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
89 | ```
90 | 
91 | This container is running with systemd in a limited context, with the cgroups filesystem mounted. There have been reports that if you're using an Ubuntu host, you will need to add `-v /tmp/$(mktemp -d):/run` in addition to the cgroups mount.
92 | 
```

--------------------------------------------------------------------------------
/jetty/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Jetty?
  2 | 
  3 | Jetty is a pure Java-based HTTP (Web) server and Java Servlet container. While Web Servers are usually associated with serving documents to people, Jetty is now often used for machine to machine communications, usually within larger software frameworks. Jetty is developed as a free and open source project as part of the Eclipse Foundation. The web server is used in products such as Apache ActiveMQ, Alfresco, Apache Geronimo, Apache Maven, Apache Spark, Google App Engine, Eclipse, FUSE, Twitter's Streaming API and Zimbra. Jetty is also the server in open source projects such as Lift, Eucalyptus, Red5, Hadoop and I2P. Jetty supports the latest Java Servlet API (with JSP support) as well as protocols SPDY and WebSocket.
  4 | 
  5 | > [wikipedia.org/wiki/Jetty_(web_server)](https://en.wikipedia.org/wiki/Jetty_%28web_server%29)
  6 | 
  7 | %%LOGO%% Logo &copy; Eclipse Foundation
  8 | 
  9 | # How to use this image
 10 | 
 11 | To run the default Jetty server in the background, use the following command:
 12 | 
 13 | ```console
 14 | $ docker run -d %%IMAGE%%
 15 | ```
 16 | 
 17 | You can test it by visiting `http://container-ip:8080` or `https://container-ip:8443/` in a browser. To expose your Jetty server to outside requests, use a port mapping as follows:
 18 | 
 19 | ```console
 20 | $ docker run -d -p 80:8080 -p 443:8443 %%IMAGE%%
 21 | ```
 22 | 
 23 | This will map port 8080 inside the container to port 80 on the host, and container port 8443 to host port 443. You can then go to `http://host-ip` or `https://host-ip` in a browser.
 24 | 
 25 | ## Environment
 26 | 
 27 | The default Jetty environment in the image is:
 28 | 
 29 | 	JETTY_HOME    =  /usr/local/jetty
 30 | 	JETTY_BASE    =  /var/lib/jetty
 31 | 	TMPDIR        =  /tmp/jetty
 32 | 
 33 | ## Deployment
 34 | 
 35 | Webapps can be [deployed](https://www.eclipse.org/jetty/documentation/current/quickstart-deploying-webapps.html) under `/var/lib/jetty/webapps` in the usual ways (WAR file, exploded WAR directory, or context XML file). To deploy your application to the `/` context, use the name `ROOT.war`, the directory name `ROOT`, or the context file `ROOT.xml` (case insensitive).
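For example, to deploy a WAR file to the `/` context without building a derived image, a bind mount can be used (the WAR path here is illustrative):

```console
$ docker run -d -p 80:8080 -v /path/to/mywebapp.war:/var/lib/jetty/webapps/ROOT.war %%IMAGE%%
```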
 36 | 
 37 | For older EOL'd images based on Jetty 7 or Jetty 8, please follow the [legacy instructions](https://wiki.eclipse.org/Jetty/Howto/Deploy_Web_Applications) on the Eclipse Wiki and deploy under `/usr/local/jetty/webapps` instead of `/var/lib/jetty/webapps`.
 38 | 
 39 | ## Configuration
 40 | 
 41 | The configuration of the Jetty server can be reported by running with the `--list-config` option:
 42 | 
 43 | ```console
 44 | $ docker run -d %%IMAGE%% --list-config
 45 | ```
 46 | 
 47 | Configuration such as parameters and additional modules may also be passed in via the command line. For example:
 48 | 
 49 | ```console
 50 | $ docker run -d %%IMAGE%% --module=jmx jetty.threadPool.maxThreads=500
 51 | ```
 52 | 
 53 | To update the server configuration in a derived Docker image, the `Dockerfile` may enable additional modules with `RUN` commands like:
 54 | 
 55 | ```Dockerfile
 56 | FROM %%IMAGE%%
 57 | 
 58 | RUN java -jar "$JETTY_HOME/start.jar" --add-to-startd=jmx,stats
 59 | ```
 60 | 
 61 | Modules may be configured in a `Dockerfile` by editing the properties in the corresponding `/var/lib/jetty/start.d/*.ini` file or the module can be deactivated by removing that file.
 62 | 
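    | As an illustrative sketch building on the example above (the module names and the property reuse earlier examples and are assumptions, not requirements), a derived `Dockerfile` could persist a property or deactivate a module:
    | 
    | ```Dockerfile
    | FROM %%IMAGE%%
    | 
    | RUN java -jar "$JETTY_HOME/start.jar" --add-to-startd=jmx,stats
    | 
    | # Persist a property by appending it to a start.d ini file ...
    | RUN echo "jetty.threadPool.maxThreads=500" >> /var/lib/jetty/start.d/stats.ini
    | 
    | # ... or deactivate a module again by removing its ini file
    | RUN rm /var/lib/jetty/start.d/jmx.ini
    | ```
    | 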
 63 | ### JVM Configuration
 64 | 
 65 | JVM options can be set by passing the `JAVA_OPTIONS` environment variable to the container. For example, to set the maximum heap size to 1 gigabyte, you can run the container as follows:
 66 | 
 67 | ```console
 68 | $ docker run -e JAVA_OPTIONS="-Xmx1g" -d %%IMAGE%%
 69 | ```
 70 | 
 71 | ## Read-only container
 72 | 
 73 | To run `%%IMAGE%%` as a read-only container, have Docker create the `/tmp/jetty` and `/run/jetty` directories as volumes:
 74 | 
 75 | ```console
 76 | $ docker run -d --read-only -v /tmp/jetty -v /run/jetty %%IMAGE%%
 77 | ```
 78 | 
 79 | Since the container is read-only, you'll need to either mount your webapps directory with `-v /path/to/my/webapps:/var/lib/jetty/webapps` or populate `/var/lib/jetty/webapps` in a derived image.
 80 | 
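    | For example, combining the read-only setup with a host-mounted webapps directory:
    | 
    | ```console
    | $ docker run -d --read-only -v /tmp/jetty -v /run/jetty -v /path/to/my/webapps:/var/lib/jetty/webapps %%IMAGE%%
    | ```
    | 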
 81 | ## HTTP/2 Support
 82 | 
 83 | Starting with version 9.3, Jetty comes with built-in support for HTTP/2. However, due to potential license compatibility issues with the ALPN library used to implement HTTP/2, the module is not enabled by default. To enable HTTP/2 support in a derived `Dockerfile` for private use, you can add a `RUN` command that enables the `http2` module and approves its license as follows:
 84 | 
 85 | ```Dockerfile
 86 | FROM %%IMAGE%%
 87 | 
 88 | RUN java -jar $JETTY_HOME/start.jar --add-to-startd=http2 --approve-all-licenses
 89 | ```
 90 | 
 91 | This will add an `http2.ini` file to the `$JETTY_BASE/start.d` directory and download the required ALPN libraries into `$JETTY_BASE/lib/alpn`, allowing the use of HTTP/2. HTTP/2 connections should be made via the same port as normal HTTPS connections (container port 8443). If you would like to enable the `http2` module via `$JETTY_BASE/start.ini` instead, substitute `--add-to-start` in place of `--add-to-startd` in the `RUN` command above.
 92 | 
 93 | # Security
 94 | 
 95 | By default, this image starts as user `root` and uses Jetty's `setuid` module to drop privileges to user `jetty` after initialization. The `JETTY_BASE` directory at `/var/lib/jetty` is owned by `jetty:jetty` (uid 999, gid 999).
 96 | 
 97 | If you would like the image to start immediately as user `jetty` instead of starting as `root`, you can start the container with `-u jetty`:
 98 | 
 99 | ```console
100 | $ docker run -d -u jetty %%IMAGE%%
101 | ```
102 | 
```

--------------------------------------------------------------------------------
/telegraf/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is telegraf?
  2 | 
  3 | Telegraf is an open source agent for collecting, processing, aggregating, and writing metrics. It is based on a plugin system that enables developers in the community to easily add support for additional metric collection. There are five distinct types of plugins:
  4 | 
  5 | -	Input plugins collect metrics from the system, services, or 3rd party APIs
  6 | -	Output plugins write metrics to various destinations
  7 | -	Processor plugins transform, decorate, and/or filter metrics
  8 | -	Aggregator plugins create aggregate metrics (e.g. mean, min, max, quantiles, etc.)
  9 | -	Secret Store plugins are used to hide secrets from the configuration file
 10 | 
 11 | [Telegraf Official Docs](https://docs.influxdata.com/telegraf/latest/get_started/)
 12 | 
 13 | %%LOGO%%
 14 | 
 15 | # How to use this image
 16 | 
 17 | ## Exposed Ports
 18 | 
 19 | -	8125 UDP
 20 | -	8092 UDP
 21 | -	8094 TCP
 22 | 
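    | If your configuration enables the corresponding listener plugins, publish these ports when starting the container (a sketch; adjust the list to the inputs you actually use):
    | 
    | ```console
    | $ docker run -d --name=telegraf \
    | 	-p 8125:8125/udp -p 8092:8092/udp -p 8094:8094 \
    | 	-v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
    | 	%%IMAGE%%
    | ```
    | 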
 23 | ## Configuration file
 24 | 
 25 | To use this image, you must provide a valid configuration: one that specifies at least one input and one output plugin. The following sections walk through the general steps to get going.
 26 | 
 27 | ### Basic Example
 28 | 
 29 | Configuration files are TOML-based files that declare which plugins to use. A very simple configuration file, `telegraf.conf`, that collects metrics from the system CPU and outputs the metrics to stdout looks like the following:
 30 | 
 31 | ```toml
 32 | [[inputs.cpu]]
 33 | [[outputs.file]]
 34 | ```
 35 | 
 36 | Once a user has a customized configuration file, they can launch a Telegraf container with it mounted in the expected location:
 37 | 
 38 | ```console
 39 | $ docker run -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro %%IMAGE%%
 40 | ```
 41 | 
 42 | Replace `$PWD` with the directory where your configuration file is stored.
 43 | 
 44 | Read more about the Telegraf configuration [here](https://docs.influxdata.com/telegraf/latest/administration/configuration/).
 45 | 
 46 | ### Sample Configuration
 47 | 
 48 | Users can generate a sample configuration using the `config` subcommand. This will provide the user with a basic config that has a handful of input plugins enabled that collect data from the system. However, the user will still need to configure at least one output before the file is ready for use:
 49 | 
 50 | ```console
 51 | $ docker run --rm %%IMAGE%% telegraf config > telegraf.conf
 52 | ```
 53 | 
 54 | ## Supported Plugins Reference
 55 | 
 56 | The following are links to the various plugins that are available in Telegraf:
 57 | 
 58 | -	[Input Plugins](https://docs.influxdata.com/telegraf/latest/plugins/#input-plugins)
 59 | -	[Output Plugins](https://docs.influxdata.com/telegraf/latest/plugins/#output-plugins)
 60 | -	[Processor Plugins](https://docs.influxdata.com/telegraf/latest/plugins/#processor-plugins)
 61 | -	[Aggregator Plugins](https://docs.influxdata.com/telegraf/latest/plugins/#aggregator-plugins)
 62 | 
 63 | # Examples
 64 | 
 65 | ## Monitoring the Docker Engine Host
 66 | 
 67 | One common use case for Telegraf is to monitor the Docker Engine Host from within a container. The recommended technique is to mount the host filesystems into the container and use environment variables to instruct Telegraf where to locate the filesystems.
 68 | 
 69 | The precise files that need to be made available vary from plugin to plugin. Here is an example showing the full set of supported locations:
 70 | 
 71 | ```console
 72 | $ docker run -d --name=telegraf \
 73 | 	-v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
 74 | 	-v /:/hostfs:ro \
 75 | 	-e HOST_ETC=/hostfs/etc \
 76 | 	-e HOST_PROC=/hostfs/proc \
 77 | 	-e HOST_SYS=/hostfs/sys \
 78 | 	-e HOST_VAR=/hostfs/var \
 79 | 	-e HOST_RUN=/hostfs/run \
 80 | 	-e HOST_MOUNT_PREFIX=/hostfs \
 81 | 	%%IMAGE%%
 82 | ```
 83 | 
 84 | ## Monitoring docker containers
 85 | 
 86 | To monitor other Docker containers, you can use the `docker` input plugin and mount the Docker socket into the container. An example configuration is below:
 87 | 
 88 | ```toml
 89 | [[inputs.docker]]
 90 |   endpoint = "unix:///var/run/docker.sock"
 91 | ```
 92 | 
 93 | Then you can start the telegraf container.
 94 | 
 95 | ```console
 96 | $ docker run -d --name=telegraf \
 97 |       --net=influxdb \
 98 |       -v /var/run/docker.sock:/var/run/docker.sock \
 99 |       -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
100 |       %%IMAGE%%
101 | ```
102 | 
103 | Refer to the docker [plugin documentation](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker/README.md) for more information.
104 | 
105 | ## Install Additional Packages
106 | 
107 | Some plugins require additional packages to be installed. For example, the `ntpq` plugin requires the `ntpq` command. It is recommended to create a custom derivative image to install any needed commands.
108 | 
109 | As an example, this Dockerfile adds the `mtr-tiny` package to the stock image; save it as `telegraf-mtr.docker`:
110 | 
111 | ```dockerfile
112 | FROM telegraf:1.12.3
113 | 
114 | RUN apt-get update && apt-get install -y --no-install-recommends mtr-tiny && \
115 | 	rm -rf /var/lib/apt/lists/*
116 | ```
117 | 
118 | Build the derivative image:
119 | 
120 | ```console
121 | $ docker build -t telegraf-mtr:1.12.3 - < telegraf-mtr.docker
122 | ```
123 | 
124 | Create a `telegraf.conf` configuration file:
125 | 
126 | ```toml
127 | [[inputs.exec]]
128 |   interval = "60s"
129 |   commands=["mtr -C -n example.org"]
130 |   timeout = "40s"
131 |   data_format = "csv"
132 |   csv_skip_rows = 1
133 |   csv_column_names=["", "", "status", "dest", "hop", "ip", "loss", "snt", "", "", "avg", "best", "worst", "stdev"]
134 |   name_override = "mtr"
135 |   csv_tag_columns = ["dest", "hop", "ip"]
136 | 
137 | [[outputs.file]]
138 |   files = ["stdout"]
139 | ```
140 | 
141 | Run your derivative image:
142 | 
143 | ```console
144 | $ docker run --name telegraf --rm -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf telegraf-mtr:1.12.3
145 | ```
146 | 
```

--------------------------------------------------------------------------------
/api-firewall/content.md:
--------------------------------------------------------------------------------

```markdown
 1 | %%LOGO%%
 2 | 
 3 | # What is API Firewall?
 4 | 
 5 | Wallarm API Firewall is an open-source, lightweight proxy designed to protect REST API endpoints in cloud-native environments by hardening based on strict OpenAPI/Swagger schema validation. Wallarm API Firewall relies on a positive security model, allowing calls that match a predefined API specification for requests and responses while rejecting everything else.
 6 | 
 7 | The **key features** of API Firewall are:
 8 | 
 9 | -	Protect REST API endpoints by blocking requests and responses that do not match the OAS/Swagger schema
10 | -	Discover Shadow API endpoints
11 | -	If using OAuth 2.0 protocol-based authentication, validate access tokens
12 | -	Quick and easy deployment and configuration
13 | -	Customization of request and response processing modes, response codes and log format
14 | 
15 | # Use cases
16 | 
17 | -	Block abnormal requests and responses that do not match the OpenAPI 3.0 specification (if running API Firewall in the blocking mode)
18 | -	Discover Shadow APIs and undocumented endpoints (if running API Firewall in the logging mode)
19 | -	Log abnormal requests and responses that do not match the OpenAPI 3.0 specification (if running API Firewall in the logging mode)
20 | 
21 | # API schema validation and positive security model
22 | 
23 | When starting API Firewall, you should provide the [OpenAPI 3.0 specification](https://swagger.io/specification/) of the application to be protected with API Firewall. The started API Firewall will operate as a reverse proxy and validate whether requests and responses match the schema defined in the specification.
24 | 
25 | The traffic that does not match the schema will be logged using the [`STDOUT` and `STDERR` Docker services](https://docs.docker.com/config/containers/logging/) or blocked (depending on the configured API Firewall operation mode). If operating in the logging mode and detecting the traffic on endpoints that are not included in the specification, API Firewall also logs these endpoints as the shadow ones (except for endpoints returning the code `404`).
26 | 
27 | ![API Firewall scheme](https://github.com/wallarm/api-firewall/blob/2ace2714ac5777694bde85c8cdbb1308e98a7fea/images/firewall-as-proxy.png?raw=true)
28 | 
29 | Provided API schema should be described using the [OpenAPI 3.0 specification](https://swagger.io/specification/) in the YAML or JSON file (`.yaml`, `.yml`, `.json` file extensions).
30 | 
31 | By allowing you to set the traffic requirements with the OpenAPI 3.0 specification, Wallarm API Firewall relies on a positive security model.
32 | 
33 | # Technical characteristics
34 | 
35 | API Firewall works as a reverse proxy with a built-in OpenAPI 3.0 request and response validator. The validator is written in Go and optimized for extreme performance and near-zero added latency.
36 | 
37 | # Starting API Firewall
38 | 
39 | To download, install, and start Wallarm API Firewall on Docker, see the [instructions](https://docs.wallarm.com/api-firewall/installation-guides/docker-container/).
40 | 
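   | As a rough sketch of such a start (the environment variable names follow the Wallarm documentation; the paths, ports, and backend address here are illustrative assumptions):
   | 
   | ```console
   | $ docker run --rm -it -p 8282:8282 \
   | 	-v /path/to/openapi-spec.json:/opt/api-spec.json \
   | 	-e APIFW_API_SPECS=/opt/api-spec.json \
   | 	-e APIFW_URL=http://0.0.0.0:8282 \
   | 	-e APIFW_SERVER_URL=http://backend:80 \
   | 	-e APIFW_REQUEST_VALIDATION=BLOCK \
   | 	-e APIFW_RESPONSE_VALIDATION=BLOCK \
   | 	%%IMAGE%%
   | ```
   | 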
41 | # Demos
42 | 
43 | You can try API Firewall by running the demo environment that deploys an example application protected with Wallarm API Firewall. There are two available demo environments:
44 | 
45 | -	[Wallarm API Firewall demo with Docker Compose](https://github.com/wallarm/api-firewall/tree/main/demo/docker-compose)
46 | -	[Wallarm API Firewall demo with Kubernetes](https://github.com/wallarm/api-firewall/tree/main/demo/kubernetes)
47 | 
48 | # Wallarm's blog articles related to API Firewall
49 | 
50 | -	[Discovering Shadow APIs with API Firewall](https://lab.wallarm.com/discovering-shadow-apis-with-a-api-firewall/)
51 | -	[Wallarm API Firewall outperforms NGINX in a production environment](https://lab.wallarm.com/wallarm-api-firewall-outperforms-nginx-in-a-production-environment/)
52 | 
53 | # Performance
54 | 
55 | When creating API Firewall, we prioritized speed and efficiency to ensure that our customers would have the fastest APIs possible. Our latest tests demonstrate that the average time required for API Firewall to process one request is 1.339 ms:
56 | 
57 | ```console
58 | $ ab -c 200 -n 10000 -p ./large.json -T application/json http://127.0.0.1:8282/test/signup
59 | 
60 | Document Path:          /test/signup
61 | Document Length:        20 bytes
62 | 
63 | Concurrency Level:      200
64 | Time taken for tests:   0.769 seconds
65 | Complete requests:      10000
66 | Failed requests:        0
67 | Total transferred:      2150000 bytes
68 | Total body sent:        283770000
69 | HTML transferred:       200000 bytes
70 | Requests per second:    13005.81 [#/sec] (mean)
71 | Time per request:       15.378 [ms] (mean)
72 | Time per request:       0.077 [ms] (mean, across all concurrent requests)
73 | Transfer rate:          2730.71 [Kbytes/sec] received
74 |                         360415.95 kb/s sent
75 |                         363146.67 kb/s total
76 | 
77 | Connection Times (ms)
78 |               min  mean[+/-sd] median   max
79 | Connect:        0    5   1.6      5      12
80 | Processing:     2   10   5.4      9      59
81 | Waiting:        2    8   5.2      7      56
82 | Total:          3   15   5.7     14      68
83 | 
84 | Percentage of the requests served within a certain time (ms)
85 |   50%     14
86 |   66%     15
87 |   75%     16
88 |   80%     17
89 |   90%     18
90 |   95%     23
91 |   98%     36
92 |   99%     44
93 |  100%     68 (longest request)
94 | ```
95 | 
96 | These performance results are not the only ones we have got during API Firewall testing. Other results along with the methods used to improve API Firewall performance are described in this [Wallarm's blog article](https://lab.wallarm.com/wallarm-api-firewall-outperforms-nginx-in-a-production-environment/).
97 | 
```

--------------------------------------------------------------------------------
/lightstreamer/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Lightstreamer Server?
  2 | 
  3 | Lightstreamer is a real-time messaging server optimized for the Internet. Blending WebSockets, HTTP, and push notifications, it streams data to/from mobile, tablet, browser-based, desktop, and IoT applications.
  4 | 
  5 | For more information and related downloads for Lightstreamer Server and other Lightstreamer products, please visit [www.lightstreamer.com](https://www.lightstreamer.com).
  6 | 
  7 | %%LOGO%%
  8 | 
  9 | # How to use this image
 10 | 
 11 | ## Up and Running
 12 | 
 13 | Launch the container with the default configuration:
 14 | 
 15 | ```console
 16 | $ docker run --name ls-server -d -p 80:8080 %%IMAGE%%
 17 | ```
 18 | 
 19 | This will map port 8080 inside the container to port 80 on the local host. Then point your browser to `http://localhost` and watch the Welcome page showing real-time data flowing in from the locally deployed demo application, which gives a first overview of the unique features offered by the Lightstreamer technology. More examples are available online at the [demo site](https://demos.lightstreamer.com).
 20 | 
 21 | ## Custom settings
 22 | 
 23 | Every aspect of the Lightstreamer instance running in the container can be customized. For example, a specific configuration file may be supplied as follows:
 24 | 
 25 | ```console
 26 | $ docker run --name ls-server -v /path/to/my-lightstreamer_conf.xml:/lightstreamer/conf/lightstreamer_conf.xml -d -p 80:8080 %%IMAGE%%
 27 | ```
 28 | 
 29 | In the same way, you could provide a custom logging configuration, maybe in this case also specifying a dedicated volume to ensure both the persistence of log files and better performance of the container:
 30 | 
 31 | ```console
 32 | $ docker run --name ls-server -v /path/to/my-lightstreamer_log_conf.xml:/lightstreamer/conf/lightstreamer_log_conf.xml -v /path/to/logs:/lightstreamer/logs -d -p 80:8080 %%IMAGE%%
 33 | ```
 34 | 
 35 | If your `my-lightstreamer_log_conf.xml` file also changes the default logging path from `../logs` to `/path/to/dest/logs`, mount the host log directory at the new path:
 36 | 
 37 | ```console
 38 | $ docker run --name ls-server -v /path/to/my-lightstreamer_log_conf.xml:/lightstreamer/conf/lightstreamer_log_conf.xml -v /path/to/hosted/logs:/path/to/dest/logs -d -p 80:8080 %%IMAGE%%
 39 | ```
 40 | 
 41 | Alternatively, the above tasks can be performed by deriving a new image through a `Dockerfile` such as the following:
 42 | 
 43 | ```dockerfile
 44 | FROM %%IMAGE%%
 45 | 
 46 | # Please specify a COPY command only for the required custom configuration file
 47 | COPY my-lightstreamer_conf.xml /lightstreamer/conf/lightstreamer_conf.xml
 48 | COPY my-lightstreamer_log_conf.xml /lightstreamer/conf/lightstreamer_log_conf.xml
 49 | ```
 50 | 
 51 | where `my-lightstreamer_conf.xml` and `my-lightstreamer_log_conf.xml` are your custom configuration files, placed in the same directory as the Dockerfile. By simply running the command:
 52 | 
 53 | ```console
 54 | $ docker build -t my-lightstreamer .
 55 | ```
 56 | 
 57 | the new image will be built along with the provided files. After that, launch the container:
 58 | 
 59 | ```console
 60 | $ docker run --name ls-server -d -p 80:8080 my-lightstreamer
 61 | ```
 62 | 
 63 | To get more detailed information on how to configure the Lightstreamer server, please see the inline documentation in the `lightstreamer_conf.xml` and `lightstreamer_log_conf.xml` files you can find under the `conf` folder of the installation directory.
 64 | 
 65 | ## Deployment of Adapter Sets
 66 | 
 67 | You might want to use this image with any Adapter Set, either developed by yourself or provided by third parties.
 68 | 
 69 | To accomplish this, you may use strategies similar to those illustrated above:
 70 | 
 71 | ### Deployment of a single Adapter Set
 72 | 
 73 | To deploy a single custom Adapter Set, the simplest way is to mount its files into the factory adapters folder, as follows:
 74 | 
 75 | ```console
 76 | $ docker run --name ls-server -v /path/to/my-adapter-set:/lightstreamer/adapters/my-adapter-set -d -p 80:8080 %%IMAGE%%
 77 | ```
 78 | 
 79 | ### Full replacement of the "adapters" folder
 80 | 
 81 | If you have many custom Adapter Sets to deploy, a more appropriate strategy is to replace the factory adapters folder with one located on your host machine:
 82 | 
 83 | ```console
 84 | $ docker run --name ls-server -v /path/to/my-adapters:/lightstreamer/adapters -d -p 80:8080 %%IMAGE%%
 85 | ```
 86 | 
 87 | In this case, the `/path/to/my-adapters` folder has to be structured with the required layout for an adapters folder:
 88 | 
 89 | ```console
 90 | /path/to/my-adapters+
 91 |                     +my_adapter_set_1
 92 |                     +my_adapter_set_2
 93 |                     ...
 94 |                     +my_adapter_set_N
 95 | ```
 96 | 
 97 | ### Building a new image
 98 | 
 99 | Once again, a clean approach is to build a new image that includes all the needed files.
100 | 
101 | In this case, you could write a simple `Dockerfile` that copies all of your Adapter Sets into the image:
102 | 
103 | ```dockerfile
104 | FROM %%IMAGE%%
105 | 
106 | # Will copy the contents of N Adapter Sets into the factory adapters folder
107 | COPY my-adapter-set-1 /lightstreamer/adapters/my-adapter-set-1
108 | COPY my-adapter-set-2 /lightstreamer/adapters/my-adapter-set-2
109 | COPY my-adapter-set-3 /lightstreamer/adapters/my-adapter-set-3
110 | ```
111 | 
112 | Then, just build and start the container as already explained.
113 | 
114 | ## Deployment of web server pages
115 | 
116 | There might be some circumstances where you would like to provide custom pages for the internal web server of the Lightstreamer Server. Even in this case, it is possible to customize the container by employing the same techniques as above.
117 | 
118 | For example, with the following command you will be able to fully replace the factory `pages` folder:
119 | 
120 | ```console
121 | $ docker run --name ls-server -v /path/to/custom/pages:/lightstreamer/pages -d -p 80:8080 %%IMAGE%%
122 | ```
123 | 
124 | where `/path/to/custom/pages` is the path in your host machine containing the replacing web content files.
125 | 
```

--------------------------------------------------------------------------------
/krakend/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | %%LOGO%%
  2 | 
  3 | # What is KrakenD?
  4 | 
  5 | [KrakenD](https://www.krakend.io/) is a stateless, high-performance, enterprise-ready, open-source API gateway written in Go. Its engine (formerly known as *KrakenD Framework*) is now a **Linux Foundation Project** codenamed [Lura Project](https://luraproject.org/). Lura is the only enterprise-grade API Gateway hosted in a neutral, open forum.
  6 | 
  7 | KrakenD is lightweight and straightforward, as it only requires writing the configuration file. No Go knowledge is required. It offers connectivity to internal and external services, data transformation and filtering, and aggregation of multiple data sources (APIs, gRPC, queues and pub/sub, lambda, etc.) simultaneously or in cascade. It protects access to your API, throttles its usage, and integrates with many third parties.
  8 | 
  9 | All features are designed to offer extraordinary performance and infinite scalability.
 10 | 
 11 | ## How to use this image
 12 | 
 13 | KrakenD only needs a single configuration file to create an API Gateway, although you can have a complex setup reflecting your organization structure. The configuration file(s) can live anywhere in the container, but the default location (the workdir) is `/etc/krakend`.
 14 | 
 15 | To use the image, `COPY` your `krakend.json` file inside the container or mount it using a volume. The configuration is checked only once during the startup and never used again. Don't have a config file yet? Generate it with the [KrakenD Designer UI](https://designer.krakend.io).
 16 | 
 17 | ⚠️ **NOTICE**: KrakenD does not live-reload when your configuration changes. Restart the container to apply changes.
 18 | 
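    | As a rough sketch, a minimal `krakend.json` (v3 configuration syntax; the backend host and paths are illustrative) could look like:
    | 
    | ```json
    | {
    |   "version": 3,
    |   "endpoints": [
    |     {
    |       "endpoint": "/hello",
    |       "backend": [
    |         {
    |           "host": ["http://my-backend:8080"],
    |           "url_pattern": "/api/hello"
    |         }
    |       ]
    |     }
    |   ]
    | }
    | ```
    | 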
 19 | ### Quick start
 20 | 
 21 | You can start an empty gateway with a health check with the following commands:
 22 | 
 23 | ```bash
 24 | docker run -d -p 8080:8080 -v "$PWD:/etc/krakend/" %%IMAGE%%
 25 | 
 26 | curl http://localhost:8080/__health
 27 | {"agents":{},"now":"2024-05-23 14:35:55.552591448 +0000 UTC m=+26.856583003","status":"ok"}
 28 | ```
 29 | 
 30 | ### More Examples
 31 | 
 32 | The following are several examples of running KrakenD. By default, the command `run` is executed, but you can pass other commands and flags at the end of the run command.
 33 | 
 34 | The configuration files are taken from the current directory (`$PWD`). Therefore, all examples expect to find at least the file `krakend.json`.
 35 | 
 36 | #### Run with debug enabled (flag `-d`):
 37 | 
 38 | This flag is **SAFE to use in production**. It's meant to let KrakenD act as a fake backend itself by enabling a [`/__debug` endpoint](https://www.krakend.io/docs/endpoints/debug-endpoint/).
 39 | 
 40 | ```bash
 41 | docker run -p 8080:8080 -v "${PWD}:/etc/krakend/" %%IMAGE%% run -d -c /etc/krakend/krakend.json
 42 | ```
 43 | 
 44 | #### Checking the syntax of your configuration file
 45 | 
 46 | See the [check command](https://www.krakend.io/docs/commands/check/)
 47 | 
 48 | ```bash
 49 | docker run -it -v $PWD:/etc/krakend/ %%IMAGE%% check --config krakend.json
 50 | ```
 51 | 
 52 | #### Show the help:
 53 | 
 54 | ```bash
 55 | docker run --rm -it %%IMAGE%% help
 56 | ```
 57 | 
 58 | ### Building your custom KrakenD image
 59 | 
 60 | Most production deployments will not want to rely on mounting a volume into the container, but will instead build their own image based on `%%IMAGE%%`.
 61 | 
 62 | Your `Dockerfile` could look like this:
 63 | 
 64 | ```Dockerfile
 65 | FROM %%IMAGE%%:<version>
 66 | # NOTE: Avoid using :latest image on production. Stick to a major version instead.
 67 | 
 68 | COPY krakend.json ./
 69 | 
 70 | # Check and test that the file is valid
 71 | RUN krakend check -t --lint-no-network -c krakend.json
 72 | ```
 73 | 
 74 | If you want to manage your KrakenD configuration using multiple files and folders, reusing templates, and distributing the configuration amongst your teams, you can use the [flexible configuration (FC)](https://www.krakend.io/docs/configuration/flexible-config/). The following `Dockerfile` combines FC, the `krakend check` command, and a 2-step build.
 75 | 
 76 | ```Dockerfile
 77 | FROM %%IMAGE%%:<version> AS builder
 78 | 
 79 | COPY krakend.tmpl .
 80 | COPY config .
 81 | 
 82 | # Save temporary output file to /tmp to avoid permission errors
 83 | RUN FC_ENABLE=1 \
 84 |     FC_OUT=/tmp/krakend.json \
 85 |     FC_PARTIALS="/etc/krakend/partials" \
 86 |     FC_SETTINGS="/etc/krakend/settings" \
 87 |     FC_TEMPLATES="/etc/krakend/templates" \
 88 |     krakend check -d -t -c krakend.tmpl
 89 | 
 90 | # Copy the output file only and discard any other files
 91 | FROM %%IMAGE%%:<version>
 92 | COPY --from=builder /tmp/krakend.json .
 93 | ```
 94 | 
 95 | Then build with `docker build -t my_krakend .`
 96 | 
 97 | The configuration above assumes you have a folder structure like the following:
 98 | 
 99 | 	.
100 | 	├── config
101 | 	│   ├── partials
102 | 	│   ├── settings
103 | 	│   │   └── env.json
104 | 	│   └── templates
105 | 	│       └── some.tmpl
106 | 	├── Dockerfile
107 | 	└── krakend.tmpl
108 | 
109 | ### Docker Compose example
110 | 
111 | Finally, a simple `docker compose` file to start KrakenD with your API would be:
112 | 
113 | ```yaml
114 | services:
115 |   krakend:
116 |     image: %%IMAGE%%:<version>
117 |     ports:
118 |       - "8080:8080"
119 |     volumes:
120 |       - ./:/etc/krakend
121 | ```
122 | 
123 | And another one that uses the flexible configuration and a custom template filename (`my_krakend.tmpl`) on each start:
124 | 
125 | ```yaml
126 | services:
127 |   krakend:
128 |     image: %%IMAGE%%:<version>
129 |     ports:
130 |       - "8080:8080"
131 |     volumes:
132 |       - ./:/etc/krakend
133 |     environment:
134 |       - FC_ENABLE=1
135 |       - FC_OUT=/tmp/krakend.json
136 |       - FC_PARTIALS=/etc/krakend/config/partials
137 |       - FC_SETTINGS=/etc/krakend/config/settings/prod
138 |       - FC_TEMPLATES=/etc/krakend/config/templates
139 |     command: ["krakend", "run", "-c", "krakend.tmpl", "-d"]
141 | ```
142 | 
143 | ### Container permissions and commands
144 | 
145 | All `krakend` commands are executed as the `krakend` user (uid=1000), and the rest of the commands (e.g., `sh`) are executed as root.
146 | 
147 | You can directly use sub-commands of `krakend` like `run`, `help`, `version`, `check`, `check-plugin`, or `test-plugin` as the entrypoint will add the `krakend` command automatically. For example, the following lines are equivalent:
148 | 
149 | ```bash
150 | docker run --rm -it %%IMAGE%% help
151 | docker run --rm -it %%IMAGE%% krakend help
152 | ```
153 | 
```

--------------------------------------------------------------------------------
/emqx/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is EMQX
  2 | 
  3 | [EMQX](https://emqx.io/) is the world's most scalable open-source MQTT broker, with high performance that connects 100M+ IoT devices in one cluster while maintaining 1M messages per second throughput and sub-millisecond latency.
  4 | 
  5 | EMQX supports multiple open standard protocols like MQTT, HTTP, QUIC, and WebSocket. It's 100% compliant with the MQTT 5.0 and 3.x standards, and secures bi-directional communication with MQTT over TLS/SSL and various authentication mechanisms.
  6 | 
  7 | With the built-in powerful SQL-based rules engine, EMQX can extract, filter, enrich and transform IoT data in real-time. In addition, it ensures high availability and horizontal scalability with a masterless distributed architecture, and provides ops-friendly user experience and great observability.
  8 | 
  9 | EMQX boasts more than 20K+ enterprise users across 50+ countries and regions, connecting 100M+ IoT devices worldwide, and is trusted by over 400 customers in mission-critical scenarios of IoT, IIoT, connected vehicles, and more, including over 70 Fortune 500 companies like HPE, VMware, Verifone, SAIC Volkswagen, and Ericsson.
 10 | 
 11 | %%LOGO%%
 12 | 
 13 | # How to use this image
 14 | 
 15 | ### Run EMQX
 16 | 
 17 | Start an EMQX container from this Docker image:
 18 | 
 19 | ```console
 20 | $ docker run -d --name emqx %%IMAGE%%:${tag}
 21 | ```
 22 | 
 23 | For example
 24 | 
 25 | ```console
 26 | $ docker run -d --name emqx -p 18083:18083 -p 1883:1883 %%IMAGE%%:latest
 27 | ```
 28 | 
 29 | The EMQX broker runs as Linux user `emqx` in the docker container.
 30 | 
 31 | ### Configuration
 32 | 
 33 | All EMQX configuration items in [`etc/emqx.conf`](https://github.com/emqx/emqx/blob/master/apps/emqx/etc/emqx.conf) can be set via environment variables.
 34 | 
 35 | Example:
 36 | 
 37 | 	EMQX_DASHBOARD__DEFAULT_PASSWORD       <--> dashboard.default_password
 38 | 	EMQX_NODE__COOKIE                      <--> node.cookie
 39 | 	EMQX_LISTENERS__SSL__default__ENABLE   <--> listeners.ssl.default.enable
 40 | 
 41 | Note: The lowercase use of 'default' is not a typo. It is used to demonstrate that lowercase environment variables are equivalent.
 42 | 
 43 | -	Prefix `EMQX_` is removed
 44 | -	All upper case letters are replaced with lower case letters
 45 | -	`__` is replaced with `.`
 46 | 
 47 | For example, to set the dashboard's default password:
 48 | 
 49 | ```console
 50 | $ docker run -d --name emqx -e EMQX_DASHBOARD__DEFAULT_PASSWORD=mysecret -p 18083:18083 -p 1883:1883 %%IMAGE%%:latest
 51 | ```
 52 | 
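    | Similarly, to move the MQTT TCP listener to another port (a sketch; `EMQX_LISTENERS__TCP__DEFAULT__BIND` maps to `listeners.tcp.default.bind`):
    | 
    | ```console
    | $ docker run -d --name emqx -e EMQX_LISTENERS__TCP__DEFAULT__BIND=1884 -p 18083:18083 -p 1884:1884 %%IMAGE%%:latest
    | ```
    | 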
 53 | Please read more about EMQX configuration in the [official documentation](https://docs.emqx.com/en/emqx/latest/configuration/configuration.html)
 54 | 
 55 | #### EMQX node name configuration
 56 | 
 57 | Environment variable `EMQX_NODE__NAME` allows you to specify an EMQX node name, which defaults to `<container_name>@<container_ip>`.
 58 | 
 59 | If not specified, EMQX determines its node name based on the running environment or other environment variables used for node discovery.
 60 | 
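    | For example, pinning the node name to a static value (the chosen name is illustrative):
    | 
    | ```console
    | $ docker run -d --name emqx -e EMQX_NODE__NAME=emqx@127.0.0.1 -p 18083:18083 -p 1883:1883 %%IMAGE%%:latest
    | ```
    | 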
 61 | ### Cluster
 62 | 
 63 | EMQX supports a variety of clustering methods, see our [documentation](https://docs.emqx.com/en/emqx/latest/deploy/cluster/create-cluster.html) for details.
 64 | 
 65 | Let's create a static node list cluster from Docker Compose.
 66 | 
 67 | -	Create `compose.yaml`:
 68 | 
 69 | ```yaml
 70 | services:
 71 |   emqx1:
 72 |     image: %%IMAGE%%:latest
 73 |     environment:
 74 |     - "[email protected]"
 75 |     - "EMQX_CLUSTER__DISCOVERY_STRATEGY=static"
 76 |     - "EMQX_CLUSTER__STATIC__SEEDS=[[email protected], [email protected]]"
 77 |     networks:
 78 |       emqx-bridge:
 79 |         aliases:
 80 |         - node1.emqx.io
 81 | 
 82 |   emqx2:
 83 |     image: %%IMAGE%%:latest
 84 |     environment:
 85 |     - "[email protected]"
 86 |     - "EMQX_CLUSTER__DISCOVERY_STRATEGY=static"
 87 |     - "EMQX_CLUSTER__STATIC__SEEDS=[[email protected], [email protected]]"
 88 |     networks:
 89 |       emqx-bridge:
 90 |         aliases:
 91 |         - node2.emqx.io
 92 | 
 93 | networks:
 94 |   emqx-bridge:
 95 |     driver: bridge
 96 | ```
 97 | 
 98 | -	Start the Docker Compose services
 99 | 
100 | ```bash
101 | docker compose -p my_emqx up -d
102 | ```
103 | 
104 | -	View cluster
105 | 
 106 | ```console
 107 | $ docker exec -it my_emqx-emqx1-1 sh -c "emqx ctl cluster status"
108 | Cluster status: #{running_nodes => ['[email protected]','[email protected]'],
109 |                   stopped_nodes => []}
110 | ```
111 | 
112 | ### Persistence
113 | 
 114 | To persist data for the EMQX Docker container, you need to keep the following directories:
115 | 
116 | -	`/opt/emqx/data`
117 | -	`/opt/emqx/log`
118 | 
 119 | Since data in these directories is partially stored under `/opt/emqx/data/mnesia/${node_name}`, you also need to reuse the same node name to see the previous state. To make this work, set the host part of `EMQX_NODE__NAME` to something static that does not change when you restart or recreate the container. It could be the container name, the hostname, or the loopback IP address `127.0.0.1` if you only have one node.
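
With plain `docker run`, a sketch of such a setup (the volume names and node name below are arbitrary placeholders) could be:

```console
$ docker run -d --name emqx \
    -e EMQX_NODE__NAME=emqx@127.0.0.1 \
    -v vol-emqx-data:/opt/emqx/data \
    -v vol-emqx-log:/opt/emqx/log \
    -p 18083:18083 -p 1883:1883 \
    %%IMAGE%%:latest
```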
120 | 
 121 | If you use Docker Compose, the configuration would look something like this:
122 | 
 123 | ```yaml
124 | volumes:
125 |   vol-emqx-data:
126 |     name: foo-emqx-data
127 |   vol-emqx-log:
128 |     name: foo-emqx-log
129 | 
130 | services:
131 |   emqx:
132 |     image: %%IMAGE%%:latest
133 |     restart: always
134 |     environment:
135 |       EMQX_NODE__NAME: [email protected]
136 |     volumes:
137 |       - vol-emqx-data:/opt/emqx/data
138 |       - vol-emqx-log:/opt/emqx/log
139 | ```
140 | 
141 | ### Kernel Tuning
142 | 
 143 | On a Linux host machine, the easiest way is to follow the [Tuning guide](https://docs.emqx.com/en/emqx/latest/performance/tune.html).
 144 | 
 145 | If you want to tune the Linux kernel via Docker, make sure your Docker version is recent enough (>= 1.12) to support the `--sysctl` flags:
146 | 
147 | ```bash
148 | docker run -d --name emqx -p 18083:18083 -p 1883:1883 \
149 |     --sysctl fs.file-max=2097152 \
150 |     --sysctl fs.nr_open=2097152 \
151 |     --sysctl net.core.somaxconn=32768 \
152 |     --sysctl net.ipv4.tcp_max_syn_backlog=16384 \
153 |     --sysctl net.core.netdev_max_backlog=16384 \
 154 |     --sysctl 'net.ipv4.ip_local_port_range=1000 65535' \
155 |     --sysctl net.core.rmem_default=262144 \
156 |     --sysctl net.core.wmem_default=262144 \
157 |     --sysctl net.core.rmem_max=16777216 \
158 |     --sysctl net.core.wmem_max=16777216 \
159 |     --sysctl net.core.optmem_max=16777216 \
 160 |     --sysctl 'net.ipv4.tcp_rmem=1024 4096 16777216' \
 161 |     --sysctl 'net.ipv4.tcp_wmem=1024 4096 16777216' \
162 |     --sysctl net.ipv4.tcp_max_tw_buckets=1048576 \
163 |     --sysctl net.ipv4.tcp_fin_timeout=15 \
164 |     %%IMAGE%%:latest
165 | ```
166 | 
 167 | > **Remember:** do not run the EMQX Docker container in privileged mode, or mount the system `/proc` into the container, to tune the Linux kernel; it is unsafe.
168 | 
```

--------------------------------------------------------------------------------
/sonarqube/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is `sonarqube`?
  2 | 
  3 | The `sonarqube` Docker repository stores the official Sonar images for SonarQube Server and SonarQube Community Build.
  4 | 
  5 | [SonarQube Server](https://www.sonarsource.com/products/sonarqube/) (formerly SonarQube) is an on-premise analysis tool designed to detect quality and security issues in 30+ languages, frameworks, and IaC platforms. The solution also provides fix recommendations leveraging AI with Sonar's AI CodeFix capability. By integrating directly with your CI pipeline or on one of the supported DevOps platforms, your code is checked against an extensive set of rules that cover many attributes of code, such as maintainability, reliability, and security issues on each merge/pull request.
  6 | 
  7 | [SonarQube Community Build](https://www.sonarsource.com/open-source-editions/sonarqube-community-edition/) (formerly SonarQube Community) is Sonar's self-managed free offering, released on a monthly schedule. It includes the latest core capabilities available in open source, providing essential features such as bug detection, identification of code smells, and basic security issue analysis across 21 programming languages and frameworks. For advanced security analysis, enterprise-grade integrations, and scalability features, the commercial version, SonarQube Server, is available.
  8 | 
  9 | ## How to use this image
 10 | 
 11 | Here, you'll find the Docker images for the SonarQube Server (Developer Edition, Enterprise Edition, and Data Center Edition), as well as for SonarQube Community Build.
 12 | 
 13 | ## Docker Host Requirements
 14 | 
 15 | Because SonarQube uses an embedded Elasticsearch, make sure that your Docker host configuration complies with the [Elasticsearch production mode requirements](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode) and [File Descriptors configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html).
 16 | 
 17 | For example, on Linux, you can set the recommended values for the current session by running the following commands as root on the host:
 18 | 
 19 | ```console
 20 | sysctl -w vm.max_map_count=524288
 21 | sysctl -w fs.file-max=131072
 22 | ulimit -n 131072
 23 | ulimit -u 8192
 24 | ```
 25 | 
 26 | ## Demo
 27 | 
 28 | To quickly run a demo instance, see Using Docker on the [Try Out SonarQube](https://docs.sonarqube.org/latest/setup/get-started-2-minutes/) page. When you are ready to move to a more sustainable setup, take some time to read the **Installation** and **Configuration** sections below.
 29 | 
 30 | ## Installation
 31 | 
 32 | > **Multi-platform support**: Starting from SonarQube `9.9` LTS, the docker images support running both on `amd64` architecture and `arm64`-based Apple Silicon (M1).
 33 | 
 34 | For installation instructions, see Installing the Server from the Docker Image on the [Install the Server](https://docs.sonarqube.org/latest/setup/install-server/) page.
 35 | 
 36 | To run a cluster with the SonarQube Server Data Center Edition, please refer to Installing SonarQube Server from the Docker Image on the [Install the Server as a Cluster](https://docs.sonarqube.org/latest/setup/install-cluster/) page.
 37 | 
 38 | > The `lts` tag on Docker images is replaced with every new LTS release. If you want to avoid any automatic major upgrades, we recommend using the corresponding `9.9-<edition>` tag instead of `lts-<edition>`.
 39 | 
 40 | ## Configuration
 41 | 
 42 | ### Port binding
 43 | 
 44 | By default, the server running within the container will listen on port 9000. You can expose the container port 9000 to the host port 9000 with the `-p 9000:9000` argument to `docker run`, like the command below:
 45 | 
 46 | ```console
 47 | docker run --name sonarqube-custom -p 9000:9000 %%IMAGE%%:community
 48 | ```
 49 | 
 50 | You can then browse to `http://localhost:9000` or `http://host-ip:9000` in your web browser to access the web interface.
 51 | 
 52 | ### Database
 53 | 
 54 | By default, the image will use an embedded H2 database that is not suited for production.
 55 | 
 56 | > **Warning:** Only a single instance of SonarQube Server or SonarQube Community Build can connect to a database schema. If you're using Docker Swarm or Kubernetes, make sure that multiple instances are never running on the same database schema simultaneously. This will cause SonarQube to behave unpredictably, and data will be corrupted. There is no safeguard, as described on [SONAR-10362](https://jira.sonarsource.com/browse/SONAR-10362). The SonarQube Server Data Center Edition has the same limitation in that only one cluster can connect to one database schema at the same time.
 57 | 
 58 | Set up a database by following the ["Installing the Database"](https://docs.sonarsource.com/sonarqube/latest/setup-and-upgrade/install-the-server/installing-the-database/) section.
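
For instance, assuming a PostgreSQL database reachable at `db-host` (the host, credentials, and database name below are placeholders), the connection settings can be passed via the `SONAR_JDBC_*` environment variables:

```console
$ docker run -d --name sonarqube -p 9000:9000 \
    -e SONAR_JDBC_URL=jdbc:postgresql://db-host:5432/sonar \
    -e SONAR_JDBC_USERNAME=sonar \
    -e SONAR_JDBC_PASSWORD=secret \
    %%IMAGE%%:community
```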
 59 | 
 60 | ### Use volumes
 61 | 
 62 | We recommend creating volumes for the following directories:
 63 | 
 64 | -	`/opt/sonarqube/data`: data files, such as the embedded H2 database and Elasticsearch indexes
 65 | -	`/opt/sonarqube/logs`: contains SonarQube logs about access, web process, CE process, Elasticsearch logs
 66 | -	`/opt/sonarqube/extensions`: for 3rd party plugins
 67 | 
 68 | > **Warning:** You cannot use the same volumes on multiple instances of SonarQube.
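
A minimal sketch of creating and mounting such volumes (the volume names are arbitrary):

```console
$ docker volume create sonarqube_data
$ docker volume create sonarqube_logs
$ docker volume create sonarqube_extensions
$ docker run -d --name sonarqube -p 9000:9000 \
    -v sonarqube_data:/opt/sonarqube/data \
    -v sonarqube_logs:/opt/sonarqube/logs \
    -v sonarqube_extensions:/opt/sonarqube/extensions \
    %%IMAGE%%:community
```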
 69 | 
 70 | ## Upgrading
 71 | 
 72 | For upgrade instructions, see Upgrading from the Docker Image on the [Upgrade the Server](https://docs.sonarqube.org/latest/setup/upgrading/) page.
 73 | 
 74 | ## Advanced configuration
 75 | 
 76 | ### Customized image
 77 | 
 78 | In some environments, it may make more sense to prepare a custom image containing your configuration. A `Dockerfile` to achieve this may be as simple as:
 79 | 
 80 | ```dockerfile
 81 | FROM %%IMAGE%%:community
 82 | COPY sonar-custom-plugin-1.0.jar /opt/sonarqube/extensions/
 83 | ```
 84 | 
 85 | You could then build and try the image with something like:
 86 | 
 87 | ```console
 88 | $ docker build --tag=sonarqube-custom .
 89 | $ docker run -ti sonarqube-custom
 90 | ```
 91 | 
 92 | ### Avoid hard termination
 93 | 
 94 | The instance will stop gracefully, waiting for any tasks in progress to finish. Waiting for in-progress tasks to finish can take a large amount of time, which Docker does not expect by default when stopping. To avoid having the instance killed by the Docker daemon after 10 seconds, it is best to configure a timeout with `--stop-timeout` when starting the container. For example:
 95 | 
 96 | ```console
 97 | docker run --stop-timeout 3600 %%IMAGE%%
 98 | ```
 99 | 
100 | ## Administration
101 | 
102 | The administration guide can be found [here](https://redirect.sonarsource.com/doc/administration-guide.html).
103 | 
```

--------------------------------------------------------------------------------
/update.sh:
--------------------------------------------------------------------------------

```bash
  1 | #!/usr/bin/env bash
  2 | set -Eeuo pipefail
  3 | 
  4 | cd "$(dirname "$(readlink -f "$BASH_SOURCE")")"
  5 | helperDir='.template-helpers'
  6 | 
  7 | # usage: ./update.sh [--namespace NAMESPACE] [[NAMESPACE/]IMAGE ...]
  8 | #    ie: ./update.sh
  9 | #        ./update.sh debian golang
 10 | #        ./update.sh --namespace tianontesting debian golang
 11 | #        ./update.sh tianontesting/debian tianontestingmore/golang
 12 | #        BASHBREW_ARCH=windows-amd64 BASHBREW_ARCH_NAMESPACES='...' ./update.sh --namespace winamd64
 13 | 
 14 | forceNamespace=
 15 | if [ "${1:-}" = '--namespace' ]; then
 16 | 	shift
 17 | 	forceNamespace="$1"
 18 | 	shift
 19 | fi
 20 | 
 21 | images=( "$@" )
 22 | if [ ${#images[@]} -eq 0 ]; then
 23 | 	images=( */ )
 24 | fi
 25 | images=( "${images[@]%/}" )
 26 | 
 27 | if [ -n "$forceNamespace" ]; then
 28 | 	images=( "${images[@]/#/"$forceNamespace/"}" )
 29 | fi
 30 | 
 31 | replace_field() {
 32 | 	targetFile="$1"
 33 | 	field="$2"
 34 | 	content="$3"
 35 | 	extraSed="${4:-}"
 36 | 	sed_escaped_value="$(echo "$content" | sed 's/[\/&]/\\&/g')"
 37 | 	sed_escaped_value="${sed_escaped_value//$'\n'/\\n}"
 38 | 	sed -ri -e "s/${extraSed}%%${field}%%${extraSed}/$sed_escaped_value/g" "$targetFile"
 39 | }
 40 | 
 41 | for image in "${images[@]}"; do
 42 | 	repo="${image##*/}"
 43 | 	namespace="${image%$repo}"
 44 | 	namespace="${namespace%/}"
 45 | 
 46 | 	# this is used by subscripts to determine whether we're pushing /_/xxx or /r/ARCH/xxx
 47 | 	# (especially for "supported tags")
 48 | 	export ARCH_SPECIFIC_DOCS=
 49 | 	if [ -n "$namespace" ] && [ -n "${BASHBREW_ARCH:-}" ]; then
 50 | 		export ARCH_SPECIFIC_DOCS=1
 51 | 	fi
 52 | 
 53 | 	if [ -x "$repo/update.sh" ]; then
 54 | 		( set -x; "$repo/update.sh" "$image" )
 55 | 	fi
 56 | 
 57 | 	if [ -e "$repo/content.md" ]; then
 58 | 		githubRepo="$(cat "$repo/github-repo")"
 59 | 		maintainer="$(cat "$repo/maintainer.md")"
 60 | 
 61 | 		issues="$(cat "$repo/issues.md" 2>/dev/null || cat "$helperDir/issues.md")"
 62 | 		getHelp="$(cat "$repo/get-help.md" 2>/dev/null || cat "$helperDir/get-help.md")"
 63 | 
 64 | 		license="$(cat "$repo/license.md" 2>/dev/null || true)"
 65 | 		licenseCommon="$(cat "$repo/license-common.md" 2>/dev/null || cat "$helperDir/license-common.md")"
 66 | 		if [ "$license" ]; then
 67 | 			license=$'\n\n''# License'$'\n\n'"$license"$'\n\n'"$licenseCommon"
 68 | 		fi
 69 | 
 70 | 		logo=
 71 | 		logoFile=
 72 | 		for f in png svg; do
 73 | 			if [ -e "$repo/logo.$f" ]; then
 74 | 				logoFile="$repo/logo.$f"
 75 | 				break
 76 | 			fi
 77 | 		done
 78 | 		if [ "$logoFile" ]; then
 79 | 			logoCommit="$(git log -1 --format='format:%H' -- "$logoFile" 2>/dev/null || true)"
 80 | 			[ "$logoCommit" ] || logoCommit='master'
 81 | 			logoUrl="https://raw.githubusercontent.com/docker-library/docs/$logoCommit/$logoFile"
 82 | 			if [ "${logoFile##*.}" = 'svg' ]; then
 83 | 				# https://stackoverflow.com/a/16462143/433558
 84 | 				logoUrl+='?sanitize=true'
 85 | 			fi
 86 | 			logo="![logo]($logoUrl)"
 87 | 		fi
 88 | 
 89 | 		compose=
 90 | 		composeYaml=
 91 | 		if [ -f "$repo/compose.yaml" ]; then
 92 | 			compose="$(cat "$repo/compose.md" 2>/dev/null || cat "$helperDir/compose.md")"
 93 | 			composeYaml=$'```yaml\n'"$(cat "$repo/compose.yaml")"$'\n```'
 94 | 		fi
 95 | 
 96 | 		deprecated=
 97 | 		if [ -f "$repo/deprecated.md" ]; then
 98 | 			deprecated="$(< "$repo/deprecated.md")"
 99 | 			if [ "${deprecated:0:2}" != '# ' ]; then
100 | 				deprecated=$'# **DEPRECATION NOTICE**\n\n'"$deprecated"
101 | 			fi
102 | 			deprecated+=$'\n\n'
103 | 		fi
104 | 
105 | 		if ! partial="$("$helperDir/generate-dockerfile-links-partial.sh" "$repo")"; then
106 | 			{
107 | 				echo
108 | 				echo "WARNING: failed to fetch tags for '$repo'; skipping!"
109 | 				echo
110 | 			} >&2
111 | 			continue
112 | 		fi
113 | 
114 | 		targetFile="$repo/README.md"
115 | 
116 | 		{
117 | 			cat "$helperDir/autogenerated-warning.md"
118 | 			echo
119 | 
120 | 			if [ -n "$ARCH_SPECIFIC_DOCS" ]; then
121 | 				echo '**Note:** this is the "per-architecture" repository for the `'"$BASHBREW_ARCH"'` builds of [the `'"$repo"'` official image](https://hub.docker.com/_/'"$repo"') -- for more information, see ["Architectures other than amd64?" in the official images documentation](https://github.com/docker-library/official-images#architectures-other-than-amd64) and ["An image'\''s source changed in Git, now what?" in the official images FAQ](https://github.com/docker-library/faq#an-images-source-changed-in-git-now-what).'
122 | 				echo
123 | 			fi
124 | 
125 | 			echo -n "$deprecated"
126 | 			cat "$helperDir/template.md"
127 | 		} > "$targetFile"
128 | 
129 | 		echo '  TAGS => generate-dockerfile-links-partial.sh "'"$repo"'"'
130 | 		if [ -z "$partial" ]; then
131 | 			if [ -n "$ARCH_SPECIFIC_DOCS" ]; then
132 | 				partial='**No supported tags found!**'$'\n\n''It is very likely that `%%REPO%%` does not support the currently selected architecture (`'"$BASHBREW_ARCH"'`).'
133 | 			else
134 | 				# opensuse, etc
135 | 				partial='**No supported tags**'
136 | 			fi
137 | 		elif [ -n "$ARCH_SPECIFIC_DOCS" ]; then
138 | 			jenkinsJobUrl="https://doi-janky.infosiftr.net/job/multiarch/job/$BASHBREW_ARCH/job/$repo/"
139 | 			jenkinsImageUrl="https://img.shields.io/jenkins/s/https/doi-janky.infosiftr.net/job/multiarch/job/$BASHBREW_ARCH/job/$repo.svg?label=%%IMAGE%%%20%20build%20job"
140 | 			partial+=$'\n\n''[![%%IMAGE%% build status badge]('"$jenkinsImageUrl"')]('"$jenkinsJobUrl"')'
141 | 		fi
142 | 		replace_field "$targetFile" 'TAGS' "$partial"
143 | 
144 | 		echo '  ARCHES => arches.sh "'"$repo"'"'
145 | 		arches="$("$helperDir/arches.sh" "$repo")"
146 | 		[ -n "$arches" ] || arches='**No supported architectures**'
147 | 		replace_field "$targetFile" 'ARCHES' "$arches"
148 | 
149 | 		echo '  CONTENT => '"$repo"'/content.md'
150 | 		replace_field "$targetFile" 'CONTENT' "$(cat "$repo/content.md")"
151 | 
152 | 		echo '  VARIANT => variant.sh'
153 | 		replace_field "$targetFile" 'VARIANT' "$("$helperDir/variant.sh" "$repo")"
154 | 
155 | 		# has to be after CONTENT because it's contained in content.md
156 | 		echo "  LOGO => $logo"
157 | 		replace_field "$targetFile" 'LOGO' "$logo" '\s*'
158 | 
159 | 		echo '  COMPOSE => '"$repo"'/compose.md'
160 | 		replace_field "$targetFile" 'COMPOSE' "$compose"
161 | 		echo '  COMPOSE-YAML => '"$repo"'/compose.yaml'
162 | 		replace_field "$targetFile" 'COMPOSE-YAML' "$composeYaml"
163 | 
164 | 		echo '  LICENSE => '"$repo"'/license.md'
165 | 		replace_field "$targetFile" 'LICENSE' "$license"
166 | 
167 | 		echo '  ISSUES => "'"$issues"'"'
168 | 		replace_field "$targetFile" 'ISSUES' "$issues"
169 | 
170 | 		echo '  GET-HELP => "'"$getHelp"'"'
171 | 		replace_field "$targetFile" 'GET-HELP' "$getHelp"
172 | 
173 | 		echo '  MAINTAINER => "'"$maintainer"'"'
174 | 		replace_field "$targetFile" 'MAINTAINER' "$maintainer"
175 | 
176 | 		echo '  IMAGE => "'"$image"'"'
177 | 		replace_field "$targetFile" 'IMAGE' "$image"
178 | 
179 | 		echo '  REPO => "'"$repo"'"'
180 | 		replace_field "$targetFile" 'REPO' "$repo"
181 | 
182 | 		echo '  GITHUB-REPO => "'"$githubRepo"'"'
183 | 		replace_field "$targetFile" 'GITHUB-REPO' "$githubRepo"
184 | 
185 | 		echo
186 | 	else
187 | 		echo >&2 "skipping $repo: missing repo/content.md"
188 | 	fi
189 | done
190 | 
```

--------------------------------------------------------------------------------
/perl/content.md:
--------------------------------------------------------------------------------

```markdown
 1 | # What is Perl?
 2 | 
 3 | Perl is a high-level, general-purpose, interpreted, dynamic programming language. The Perl language borrows features from other programming languages, including C, shell scripting (sh), AWK, and sed.
 4 | 
 5 | > [wikipedia.org/wiki/Perl](https://en.wikipedia.org/wiki/Perl)
 6 | 
 7 | %%LOGO%%
 8 | 
 9 | # How to use this image
10 | 
11 | ## Create a `Dockerfile` in your Perl app project
12 | 
13 | ```dockerfile
14 | FROM %%IMAGE%%:5.34
15 | COPY . /usr/src/myapp
16 | WORKDIR /usr/src/myapp
17 | CMD [ "perl", "./your-daemon-or-script.pl" ]
18 | ```
19 | 
20 | Then, build and run the Docker image:
21 | 
22 | ```console
23 | $ docker build -t my-perl-app .
24 | $ docker run -it --rm --name my-running-app my-perl-app
25 | ```
26 | 
27 | ## Run a single Perl script
28 | 
29 | For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Perl script by using the Perl Docker image directly:
30 | 
31 | ```console
32 | $ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:5.34 perl your-daemon-or-script.pl
33 | ```
34 | 
35 | ## Coexisting with Debian's `/usr/bin/perl`
36 | 
 37 | The *perl* binary built for this image is installed in `/usr/local/bin/perl`, along with other standard tools in the Perl distribution such as `prove` and `perldoc`, as well as [`cpanm`](https://metacpan.org/pod/App::cpanminus) for installing [CPAN](https://www.cpan.org) modules. Containers running this image will also have their `PATH` environment variable set like `/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin` to ensure that this *perl* binary is found *first* in normal usage.
38 | 
39 | As this official image of Docker is built using the [buildpack-deps](https://hub.docker.com/_/buildpack-deps) image (or [debian:slim](https://hub.docker.com/_/debian) for `:slim` variants,) this image also contains a `/usr/bin/perl` as supplied by the [Debian](https://www.debian.org) project. This is needed for the underlying [dpkg](https://en.wikipedia.org/wiki/Dpkg)/[apt](https://en.wikipedia.org/wiki/APT_(software)) package management tools to work correctly, as docker-perl cannot be used here due to different configuration (such as `@INC` and installation paths, as well as other differences like whether `-Dusethreads` is included or not.)
40 | 
41 | See also [Perl/docker-perl#26](https://github.com/Perl/docker-perl/issues/26) for an extended discussion.
42 | 
43 | ## Signal handling behavior notice
44 | 
45 | As Perl will run as PID 1 by default in containers (unless an [ENTRYPOINT](https://docs.docker.com/reference/dockerfile/#entrypoint) is set,) special care needs to be considered when expecting to send signals (particularly SIGINT or SIGTERM) to it. For example, running
46 | 
47 | ```console
48 | $ docker run -it --name sleeping_beauty --rm %%IMAGE%%:5.34 perl -E 'sleep 300'
49 | ```
50 | 
51 | and doing on another terminal,
52 | 
53 | ```console
54 | $ docker exec sleeping_beauty kill 1
55 | ```
56 | 
57 | will *not* stop the perl running on the `sleeping_beauty` container (it will keep running until the `sleep 300` finishes.) To do so, one must set a signal handler like this:
58 | 
59 | ```console
60 | $ docker run -it --name quick_nap --rm %%IMAGE%%:5.34 perl -E '$SIG{TERM} = sub { $sig++; say "recv TERM" }; sleep 300; say "waking up" if $sig'
61 | ```
62 | 
63 | so doing `docker exec quick_nap kill 1` (or the simpler `docker stop quick_nap`) will immediately stop the container, and print `recv TERM` in the other terminal. Note that the signal handler does not stop the perl process itself unless it calls a `die` or `exit`; in this case, perl will continue and print `waking up` *after* it receives the signal.
64 | 
65 | If your Perl program is expected to handle signals and fork child processes, it is encouraged to use an init-like program for ENTRYPOINT, such as [dumb-init](https://github.com/Yelp/dumb-init) or [tini](https://github.com/krallin/tini) (the latter is available since Docker 1.13 via the `docker run --init` flag.)
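
For example, re-running the earlier sleep example under an init process via `--init` lets `docker stop` (or `kill 1` inside the container) terminate it promptly:

```console
$ docker run --init -it --name sleeping_beauty --rm %%IMAGE%%:5.34 perl -E 'sleep 300'
```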
66 | 
67 | See also [Signals in perlipc](https://perldoc.pl/perlipc#Signals) as well as [Perl/docker-perl#44](https://github.com/Perl/docker-perl/issues/44).
68 | 
69 | ### `COPY` and `WORKDIR` behavior in Debian Bookworm based images (Perl >= 5.38)
70 | 
71 | As our Perl images are based on the standard `buildpack-deps` and `debian` images, these inherit the new [merged-usr root filesystem layout](https://wiki.debian.org/UsrMerge) introduced in Debian 12 (Bookworm) which may affect certain build contexts that `COPY` their own `bin`, `sbin`, or `lib` directories into a `WORKDIR /`. Users are encouraged to set `WORKDIR` explicitly to a path other than `/` as much as possible, such as the `/usr/src/app` shown here in the examples, though as of current release our images now default to `WORKDIR /usr/src/app`.
72 | 
73 | See also [Perl/docker-perl#140](https://github.com/Perl/docker-perl/issues/140) for further information.
74 | 
75 | ## Example: Creating a reusable Carton image for Perl projects
76 | 
77 | Suppose you have a project that uses [Carton](https://metacpan.org/pod/Carton) to manage Perl dependencies. You can create a `%%IMAGE%%:carton` image that makes use of the [ONBUILD](https://docs.docker.com/reference/dockerfile/#onbuild) instruction in its `Dockerfile`, like this:
78 | 
79 | ```dockerfile
80 | FROM %%IMAGE%%:5.34
81 | 
82 | RUN cpanm Carton \
83 |     && mkdir -p /usr/src/app
84 | WORKDIR /usr/src/app
85 | 
86 | ONBUILD COPY cpanfile* /usr/src/app
87 | ONBUILD RUN carton install
88 | 
89 | ONBUILD COPY . /usr/src/app
90 | ```
91 | 
92 | Then, in your Carton project, you can now reduce your project's `Dockerfile` into a single line of `FROM %%IMAGE%%:carton`, which may be enough to build a stand-alone image.
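
For example, a derived project's `Dockerfile` could be as small as the following (the `app.pl` entry script is a hypothetical placeholder):

```dockerfile
FROM %%IMAGE%%:carton
CMD [ "carton", "exec", "perl", "app.pl" ]
```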
93 | 
94 | Having a single `%%IMAGE%%:carton` base image is useful especially if you have multiple Carton-based projects in development, to avoid "boilerplate" coding of installing Carton and/or copying the project source files into the derived image. Keep in mind, though, about certain things to consider when using the Perl image in this way:
95 | 
96 | -	This kind of base image will hide the useful bits (such as the`COPY`/`RUN` above) in the image, separating it from more specific Dockerfiles using the base image. This might lead to confusion when creating further derived images, so be aware of how [ONBUILD triggers](https://docs.docker.com/reference/dockerfile/#onbuild) work and plan appropriately.
97 | -	There is the cost of maintaining an extra base image build, so if you're working on a single Carton project and/or plan to publish it, then it may be more preferable to derive directly from a versioned `perl` image instead.
98 | 
```

--------------------------------------------------------------------------------
/varnish/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Varnish?
  2 | 
  3 | Varnish is an HTTP accelerator designed for content-heavy dynamic web sites as well as APIs. In contrast to other web accelerators, such as Squid, which began life as a client-side cache, or Apache and nginx, which are primarily origin servers, Varnish was designed as an HTTP accelerator. Varnish is focused exclusively on HTTP, unlike other proxy servers that often support FTP, SMTP and other network protocols.
  4 | 
  5 | > [wikipedia.org/wiki/Varnish_(software)](https://en.wikipedia.org/wiki/Varnish_(software))
  6 | 
  7 | %%LOGO%%
  8 | 
  9 | # How to use this image.
 10 | 
 11 | ## Basic usage
 12 | 
 13 | ### Using `VARNISH_BACKEND_HOST` and `VARNISH_BACKEND_PORT`
 14 | 
 15 | You just need to know where your backend (the server that Varnish will accelerate) is:
 16 | 
 17 | ```console
 18 | # we define VARNISH_BACKEND_HOST/VARNISH_BACKEND_PORT
 19 | # our workdir has to be mounted as tmpfs to avoid disk I/O,
 20 | # and we'll use port 8080 to talk to our container (internally listening on 80)
 21 | $ docker run \
 22 |     -e VARNISH_BACKEND_HOST=example.com -e VARNISH_BACKEND_PORT=80 \
 23 |     --tmpfs /var/lib/varnish/varnishd:exec \
 24 |     -p 8080:80 \
 25 |     %%IMAGE%%
 26 | ```
 27 | 
 28 | From there, you can visit `localhost:8080` in your browser and see the example.com homepage.
 29 | 
 30 | ### Using a VCL file
 31 | 
 32 | If you already have a VCL file, you can directly mount it as `/etc/varnish/default.vcl`:
 33 | 
 34 | ```console
 35 | # we need the configuration file at /etc/varnish/default.vcl,
 36 | # our workdir has to be mounted as tmpfs to avoid disk I/O,
 37 | # and we'll use port 8080 to talk to our container (internally listening on 80)
 38 | $ docker run \
 39 | 	-v /path/to/default.vcl:/etc/varnish/default.vcl:ro \
 40 | 	--tmpfs /var/lib/varnish/varnishd:exec \
 41 | 	-p 8080:80 \
 42 | 	%%IMAGE%%
 43 | ```
 44 | 
 45 | Alternatively, a simple `Dockerfile` can be used to generate a new image that includes the necessary `default.vcl`:
 46 | 
 47 | ```dockerfile
 48 | FROM %%IMAGE%%
 49 | 
 50 | COPY default.vcl /etc/varnish/
 51 | ```
 52 | 
 53 | Place this file in the same directory as your `default.vcl`, run `docker build -t my-varnish .`, then start your container:
 54 | 
 55 | ```console
 56 | $ docker run --tmpfs /var/lib/varnish/varnishd:exec -p 8080:80 my-varnish
 57 | ```
 58 | 
 59 | ## Reloading the configuration
 60 | 
 61 | The images all ship with [varnishreload](https://github.com/varnishcache/pkg-varnish-cache/blob/master/systemd/varnishreload#L42) which allows you to easily update the running configuration without restarting the container (and therefore losing your cache). At its most basic, you just need this:
 62 | 
 63 | ```console
 64 | # update the default.vcl in your container
 65 | docker cp new_default.vcl running_container:/etc/varnish/default.vcl
 66 | # run varnishreload
 67 | docker exec running_container varnishreload
 68 | ```
 69 | 
 70 | Note that `varnishreload` also supports reloading other files (it doesn't have to be `default.vcl`), labels (`-l`), and garbage collection of old labels (`-m`), among others. To learn more, run
 71 | 
 72 | ```console
 73 | docker run varnish varnishreload -h
 74 | ```
 75 | 
 76 | ## Additional configuration
 77 | 
 78 | ### Cache size (VARNISH_SIZE)
 79 | 
 80 | By default, the containers will use a cache size of 100MB, which is usually a bit too small, but you can quickly set it through the `VARNISH_SIZE` environment variable:
 81 | 
 82 | ```console
 83 | $ docker run --tmpfs /var/lib/varnish/varnishd:exec -p 8080:80 -e VARNISH_SIZE=2G %%IMAGE%%
 84 | ```
 85 | 
 86 | ### Listening ports (VARNISH_HTTP_PORT/VARNISH_PROXY_PORT)
 87 | 
 88 | Varnish will listen to HTTP traffic on port `80`, and this can be overridden by setting the environment variable `VARNISH_HTTP_PORT`. Similarly, the variable `VARNISH_PROXY_PORT` (defaulting to `8443`) dictates the listening port for the [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) used notably to interact with [hitch](https://hub.docker.com/_/hitch) (which, coincidentally, uses `8443` as a default too!).
 89 | 
 90 | ```console
 91 | # instruct varnish to listen on port 7777 instead of 80
 92 | $ docker run --tmpfs /var/lib/varnish/varnishd:exec -p 8080:7777 -e VARNISH_HTTP_PORT=7777 %%IMAGE%%
 93 | ```
 94 | 
 95 | ### Extra arguments
 96 | 
 97 | Additionally, you can add arguments to `docker run` after `%%IMAGE%%`. If the first argument starts with a `-`, the whole list will be appended to the [default command](https://github.com/varnish/docker-varnish/blob/master/fresh/debian/scripts/docker-varnish-entrypoint):
 98 | 
 99 | ```console
100 | # extend the default keep period
101 | $ docker run --tmpfs /var/lib/varnish/varnishd:exec -p 8080:80 -e VARNISH_SIZE=2G %%IMAGE%% -p default_keep=300
102 | ```
103 | 
104 | If your first argument after `%%IMAGE%%` doesn't start with `-`, it will be interpreted as a command to override the default one:
105 | 
106 | ```console
107 | # show the command-line options
108 | $ docker run %%IMAGE%% varnishd -?
109 | 
110 | # list parameters usable with -p
111 | $ docker run %%IMAGE%% varnishd -x parameter
112 | 
113 | # run the server with your own parameters (don't forget -F to not daemonize)
114 | $ docker run %%IMAGE%% varnishd -F -a :8080 -b 127.0.0.1:8181 -t 600 -p feature=+http2
115 | ```
116 | 
117 | ## vmods (since 7.1)
118 | 
119 | As mentioned above, you can use [vmod_dynamic](https://github.com/nigoroll/libvmod-dynamic) for backend resolution. The [varnish-modules](https://github.com/varnish/varnish-modules) collection is also included in the image. All the documentation regarding usage and syntax can be found in the [src/](https://github.com/varnish/varnish-modules/tree/master/src) directory of the repository.
120 | 
121 | On top of this, images include [install-vmod](https://github.com/varnish/toolbox/tree/master/install-vmod), a helper script to quickly download, compile and install vmods while creating your own images. Note that images set the `ENV` variable `VMOD_DEPS` to ease the task further.
122 | 
123 | ### Debian
124 | 
125 | ```dockerfile
126 | FROM %%IMAGE%%:7.1
127 | 
128 | # set the user to root, and install build dependencies
129 | USER root
130 | RUN set -e; \
131 |     apt-get update; \
132 |     apt-get -y install $VMOD_DEPS /pkgs/*.deb; \
133 |     \
134 | # install one, possibly multiple vmods
 135 |     install-vmod https://github.com/varnish/varnish-modules/releases/download/0.20.0/varnish-modules-0.20.0.tar.gz; \
136 |     \
137 | # clean up and set the user back to varnish
138 |     apt-get -y purge --auto-remove $VMOD_DEPS varnish-dev; \
139 |     rm -rf /var/lib/apt/lists/*
140 | USER varnish
141 | ```
142 | 
143 | ### Alpine
144 | 
145 | ```dockerfile
146 | FROM %%IMAGE%%:7.1-alpine
147 | 
148 | # install build dependencies
149 | USER root
150 | RUN set -e; \
151 |     apk add --no-cache $VMOD_DEPS; \
152 |     \
153 | # install one, possibly multiple vmods
154 |     install-vmod https://github.com/varnish/varnish-modules/releases/download/0.20.0/varnish-modules-0.20.0.tar.gz; \
155 |     \
156 | # clean up
157 |     apk del --no-network $VMOD_DEPS
158 | USER varnish
159 | ```
160 | 
```

--------------------------------------------------------------------------------
/php-zendserver/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Zend Server?
  2 | 
  3 | Zend Server is the integrated application platform for PHP mobile and web apps. Zend Server provides you with a highly available PHP production environment which includes, amongst other features, a highly reliable PHP stack, application monitoring, troubleshooting, and the all-new Z-Ray.
  4 | 
  5 | ### Boost your Development with Z-Ray
  6 | 
  7 | Using Zend Server Z-Ray is akin to wearing X-Ray goggles, effortlessly giving developers deep insight into how their code is running as they are developing it – all without having to change any of their habits or workflow. With Z-Ray, developers can immediately understand the impact of their code changes, enabling them to both improve quality and solve issues long before their code reaches production. In addition to the obvious benefits of this 'Left Shifting' – better performance, fewer production issues and faster recovery times – using Z-Ray is also downright fun!
  8 | 
  9 | ### Powering Continuous Delivery
 10 | 
 11 | Zend Server is the platform that enables Continuous Delivery, which provides consistency, automation and collaboration capabilities throughout the application delivery cycle. Patterns are available to integrate Zend Server with Chef, Jenkins, Nagios, VMware, and Puppet.
 12 | 
 13 | ### Additional Resources
 14 | 
 15 | -	[http://www.zend.com/](http://www.zend.com/)
 16 | -	[http://support.roguewave.com/](http://support.roguewave.com/)
 17 | -	[http://files.zend.com/help/Zend-Server/zend-server.htm#faqs.htm](http://files.zend.com/help/Zend-Server/zend-server.htm#faqs.htm)
 18 | -	[http://files.zend.com/help/Zend-Server/zend-server.htm#getting\_started.htm](http://files.zend.com/help/Zend-Server/zend-server.htm#getting_started.htm)
 19 | 
 20 | # PHP-ZendServer
 21 | 
 22 | This is a cluster-enabled version of a Dockerized Zend Server container. With Zend Server on Docker, you'll get your PHP applications up and running on a highly available PHP production environment which includes, amongst other features, a highly reliable PHP stack, application monitoring, troubleshooting, and the new and innovative Z-Ray technology. Z-Ray gives developers unprecedented visibility into their code by tracking and displaying in a toolbar live and detailed info on how the various elements constructing their page are performing.
 23 | 
 24 | For development purposes we provide you with a time limited trial license. For production use you must provide a valid Zend Server license using the instructions below in the Usage section.
 25 | 
 26 | ## Usage
 27 | 
 28 | #### Launching the Container from Docker-Hub
 29 | 
 30 | Zend Server is shared on Docker Hub as **php-zendserver**.
 31 | 
 32 | ##### Single instance
 33 | 
 34 | To start a single Zend Server instance, execute:
 35 | 
 36 | 	    $ docker run %%IMAGE%%
 37 | 
 38 | -	You can specify the PHP and Zend Server version by adding `:<php-version>` or `:<ZS-version>-php<version>` to the `docker run` command. For example:
 39 | 
 40 | 		$ docker run %%IMAGE%%:8.5-php5.6
 42 | 
 43 | #### Available versions:
 44 | 
 45 | -	Zend Server 8
 46 | -	Zend Server 9 (With PHP 7 GA)(Default version)
 47 | -	Zend Server 2019 with multi PHP Version Support (7.1, 7.2 & 7.3)
 48 | 
 49 | ##### Cluster
 50 | 
 51 | To start a Zend Server cluster, execute the following command for each cluster node:
 52 | 
 53 | 	    $ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend %%IMAGE%%
 54 | 
 55 | #### Bring your own license
 56 | 
 57 | To use your own Zend Server license, run: `docker run -e ZEND_LICENSE_KEY=<license-key> -e ZEND_LICENSE_ORDER=<order-number> %%IMAGE%%`
 58 | 
 59 | #### Launching the Container from Dockerfile
 60 | 
 61 | From a local folder containing a clone of this repo, execute the following command to generate the image. The **image-id** will be printed:
 62 | 
 63 | 	    $ docker build .
 64 | 
 65 | ##### Single instance from custom image
 66 | 
 67 | To start a single Zend Server instance, execute:
 68 | 
 69 | 	    $ docker run <image-id>
 70 | 
 71 | #### Cluster from custom image
 72 | 
 73 | To start a Zend Server cluster, execute the following command on each cluster node:
 74 | 
 75 | 	    $ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
 76 | 
 77 | #### Accessing Zend server
 78 | 
 79 | Once started, the container will output the information required to access the PHP application and the Zend Server UI, including an automatically generated admin password.
 80 | 
 81 | #### Port forwarding (For remote access)
 82 | 
 83 | To access the container **remotely**, port forwarding must be configured, either manually or using Docker. For example, this command maps the container's port 80 to host port 88, and port 10081 (the Zend Server UI port) to host port 10088:
 84 | 
 85 | 	    $ docker run -p 88:80 -p 10088:10081 %%IMAGE%%
 86 | 
 87 | ##### For clustered instances:
 88 | 
 89 | To start a Zend Server cluster, you must provide a MySQL-compatible database:
 90 | 
 91 | 	    $ docker run -p 88:80 -p 10088:10081 -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
 92 | 
 93 | Please note that when running multiple instances, only one instance can be bound to a given host port. If you are running a cluster, either assign a port redirect to one node only, or assign a different port to each container.
 94 | 
 95 | #### Adding application files
 96 | 
 97 | Application files can be automatically pulled from a Git repo by setting the **GIT_URL** env var to the repo's URL. Alternatively, if building an image from Dockerfile, place the app files in the "app/" folder.
 98 | 
 99 | The files will be copied to the container's `/var/www/html` folder and defined in Zend Server as the default app. An example `index.html` file is included. This feature is available in Zend Server 8 and above.
100 | 
101 | #### Env variables
102 | 
103 | Env variables are passed in the run command with the `-e` switch.
104 | 
105 | ##### Optional env-variables:
106 | 
107 | To specify a pre-defined admin password for Zend Server use:
108 | 
109 | -	ZS\_ADMIN\_PASSWORD
110 | 
111 | Automatically Deploy an app from Git URL:
112 | 
113 | -	GIT\_URL
114 | 
115 | MySQL vars for clustered ops. *ALL* are required for the node to properly join a cluster:
116 | 
117 | -	MYSQL\_HOSTNAME - ip or hostname of MySQL database
118 | -	MYSQL\_PORT - MySQL listening port
119 | -	MYSQL\_USERNAME
120 | -	MYSQL\_PASSWORD
121 | -	MYSQL\_DBNAME - Name of the database Zend Server will use for cluster ops (created automatically if it does not exist).
122 | 
123 | To specify a pre-purchased license use the following env vars:
124 | 
125 | -	ZEND\_LICENSE\_KEY
126 | -	ZEND\_LICENSE\_ORDER
127 | 
128 | Set Zend Server to production mode by setting the following env var to "true". By default Zend Server is set to "development mode" with Z-Ray enabled:
129 | 
130 | -	ZS\_PRODUCTION
131 | 
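As an illustrative combination of the variables above (all values here are placeholders), a development container might pre-set the admin password and deploy an app from a Git repository in one command:

	    $ docker run -p 88:80 -e ZS_ADMIN_PASSWORD=<password> -e GIT_URL=<git-repo-url> %%IMAGE%%
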
132 | ### Minimal Requirements
133 | 
134 | Each Zend Server Docker container requires 1GB of available memory.
135 | 
```

--------------------------------------------------------------------------------
/monica/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Monica?
  2 | 
  3 | Monica is a great open source personal relationship management system to organize the interactions with your loved ones.
  4 | 
  5 | %%LOGO%%
  6 | 
  7 | ## How to use this image
  8 | 
  9 | There are two versions of the image you may choose from.
 10 | 
 11 | The `apache` tag contains a full Monica installation with an apache webserver. This points to the default `latest` tag too.
 12 | 
 13 | The `fpm` tag contains a fastCGI process that serves the web pages. This image should be combined with a webserver used as a proxy, like apache or nginx.
 14 | 
 15 | ### Using the apache image
 16 | 
 17 | This image contains a webserver that exposes port 80. Run the container with:
 18 | 
 19 | ```console
 20 | docker run --name some-%%REPO%% -d -p 8080:80 %%IMAGE%%
 21 | ```
 22 | 
 23 | ### Using the fpm image
 24 | 
 25 | This image runs a fastCGI server that exposes port 9000. You may need an additional web server that can proxy requests to the fpm port 9000 of the container. Run this container with:
 26 | 
 27 | ```console
 28 | docker run --name some-%%REPO%% -d -p 9000:9000 %%IMAGE%%:fpm
 29 | ```
 30 | 
 31 | ### Using an external database
 32 | 
 33 | You'll need to set up an external database. Monica currently supports MySQL/MariaDB databases. You can also link a database container, e.g. `--link my-mysql:db`, and then use `db` as the database host during setup. More info is in the Docker Compose section.
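
For example, a minimal sketch using a user-defined network (the network name and credentials here are placeholders; the `DB_*` variables come from Monica's environment configuration):

```console
docker network create monica-net
docker run -d --name monicadb --network monica-net \
        -e MYSQL_RANDOM_ROOT_PASSWORD=true \
        -e MYSQL_DATABASE=monica \
        -e MYSQL_USER=monica \
        -e MYSQL_PASSWORD=secret \
        mariadb:11
docker run -d --name some-%%REPO%% --network monica-net -p 8080:80 \
        -e DB_HOST=monicadb \
        -e DB_USERNAME=monica \
        -e DB_PASSWORD=secret \
        %%IMAGE%%
```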
 34 | 
 35 | ### Persistent data storage
 36 | 
 37 | To have persistent storage for your data, you may want to create volumes for your database, and for Monica you will have to persist the `/var/www/html/storage` directory.
 38 | 
 39 | Run a container with this named volume:
 40 | 
 41 | ```console
 42 | docker run -d \
 43 |         -v monica_data:/var/www/html/storage \
 44 |         %%IMAGE%%
 45 | ```
 46 | 
 47 | ### Run commands inside the container
 48 | 
 49 | Like every Laravel application, the `php artisan` command is very useful for Monica. To run a command inside the container, run
 50 | 
 51 | ```console
 52 | docker exec CONTAINER_ID php artisan COMMAND
 53 | ```
 54 | 
 55 | Or for Docker Compose:
 56 | 
 57 | ```console
 58 | docker compose exec %%REPO%% php artisan COMMAND
 59 | ```
 60 | 
 61 | where `%%REPO%%` is the name of the service in your `compose.yaml` file.
 62 | 
 63 | ## Configuration using environment variables
 64 | 
 65 | The Monica image uses environment variables to set up the application. See the [Monica documentation](https://github.com/monicahq/monica/blob/4.x/.env.example) for commonly used variables you should set.
 66 | 
 67 | ## Running the image with Docker Compose
 68 | 
 69 | See some examples of Docker Compose possibilities in the [example section](%%GITHUB-REPO%%/blob/main/.examples).
 70 | 
 71 | ---
 72 | 
 73 | ### Apache version
 74 | 
 75 | This version will use the apache image and add a mysql container. The volumes are set to keep your data persistent. This setup provides **no ssl encryption** and is intended to run behind a proxy.
 76 | 
 77 | Make sure to pass in a value for the `APP_KEY` variable before you run this setup.
 78 | 
 79 | 1.	Create a `compose.yaml` file
 80 | 
 81 | 	```yaml
 82 | 	services:
 83 | 	  app:
 84 | 	    image: monica
 85 | 	    depends_on:
 86 | 	      - db
 87 | 	    ports:
 88 | 	      - 8080:80
 89 | 	    environment:
 90 | 	      - APP_KEY= # Generate with `echo -n 'base64:'; openssl rand -base64 32`
 91 | 	      - DB_HOST=db
 92 | 	      - DB_USERNAME=monica
 93 | 	      - DB_PASSWORD=secret
 94 | 	    volumes:
 95 | 	      - data:/var/www/html/storage
 96 | 	    restart: always
 97 | 
 98 | 	  db:
 99 | 	    image: mariadb:11
100 | 	    environment:
101 | 	      - MYSQL_RANDOM_ROOT_PASSWORD=true
102 | 	      - MYSQL_DATABASE=monica
103 | 	      - MYSQL_USER=monica
104 | 	      - MYSQL_PASSWORD=secret
105 | 	    volumes:
106 | 	      - mysql:/var/lib/mysql
107 | 	    restart: always
108 | 
109 | 	volumes:
110 | 	  data:
111 | 	    name: data
112 | 	  mysql:
113 | 	    name: mysql
114 | 	```
115 | 
116 | 2.	Set a value for the `APP_KEY` variable before you run this setup. It should be a random 32-character string. You can, for instance, copy and paste the output of `echo -n 'base64:'; openssl rand -base64 32`.
117 | 
118 | 3.	Run
119 | 
120 | 	```console
121 | 	docker compose up -d
122 | 	```
123 | 
124 | 	Wait until all migrations are done and then access Monica at http://localhost:8080/ from your host system. If this looks ok, add your first user account.
125 | 
126 | 4.	Run this command once:
127 | 
128 | 	```console
129 | 	docker compose exec app php artisan setup:production
130 | 	```
131 | 
132 | ### FPM version
133 | 
134 | When using the FPM image, you will need another container with a webserver to proxy http requests. In this example we use nginx with a basic container to do this.
135 | 
136 | 1.	Download the `nginx.conf` and `Dockerfile` files for the nginx image. An example can be found in the [`example section`](%%GITHUB-REPO%%/blob/main/.examples/full/fpm/web/)
137 | 
138 | 	```sh
139 | 	mkdir web
140 | 	curl -sSL https://raw.githubusercontent.com/monicahq/docker/main/.examples/full/web/nginx.conf -o web/nginx.conf
141 | 	curl -sSL https://raw.githubusercontent.com/monicahq/docker/main/.examples/full/web/Dockerfile -o web/Dockerfile
142 | 	```
143 | 
144 | 	The `web` container image should be pre-built before each deploy with: `docker compose build`.
145 | 
146 | 2.	Create a `compose.yaml` file
147 | 
148 | 	```yaml
149 | 	services:
150 | 	  app:
151 | 	    image: monica:fpm
152 | 	    depends_on:
153 | 	      - db
154 | 	    environment:
155 | 	      - APP_KEY= # Generate with `echo -n 'base64:'; openssl rand -base64 32`
156 | 	      - DB_HOST=db
157 | 	      - DB_USERNAME=monica
158 | 	      - DB_PASSWORD=secret
159 | 	    volumes:
160 | 	      - data:/var/www/html/storage
161 | 	    restart: always
162 | 
163 | 	  web:
164 | 	    build: ./web
165 | 	    ports:
166 | 	      - 8080:80
167 | 	    depends_on:
168 | 	      - app
169 | 	    volumes:
170 | 	      - data:/var/www/html/storage:ro
171 | 	    restart: always
172 | 
173 | 	  db:
174 | 	    image: mariadb:11
175 | 	    environment:
176 | 	      - MYSQL_RANDOM_ROOT_PASSWORD=true
177 | 	      - MYSQL_DATABASE=monica
178 | 	      - MYSQL_USER=monica
179 | 	      - MYSQL_PASSWORD=secret
180 | 	    volumes:
181 | 	      - mysql:/var/lib/mysql
182 | 	    restart: always
183 | 
184 | 	volumes:
185 | 	  data:
186 | 	    name: data
187 | 	  mysql:
188 | 	    name: mysql
189 | 	```
190 | 
191 | 3.	Set a value for the `APP_KEY` variable before you run this setup. It should be a random 32-character string. You can, for instance, copy and paste the output of `echo -n 'base64:'; openssl rand -base64 32`.
192 | 
193 | 4.	Run
194 | 
195 | 	```console
196 | 	docker compose up -d
197 | 	```
198 | 
199 | 	Wait until all migrations are done and then access Monica at http://localhost:8080/ from your host system. If this looks ok, add your first user account.
200 | 
201 | 5.	Run this command once:
202 | 
203 | 	```console
204 | 	docker compose exec app php artisan setup:production
205 | 	```
206 | 
207 | ## Make Monica available from the internet
208 | 
209 | To expose your Monica instance to the internet, it's important to set the environment variable `APP_ENV=production`. In that case, `https` mode is mandatory.
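
A hedged sketch of such a run (the hostname and key are placeholders, and `APP_URL` is assumed from Monica's `.env.example`):

```console
docker run -d --name some-%%REPO%% \
        -e APP_ENV=production \
        -e APP_KEY=<your-32-char-key> \
        -e APP_URL=https://monica.example.com \
        -v monica_data:/var/www/html/storage \
        %%IMAGE%%
```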
210 | 
211 | ### Using a proxy webserver on the host
212 | 
213 | One way to expose your Monica instance is to use a proxy webserver from your host with SSL capabilities. This is possible with a reverse proxy.
214 | 
215 | ### Using a proxy webserver container
216 | 
217 | See the Docker Compose examples in the [example section](%%GITHUB-REPO%%/blob/main/.examples) that show how to use a proxy webserver with SSL capabilities.
218 | 
```

--------------------------------------------------------------------------------
/drupal/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Drupal?
  2 | 
  3 | Drupal is a free and open-source content-management framework written in PHP and distributed under the GNU General Public License. It is used as a back-end framework for at least 2.1% of all Web sites worldwide ranging from personal blogs to corporate, political, and government sites including WhiteHouse.gov and data.gov.uk. It is also used for knowledge management and business collaboration.
  4 | 
  5 | > [wikipedia.org/wiki/Drupal](https://en.wikipedia.org/wiki/Drupal)
  6 | 
  7 | %%LOGO%%
  8 | 
  9 | # How to use this image
 10 | 
 11 | The basic pattern for starting a `%%REPO%%` instance is:
 12 | 
 13 | ```console
 14 | $ docker run --name some-%%REPO%% -d %%IMAGE%%
 15 | ```
 16 | 
 17 | If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 18 | 
 19 | ```console
 20 | $ docker run --name some-%%REPO%% -p 8080:80 -d %%IMAGE%%
 21 | ```
 22 | 
 23 | Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
 24 | 
 25 | There are multiple database types supported by this image, most easily used via Docker networks. In the default configuration, SQLite can be used to avoid a second container and write to flat-files. More detailed instructions for different (more production-ready) database types follow.
 26 | 
 27 | When first accessing the webserver provided by this image, it will go through a brief setup process. The details provided below are specifically for the "Set up database" step of that configuration process.
 28 | 
 29 | ## MySQL
 30 | 
 31 | For using Drupal with a MySQL database you'll want to run a [MySQL](https://hub.docker.com/_/mysql/) container and configure it using environment variables for `MYSQL_DATABASE`, `MYSQL_USER`, `MYSQL_PASSWORD`, and `MYSQL_ROOT_PASSWORD`
 32 | 
 33 | ```console
 34 | $ docker run -d --name some-mysql --network some-network \
 35 | 	-e MYSQL_DATABASE=drupal \
 36 | 	-e MYSQL_USER=user \
 37 | 	-e MYSQL_PASSWORD=password \
 38 | 	-e MYSQL_ROOT_PASSWORD=password \
 39 | mysql:5.7
 40 | ```
 41 | 
 42 | In Drupal's "set up database" step of the web installation walkthrough, enter the values for the environment variables you provided:
 43 | 
 44 | -	Database name/username/password: `<details for accessing your MySQL instance>` (`MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_DATABASE`; see environment variables in the description for [`mysql`](https://hub.docker.com/_/mysql/))
 45 | -	ADVANCED OPTIONS; Database host: `some-mysql` (Containers on the same [docker-network](https://docs.docker.com/v17.09/engine/userguide/networking/) are routable by their container-name)
 46 | 
 47 | ## PostgreSQL
 48 | 
 49 | For using Drupal with a PostgreSQL database you'll want to run a [Postgres](https://hub.docker.com/_/postgres) container and configure it using environment variables for `POSTGRES_DB`, `POSTGRES_USER`, and `POSTGRES_PASSWORD`
 50 | 
 51 | ```console
 52 | $ docker run -d --name some-postgres --network some-network \
 53 | 	-e POSTGRES_DB=drupal \
 54 | 	-e POSTGRES_USER=user \
 55 | 	-e POSTGRES_PASSWORD=pass \
 56 | postgres:11
 57 | ```
 58 | 
 59 | In Drupal's "set up database" step of the web installation walkthrough, enter the values for the environment variables you provided:
 60 | 
 61 | -	Database type: `PostgreSQL`
 62 | -	Database name/username/password: `<details for accessing your PostgreSQL instance>` (`POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`; see environment variables in the description for [`postgres`](https://hub.docker.com/_/postgres/))
 63 | -	ADVANCED OPTIONS; Database host: `some-postgres` (Containers on the same [docker-network](https://docs.docker.com/v17.09/engine/userguide/networking/) are routable by their container-name)
 64 | 
 65 | ## Volumes
 66 | 
 67 | By default, this image does not include any volumes. There is a lot of good discussion on this topic in [docker-library/drupal#3](https://github.com/docker-library/drupal/issues/3), which is definitely recommended reading.
 68 | 
 69 | There is consensus that `/var/www/html/modules`, `/var/www/html/profiles`, and `/var/www/html/themes` are things that generally ought to be volumes (and might have an explicit `VOLUME` declaration in a future update to this image), but handling of `/var/www/html/sites` is somewhat more complex, since the contents of that directory *do* need to be initialized with the contents from the image.
 70 | 
 71 | If using bind-mounts, one way to accomplish pre-seeding your local `sites` directory would be something like the following:
 72 | 
 73 | ```console
 74 | $ docker run --rm %%IMAGE%% tar -cC /var/www/html/sites . | tar -xC /path/on/host/sites
 75 | ```
 76 | 
 77 | This can then be bind-mounted into a new container:
 78 | 
 79 | ```console
 80 | $ docker run --name some-%%REPO%% --network some-network -d \
 81 | 	-v /path/on/host/modules:/var/www/html/modules \
 82 | 	-v /path/on/host/profiles:/var/www/html/profiles \
 83 | 	-v /path/on/host/sites:/var/www/html/sites \
 84 | 	-v /path/on/host/themes:/var/www/html/themes \
 85 | 	%%IMAGE%%
 86 | ```
 87 | 
 88 | Another solution using Docker Volumes:
 89 | 
 90 | ```console
 91 | $ docker volume create %%REPO%%-sites
 92 | $ docker run --rm -v %%REPO%%-sites:/temporary/sites %%IMAGE%% cp -aRT /var/www/html/sites /temporary/sites
 93 | $ docker run --name some-%%REPO%% --network some-network -d \
 94 | 	-v %%REPO%%-modules:/var/www/html/modules \
 95 | 	-v %%REPO%%-profiles:/var/www/html/profiles \
 96 | 	-v %%REPO%%-sites:/var/www/html/sites \
 97 | 	-v %%REPO%%-themes:/var/www/html/themes \
 98 | 	%%IMAGE%%
 99 | ```
100 | 
101 | ## %%COMPOSE%%
102 | 
103 | Run `docker compose up`, wait for it to initialize completely, and visit `http://localhost:8080` or `http://host-ip:8080` (as appropriate). When installing, select `postgres` as the database with the following parameters: `dbname=postgres`, `user=postgres`, `pass=example`, `hostname=postgres`.
104 | 
105 | ## Adding additional libraries / extensions
106 | 
107 | This image does not provide any additional PHP extensions or other libraries, even if they are required by popular plugins. There are an infinite number of possible plugins, and they potentially require any extension PHP supports. Including every PHP extension that exists would dramatically increase the image size.
108 | 
109 | If you need additional PHP extensions, you'll need to create your own image `FROM` this one. The [documentation of the `php` image](https://github.com/docker-library/docs/blob/master/php/README.md#how-to-install-more-php-extensions) explains how to compile additional extensions. Additionally, the [`drupal:7` Dockerfile](https://github.com/docker-library/drupal/blob/bee08efba505b740a14d68254d6e51af7ab2f3ea/7/Dockerfile#L6-9) has an example of doing this.
110 | 
111 | The following Docker Hub features can help with the task of keeping your dependent images up-to-date:
112 | 
113 | -	[Automated Builds](https://docs.docker.com/docker-hub/builds/) let Docker Hub automatically build your Dockerfile each time you push changes to it.
114 | 
115 | ## Running as an arbitrary user
116 | 
117 | See [the "Running as an arbitrary user" section of the `php` image documentation](https://hub.docker.com/_/php/).
118 | 
```

--------------------------------------------------------------------------------
/ibmjava/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | ### Overview
  2 | 
  3 | The images in this repository contain IBM® SDK, Java™ Technology Edition. For more information on the latest version and what's new, see [sdk8 on IBM developerWorks](https://developer.ibm.com/javasdk/downloads/sdk8/) and [jdk11 on IBM developerWorks](https://developer.ibm.com/javasdk/downloads/java-sdk-downloads-version-110/). See the license section for restrictions that relate to the use of this image. For more information about IBM® SDK, Java™ Technology Edition and API documentation as well as tutorials, recipes, and Java usage in IBM Cloud, see [IBM developerWorks](https://developer.ibm.com/javasdk/).
  4 | 
  5 | Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
  6 | 
  7 | ### Eclipse OpenJ9 Images
  8 | 
  9 | [Eclipse OpenJ9](https://www.eclipse.org/openj9) is a high performance, scalable, Java virtual machine (JVM) implementation that represents hundreds of person-years of effort. Contributed to the Eclipse project by IBM, the OpenJ9 JVM underpins the IBM SDK, Java Technology Edition product that is a core component of many IBM Enterprise software products. Continued development of OpenJ9 at the Eclipse foundation ensures wider collaboration, fresh innovation, and the opportunity to influence the development of OpenJ9 for the next generation of Java applications. The Eclipse OpenJ9 Docker images are available through [AdoptOpenJDK](https://adoptopenjdk.net/). They are available from [here](https://hub.docker.com/u/adoptopenjdk/).
 10 | 
 11 | ### Images
 12 | 
 13 | There are three types of Docker images here: the Software Developer Kit (SDK), the Java Runtime Environment (JRE), and a small footprint version of the JRE (SFJ). These images can be used as the basis for custom built images for running your applications.
 14 | 
 15 | ##### Small Footprint JRE
 16 | 
 17 | The Small Footprint JRE ([SFJ](http://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.lnx.80.doc/user/small_jre.html)) is designed specifically for web developers who want to develop and deploy cloud-based Java applications. Java tools and functions that are not required in the cloud environment, such as the Java control panel, are removed. The runtime environment is stripped down to provide core, essential function that has a greatly reduced disk and memory footprint.
 18 | 
 19 | ##### Alpine Linux
 20 | 
 21 | Consider using [Alpine Linux](http://alpinelinux.org/) if you are concerned about the size of the overall image. Alpine Linux is a stripped-down version of Linux that is based on [musl libc](http://wiki.musl-libc.org/wiki/Functional_differences_from_glibc) and BusyBox, resulting in a [Docker image](https://hub.docker.com/_/alpine/) size of approximately 5 MB. Due to its extremely small size and reduced number of installed packages, it has a much smaller attack surface, which improves security. IBM SDK has a dependency on GNU glibc; the sources can be found [here](https://github.com/sgerrand/docker-glibc-builder/releases/). Installing this library adds an extra 8 MB to the image size. The following table compares Docker image sizes based on the JRE version `8.0-3.10`.
 22 | 
 23 | | JRE    | JRE    | SFJ    | SFJ    |
 24 | |:------:|:------:|:------:|:------:|
 25 | | Ubuntu | Alpine | Ubuntu | Alpine |
 26 | | 305 MB | 184 MB | 220 MB | 101 MB |
 27 | 
 28 | **Note: Alpine Linux is not an officially supported operating system for IBM® SDK, Java™ Technology Edition.**
 29 | 
 30 | ##### Multi-Arch Image
 31 | 
 32 | Docker Images for the following architectures are now available:
 33 | 
 34 | -	[x86\_64](https://hub.docker.com/_/ibmjava/)
 35 | -	[i386](https://hub.docker.com/r/i386/ibmjava/)
 36 | -	[ppc64le](https://hub.docker.com/r/ppc64le/ibmjava/)
 37 | -	[s390x](https://hub.docker.com/r/s390x/ibmjava/)
 38 | 
 39 | ibmjava now has multi-arch support, so the exact same commands as below work on all supported architectures. This also means that it is no longer necessary to prefix the image name with the architecture, as that happens automatically.
 40 | 
 41 | ### How to use this Image
 42 | 
 43 | To run a pre-built jar file with the JRE image, use a Dockerfile like the following:
 44 | 
 45 | ```dockerfile
 46 | FROM %%IMAGE%%:jre
 47 | RUN mkdir /opt/app
 48 | COPY japp.jar /opt/app
 49 | CMD ["java", "-jar", "/opt/app/japp.jar"]
 50 | ```
 51 | 
 52 | You can build and run the Docker Image as shown in the following example:
 53 | 
 54 | ```console
 55 | docker build -t japp .
 56 | docker run -it --rm japp
 57 | ```
 58 | 
 59 | If you want to place the jar file on the host file system instead of inside the container, you can mount the host path onto the container by using the following commands:
 60 | 
 61 | ```dockerfile
 62 | FROM %%IMAGE%%:jre
 63 | CMD ["java", "-jar", "/opt/app/japp.jar"]
 64 | ```
 65 | 
 66 | ```console
 67 | docker build -t japp .
 68 | docker run -it -v /path/on/host/system/jars:/opt/app japp
 69 | ```
 70 | 
 71 | ### Using the Class Data Sharing feature
 72 | 
 73 | IBM SDK, Java Technology Edition provides a feature called [Class data sharing](http://www-01.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.lnx.80.doc/diag/understanding/shared_classes.html). This mechanism offers transparent and dynamic sharing of data between multiple Java virtual machines (JVMs) running on the same host thereby reducing the amount of physical memory consumed by each JVM instance. By providing partially verified classes and possibly pre-loaded classes in memory, this mechanism also improves the start up time of the JVM.
 74 | 
 75 | To enable class data sharing between JVMs that are running in different containers on the same host, a common location must be shared between containers. This requirement can be satisfied through the host or a data volume container. When enabled, class data sharing creates a named "class cache", which is a memory-mapped file, at the common location. This feature is enabled by passing the `-Xshareclasses` option to the JVM as shown in the following Dockerfile example:
 76 | 
 77 | ```dockerfile
 78 | FROM %%IMAGE%%:jre
 79 | RUN mkdir /opt/shareclasses
 80 | RUN mkdir /opt/app
 81 | COPY japp.jar /opt/app
 82 | CMD ["java", "-Xshareclasses:cacheDir=/opt/shareclasses", "-jar", "/opt/app/japp.jar"]
 83 | ```
 84 | 
 85 | The `cacheDir` sub-option specifies the location of the class cache, for example `/opt/shareclasses`. When sharing through the host, a host path must be mounted onto the container at the location where the JVM expects to find the class cache. For example:
 86 | 
 87 | ```console
 88 | docker build -t japp .
 89 | docker run -it -v /path/on/host/shareclasses/dir:/opt/shareclasses japp
 90 | ```
 91 | 
 92 | When sharing through a data volume container, create a named data volume container that shares a volume.
 93 | 
 94 | ```console
 95 | docker create -v /opt/shareclasses --name classcache japp /bin/true
 96 | ```
 97 | 
 98 | Then start your JVM container by using `--volumes-from` flag to mount the shared volume, as shown in the following example:
 99 | 
100 | ```console
101 | docker run -it --volumes-from classcache japp
102 | ```
103 | 
104 | ### See Also
105 | 
106 | See the [Websphere-Liberty image](https://hub.docker.com/_/websphere-liberty/), which builds on top of this IBM docker image for Java.
107 | 
```

--------------------------------------------------------------------------------
/nginx/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is nginx?
  2 | 
  3 | Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.
  4 | 
  5 | > [wikipedia.org/wiki/Nginx](https://en.wikipedia.org/wiki/Nginx)
  6 | 
  7 | %%LOGO%%
  8 | 
  9 | # How to use this image
 10 | 
 11 | ## Hosting some simple static content
 12 | 
 13 | ```console
 14 | $ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d %%IMAGE%%
 15 | ```
 16 | 
 17 | Alternatively, a simple `Dockerfile` can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):
 18 | 
 19 | ```dockerfile
 20 | FROM %%IMAGE%%
 21 | COPY static-html-directory /usr/share/nginx/html
 22 | ```
 23 | 
 24 | Place this file in the same directory as your directory of content ("static-html-directory"), then run these commands to build and start your container:
 25 | 
 26 | ```console
 27 | $ docker build -t some-content-nginx .
 28 | $ docker run --name some-nginx -d some-content-nginx
 29 | ```
 30 | 
 31 | ## Exposing external port
 32 | 
 33 | ```console
 34 | $ docker run --name some-nginx -d -p 8080:80 some-content-nginx
 35 | ```
 36 | 
 37 | Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browser.
 38 | 
 39 | ## Customize configuration
 40 | 
 41 | You can mount your configuration file, or build a new image with it.
 42 | 
 43 | If you wish to adapt the default configuration, use something like the following to get it from a running nginx container:
 44 | 
 45 | ```console
 46 | $ docker run --rm --entrypoint=cat %%IMAGE%% /etc/nginx/nginx.conf > /host/path/nginx.conf
 47 | ```
 48 | 
 49 | And then edit `/host/path/nginx.conf` in your host file system.
 50 | 
 51 | For information on the syntax of the nginx configuration files, see [the official documentation](http://nginx.org/en/docs/) (specifically the [Beginner's Guide](http://nginx.org/en/docs/beginners_guide.html#conf_structure)).
 52 | 
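For orientation, here is a stripped-down sketch of what such an `nginx.conf` might contain (the `server` block is hypothetical; `/usr/share/nginx/html` is the image's default content directory):

```nginx
worker_processes  auto;

events {
    worker_connections  1024;
}

http {
    include  /etc/nginx/mime.types;

    server {
        listen  80;
        root    /usr/share/nginx/html;
    }
}
```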
 53 | ### Mount your configuration file
 54 | 
 55 | ```console
 56 | $ docker run --name my-custom-nginx-container -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d %%IMAGE%%
 57 | ```
 58 | 
 59 | ### Build a new image with your configuration file
 60 | 
 61 | ```dockerfile
 62 | FROM %%IMAGE%%
 63 | COPY nginx.conf /etc/nginx/nginx.conf
 64 | ```
 65 | 
 66 | If you add a custom `CMD` in the Dockerfile, be sure to include `-g daemon off;` in the `CMD` in order for nginx to stay in the foreground, so that Docker can track the process properly (otherwise your container will stop immediately after starting)!
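
For example, a hedged sketch of such a Dockerfile (the configuration copy mirrors the example above; the explicit `CMD` keeps nginx in the foreground):

```dockerfile
FROM %%IMAGE%%
COPY nginx.conf /etc/nginx/nginx.conf
# Without "daemon off;" the container would exit as soon as nginx forks into the background
CMD ["nginx", "-g", "daemon off;"]
```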
 67 | 
 68 | Then build the image with `docker build -t custom-nginx .` and run it as follows:
 69 | 
 70 | ```console
 71 | $ docker run --name my-custom-nginx-container -d custom-nginx
 72 | ```
 73 | 
 74 | ### Using environment variables in %%IMAGE%% configuration (new in 1.19)
 75 | 
 76 | Out-of-the-box, %%IMAGE%% doesn't support environment variables inside most configuration blocks, but this image has a function that substitutes environment variables into configuration files before %%IMAGE%% starts.
 77 | 
 78 | Here is an example using `compose.yaml`:
 79 | 
 80 | ```yaml
 81 | web:
 82 |   image: %%IMAGE%%
 83 |   volumes:
 84 |    - ./templates:/etc/nginx/templates
 85 |   ports:
 86 |    - "8080:80"
 87 |   environment:
 88 |    - NGINX_HOST=foobar.com
 89 |    - NGINX_PORT=80
 90 | ```
 91 | 
 92 | By default, this function reads template files in `/etc/nginx/templates/*.template` and outputs the result of executing `envsubst` to `/etc/nginx/conf.d`.
 93 | 
 94 | So if you place a `templates/default.conf.template` file containing variable references like this:
 95 | 
 96 | 	listen       ${NGINX_PORT};
 97 | 
 98 | the function outputs the result to `/etc/nginx/conf.d/default.conf` like this:
 99 | 
100 | 	listen       80;
101 | 
102 | This behavior can be changed via the following environment variables:
103 | 
104 | -	`NGINX_ENVSUBST_TEMPLATE_DIR`
105 | 	-	A directory which contains template files (default: `/etc/nginx/templates`)
106 | 	-	When this directory doesn't exist, the function skips template processing entirely.
107 | -	`NGINX_ENVSUBST_TEMPLATE_SUFFIX`
108 | 	-	A suffix of template files (default: `.template`)
109 | 	-	This function only processes the files whose name ends with this suffix.
110 | -	`NGINX_ENVSUBST_OUTPUT_DIR`
111 | 	-	A directory where the result of executing envsubst is output (default: `/etc/nginx/conf.d`)
112 | 	-	The output filename is the template filename with the suffix removed.
113 | 		-	e.g. `/etc/nginx/templates/default.conf.template` will be output as `/etc/nginx/conf.d/default.conf`.
114 | 	-	This directory must be writable by the user running a container.
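
Conceptually, the template step behaves like the following shell sketch. It is a simplified illustration, not the image's actual entrypoint script: it assumes `envsubst` (from gettext) is available and, unlike the real entrypoint, substitutes every environment variable a template references:

```shell
#!/bin/sh
# Simplified sketch of the template-processing step (not the real entrypoint).
render_templates() {
    template_dir="${NGINX_ENVSUBST_TEMPLATE_DIR:-/etc/nginx/templates}"
    suffix="${NGINX_ENVSUBST_TEMPLATE_SUFFIX:-.template}"
    output_dir="${NGINX_ENVSUBST_OUTPUT_DIR:-/etc/nginx/conf.d}"

    # When the template directory doesn't exist, do nothing.
    [ -d "$template_dir" ] || return 0

    for template in "$template_dir"/*"$suffix"; do
        [ -f "$template" ] || continue
        # The output filename is the template filename with the suffix removed.
        envsubst <"$template" >"$output_dir/$(basename "$template" "$suffix")"
    done
}

render_templates
```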
115 | 
116 | ## Running %%IMAGE%% in read-only mode
117 | 
118 | To run %%IMAGE%% in read-only mode, you will need to mount a Docker volume to every location where %%IMAGE%% writes information. The default %%IMAGE%% configuration requires write access to `/var/cache/nginx` and `/var/run`. This can be easily accomplished by running %%IMAGE%% as follows:
119 | 
120 | ```console
121 | $ docker run -d -p 80:80 --read-only -v $(pwd)/nginx-cache:/var/cache/nginx -v $(pwd)/nginx-pid:/var/run nginx
122 | ```
123 | 
124 | If you have a more advanced configuration that requires %%IMAGE%% to write to other locations, simply add more volume mounts to those locations.
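
The same setup can be sketched in a `compose.yaml` (here `read_only: true` is Compose's equivalent of the `--read-only` flag):

```yaml
web:
  image: %%IMAGE%%
  read_only: true
  ports:
    - "80:80"
  volumes:
    - ./nginx-cache:/var/cache/nginx
    - ./nginx-pid:/var/run
```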
125 | 
126 | ## Running nginx in debug mode
127 | 
128 | Images since version 1.9.8 come with an `nginx-debug` binary that produces verbose output when using higher log levels. It can be used with a simple CMD substitution:
129 | 
130 | ```console
131 | $ docker run --name my-nginx -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d %%IMAGE%% nginx-debug -g 'daemon off;'
132 | ```
133 | 
134 | Similar configuration in `compose.yaml` may look like this:
135 | 
136 | ```yaml
137 | web:
138 |   image: %%IMAGE%%
139 |   volumes:
140 |     - ./nginx.conf:/etc/nginx/nginx.conf:ro
141 |   command: [nginx-debug, '-g', 'daemon off;']
142 | ```
143 | 
144 | ## Entrypoint quiet logs
145 | 
146 | Since version 1.19.0, a verbose entrypoint has been included. It provides information on what's happening during container startup. You can silence this output by setting the environment variable `NGINX_ENTRYPOINT_QUIET_LOGS`:
147 | 
148 | ```console
149 | $ docker run -d -e NGINX_ENTRYPOINT_QUIET_LOGS=1 %%IMAGE%%
150 | ```
151 | 
152 | ## User and group id
153 | 
154 | Since 1.17.0, both the alpine- and debian-based image variants use the same user and group IDs to drop privileges for worker processes:
155 | 
156 | ```console
157 | $ id
158 | uid=101(nginx) gid=101(nginx) groups=101(nginx)
159 | ```
160 | 
161 | ## Running %%IMAGE%% as a non-root user
162 | 
163 | It is possible to run the image as a less privileged arbitrary UID/GID. This, however, requires modification of the %%IMAGE%% configuration to use directories writable by that specific UID/GID pair:
164 | 
165 | ```console
166 | $ docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf %%IMAGE%%
167 | ```
168 | 
169 | where `nginx.conf` in the current directory should have the following directives re-defined:
170 | 
171 | ```nginx
172 | pid        /tmp/nginx.pid;
173 | ```
174 | 
175 | And in the http context:
176 | 
177 | ```nginx
178 | http {
179 |     client_body_temp_path /tmp/client_temp;
180 |     proxy_temp_path       /tmp/proxy_temp_path;
181 |     fastcgi_temp_path     /tmp/fastcgi_temp;
182 |     uwsgi_temp_path       /tmp/uwsgi_temp;
183 |     scgi_temp_path        /tmp/scgi_temp;
184 | ...
185 | }
186 | ```
187 | 
188 | Alternatively, check out the official [Docker NGINX unprivileged image](https://hub.docker.com/r/nginxinc/nginx-unprivileged).
189 | 
```

--------------------------------------------------------------------------------
/haskell/content.md:
--------------------------------------------------------------------------------

```markdown
  1 | # What is Haskell?
  2 | 
  3 | [Haskell](http://www.haskell.org) is a [lazy](http://en.wikibooks.org/wiki/Haskell/Laziness), functional, statically-typed programming language with advanced type system features such as higher-rank, higher-kinded parametric [polymorphism](http://en.wikibooks.org/wiki/Haskell/Polymorphism), monadic [effects](http://en.wikibooks.org/wiki/Haskell/Understanding_monads/IO), generalized algebraic data types ([GADT](http://en.wikibooks.org/wiki/Haskell/GADT)s), flexible [type classes](http://en.wikibooks.org/wiki/Haskell/Advanced_type_classes), associated [type families](http://en.wikipedia.org/wiki/Type_family), and more.
  4 | 
  5 | Haskell's [`ghc`](http://www.haskell.org/ghc) is a [portable](https://gitlab.haskell.org/ghc/ghc/-/wikis/platforms), optimizing compiler with a foreign-function interface ([FFI](http://en.wikibooks.org/wiki/Haskell/FFI)), an LLVM backend, and sophisticated runtime support for [concurrency](http://en.wikibooks.org/wiki/Haskell/Concurrency), explicit/implicit [parallelism](https://simonmar.github.io/pages/pcph.html), runtime [profiling](http://www.haskell.org/haskellwiki/ThreadScope), etc. Other Haskell tools like `criterion`, `quickcheck`, `hpc`, and `haddock` provide advanced benchmarking, property-based testing, code coverage, and documentation generation.
  6 | 
  7 | A large number of production-quality Haskell libraries are available from [Hackage](https://hackage.haskell.org) in the form of [Cabal](https://www.haskell.org/cabal/) packages. The traditional `cabal` tool, or the more recent [`stack`](http://docs.haskellstack.org/en/stable/README.html) tool (available in `7.10.3`+), can be used to streamline working with Cabal packages.
  8 | 
  9 | %%LOGO%%
 10 | 
 11 | ## About this image
 12 | 
 13 | This image ships a minimal Haskell toolchain (`ghc` and `cabal-install`) as well as the `stack` tool ([https://www.haskellstack.org/](https://www.haskellstack.org/)) where possible. [`stack` does not currently support `ARM64`](https://github.com/commercialhaskell/stack/issues/2103), so it is not included for that processor architecture.
 14 | 
 15 | ARM64 support is new and should be considered experimental at this stage. Support has been added as of `8.10.7`, `9.0.2` and `9.2.1`.
 16 | 
 17 | Note: The GHC developers do not support legacy release branches (i.e. `7.8.x`). Only the two most recent minor releases will receive updates or be shown in the "Supported tags ..." section at the top of this page.
 18 | 
 19 | Additionally, we aim to support the two most recent versions of Debian (`stable` and `oldstable`) as variants, with the most recent being the default if not specified.
 20 | 
 21 | > Note: Currently `stable` Debian is version 11 bullseye; however, it is not yet supported by Haskell tooling. Until then, the default will remain Debian 10 buster. We have dropped support for Debian 9 stretch.
 22 | 
 23 | ## How to use this image
 24 | 
 25 | Start an interactive interpreter session with `ghci`:
 26 | 
 27 | ```console
 28 | $ docker run -it --rm %%IMAGE%%:9
 29 | GHCi, version 9.0.1: http://www.haskell.org/ghc/  :? for help
 30 | Prelude>
 31 | ```
 32 | 
 33 | Dockerize an application using `stack`:
 34 | 
 35 | ```dockerfile
 36 | FROM %%IMAGE%%:8.10
 37 | RUN stack install --resolver lts-17.14 pandoc citeproc
 38 | ENTRYPOINT ["pandoc"]
 39 | ```
 40 | 
 41 | Dockerize an application using `cabal`:
 42 | 
 43 | ```dockerfile
 44 | FROM %%IMAGE%%:8.10
 45 | RUN cabal update && cabal install pandoc citeproc
 46 | ENTRYPOINT ["pandoc"]
 47 | ```
 48 | 
 49 | Iteratively develop a Haskell application with a `Dockerfile` utilizing the build cache:
 50 | 
 51 | ```dockerfile
 52 | FROM %%IMAGE%%:8
 53 | 
 54 | WORKDIR /opt/example
 55 | 
 56 | RUN cabal update
 57 | 
 58 | # Add just the .cabal file to capture dependencies
 59 | COPY ./example.cabal /opt/example/example.cabal
 60 | 
 61 | # Docker will cache this command as a layer, freeing us up to
 62 | # modify source code without re-installing dependencies
 63 | # (unless the .cabal file changes!)
 64 | RUN cabal build --only-dependencies -j4
 65 | 
 66 | # Add and Install Application Code
 67 | COPY . /opt/example
 68 | RUN cabal install
 69 | 
 70 | CMD ["example"]
 71 | ```
 72 | 
 73 | ### Considerations for `happy`, `alex`, etc
 74 | 
 75 | Some packages that also act as build dependencies, such as `happy` and `alex`, are no longer included in this image (as of `%%IMAGE%%:8.2.2` & `%%IMAGE%%:8.4.3`). There is a bootstrapping problem where one or more of these tools may be assumed to be available. If you run into an error about missing dependencies that are not explicitly called out in a Cabal package, you will need to explicitly mark them for installation.
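
For example, a hedged Dockerfile sketch installing the tools up front (`some-package` is a placeholder for whatever package fails to build without them):

```dockerfile
FROM %%IMAGE%%:8.10
# Install the build tools explicitly before the package that implicitly depends on them
RUN cabal update && cabal install happy alex
RUN cabal install some-package
```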
 76 | 
 77 | ### Considerations for Stack
 78 | 
 79 | The Stack tool is primarily designed to run directly on the host and comes with many advanced features such as GHC bootstrapping and Docker integration. Within the context of a container image, some of these features (e.g. `stack docker`) clash with the Docker abstraction and should be avoided.
 80 | 
 81 | Another common scenario that can be confusing is the default Stackage snapshot. A Stackage snapshot is a collection of Haskell packages pinned to specific versions for compatibility with a particular GHC release. When you ask Stack to resolve dependencies it refers to a particular snapshot via the `resolver` value. While you should be specifying a `resolver` explicitly in your projects, it is possible to run with the auto-generated default. That default is determined by the value obtained from the [upstream Stackage server](https://www.stackage.org/) at the time it was requested, and points to the latest "LTS" snapshot. If the snapshot refers to a different version of GHC than is provided in the Docker image, you may see a message like the following:
 82 | 
 83 | ```console
 84 | Step 2/3 : RUN stack install pandoc
 85 |  ---> Running in e20466d52060
 86 | Writing implicit global project config file to: /root/.stack/global-project/stack.yaml
 87 | Note: You can change the snapshot via the resolver field there.
 88 | Using latest snapshot resolver: lts-11.11
 89 | Downloading lts-11.11 build plan ...
 90 | Downloaded lts-11.11 build plan.
 91 | Compiler version mismatched, found ghc-8.4.3 (x86_64), but expected minor version match with ghc-8.2.2 (x86_64) (based on resolver setting in /root/.stack/global-project/stack.yaml).
 92 | To install the correct GHC into /root/.stack/programs/x86_64-linux/, try running "stack setup" or use the "--install-ghc" flag.
 93 | ```
 94 | 
 95 | In this case, the GHC release in the `%%IMAGE%%` Docker image got ahead of the version of GHC expected by the default Stack resolver. As the output suggests, manually setting the resolver (typically via `stack.yaml`) is the recommended approach.
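
A minimal `stack.yaml` sketch pinning the resolver to match the GHC shipped in the image (a real project would more commonly pin an LTS snapshot):

```yaml
resolver: ghc-8.4.3

packages:
- .
```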
 96 | 
 97 | ```console
 98 | Step 2/3 : RUN stack install --resolver ghc-8.4.3 pandoc
 99 |  ---> Running in 0bd7f1fcc8b2
100 | Writing implicit global project config file to: /root/.stack/global-project/stack.yaml
101 | Note: You can change the snapshot via the resolver field there.
102 | Using resolver: ghc-8.4.3 specified on command line
103 | Updating package index Hackage (mirrored at https://s3.amazonaws.com/hackage.fpcomplete.com/) ...
104 | Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/
105 | ```
106 | ```
107 | The alternative of using `--install-ghc` doesn't make sense in a Docker image context, and hence the global `install-ghc` flag has been set to `false` (as of `%%IMAGE%%:8.2.2` & `%%IMAGE%%:8.4.3`) to avoid the default behavior of bootstrapping a new GHC inside the container.
108 | 
```