# Directory Structure
```
├── .ci
│ ├── check-markdownfmt.sh
│ ├── check-metadata.sh
│ ├── check-pr-no-readme.sh
│ ├── check-required-files.sh
│ ├── check-short.sh
│ ├── check-ymlfmt.sh
│ └── get-markdownfmt.sh
├── .common-templates
│ ├── maintainer-community.md
│ ├── maintainer-docker.md
│ ├── maintainer-hashicorp.md
│ └── maintainer-influxdata.md
├── .dockerignore
├── .github
│ └── workflows
│ └── ci.yml
├── .template-helpers
│ ├── arches.sh
│ ├── autogenerated-warning.md
│ ├── compose.md
│ ├── generate-dockerfile-links-partial.sh
│ ├── generate-dockerfile-links-partial.tmpl
│ ├── get-help.md
│ ├── issues.md
│ ├── license-common.md
│ ├── template.md
│ ├── variant-alpine.md
│ ├── variant-default-buildpack-deps.md
│ ├── variant-default-debian.md
│ ├── variant-default-ubuntu.md
│ ├── variant-onbuild.md
│ ├── variant-slim.md
│ ├── variant-windowsservercore.md
│ ├── variant.md
│ └── variant.sh
├── adminer
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── aerospike
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── almalinux
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── alpine
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── alt
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── amazoncorretto
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── amazonlinux
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── api-firewall
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── arangodb
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── archlinux
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── backdrop
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── bash
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── bonita
│ ├── compose.yaml
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── buildpack-deps
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── busybox
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-glibc.md
│ ├── variant-musl.md
│ ├── variant-uclibc.md
│ └── variant.md
├── caddy
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo-120.png
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── cassandra
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── chronograf
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── cirros
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── clearlinux
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── clefos
│ ├── content.md
│ ├── deprecated.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── clickhouse
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── clojure
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── composer
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── convertigo
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── couchbase
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── couchdb
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── crate
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── dart
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── debian
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-slim.md
│ └── variant.md
├── docker
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-rootless.md
│ └── variant-windowsservercore.md
├── Dockerfile
├── drupal
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-fpm.md
├── eclipse-mosquitto
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── eclipse-temurin
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── eggdrop
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── elasticsearch
│ ├── compose.yaml
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-alpine.md
├── elixir
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── emqx
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── erlang
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── fedora
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── flink
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── fluentd
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── friendica
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── gazebo
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── gcc
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── generate-repo-stub-readme.sh
├── geonetwork
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-postgres.md
│ └── variant.md
├── get-categories.sh
├── ghost
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── golang
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-alpine.md
│ └── variant-tip.md
├── gradle
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── groovy
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── haproxy
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── haskell
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-slim.md
├── haxe
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── hello-world
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── update.sh
├── hitch
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── httpd
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── hylang
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── ibm-semeru-runtimes
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── ibmjava
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── influxdb
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-data.md
│ └── variant-meta.md
├── irssi
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── jetty
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── joomla
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── jruby
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── julia
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── kapacitor
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── kibana
│ ├── compose.yaml
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── kong
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── krakend
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo-120.png
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── LICENSE
├── lightstreamer
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── liquibase
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── logstash
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-alpine.md
├── mageia
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── mariadb
│ ├── compose.yaml
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── markdownfmt.sh
├── matomo
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── maven
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── mediawiki
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── memcached
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── metadata.json
├── metadata.sh
├── mongo
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── mongo-express
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── monica
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── mono
│ ├── content.md
│ ├── deprecated.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── mysql
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── nats
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── neo4j
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── neurodebian
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── nextcloud
│ ├── content.md
│ ├── deprecated.md
│ ├── github-repo
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── nginx
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-perl.md
├── node
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── notary
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── odoo
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── open-liberty
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── openjdk
│ ├── content.md
│ ├── deprecated.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-alpine.md
│ ├── variant-oracle.md
│ └── variant-slim.md
├── oraclelinux
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-slim.md
├── orientdb
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── parallel-update.sh
├── percona
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── perl
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── photon
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── php
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-apache.md
│ ├── variant-cli.md
│ ├── variant-fpm.md
│ └── variant.md
├── php-zendserver
│ ├── content.md
│ ├── deprecated.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── phpmyadmin
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── plone
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── postfixadmin
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-apache.md
│ ├── variant-fpm-alpine.md
│ └── variant-fpm.md
├── postgres
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── push.pl
├── push.sh
├── pypy
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── python
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-slim.md
├── r-base
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── rabbitmq
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── rakudo-star
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── README.md
├── redis
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── redmine
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── registry
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── rethinkdb
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── rocket.chat
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── rockylinux
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── ros
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── ruby
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── rust
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── sapmachine
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── satosa
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── scratch
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── silverpeas
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── solr
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── sonarqube
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── spark
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── spiped
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── storm
│ ├── compose.yaml
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── swift
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── swipl
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── teamspeak
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── telegraf
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── tomcat
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── tomee
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── traefik
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-alpine.md
├── ubuntu
│ ├── content.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── unit
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── update.sh
├── varnish
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── websphere-liberty
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── wordpress
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ ├── variant-cli.md
│ └── variant-fpm.md
├── xwiki
│ ├── content.md
│ ├── get-help.md
│ ├── github-repo
│ ├── issues.md
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ └── README.md
├── ymlfmt.sh
├── yourls
│ ├── compose.yaml
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.svg
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-fpm.md
├── znc
│ ├── content.md
│ ├── github-repo
│ ├── license.md
│ ├── logo.png
│ ├── maintainer.md
│ ├── metadata.json
│ ├── README-short.txt
│ ├── README.md
│ └── variant-slim.md
└── zookeeper
├── compose.yaml
├── content.md
├── github-repo
├── license.md
├── logo.png
├── maintainer.md
├── metadata.json
├── README-short.txt
└── README.md
```
# Files
--------------------------------------------------------------------------------
/php/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is PHP?
2 |
3 | PHP is a server-side scripting language designed for web development, but which can also be used as a general-purpose programming language. PHP can be added to straight HTML or it can be used with a variety of templating engines and web frameworks. PHP code is usually processed by an interpreter, which is either implemented as a native module on the web-server or as a common gateway interface (CGI).
4 |
5 | > [wikipedia.org/wiki/PHP](https://en.wikipedia.org/wiki/PHP)
6 |
7 | %%LOGO%%
8 |
9 | # How to use this image
10 |
11 | ### Create a `Dockerfile` in your PHP project
12 |
13 | ```dockerfile
14 | FROM %%IMAGE%%:8.2-cli
15 | COPY . /usr/src/myapp
16 | WORKDIR /usr/src/myapp
17 | CMD [ "php", "./your-script.php" ]
18 | ```
19 |
20 | Then, run the commands to build and run the Docker image:
21 |
22 | ```console
23 | $ docker build -t my-php-app .
24 | $ docker run -it --rm --name my-running-app my-php-app
25 | ```
26 |
27 | ### Run a single PHP script
28 |
29 | For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a PHP script by using the PHP Docker image directly:
30 |
31 | ```console
32 | $ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:8.2-cli php your-script.php
33 | ```
34 |
35 | ## How to install more PHP extensions
36 |
37 | Many extensions are already compiled into the image, so it's worth checking the output of `php -m` or `php -i` before going through the effort of compiling more.
38 |
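For a quick check without writing a `Dockerfile`, you can run the list straight from the image (a minimal sketch, using the same `8.2-cli` tag as the examples above):

```console
$ docker run -it --rm %%IMAGE%%:8.2-cli php -m
```
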
39 | We provide the helper scripts `docker-php-ext-configure`, `docker-php-ext-install`, and `docker-php-ext-enable` to more easily install PHP extensions.
40 |
41 | In order to keep the images smaller, PHP's source is kept in a compressed tar file. To facilitate linking of PHP's source with any extension, we also provide the helper script `docker-php-source` to easily extract the tar or delete the extracted source. Note: if you do use `docker-php-source` to extract the source, be sure to delete it in the same layer of the docker image.
42 |
43 | ```Dockerfile
44 | FROM %%IMAGE%%:8.2-cli
45 | RUN docker-php-source extract \
46 | # do important things \
47 | && docker-php-source delete
48 | ```
49 |
50 | ### PHP Core Extensions
51 |
52 | For example, if you want to have a PHP-FPM image with the `gd` extension, you can inherit the base image that you like, and write your own `Dockerfile` like this:
53 |
54 | ```dockerfile
55 | FROM %%IMAGE%%:8.2-fpm
56 | RUN apt-get update && apt-get install -y \
57 | libfreetype-dev \
58 | libjpeg62-turbo-dev \
59 | libpng-dev \
60 | && docker-php-ext-configure gd --with-freetype --with-jpeg \
61 | && docker-php-ext-install -j$(nproc) gd
62 | ```
63 |
64 | Remember, you must install dependencies for your extensions manually. If an extension needs custom `configure` arguments, you can use the `docker-php-ext-configure` script like this example. There is no need to run `docker-php-source` manually in this case, since that is handled by the `configure` and `install` scripts.
65 |
66 | If you are having difficulty figuring out which Debian or Alpine packages need to be installed before `docker-php-ext-install`, then have a look at [the `install-php-extensions` project](https://github.com/mlocati/docker-php-extension-installer). This script builds upon the `docker-php-ext-*` scripts and simplifies the installation of PHP extensions by automatically adding and removing Debian (apt) and Alpine (apk) packages. For example, to install the GD extension you simply have to run `install-php-extensions gd`. This tool is contributed by community members and is not included in the images; please refer to its Git repository for installation, usage, and issues.
67 |
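As a rough illustration of the previous paragraph (a sketch only — the `COPY --from` line is an assumption based on that project's README; see its repository for the currently supported installation methods):

```dockerfile
FROM %%IMAGE%%:8.2-fpm
# Assumption: the helper script is distributed via the project's helper image.
COPY --from=mlocati/php-extension-installer /usr/bin/install-php-extensions /usr/local/bin/
RUN install-php-extensions gd
```
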
68 | See also ["Dockerizing Compiled Software"](https://tianon.xyz/post/2017/12/26/dockerize-compiled-software.html) for a description of the technique Tianon uses for determining the necessary build-time dependencies for any bit of software (which applies directly to compiling PHP extensions).
69 |
70 | ### Default extensions
71 |
72 | Some extensions are compiled by default. This depends on the PHP version you are using. Run `php -m` in the container to get a list for your specific version.
73 |
74 | ### PECL extensions
75 |
76 | Some extensions are not provided with the PHP source, but are instead available through [PECL](https://pecl.php.net/). To install a PECL extension, use `pecl install` to download and compile it, then use `docker-php-ext-enable` to enable it:
77 |
78 | ```dockerfile
79 | FROM %%IMAGE%%:8.2-cli
80 | RUN pecl install redis-5.3.7 \
81 | && pecl install xdebug-3.2.1 \
82 | && docker-php-ext-enable redis xdebug
83 | ```
84 |
85 | ```dockerfile
86 | FROM %%IMAGE%%:8.2-cli
87 | RUN apt-get update && apt-get install -y libmemcached-dev libssl-dev zlib1g-dev \
88 | && pecl install memcached-3.2.0 \
89 | && docker-php-ext-enable memcached
90 | ```
91 |
92 | It is *strongly* recommended that users use an explicit version number in their `pecl install` invocations to ensure proper PHP version compatibility (PECL does not check the PHP version compatibility when choosing a version of the extension to install, but does when trying to install it). Beyond the compatibility issue, it's also a good practice to ensure you know when your dependencies receive updates and can control those updates directly.
93 |
94 | Unlike PHP core extensions, PECL extensions should be installed in series so that the build fails properly if any one of them goes wrong; otherwise PECL simply skips the errors. For example, use `pecl install memcached-3.2.0 && pecl install redis-5.3.7` instead of `pecl install memcached-3.2.0 redis-5.3.7`. However, `docker-php-ext-enable memcached redis` is fine to run as a single command.
95 |
96 | ### Other extensions
97 |
98 | Some extensions are not provided via either Core or PECL; these can be installed too, although the process is less automated:
99 |
100 | ```dockerfile
101 | FROM %%IMAGE%%:8.2-cli
102 | RUN curl -fsSL '[url-to-custom-php-module]' -o module-name.tar.gz \
103 | && mkdir -p module-name \
104 | 	&& echo "[shasum-value] *module-name.tar.gz" | sha256sum -c - \
105 | && tar -xf module-name.tar.gz -C module-name --strip-components=1 \
106 | && rm module-name.tar.gz \
107 | && ( \
108 | cd module-name \
109 | && phpize \
110 | && ./configure --enable-module-name \
111 | && make -j "$(nproc)" \
112 | && make install \
113 | ) \
114 | && rm -r module-name \
115 | && docker-php-ext-enable module-name
116 | ```
117 |
118 | The `docker-php-ext-*` scripts *can* accept an arbitrary path, but it must be absolute (to disambiguate from built-in extension names), so the above example could also be written as the following:
119 |
120 | ```dockerfile
121 | FROM %%IMAGE%%:8.2-cli
122 | RUN curl -fsSL '[url-to-custom-php-module]' -o module-name.tar.gz \
123 | && mkdir -p /tmp/module-name \
124 | 	&& echo "[shasum-value] *module-name.tar.gz" | sha256sum -c - \
125 | && tar -xf module-name.tar.gz -C /tmp/module-name --strip-components=1 \
126 | && rm module-name.tar.gz \
127 | && docker-php-ext-configure /tmp/module-name --enable-module-name \
128 | && docker-php-ext-install /tmp/module-name \
129 | && rm -r /tmp/module-name
130 | ```
131 |
132 | ## Running as an arbitrary user
133 |
134 | For running the Apache variants as an arbitrary user, there are a couple of choices:
135 |
136 | - If your kernel [is version 4.11 or newer](https://github.com/moby/moby/issues/8460#issuecomment-312459310), you can add `--sysctl net.ipv4.ip_unprivileged_port_start=0` (which [will be the default in a future version of Docker](https://github.com/moby/moby/pull/41030)) and then `--user` should work as it does for FPM.
137 | - If you adjust the Apache configuration to use an "unprivileged" port (greater than 1024 by default), then `--user` should work as it does for FPM regardless of kernel version.
138 |
139 | For running the FPM variants as an arbitrary user, the `--user` flag to `docker run` should be used (which can accept both a username/group in the container's `/etc/passwd` file like `--user daemon` or a specific UID/GID like `--user 1000:1000`).
140 |
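A rough sketch of both approaches (the tags, published ports, and UID/GID are illustrative):

```console
$ docker run -d --user 1000:1000 -p 9000:9000 %%IMAGE%%:8.2-fpm
$ docker run -d --sysctl net.ipv4.ip_unprivileged_port_start=0 --user 1000:1000 -p 8080:80 %%IMAGE%%:8.2-apache
```
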
141 | ## "`E: Package 'php-XXX' has no installation candidate`"
142 |
143 | As of [docker-library/php#542](https://github.com/docker-library/php/pull/542), this image blocks the installation of Debian's PHP packages. There is some additional discussion of this change in [docker-library/php#551 (comment)](https://github.com/docker-library/php/issues/551#issuecomment-354849074), but the gist is that installing Debian's PHP packages in this image leads to two conflicting installations of PHP in a single image, which is almost certainly not the intended outcome.
144 |
145 | For those broken by this change and looking for a workaround to apply in the meantime while a proper fix is developed, adding the following simple line to your `Dockerfile` should remove the block (**with the strong caveat that this will allow a second installation of PHP, which is definitely not what you're looking for unless you *really* know what you're doing**):
146 |
147 | ```dockerfile
148 | RUN rm /etc/apt/preferences.d/no-debian-php
149 | ```
150 |
151 | The *proper* solution to this error is to either use `FROM debian:XXX` and install Debian's PHP packages directly, or to use `docker-php-ext-install`, `pecl`, and/or `phpize` to install the necessary additional extensions and utilities.
152 |
153 | ## Configuration
154 |
155 | This image ships with the default [`php.ini-development`](https://github.com/php/php-src/blob/master/php.ini-development) and [`php.ini-production`](https://github.com/php/php-src/blob/master/php.ini-production) configuration files.
156 |
157 | It is *strongly* recommended to use the production config for images used in production environments!
158 |
159 | The default config can be customized by copying configuration files into the `$PHP_INI_DIR/conf.d/` directory.
160 |
161 | ### Example
162 |
163 | ```dockerfile
164 | FROM %%IMAGE%%:8.2-fpm-alpine
165 |
166 | # Use the default production configuration
167 | RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
168 | ```
169 |
170 | In many production environments, it is also recommended to (build and) enable the PHP core OPcache extension for performance. See [the upstream OPcache documentation](https://www.php.net/manual/en/book.opcache.php) for more details.
171 |
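A minimal sketch combining these recommendations (the `custom.ini` file and its name are hypothetical placeholders for your own overrides):

```dockerfile
FROM %%IMAGE%%:8.2-fpm
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini" \
	&& docker-php-ext-install opcache
# Hypothetical local file containing your own ini overrides (e.g. opcache.* settings)
COPY custom.ini "$PHP_INI_DIR/conf.d/zz-custom.ini"
```
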
```
--------------------------------------------------------------------------------
/zookeeper/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is Apache Zookeeper?
2 |
3 | Apache ZooKeeper is a software project of the Apache Software Foundation, providing an open source distributed configuration service, synchronization service, and naming registry for large distributed systems. ZooKeeper was a sub-project of Hadoop but is now a top-level project in its own right.
4 |
5 | > [wikipedia.org/wiki/Apache_ZooKeeper](https://en.wikipedia.org/wiki/Apache_ZooKeeper)
6 |
7 | %%LOGO%%
8 |
9 | # How to use this image
10 |
11 | ## Start a Zookeeper server instance
12 |
13 | ```console
14 | $ docker run --name some-zookeeper --restart always -d %%IMAGE%%
15 | ```
16 |
17 | This image includes `EXPOSE 2181 2888 3888 8080` (the ZooKeeper client port, follower port, election port, and AdminServer port, respectively), so standard container linking will make it automatically available to the linked containers. Since ZooKeeper "fails fast", it's better to always restart it.
18 |
19 | ## Connect to Zookeeper from an application in another Docker container
20 |
21 | ```console
22 | $ docker run --name some-app --link some-zookeeper:zookeeper -d application-that-uses-zookeeper
23 | ```
24 |
25 | ## Connect to Zookeeper from the Zookeeper command line client
26 |
27 | ```console
28 | $ docker run -it --rm --link some-zookeeper:zookeeper %%IMAGE%% zkCli.sh -server zookeeper
29 | ```
30 |
31 | ## %%COMPOSE%%
32 |
33 | This will start Zookeeper in [replicated mode](https://zookeeper.apache.org/doc/current/zookeeperStarted.html#sc_RunningReplicatedZooKeeper). Run `docker compose up` and wait for it to initialize completely. Ports `2181-2183` will be exposed.
34 |
35 | > Please be aware that setting up multiple servers on a single machine will not create any redundancy. If something were to happen which caused the machine to die, all of the zookeeper servers would be offline. Full redundancy requires that each server have its own machine. It must be a completely separate physical server. Multiple virtual machines on the same physical host are still vulnerable to the complete failure of that host.
36 |
37 | Consider using [Docker Swarm](https://www.docker.com/products/docker-swarm) when running Zookeeper in replicated mode.
38 |
39 | ## Configuration
40 |
41 | Zookeeper configuration is located in `/conf`. One way to change it is mounting your config file as a volume:
42 |
43 | ```console
44 | $ docker run --name some-zookeeper --restart always -d -v $(pwd)/zoo.cfg:/conf/zoo.cfg %%IMAGE%%
45 | ```
46 |
47 | ## Environment variables
48 |
49 | ZooKeeper's recommended defaults are used if a `zoo.cfg` file is not provided. They can be overridden using the following environment variables.
50 |
51 | ```console
52 | $ docker run -e "ZOO_INIT_LIMIT=10" --name some-zookeeper --restart always -d %%IMAGE%%
53 | ```
54 |
55 | ### `ZOO_TICK_TIME`
56 |
57 | Defaults to `2000`. ZooKeeper's `tickTime`
58 |
59 | > The length of a single tick, which is the basic time unit used by ZooKeeper, as measured in milliseconds. It is used to regulate heartbeats, and timeouts. For example, the minimum session timeout will be two ticks
60 |
61 | ### `ZOO_INIT_LIMIT`
62 |
63 | Defaults to `5`. ZooKeeper's `initLimit`
64 |
65 | > Amount of time, in ticks (see tickTime), to allow followers to connect and sync to a leader. Increase this value as needed, if the amount of data managed by ZooKeeper is large.
66 |
67 | ### `ZOO_SYNC_LIMIT`
68 |
69 | Defaults to `2`. ZooKeeper's `syncLimit`
70 |
71 | > Amount of time, in ticks (see tickTime), to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
72 |
73 | ### `ZOO_MAX_CLIENT_CNXNS`
74 |
75 | Defaults to `60`. ZooKeeper's `maxClientCnxns`
76 |
77 | > Limits the number of concurrent connections (at the socket level) that a single client, identified by IP address, may make to a single member of the ZooKeeper ensemble.
78 |
79 | ### `ZOO_STANDALONE_ENABLED`
80 |
81 | Defaults to `true`. Zookeeper's [`standaloneEnabled`](https://zookeeper.apache.org/doc/r3.5.7/zookeeperReconfig.html#sc_reconfig_standaloneEnabled)
82 |
83 | > Prior to 3.5.0, one could run ZooKeeper in Standalone mode or in a Distributed mode. These are separate implementation stacks, and switching between them during run time is not possible. By default (for backward compatibility) standaloneEnabled is set to true. The consequence of using this default is that if started with a single server the ensemble will not be allowed to grow, and if started with more than one server it will not be allowed to shrink to contain fewer than two participants.
84 |
85 | ### `ZOO_ADMINSERVER_ENABLED`
86 |
87 | Defaults to `true`. Zookeeper's [`admin.enableServer`](http://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#sc_adminserver_config)
88 |
89 | > The AdminServer is an embedded Jetty server that provides an HTTP interface to the four letter word commands. By default, the server is started on port 8080, and commands are issued by going to the URL "/commands/[command name]", e.g., http://localhost:8080/commands/stat.
90 |
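For instance, you can publish the AdminServer port and query the `stat` command over HTTP (a sketch; the host port mapping is illustrative):

```console
$ docker run --name some-zookeeper --restart always -d -p 8080:8080 %%IMAGE%%
$ curl http://localhost:8080/commands/stat
```
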
91 | ### `ZOO_AUTOPURGE_PURGEINTERVAL`
92 |
93 | Defaults to `0`. Zookeeper's [`autoPurge.purgeInterval`](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_advancedConfiguration)
94 |
95 | > The time interval in hours for which the purge task has to be triggered. Set to a positive integer (1 and above) to enable the auto purging. Defaults to 0.
96 |
97 | ### `ZOO_AUTOPURGE_SNAPRETAINCOUNT`
98 |
99 | Defaults to `3`. Zookeeper's [`autoPurge.snapRetainCount`](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_advancedConfiguration)
100 |
101 | > When enabled, ZooKeeper auto purge feature retains the autopurge.snapRetainCount most recent snapshots and the corresponding transaction logs in the dataDir and dataLogDir respectively and deletes the rest. Defaults to 3. Minimum value is 3.
102 |
103 | ### `ZOO_4LW_COMMANDS_WHITELIST`
104 |
105 | Defaults to `srvr`. Zookeeper's [`4lw.commands.whitelist`](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_clusterOptions)
106 |
107 | > A list of comma separated Four Letter Words commands that user wants to use. A valid Four Letter Words command must be put in this list else ZooKeeper server will not enable the command. By default the whitelist only contains "srvr" command which zkServer.sh uses. The rest of four letter word commands are disabled by default.
108 |
109 | ## Advanced configuration
110 |
111 | ### `ZOO_CFG_EXTRA`
112 |
113 | Not every ZooKeeper configuration setting is exposed via the environment variables listed above. These variables are only meant to cover a minimum of configuration keywords and some frequently changed options. If [mounting your custom config file](#configuration) as a volume doesn't work for you, consider using the `ZOO_CFG_EXTRA` environment variable. It lets you add arbitrary configuration parameters to the ZooKeeper configuration file. The following example shows how to enable the Prometheus metrics exporter on port `7070`:
114 |
115 | ```console
116 | $ docker run --name some-zookeeper --restart always -e ZOO_CFG_EXTRA="metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7070" %%IMAGE%%
117 | ```
118 |
119 | ### `JVMFLAGS`
120 |
121 | Many of ZooKeeper's advanced configuration options can be set through this variable using Java system properties in the form of `-Dproperty=value`. For example, you can use Netty instead of NIO (the default option) as the server communication framework:
122 |
123 | ```console
124 | $ docker run --name some-zookeeper --restart always -e JVMFLAGS="-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory" %%IMAGE%%
125 | ```
126 |
127 | See [Advanced Configuration](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_advancedConfiguration) for the full list of supported Java system properties.
128 |
129 | Another example use case for `JVMFLAGS` is setting a maximum JVM heap size of 1 GB:
130 |
131 | ```console
132 | $ docker run --name some-zookeeper --restart always -e JVMFLAGS="-Xmx1024m" %%IMAGE%%
133 | ```
134 |
135 | ## Replicated mode
136 |
137 | Environment variables below are mandatory if you want to run Zookeeper in replicated mode.
138 |
139 | ### `ZOO_MY_ID`
140 |
141 | The id must be unique within the ensemble and should have a value between 1 and 255. Do note that this variable will not have any effect if you start the container with a `/data` directory that already contains the `myid` file.
142 |
143 | ### `ZOO_SERVERS`
144 |
145 | This variable allows you to specify a list of machines of the ZooKeeper ensemble. Each entry has the form `server.id=<address1>:<port1>:<port2>[:role];[<client port address>:]<client port>` (see [ZooKeeper Dynamic Reconfiguration](https://zookeeper.apache.org/doc/r3.5.7/zookeeperReconfig.html)). Entries are separated by spaces. Do note that this variable will not have any effect if you start the container with a `/conf` directory that already contains the `zoo.cfg` file.
146 |
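For illustration, starting the first of three servers might look like this (the hostnames `zoo1`/`zoo2`/`zoo3` and `some-network` are assumed; the ports follow the client/follower/election ports listed above):

```console
$ docker run --name zoo1 --network some-network --restart always -d \
    -e ZOO_MY_ID=1 \
    -e ZOO_SERVERS="server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181" \
    %%IMAGE%%
```
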
147 | ## Where to store data
148 |
149 | This image is configured with volumes at `/data` and `/datalog` to hold the Zookeeper in-memory database snapshots and the transaction log of updates to the database, respectively.
150 |
151 | > Be careful where you put the transaction log. A dedicated transaction log device is key to consistent good performance. Putting the log on a busy device will adversely affect performance.
152 |
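For example, to keep both under directories on the host (the host paths are illustrative):

```console
$ docker run --name some-zookeeper --restart always -d \
    -v /my/own/zk-data:/data \
    -v /my/own/zk-datalog:/datalog \
    %%IMAGE%%
```
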
153 | ## How to configure logging
154 |
155 | By default, ZooKeeper redirects stdout/stderr output to the console. Since 3.8, ZooKeeper ships with [Logback](https://logback.qos.ch/) as the logging backend. The default `logback.xml` file resides in the `/conf` directory. To override the default logging configuration, mount your custom config as a volume:
156 |
157 | ```console
158 | $ docker run --name some-zookeeper --restart always -d -v $(pwd)/logback.xml:/conf/logback.xml %%IMAGE%%
159 | ```
160 |
161 | Check [ZooKeeper Logging](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_logging) for more details.
162 |
163 | ### Logging in 3.7
164 |
165 | You can redirect to a file located in `/logs` by passing environment variable `ZOO_LOG4J_PROP` as follows:
166 |
167 | ```console
168 | $ docker run --name some-zookeeper --restart always -e ZOO_LOG4J_PROP="INFO,ROLLINGFILE" %%IMAGE%%
169 | ```
170 |
171 | This will write logs to `/logs/zookeeper.log`. This image is configured with a volume at `/logs` for your convenience.
172 |
```
--------------------------------------------------------------------------------
/docker/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is Docker in Docker?
2 |
3 | Although running Docker inside Docker is generally not recommended, there are some legitimate use cases, such as development of Docker itself.
4 |
5 | *Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux, Mac OS and Windows.*
6 |
7 | > [wikipedia.org/wiki/Docker_(software)](https://en.wikipedia.org/wiki/Docker_%28software%29)
8 |
9 | %%LOGO%%
10 |
11 | Before running Docker-in-Docker, be sure to read through [Jérôme Petazzoni's excellent blog post on the subject](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/), where he outlines some of the pros and cons of doing so (and some nasty gotchas you might run into).
12 |
13 | If you are still convinced that you need Docker-in-Docker and not just access to a container's host Docker server, then read on.
14 |
15 | # How to use this image
16 |
17 | [asciinema demo](https://asciinema.org/a/378669)
18 |
19 | ## TLS
20 |
21 | Starting in 18.09+, the `dind` variants of this image will automatically generate TLS certificates in the directory specified by the `DOCKER_TLS_CERTDIR` environment variable.
22 |
23 | **Warning:** in 18.09, this behavior is disabled by default (for compatibility). If you use `--network=host`, shared network namespaces (as in Kubernetes pods), or otherwise have network access to the container (including containers started within the `dind` instance via their gateway interface), this is a potential security issue (which can lead to access to the host system, for example). It is recommended to enable TLS by setting the variable to an appropriate value (`-e DOCKER_TLS_CERTDIR=/certs` or similar). In 19.03+, this behavior is enabled by default.
24 |
25 | When enabled, the Docker daemon will be started with `--host=tcp://0.0.0.0:2376 --tlsverify ...` (and when disabled, the Docker daemon will be started with `--host=tcp://0.0.0.0:2375`).
26 |
27 | Inside the directory specified by `DOCKER_TLS_CERTDIR`, the entrypoint scripts will create/use three directories:
28 |
29 | - `ca`: the certificate authority files (`cert.pem`, `key.pem`)
30 | - `server`: the `dockerd` (daemon) certificate files (`cert.pem`, `ca.pem`, `key.pem`)
31 | - `client`: the `docker` (client) certificate files (`cert.pem`, `ca.pem`, `key.pem`; suitable for `DOCKER_CERT_PATH`)
32 |
33 | In order to make use of this functionality from a "client" container, at least the `client` subdirectory of the `$DOCKER_TLS_CERTDIR` directory needs to be shared (as illustrated in the following examples).
34 |
35 | To disable this image behavior, simply override the container command or entrypoint to run `dockerd` directly (`... %%IMAGE%%:dind dockerd ...` or `... --entrypoint dockerd %%IMAGE%%:dind ...`).
36 |
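For example, one way to skip the TLS setup entirely is to override the command so that `dockerd` runs directly (a sketch; as noted above this listens unencrypted on port 2375, so only do this on a trusted network):

```console
$ docker run --privileged --name some-docker -d \
	--network some-network --network-alias docker \
	%%IMAGE%%:dind dockerd --host=tcp://0.0.0.0:2375 --host=unix:///var/run/docker.sock
```
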
37 | ## Start a daemon instance
38 |
39 | ```console
40 | $ docker run --privileged --name some-docker -d \
41 | --network some-network --network-alias docker \
42 | -e DOCKER_TLS_CERTDIR=/certs \
43 | -v some-docker-certs-ca:/certs/ca \
44 | -v some-docker-certs-client:/certs/client \
45 | %%IMAGE%%:dind
46 | ```
47 |
48 | **Note:** `--privileged` is required for Docker-in-Docker to function properly, but it should be used with care as it provides full access to the host environment, as explained [in the relevant section of the Docker documentation](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities).
49 |
50 | ## Connect to it from a second container
51 |
52 | ```console
53 | $ docker run --rm --network some-network \
54 | -e DOCKER_TLS_CERTDIR=/certs \
55 | -v some-docker-certs-client:/certs/client:ro \
56 | %%IMAGE%%:latest version
57 | Client: Docker Engine - Community
58 | Version: 18.09.8
59 | API version: 1.39
60 | Go version: go1.10.8
61 | Git commit: 0dd43dd87f
62 | Built: Wed Jul 17 17:38:58 2019
63 | OS/Arch: linux/amd64
64 | Experimental: false
65 |
66 | Server: Docker Engine - Community
67 | Engine:
68 | Version: 18.09.8
69 | API version: 1.39 (minimum version 1.12)
70 | Go version: go1.10.8
71 | Git commit: 0dd43dd87f
72 | Built: Wed Jul 17 17:48:49 2019
73 | OS/Arch: linux/amd64
74 | Experimental: false
75 | ```
76 |
77 | ```console
78 | $ docker run -it --rm --network some-network \
79 | -e DOCKER_TLS_CERTDIR=/certs \
80 | -v some-docker-certs-client:/certs/client:ro \
81 | %%IMAGE%%:latest sh
82 | / # docker version
83 | Client: Docker Engine - Community
84 | Version: 18.09.8
85 | API version: 1.39
86 | Go version: go1.10.8
87 | Git commit: 0dd43dd87f
88 | Built: Wed Jul 17 17:38:58 2019
89 | OS/Arch: linux/amd64
90 | Experimental: false
91 |
92 | Server: Docker Engine - Community
93 | Engine:
94 | Version: 18.09.8
95 | API version: 1.39 (minimum version 1.12)
96 | Go version: go1.10.8
97 | Git commit: 0dd43dd87f
98 | Built: Wed Jul 17 17:48:49 2019
99 | OS/Arch: linux/amd64
100 | Experimental: false
101 | ```
102 |
103 | ```console
104 | $ docker run --rm --network some-network \
105 | -e DOCKER_TLS_CERTDIR=/certs \
106 | -v some-docker-certs-client:/certs/client:ro \
107 | %%IMAGE%%:latest info
108 | Containers: 0
109 | Running: 0
110 | Paused: 0
111 | Stopped: 0
112 | Images: 0
113 | Server Version: 18.09.8
114 | Storage Driver: overlay2
115 | Backing Filesystem: extfs
116 | Supports d_type: true
117 | Native Overlay Diff: true
118 | Logging Driver: json-file
119 | Cgroup Driver: cgroupfs
120 | Plugins:
121 | Volume: local
122 | Network: bridge host macvlan null overlay
123 | Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
124 | Swarm: inactive
125 | Runtimes: runc
126 | Default Runtime: runc
127 | Init Binary: docker-init
128 | containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
129 | runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
130 | init version: fec3683
131 | Security Options:
132 | apparmor
133 | seccomp
134 | Profile: default
135 | Kernel Version: 4.19.0-5-amd64
136 | Operating System: Alpine Linux v3.10 (containerized)
137 | OSType: linux
138 | Architecture: x86_64
139 | CPUs: 12
140 | Total Memory: 62.79GiB
141 | Name: e174d61a4a12
142 | ID: HJXG:3OT7:MGDL:Y2BL:WCYP:CKSP:CGAM:4BLH:NEI4:IURF:4COF:AH6N
143 | Docker Root Dir: /var/lib/docker
144 | Debug Mode (client): false
145 | Debug Mode (server): false
146 | Registry: https://index.docker.io/v1/
147 | Labels:
148 | Experimental: false
149 | Insecure Registries:
150 | 127.0.0.0/8
151 | Live Restore Enabled: false
152 | Product License: Community Engine
153 |
154 | WARNING: bridge-nf-call-iptables is disabled
155 | WARNING: bridge-nf-call-ip6tables is disabled
156 | ```
157 |
158 | ```console
159 | $ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock %%IMAGE%%:latest version
160 | Client: Docker Engine - Community
161 | Version: 18.09.8
162 | API version: 1.39
163 | Go version: go1.10.8
164 | Git commit: 0dd43dd87f
165 | Built: Wed Jul 17 17:38:58 2019
166 | OS/Arch: linux/amd64
167 | Experimental: false
168 |
169 | Server: Docker Engine - Community
170 | Engine:
171 | Version: 18.09.7
172 | API version: 1.39 (minimum version 1.12)
173 | Go version: go1.10.8
174 | Git commit: 2d0083d
175 | Built: Thu Jun 27 17:23:02 2019
176 | OS/Arch: linux/amd64
177 | Experimental: false
178 | ```
179 |
180 | ## Custom daemon flags
181 |
182 | ```console
183 | $ docker run --privileged --name some-docker -d \
184 | --network some-network --network-alias docker \
185 | -e DOCKER_TLS_CERTDIR=/certs \
186 | -v some-docker-certs-ca:/certs/ca \
187 | -v some-docker-certs-client:/certs/client \
188 | %%IMAGE%%:dind --storage-driver overlay2
189 | ```
190 |
191 | ## Runtime Settings Considerations
192 |
193 | Inspired by the [official systemd `docker.service` configuration](https://github.com/docker/docker-ce-packaging/blob/57ae892b13de399171fc33f878b70e72855747e6/systemd/docker.service#L30-L45), you may want to consider different values for the following runtime configuration options, especially for production Docker instances:
194 |
195 | ```console
196 | $ docker run --privileged --name some-docker -d \
197 | ... \
198 | --ulimit nofile=-1 \
199 | --ulimit nproc=-1 \
200 | --ulimit core=-1 \
201 | --pids-limit -1 \
202 | --oom-score-adj -500 \
203 | %%IMAGE%%:dind
204 | ```
205 |
206 | Some of these will not be supported, depending on the settings of the host's `dockerd`; for example, `--ulimit nofile=-1` may give errors that look like `error setting rlimit type 7: operation not permitted`. Others may inherit sane values from the host `dockerd` instance, or may not apply to your usage of Docker-in-Docker (for example, you likely want to set `--oom-score-adj` to a value that's higher than `dockerd` on the host so that your Docker-in-Docker instance is killed before the host Docker instance is).
207 |
208 | ## Where to Store Data
209 |
210 | Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%REPO%%` images to familiarize themselves with the options available, including:
211 |
212 | - Let Docker manage the storage of your data [by writing to disk on the host system using its own internal volume management](https://docs.docker.com/storage/volumes/). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
213 | - Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/storage/bind-mounts/). This places the files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
214 |
215 | The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
216 |
217 | 1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/var-lib-docker`.
218 | 2. Start your `%%REPO%%` container like this:
219 |
220 | ```console
221 | $ docker run --privileged --name some-docker -v /my/own/var-lib-docker:/var/lib/docker -d %%IMAGE%%:dind
222 | ```
223 |
224 | The `-v /my/own/var-lib-docker:/var/lib/docker` part of the command mounts the `/my/own/var-lib-docker` directory from the underlying host system as `/var/lib/docker` inside the container, where Docker by default will write its data files.
225 |
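If you prefer the first option (letting Docker manage the storage), the same run command works with a named volume instead of a host path; a minimal sketch, using a hypothetical volume named `some-docker-data`:

```console
$ docker run --privileged --name some-docker -v some-docker-data:/var/lib/docker -d %%IMAGE%%:dind
```
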
```
--------------------------------------------------------------------------------
/cassandra/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is Cassandra?
2 |
3 | Apache Cassandra is an open source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
4 |
5 | > [wikipedia.org/wiki/Apache_Cassandra](https://en.wikipedia.org/wiki/Apache_Cassandra)
6 |
7 | %%LOGO%%
8 |
9 | # How to use this image
10 |
11 | ## Start a `%%REPO%%` server instance
12 |
13 | Starting a Cassandra instance is simple:
14 |
15 | ```console
16 | $ docker run --name some-%%REPO%% --network some-network -d %%IMAGE%%:tag
17 | ```
18 |
19 | ... where `some-%%REPO%%` is the name you want to assign to your container and `tag` is the tag specifying the Cassandra version you want. See the list above for relevant tags.
20 |
21 | ## Make a cluster
22 |
23 | Using the environment variables documented below, there are two cluster scenarios: instances on the same machine and instances on separate machines. For the same machine, start the instance as described above. To start other instances, just tell each new node where the first is.
24 |
25 | ```console
26 | $ docker run --name some-%%REPO%%2 -d --network some-network -e CASSANDRA_SEEDS=some-%%REPO%% %%IMAGE%%:tag
27 | ```
28 |
29 | For separate machines (i.e., two VMs on a cloud provider), you need to tell Cassandra what IP address to advertise to the other nodes (since the address of the container is behind the Docker bridge).
30 |
31 | Assuming the first machine's IP address is `10.42.42.42` and the second's is `10.43.43.43`, start the first with exposed gossip port:
32 |
33 | ```console
34 | $ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 %%IMAGE%%:tag
35 | ```
36 |
37 | Then start a Cassandra container on the second machine, with the exposed gossip port and seed pointing to the first machine:
38 |
39 | ```console
40 | $ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 %%IMAGE%%:tag
41 | ```
42 |
43 | ## Connect to Cassandra from `cqlsh`
44 |
45 | The following command starts another Cassandra container instance and runs `cqlsh` (Cassandra Query Language Shell) against your original Cassandra container, allowing you to execute CQL statements against your database instance:
46 |
47 | ```console
48 | $ docker run -it --network some-network --rm %%IMAGE%% cqlsh some-%%REPO%%
49 | ```
50 |
51 | More information about CQL can be found in the [Cassandra documentation](https://cassandra.apache.org/doc/latest/cql/index.html).
52 |
53 | ## Container shell access and viewing Cassandra logs
54 |
55 | The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:
56 |
57 | ```console
58 | $ docker exec -it some-%%REPO%% bash
59 | ```
60 |
61 | The Cassandra Server log is available through Docker's container log:
62 |
63 | ```console
64 | $ docker logs some-%%REPO%%
65 | ```
66 |
67 | ## Configuring Cassandra
68 |
69 | The best way to provide configuration to the `%%REPO%%` image is to provide a custom `/etc/cassandra/cassandra.yaml` file. There are many ways to provide this file to the container (via short `Dockerfile` with `FROM` + `COPY`, via [Docker Configs](https://docs.docker.com/engine/swarm/configs/), via runtime bind-mount, etc), the details of which are left as an exercise for the reader.
70 |
71 | To use a different file name (for example, to avoid all image-provided configuration behavior), use `-Dcassandra.config=/path/to/cassandra.yaml` as an argument to the image (as in, `docker run ... %%IMAGE%% -Dcassandra.config=/path/to/cassandra.yaml`).
72 |
73 | There are a small number of environment variables supported by the image which will modify `/etc/cassandra/cassandra.yaml` in some way (note that the startup script edits YAML directly, so this is inherently fragile). A combined example follows the list.
74 |
75 | - `CASSANDRA_LISTEN_ADDRESS`: This variable is for controlling which IP address to listen for incoming connections on. The default value is `auto`, which will set the [`listen_address`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__listen_address) option in `cassandra.yaml` to the IP address of the container as it starts. This default should work in most use cases.
76 |
77 | - `CASSANDRA_BROADCAST_ADDRESS`: This variable is for controlling which IP address to advertise to other nodes. The default value is the value of `CASSANDRA_LISTEN_ADDRESS`. It will set the [`broadcast_address`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__broadcast_address) and [`broadcast_rpc_address`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__broadcast_rpc_address) options in `cassandra.yaml`.
78 |
79 | - `CASSANDRA_RPC_ADDRESS`: This variable is for controlling which address to bind the thrift rpc server to. If you do not specify an address, the wildcard address (`0.0.0.0`) will be used. It will set the [`rpc_address`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__rpc_address) option in `cassandra.yaml`.
80 |
81 | - `CASSANDRA_START_RPC`: This variable is for controlling if the thrift rpc server is started. It will set the [`start_rpc`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__start_rpc) option in `cassandra.yaml`.
82 |
83 | - `CASSANDRA_SEEDS`: This variable is the comma-separated list of IP addresses used by gossip for bootstrapping new nodes joining a cluster. It will set the `seeds` value of the [`seed_provider`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__seed_provider) option in `cassandra.yaml`. The `CASSANDRA_BROADCAST_ADDRESS` will be added to the seeds passed in so that the server will talk to itself as well.
84 |
85 | - `CASSANDRA_CLUSTER_NAME`: This variable sets the name of the cluster and must be the same for all nodes in the cluster. It will set the [`cluster_name`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__cluster_name) option of `cassandra.yaml`.
86 |
87 | - `CASSANDRA_NUM_TOKENS`: This variable sets the number of tokens for this node. It will set the [`num_tokens`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__num_tokens) option of `cassandra.yaml`.
88 |
89 | - `CASSANDRA_DC`: This variable sets the datacenter name of this node. It will set the [`dc`](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archsnitchGossipPF.html) option of `cassandra-rackdc.properties`. You must set `CASSANDRA_ENDPOINT_SNITCH` to use the ["GossipingPropertyFileSnitch"](https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archsnitchGossipPF.html) in order for Cassandra to apply `cassandra-rackdc.properties`, otherwise this variable will have no effect.
90 |
91 | - `CASSANDRA_RACK`: This variable sets the rack name of this node. It will set the [`rack`](http://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archsnitchGossipPF.html) option of `cassandra-rackdc.properties`. You must set `CASSANDRA_ENDPOINT_SNITCH` to use the ["GossipingPropertyFileSnitch"](https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archsnitchGossipPF.html) in order for Cassandra to apply `cassandra-rackdc.properties`, otherwise this variable will have no effect.
92 |
93 | - `CASSANDRA_ENDPOINT_SNITCH`: This variable sets the snitch implementation this node will use. It will set the [`endpoint_snitch`](http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html?scroll=configCassandra_yaml__endpoint_snitch) option of `cassandra.yaml`.
94 |
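As an illustration only (the cluster, datacenter, and rack names below are placeholders), several of these variables can be combined when starting a node:

```console
$ docker run --name some-%%REPO%% -d --network some-network \
    -e CASSANDRA_CLUSTER_NAME=my-cluster \
    -e CASSANDRA_DC=dc1 \
    -e CASSANDRA_RACK=rack1 \
    -e CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch \
    %%IMAGE%%:tag
```

Remember that `CASSANDRA_CLUSTER_NAME` must be identical on every node of the cluster, and that `CASSANDRA_DC`/`CASSANDRA_RACK` only take effect because `CASSANDRA_ENDPOINT_SNITCH` is set to `GossipingPropertyFileSnitch`.
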
95 | # Caveats
96 |
97 | ## Where to Store Data
98 |
99 | Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%REPO%%` images to familiarize themselves with the options available, including:
100 |
101 | - Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/storage/volumes/). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
102 | - Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/storage/bind-mounts/). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
103 |
104 | The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
105 |
106 | 1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
107 | 2. Start your `%%REPO%%` container like this:
108 |
109 | ```console
110 | $ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/cassandra -d %%IMAGE%%:tag
111 | ```
112 |
113 | The `-v /my/own/datadir:/var/lib/cassandra` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/cassandra` inside the container, where Cassandra by default will write its data files.
114 |
115 | ## No connections until Cassandra init completes
116 |
117 | If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as Docker Compose, which start several containers simultaneously.
118 |
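One way to deal with this under Docker Compose is a healthcheck that waits until `cqlsh` can actually connect; a minimal sketch, assuming a hypothetical application image `app-that-uses-cassandra` (tune the interval and retry counts to your environment):

```yaml
services:
  cassandra:
    image: %%IMAGE%%:tag
    healthcheck:
      test: ["CMD-SHELL", "cqlsh -e 'DESCRIBE KEYSPACES'"]
      interval: 15s
      timeout: 10s
      retries: 10
  app:
    image: app-that-uses-cassandra
    depends_on:
      cassandra:
        condition: service_healthy
```
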
```
--------------------------------------------------------------------------------
/geonetwork/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is GeoNetwork?
2 |
3 | GeoNetwork is a catalog application to **manage spatially referenced resources**. It provides powerful **metadata editing** and **search** functions as well as an interactive **web map viewer**.
4 |
5 | The GeoNetwork project started out in the year 2001 as a Spatial Data Catalogue System for the Food and Agriculture Organization of the United Nations (FAO), the United Nations World Food Programme (WFP) and the United Nations Environment Programme (UNEP).
6 |
7 | At present the project is widely used as the basis of **Spatial Data Infrastructures** all around the world.
8 |
9 | GeoNetwork has been developed to connect spatial information communities and their data using a modern architecture, which is at the same time powerful and low cost, based on the principles of Free and Open Source Software (FOSS) and International and Open Standards for services and protocols (e.g.: ISO/TC211, OGC).
10 |
11 | The project is part of the Open Source Geospatial Foundation ([OSGeo](http://www.osgeo.org/)) and can be found at [GeoNetwork opensource](http://www.geonetwork-opensource.org).
12 |
13 | %%LOGO%%
14 |
15 | ## How to use this image
16 |
17 | GeoNetwork 4 uses an Elasticsearch server to store the index of the documents it manages so **it can't be run without configuring the URL of the Elasticsearch server**.
18 |
19 | This is a quick example of how to get GeoNetwork 4.4 Latest up and running for demo purposes. This configuration doesn't keep the data if containers are removed.
20 |
21 | ```console
22 | docker pull elasticsearch:7.17.15
23 | docker pull %%IMAGE%%:4
24 |
25 | docker network create gn-network
26 |
27 | docker run -d --name my-es-host --network gn-network -e "discovery.type=single-node" elasticsearch:7.17.15
28 | docker run --name %%REPO%%-host --network gn-network -e GN_CONFIG_PROPERTIES="-Des.host=my-es-host -Des.protocol=http -Des.port=9200 -Des.url=http://my-es-host:9200" -p 8080:8080 %%IMAGE%%:4
29 | ```
30 |
31 | For GeoNetwork 4.2 Stable:
32 |
33 | ```console
34 | docker pull elasticsearch:7.17.15
35 | docker pull %%IMAGE%%:4.2
36 |
37 | docker network create gn-network
38 |
39 | docker run -d --name my-es-host --network gn-network -e "discovery.type=single-node" elasticsearch:7.17.15
40 | docker run --name %%REPO%%-host --network gn-network -e ES_HOST=my-es-host -e ES_PROTOCOL=http -e ES_PORT=9200 -p 8080:8080 %%IMAGE%%:4.2
41 | ```
42 |
43 | To be sure about what Elasticsearch version to use you can check the [GeoNetwork documentation](https://docs.geonetwork-opensource.org/4.4/install-guide/installing-index/) for your GN version or the `es.version` property in the [`pom.xml`](https://github.com/geonetwork/core-geonetwork/blob/main/pom.xml#L1528C17-L1528C24) file of the GeoNetwork release used.
44 |
45 | ### Default credentials
46 |
47 | After installation, use the default credentials: **`admin`** (username) and **`admin`** (password). It is recommended to update the default password after installation.
48 |
49 | ### Elasticsearch configuration
50 |
51 | #### Java properties (version 4.4.0 and newer)
52 |
53 | Since GeoNetwork 4.4.0, use Java properties passed in the `GN_CONFIG_PROPERTIES` environment variable for Elasticsearch connection configuration:
54 |
55 | - `es.host`: *optional* (default `localhost`): The host name of the Elasticsearch server.
56 | - `es.port` *optional* (default `9200`): The port the Elasticsearch server is listening on.
57 | - `es.protocol` *optional* (default `http`): The protocol used to talk to Elasticsearch. Can be `http` or `https`.
58 | - `es.url`: **mandatory if host, port or protocol aren't the default values** (default `http://localhost:9200`): Full URL of the Elasticsearch server.
59 | - `es.index.records` *optional* (default `gn_records`): In case you have more than one GeoNetwork instance using the same Elasticsearch cluster, each one needs to use a different index name. Use this variable to define the name of the index used by each GeoNetwork.
60 | - `es.username` *optional* (default empty): username used to connect to Elasticsearch.
61 | - `es.password` *optional* (default empty): password used to connect to Elasticsearch.
62 | - `kb.url` *optional* (default `http://localhost:5601`): The URL where Kibana is listening.
63 |
64 | Example Docker Compose YAML snippet:
65 |
66 | ```yaml
67 | services:
68 | %%REPO%%:
69 | image: %%IMAGE%%:4.4
70 | environment:
71 | GN_CONFIG_PROPERTIES: >-
72 | -Des.host=elasticsearch
73 | -Des.protocol=http
74 | -Des.port=9200
75 | -Des.url=http://elasticsearch:9200
76 | -Des.username=my_es_username
77 | -Des.password=my_es_password
78 | -Dkb.url=http://kibana:5601
79 | ```
80 |
81 | #### Environment variables (version 4.2 and older)
82 |
83 | For versions older than 4.4.0, configure Elasticsearch using environment variables:
84 |
85 | - `ES_HOST` **mandatory**: The host name of the Elasticsearch server.
86 | - `ES_PORT` *optional* (default `9200`): The port the Elasticsearch server is listening on.
87 | - `ES_PROTOCOL` *optional* (default `http`): The protocol used to talk to Elasticsearch. Can be `http` or `https`.
88 | - `ES_INDEX_RECORDS` *optional* (default `gn_records`): In case you have more than one GeoNetwork instance using the same Elasticsearch cluster, each one needs to use a different index name. Use this variable to define the name of the index used by each GeoNetwork.
89 | - `ES_USERNAME` *optional* (default empty): username used to connect to Elasticsearch.
90 | - `ES_PASSWORD` *optional* (default empty): password used to connect to Elasticsearch.
91 | - `KB_URL` *Optional* (default `http://localhost:5601`): The URL where Kibana is listening.
92 |
93 | ### Database configuration
94 |
95 | By default GeoNetwork uses a local **H2 database** for demo use (**not recommended for production**). The image contains JDBC drivers for PostgreSQL and MySQL. To configure the database connection use these environment variables (a PostgreSQL example follows the list):
96 |
97 | - `GEONETWORK_DB_TYPE`: The type of database to use. Valid values are `postgres`, `postgres-postgis`, `mysql`. The image can be extended to include other drivers, in which case these additional types can be used as well: `db2`, `h2`, `oracle`, `sqlserver`. The JAR drivers for these other databases need to be added to `/opt/geonetwork/WEB-INF/lib`, either by bind-mounting them or by extending the official image.
98 | - `GEONETWORK_DB_HOST`: The database host name.
99 | - `GEONETWORK_DB_PORT`: The database port.
100 | - `GEONETWORK_DB_NAME`: The database name.
101 | - `GEONETWORK_DB_USERNAME`: The username used to connect to the database.
102 | - `GEONETWORK_DB_PASSWORD`: The password used to connect to the database.
103 | - `GEONETWORK_DB_CONNECTION_PROPERTIES`: Additional properties to be added to the connection string, for example `search_path=test,public&ssl=true` will produce a JDBC connection string like `jdbc:postgresql://localhost:5432/postgres?search_path=test,public&ssl=true`
104 |
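For example, to point GeoNetwork at a PostgreSQL server (the host names, database name and credentials below are placeholders; the Elasticsearch properties are still required, as described above):

```console
docker run --name %%REPO%%-host --network gn-network -p 8080:8080 \
  -e GEONETWORK_DB_TYPE=postgres \
  -e GEONETWORK_DB_HOST=my-postgres-host \
  -e GEONETWORK_DB_PORT=5432 \
  -e GEONETWORK_DB_NAME=geonetwork \
  -e GEONETWORK_DB_USERNAME=geonetwork \
  -e GEONETWORK_DB_PASSWORD=mysecretpassword \
  -e GN_CONFIG_PROPERTIES="-Des.host=my-es-host -Des.protocol=http -Des.port=9200 -Des.url=http://my-es-host:9200" \
  %%IMAGE%%:4
```
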
105 | ### Start GeoNetwork
106 |
107 | This command will start a Debian-based container, running a Tomcat (GN 3) or Jetty (GN 4) web server, with a GeoNetwork WAR deployed on the server:
108 |
109 | ```console
110 | docker run --name some-%%REPO%% -d %%IMAGE%%
111 | ```
112 |
113 | ### Publish port
114 |
115 | GeoNetwork listens on port `8080`. If you want to access the container from the host, **you must publish this port**. For instance, this will redirect all container traffic on port 8080 to the same port on the host:
116 |
117 | ```console
118 | docker run --name some-%%REPO%% -d -p 8080:8080 %%IMAGE%%
119 | ```
120 |
121 | Then, if you are running Docker on Linux, you may access GeoNetwork at http://localhost:8080/geonetwork.
122 |
123 | ### Set the data directory and H2 db file
124 |
125 | The data directory is the location on the file system where the catalog stores much of its custom configuration and uploaded files. It is also where it stores a number of support files, used for various purposes (e.g.: spatial index, thumbnails). The default variant also uses a local H2 database to store the metadata catalog itself.
126 |
127 | By default, GeoNetwork sets the data directory to `/opt/geonetwork/WEB-INF/data` and the H2 database file to the Jetty directory `/var/lib/jetty/gn.h2.db` (since GN 4.0.0) or the Tomcat directory `/usr/local/tomcat/gn.h2.db` (for GN 3), but you may override these values by injecting environment variables into the container: `-e DATA_DIR=...` (defaults to `/opt/geonetwork/WEB-INF/data`) and `-e GEONETWORK_DB_NAME=...` (defaults to `gn`, which sets up the database `gn.h2.db` in the Tomcat bin directory `/usr/local/tomcat`). Note that setting the database location via `GEONETWORK_DB_NAME` only works from version 3.10.3 onwards.
128 |
129 | Since version 4.4.0 the data directory needs to be configured using Java properties passed in the `GN_CONFIG_PROPERTIES` environment variable. For example:
130 |
131 | ```console
132 | docker run --name some-%%REPO%% -d -p 8080:8080 -e GN_CONFIG_PROPERTIES="-Dgeonetwork.dir=/catalogue-data" -e GEONETWORK_DB_NAME=/catalogue-data/db/gn %%IMAGE%%
133 | ```
134 |
135 | ### Persisting data
136 |
137 | To set the data directory to `/catalogue-data/data` and H2 database file to `/catalogue-data/db/gn.h2.db` so they both persist through restarts:
138 |
139 | - GeoNetwork 4.2 and older
140 |
141 | ```console
142 | docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/catalogue-data/data -e GEONETWORK_DB_NAME=/catalogue-data/db/gn %%IMAGE%%:3
143 | ```
144 |
145 | - Since GeoNetwork 4.4.0
146 |
147 | ```console
148 | docker run --name some-%%REPO%% -d -p 8080:8080 -e GN_CONFIG_PROPERTIES="-Dgeonetwork.dir=/catalogue-data" -e GEONETWORK_DB_NAME=/catalogue-data/db/gn %%IMAGE%%
149 | ```
150 |
151 | If you want the data directory to live beyond restarts, or even destruction of the container, you can mount a directory from the Docker engine's host into the container: `-v /host/path:/path/to/data/directory`. For instance, this will mount the host directory `/host/%%REPO%%-docker` into `/catalogue-data` in the container:
152 |
153 | - GeoNetwork 4.2 and older
154 |
155 | ```console
156 | docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/catalogue-data/data -e GEONETWORK_DB_NAME=/catalogue-data/db/gn -v /host/%%REPO%%-docker:/catalogue-data %%IMAGE%%:3
157 | ```
158 |
159 | - GeoNetwork 4.4.0 and newer
160 |
161 | ```console
162 | docker run --name some-%%REPO%% -d -p 8080:8080 -e GN_CONFIG_PROPERTIES="-Dgeonetwork.dir=/catalogue-data" -e GEONETWORK_DB_NAME=/catalogue-data/db/gn -v /host/%%REPO%%-docker:/catalogue-data %%IMAGE%%
163 | ```
164 |
165 | ### %%COMPOSE%%
166 |
167 | Run `docker compose up`, wait for it to initialize completely, and visit `http://localhost:8080/geonetwork` or `http://host-ip:8080/geonetwork` (as appropriate).
168 |
169 | ### Default credentials
170 |
171 | After installation a default user with the name `admin` and password `admin` is created. Use these credentials to get started. It is recommended to update the default password after installation.
172 |
```
--------------------------------------------------------------------------------
/couchdb/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is Apache CouchDB?
2 |
3 | Apache CouchDB™ lets you access your data where you need it by defining the Couch Replication Protocol that is implemented by a variety of projects and products that span every imaginable computing environment, from globally distributed server clusters to mobile phones and web browsers. Software that is compatible with the Couch Replication Protocol includes PouchDB and Cloudant.
4 |
5 | Store your data safely, on your own servers, or with any leading cloud provider. Your web- and native applications love CouchDB, because it speaks JSON natively and supports binary for all your data storage needs. The Couch Replication Protocol lets your data flow seamlessly between server clusters to mobile phones and web browsers, enabling a compelling, offline-first user-experience while maintaining high performance and strong reliability. CouchDB comes with a developer-friendly query language, and optionally MapReduce for simple, efficient, and comprehensive data retrieval.
6 |
7 | > [couchdb.apache.org](https://couchdb.apache.org)
8 |
9 | %%LOGO%%
10 |
11 | # How to use this image
12 |
13 | ## Start a CouchDB instance
14 |
15 | Starting a CouchDB instance is simple:
16 |
17 | ```console
18 | $ docker run -d --name my-couchdb %%IMAGE%%:tag
19 | ```
20 |
21 | where `my-couchdb` is the name you want to assign to your container, and `tag` is the tag specifying the CouchDB version you want. See the list above for relevant tags.
22 |
23 | ## Connect to CouchDB from an application in another Docker container
24 |
25 | This image exposes the standard CouchDB port `5984`, so standard container linking will make it automatically available to the linked containers. Start your application container like this in order to link it to the CouchDB container:
26 |
27 | ```console
28 | $ docker run --name my-couchdb-app --link my-%%REPO%%:%%REPO%% -d app-that-uses-couchdb
29 | ```
30 |
31 | ## Exposing CouchDB to the outside world
32 |
33 | If you want to expose the port to the outside world, run
34 |
35 | ```console
36 | $ docker run -p 5984:5984 -d %%IMAGE%%
37 | ```
38 |
39 | *WARNING*: Do not do this until you have established an admin user and set up permissions correctly on any databases you have created.
40 |
41 | If you intend to network this CouchDB instance with others in a cluster, you will need to map additional ports; see the [official CouchDB documentation](http://docs.couchdb.org/en/stable/setup/cluster.html) for details.
42 |
43 | ## Make a cluster
44 |
45 | Start your multiple CouchDB instances, then follow the Setup Wizard in the [official CouchDB documentation](http://docs.couchdb.org/en/stable/setup/cluster.html) to complete the process.
46 |
47 | For a CouchDB cluster you need to provide the `NODENAME` setting as well as the erlang cookie. Settings to Erlang can be made with the environment variable `ERL_FLAGS`, e.g. `ERL_FLAGS=-setcookie "brumbrum"`. Further information can be found [here](http://docs.couchdb.org/en/stable/cluster/setup.html).
48 |
49 | There is also a [Kubernetes helm chart](https://github.com/helm/charts/tree/master/incubator/couchdb) available.
50 |
51 | ## Container shell access, `remsh`, and viewing logs
52 |
53 | The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:
54 |
55 | ```console
56 | $ docker exec -it my-%%REPO%% bash
57 | ```
58 |
59 | If you need direct access to the Erlang runtime:
60 |
61 | ```console
62 | $ docker exec -it my-%%REPO%% /opt/couchdb/bin/remsh
63 | ```
64 |
65 | The CouchDB log is available through Docker's container log:
66 |
67 | ```console
68 | $ docker logs my-%%REPO%%
69 | ```
70 |
71 | ## Configuring CouchDB
72 |
73 | The best way to provide configuration to the `%%REPO%%` image is to provide a custom `ini` file to CouchDB, preferably stored in the `/opt/couchdb/etc/local.d/` directory. There are many ways to provide this file to the container (via short `Dockerfile` with `FROM` + `COPY`, via [Docker Configs](https://docs.docker.com/engine/swarm/configs/), via runtime bind-mount, etc), the details of which are left as an exercise for the reader.
74 |
75 | Keep in mind that run-time reconfiguration of CouchDB will overwrite the [last file in the configuration chain](http://docs.couchdb.org/en/stable/config/intro.html#configuration-files), and that this Docker container creates the `/opt/couchdb/etc/local.d/docker.ini` file at startup.
76 |
77 | CouchDB also uses `/opt/couchdb/etc/vm.args` to store Erlang runtime-specific changes. Changing these values is less common. If you need to change the epmd port, for instance, you will want to bind mount this file as well. (Note: files cannot be bind-mounted on Windows hosts.)
78 |
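For example, a bind mount of a customized `vm.args` might look like this (the host path is only an example):

```console
$ docker run -d --name my-%%REPO%% -v /home/couchdb/vm.args:/opt/couchdb/etc/vm.args %%IMAGE%%
```
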
79 | In addition, a few environment variables are provided to set very common parameters (a combined example follows this list):
80 |
81 | - `COUCHDB_USER` and `COUCHDB_PASSWORD` will create an ini-file based local admin user with the given username and password in the file `/opt/couchdb/etc/local.d/docker.ini`.
82 | - `COUCHDB_SECRET` will set the CouchDB shared cluster secret value, in the file `/opt/couchdb/etc/local.d/docker.ini`.
83 | - `NODENAME` will set the name of the CouchDB node inside the container to `couchdb@${NODENAME}`, in the file `/opt/couchdb/etc/vm.args`. This is used for clustering purposes and can be ignored for single-node setups.
84 | - Erlang environment variables like `ERL_FLAGS` will be used by Erlang itself. For a complete list, have a look [here](http://erlang.org/doc/man/erl.html#environment-variables).
85 |
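As an illustration, several of these can be combined when starting a node that is intended to join a cluster (the secret, cookie and node name below are placeholders):

```console
$ docker run -d --name my-%%REPO%% \
    -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password \
    -e COUCHDB_SECRET=mysecret \
    -e NODENAME=couchdb0.example.com \
    -e ERL_FLAGS='-setcookie "brumbrum"' \
    %%IMAGE%%
```
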
86 | # Caveats
87 |
88 | ## Where to Store Data
89 |
90 | Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%REPO%%` images to familiarize themselves with the options available, including:
91 |
92 | - Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/storage/volumes/). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
93 | - Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/storage/bind-mounts/). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
94 |
95 | The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
96 |
97 | 1. Create a data directory on a suitable volume on your host system, e.g. `/home/couchdb/data`.
98 | 2. Start your `%%REPO%%` container like this:
99 |
100 | ```console
101 | $ docker run --name some-%%REPO%% -v /home/couchdb/data:/opt/couchdb/data -d %%IMAGE%%:tag
102 | ```
103 |
104 | The `-v /home/couchdb/data:/opt/couchdb/data` part of the command mounts the `/home/couchdb/data` directory from the underlying host system as `/opt/couchdb/data` inside the container, where CouchDB by default will write its data files.
105 |
106 | ## No system databases until the installation is finalized
107 |
108 | Please note that CouchDB no longer autocreates system databases for you, as it is not known at startup time if this is a single-node or clustered CouchDB installation. In a cluster, the databases must only be created once all nodes have been joined together.
109 |
110 | If you use the [Cluster Setup Wizard](http://docs.couchdb.org/en/stable/setup/cluster.html#the-cluster-setup-wizard) or the [Cluster Setup API](http://docs.couchdb.org/en/stable/setup/cluster.html#the-cluster-setup-api), these databases will be created for you when you complete the process.
111 |
112 | If you choose not to use the Cluster Setup wizard or API, you will have to create `_global_changes`, `_replicator` and `_users` manually.
113 |
114 | ## Admin party mode
115 |
116 | The node will also start in [admin party mode](https://docs.couchdb.org/en/stable/intro/security.html#the-admin-party). Be sure to create an admin user! The [Cluster Setup Wizard](http://docs.couchdb.org/en/stable/setup/cluster.html#the-cluster-setup-wizard) or the [Cluster Setup API](http://docs.couchdb.org/en/stable/setup/cluster.html#the-cluster-setup-api) will do this for you.
117 |
118 | You can also use the two environment variables `COUCHDB_USER` and `COUCHDB_PASSWORD` to set up an admin user:
119 |
120 | ```console
121 | $ docker run -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -d %%IMAGE%%
122 | ```
123 |
124 | Note that if you are setting up a clustered CouchDB, you will want to pre-hash this password and use the identical hashed text across all nodes to ensure sessions work correctly when a load balancer is placed in front of the cluster. Hashing can be accomplished by running the container with the `/opt/couchdb/etc/local.d` directory mounted as a volume, allowing CouchDB to hash the password you set, then copying out the hashed version and using this value in the future.
125 |
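A minimal sketch of that workflow (the host path is only an example; after CouchDB starts, the plaintext password in `docker.ini` is replaced with its hash, which you can then copy to the other nodes):

```console
$ docker run -d --name couchdb-hash \
    -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password \
    -v /home/couchdb/etc:/opt/couchdb/etc/local.d %%IMAGE%%
$ grep -A1 '^\[admins\]' /home/couchdb/etc/docker.ini
```
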
126 | ## Using a persistent CouchDB configuration file
127 |
128 | The CouchDB configuration is specified in `.ini` files in `/opt/couchdb/etc`. Take a look at the [CouchDB configuration documentation](http://docs.couchdb.org/en/stable/config/index.html) to learn more about CouchDB's configuration structure.
129 |
130 | If you want to use a customized CouchDB configuration, you can create your configuration file in a directory on the host machine and then mount that directory as `/opt/couchdb/etc/local.d` inside the `%%REPO%%` container.
131 |
132 | ```console
133 | $ docker run --name my-couchdb -v /home/couchdb/etc:/opt/couchdb/etc/local.d -d %%IMAGE%%
134 | ```
135 |
136 | The `-v /home/couchdb/etc:/opt/couchdb/etc/local.d` part of the command mounts the `/home/couchdb/etc` directory from the underlying host system as `/opt/couchdb/etc/local.d` inside the container, where CouchDB by default will write its dynamic configuration files.
137 |
138 | You can also use `couchdb` as the base image for your own couchdb instance and provide your own version of the `local.ini` config file:
139 |
140 | Example Dockerfile:
141 |
142 | ```dockerfile
143 | FROM %%IMAGE%%
144 |
145 | COPY local.ini /opt/couchdb/etc/
146 | ```
147 |
148 | and then build and run
149 |
150 | ```console
151 | $ docker build -t you/awesome-couchdb .
152 | $ docker run -d -p 5984:5984 you/awesome-couchdb
153 | ```
154 |
155 | Remember that, with this approach, any newly written changes will still appear in the `/opt/couchdb/etc/local.d` directory, so it is still recommended to map this to a host path for persistence.
156 |
157 | ## Logging
158 |
159 | By default containers run from this image only log to `stdout`. You can enable logging to file in the [configuration](http://docs.couchdb.org/en/2.1.0/config/logging.html).
160 |
161 | For example in `local.ini`:
162 |
163 | ```ini
164 | [log]
165 | writer = file
166 | file = /opt/couchdb/log/couch.log
167 | ```
168 |
169 | It is recommended to then mount this path to a directory on the host, as CouchDB logging can be quite voluminous.
170 |
```
--------------------------------------------------------------------------------
/arangodb/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is ArangoDB?
2 |
3 | ArangoDB is a scalable graph database system for driving value from connected data, faster. It combines native graphs, an integrated search engine, and JSON support, via a single query language.
4 |
5 | ArangoDB runs everywhere: On-prem, in the cloud, and as a managed cloud service: [ArangoGraph Insights Platform](https://cloud.arangodb.com/home).
6 |
7 | > [arangodb.com](https://arangodb.com)
8 |
9 | %%LOGO%%
10 |
11 | ## Key Features in ArangoDB
12 |
13 | **Native Graph** Store both data and relationships, for faster queries even with multiple levels of joins and deeper insights that simply aren't possible with traditional relational and document database systems.
14 |
15 | **Document Store** Every node in your graph is a JSON document: flexible, extensible, and easily imported from your existing document database.
16 |
17 | **ArangoSearch** Natively integrated cross-platform indexing, text-search and ranking engine for information retrieval, optimized for speed and memory.
18 |
19 | #### ArangoDB Documentation
20 |
21 | - [Learn ArangoDB](https://arangodb.com/learn/)
22 | - [Documentation](https://docs.arangodb.com/)
23 |
24 | ## How to use this image
25 |
26 | ### Start an ArangoDB instance
27 |
28 | In order to start an ArangoDB instance, run:
29 |
30 | ```console
31 | docker run -d -p 8529:8529 -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name arangodb-instance %%IMAGE%%
32 | ```
33 |
34 | Docker chooses the processor architecture for the image that matches your host CPU by default. If this is not the case, for example, because you have the `DOCKER_DEFAULT_PLATFORM` environment variable set to a different architecture, you can pass the `--platform` flag to the `docker run` command to specify the appropriate operating system and architecture for the container. For x86-64, use `linux/amd64`. On ARM, especially Apple silicon with no emulation for the required AVX instruction set extension, use `linux/arm64/v8`:
35 |
36 | ```console
37 | docker run -d -p 8529:8529 -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name arangodb-instance --platform linux/arm64/v8 %%IMAGE%%
38 | ```
39 |
40 | This creates and launches the %%IMAGE%% Docker instance as a background process. The identifier of the container is printed. By default, ArangoDB listens on port `8529` for requests.
41 |
42 | In order to get the IP ArangoDB listens on, run:
43 |
44 | ```console
45 | docker inspect --format '{{ .NetworkSettings.IPAddress }}' arangodb-instance
46 | ```
47 |
48 | ### Initialize the server language
49 |
50 | When using Docker, you need to specify the language you want to initialize the server to on the first run in one of the following ways:
51 |
52 | - Set the environment variable `LANG` to a locale in the `docker run` command, e.g. `-e LANG=sv` for a Swedish locale.
53 |
54 | - Use an `arangod.conf` configuration file that sets a language and mount it into the container. For example, create a configuration file on your host system in which you set `icu-language = sv` at the top (before any `[section]`) and then mount the file over the default configuration file like `docker run -v /your-host-path/arangod.conf:/etc/arangodb3/arangod.conf ...`.
55 |
56 | Note that you cannot set the language using only a startup option on the command-line, like `docker run ... %%IMAGE%% --icu-language sv`.
57 |
58 | If you don't specify a language explicitly, the default is `en_US` up to ArangoDB v3.11 and `en_US_POSIX` from ArangoDB v3.12 onward.
59 |
60 | ### Using the instance
61 |
62 | To use the running instance from an application, link the container:
63 |
64 | ```console
65 | docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name my-app --link arangodb-instance:db-link %%IMAGE%%
66 | ```
67 |
68 | This uses the instance named `arangodb-instance` and links it into the application container. The application container contains environment variables, which can be used to access the database.
69 |
70 | DB_LINK_PORT_8529_TCP=tcp://172.17.0.17:8529
71 | DB_LINK_PORT_8529_TCP_ADDR=172.17.0.17
72 | DB_LINK_PORT_8529_TCP_PORT=8529
73 | DB_LINK_PORT_8529_TCP_PROTO=tcp
74 | DB_LINK_NAME=/naughty_ardinghelli/db-link
75 |
76 | ### Exposing the port to the outside world
77 |
78 | If you want to expose the port to the outside world, run:
79 |
80 | ```console
81 | docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d %%IMAGE%%
82 | ```
83 |
84 | ArangoDB listens on port 8529 for requests, and the image includes `EXPOSE 8529`.
85 | The `-p 8529:8529` publishes this port on the host.
86 |
87 | ### Choosing an authentication method
88 |
89 | The ArangoDB image provides several authentication methods, which can be specified via environment variables (`-e`) when using `docker run`:
90 |
91 | 1. `ARANGO_RANDOM_ROOT_PASSWORD=1`
92 |
93 | Generate a random root password when starting. The password will be printed to stdout (may be inspected later using `docker logs`)
94 |
95 | 2. `ARANGO_NO_AUTH=1`
96 |
97 | Disable authentication. Useful for testing.
98 |
99 | **WARNING** Doing so in production will expose all your data. Make sure that ArangoDB is not directly accessible from the internet!
100 |
101 | 3. `ARANGO_ROOT_PASSWORD=somepassword`
102 |
103 | Specify your own root password.
104 |
105 | Note: this way of specifying logins only applies to single-server installations. With clusters you have to provision the users via the root user (with an empty password) once the system is up.
106 |
107 | ### Command line options
108 |
109 | You can pass arguments to the ArangoDB server by appending them to the end of the Docker command:
110 |
111 | ```console
112 | docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 %%IMAGE%% --help
113 | ```
114 |
115 | The entrypoint script starts the `arangod` binary by default and forwards your arguments.
116 |
117 | You may also start other binaries, such as the ArangoShell:
118 |
119 | ```console
120 | docker run -it %%IMAGE%% arangosh --server.database myDB ...
121 | ```
122 |
123 | Note that you need to set up networking for containers if `arangod` runs in one container and you want to access it with `arangosh` running in another container. It is easier to execute it in the same container instead. Use `docker ps` to find out the container ID / name of a running container:
124 |
125 | ```console
126 | docker ps
127 | ```
128 |
129 | It prints something similar to the following:
130 |
131 | ```console
132 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
133 | 1234567890ab arangodb "/entrypoint.sh aran…" 2 hours ago Up 2 hours 0.0.0.0:8529->8529/tcp jolly_joker
134 | ```
135 |
136 | Then use `docker exec` and the ID / name to run something inside of the existing container:
137 |
138 | ```console
139 | docker exec -it jolly_joker arangosh
140 | ```
141 |
142 | For more information, see the ArangoDB documentation about [Configuration](https://docs.arangodb.com/stable/operations/administration/configuration/).
143 |
144 | ### Limiting resource utilization
145 |
146 | `arangod` checks the following environment variables, which can be used to restrict how much memory and how many CPU cores it should use (a combined example follows the list):
147 |
148 | - `ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY` *(introduced in v3.6.3)*
149 |
150 | This variable can be used to override the automatic detection of the total amount of RAM present on the system. One can specify a decimal number (in bytes). Furthermore, if `G` or `g` is appended, the value is multiplied by `2^30`. If `M` or `m` is appended, the value is multiplied by `2^20`. If `K` or `k` is appended, the value is multiplied by `2^10`. That is, `64G` means 64 gigabytes.
151 |
152 | The total amount of RAM detected is logged as an INFO message at server start. If the variable is set, the overridden value is shown. Various default sizes are calculated based on this value (e.g. RocksDB buffer cache size).
153 |
154 | Setting this option can in particular be useful in two cases:
155 |
156 | 1. If `arangod` is running in a container and its cgroup has a RAM limitation, then one should specify this limitation in this environment variable, since it is currently not automatically detected.
157 | 2. If `arangod` is running alongside other services on the same machine and thus sharing the RAM with them, one should limit the amount of memory using this environment variable.
158 |
159 | - `ARANGODB_OVERRIDE_DETECTED_NUMBER_OF_CORES` *(introduced in v3.7.1)*
160 |
161 | This variable can be used to override the automatic detection of the number of CPU cores present on the system.
162 |
163 | The number of CPU cores detected is logged as an INFO message at server start. If the variable is set, the overridden value is shown. Various default values for threading are calculated based on this value.
164 |
165 | Setting this option is useful if `arangod` is running in a container or alongside other services on the same machine and shall not use all available CPUs.
166 |
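A sketch of combining these overrides with Docker's own resource limits (the values are examples only):

```console
docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d \
  --memory=4g --cpus=2 \
  -e ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY=4G \
  -e ARANGODB_OVERRIDE_DETECTED_NUMBER_OF_CORES=2 \
  %%IMAGE%%
```
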
167 | ## Persistent Data
168 |
169 | ArangoDB supports two different storage engines from version 3.2 to 3.6. You can choose between them when instantiating the container with the environment variable `ARANGO_STORAGE_ENGINE`. With `mmfiles` you choose the classic storage engine (not available in 3.7 and later); `rocksdb` chooses the storage engine based on [RocksDB](http://rocksdb.org/). The default choice is `rocksdb` from version 3.4 on.
170 |
171 | ArangoDB uses the volume `/var/lib/arangodb3` as database directory to store the collection data and the volume `/var/lib/arangodb3-apps` as apps directory to store any extensions. These directories are marked as docker volumes.
172 |
173 | See `docker inspect --format "{{ .Config.Volumes }}" %%IMAGE%%` for all volumes.
174 |
175 | A good explanation about persistence and docker container can be found here: [Docker In-depth: Volumes](http://container42.com/2014/11/03/docker-indepth-volumes/), [Why Docker Data Containers are Good](https://medium.com/@ramangupta/why-docker-data-containers-are-good-589b3c6c749e)
176 |
177 | ### Using host directories
178 |
179 | You can map the container's volumes to a directory on the host, so that the data is kept between runs of the container. The path `/tmp/arangodb` used here is in general not the correct place to store your persistent files - it is just an example!
180 |
181 | ```console
182 | unix> mkdir /tmp/arangodb
183 | unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d \
184 | -v /tmp/arangodb:/var/lib/arangodb3 \
185 | %%IMAGE%%
186 | ```
187 |
188 | This will use the `/tmp/arangodb` directory of the host as database directory for ArangoDB inside the container.
189 |
190 | ### Using a data container
191 |
192 | Alternatively you can create a container holding the data.
193 |
194 | ```console
195 | docker create --name arangodb-persist %%IMAGE%% true
196 | ```
197 |
198 | Then use this data container with your ArangoDB container.
199 |
200 | ```console
201 | docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --volumes-from arangodb-persist -p 8529:8529 %%IMAGE%%
202 | ```
203 |
204 | If you want to save a few bytes you can alternatively use [busybox](https://hub.docker.com/_/busybox) or [alpine](https://hub.docker.com/_/alpine) to create the volume-only container. Please note that you need to specify the volumes to use in this case. For example:
205 |
206 | ```console
207 | docker run -d --name arangodb-persist -v /var/lib/arangodb3 busybox true
208 | ```
209 |
210 | ### Usage as a base image
211 |
212 | If you are using the image as a base image, please make sure to wrap any `CMD` in the [exec](https://docs.docker.com/reference/dockerfile/#cmd) form. Otherwise the default entrypoint will not do its bootstrapping work.
213 |
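A minimal sketch of a derived image (the extra startup option is only illustrative):

```dockerfile
FROM %%IMAGE%%

# exec form, so the image's default entrypoint still runs and performs its bootstrapping
CMD ["arangod", "--server.statistics", "true"]
```
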
214 | When deriving the image, you can control the instantiation via putting files into `/docker-entrypoint-initdb.d/`.
215 |
216 | - `*.sh` - files ending with .sh will be run as a bash shellscript.
217 | - `*.js` - files will be executed with arangosh. You can specify additional arangosh arguments via the `ARANGOSH_ARGS` environment variable.
218 | - `dumps/` - in this directory you can place subdirectories containing database dumps generated using [arangodump](https://docs.arangodb.com/stable/components/tools/arangodump/). They can be restored using [arangorestore](https://docs.arangodb.com/stable/components/tools/arangorestore/).
219 |
```
--------------------------------------------------------------------------------
/aerospike/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # Aerospike Database Docker Images
2 |
3 | ## What is Aerospike?
4 |
5 | [Aerospike](http://aerospike.com) is a distributed NoSQL database purposefully designed for high performance web scale applications. Aerospike supports key-value and document data models, and has multiple data types including List, Map, HyperLogLog, GeoJSON, and Blob. Aerospike's patented hybrid memory architecture delivers predictable high performance at scale and high data density per node.
6 |
7 | %%LOGO%%
8 |
9 | ## Getting Started
10 |
11 | Aerospike Enterprise Edition requires a feature key file to start and to ungate certain features in the database, such as compression. Enterprise customers can use their production or development keys.
12 |
13 | Anyone can [sign up](https://www.aerospike.com/lp/try-now/) to get an evaluation feature key file for a full-featured, single-node Aerospike Enterprise Edition.
14 |
15 | Aerospike Community Edition supports the same developer APIs as Aerospike Enterprise Edition, and differs in ease of operation and enterprise features. See the [product matrix](https://www.aerospike.com/products/product-matrix/) for more.
16 |
17 | ### Running an Aerospike EE node with a feature key file in a mapped directory
18 |
19 | ```console
20 | docker run -d -v DIR:/opt/aerospike/etc/ -e "FEATURE_KEY_FILE=/opt/aerospike/etc/features.conf" --name aerospike -p 3000-3002:3000-3002 %%IMAGE%%:ee-[version]
21 | ```
22 |
23 | Above, *DIR* is a directory on your machine where you drop your feature key file. Make sure Docker Desktop has file sharing permission to bind mount it into Docker containers.
24 |
25 | ### Running a node with a feature key file in an environment variable
26 |
27 | ```console
28 | FEATKEY=$(base64 ~/Desktop/evaluation-features.conf)
29 | docker run -d -e "FEATURES=$FEATKEY" -e "FEATURE_KEY_FILE=env-b64:FEATURES" --name aerospike -p 3000-3002:3000-3002 %%IMAGE%%:ee-[version]
30 | ```
31 |
32 | ### Running an Aerospike CE node
33 |
34 | ```console
35 | docker run -d --name aerospike -p 3000-3002:3000-3002 %%IMAGE%%:ce-[version]
36 | ```
37 |
38 | ## Advanced Configuration
39 |
40 | The Aerospike Docker image has a default configuration file template that can be populated with individual configuration parameters, as we did before with `FEATURE_KEY_FILE`. Alternatively, it can be replaced with a custom configuration file.
41 |
42 | The following sections describe both advanced options.
43 |
44 | ### Injecting configuration parameters
45 |
46 | You can inject parameters into the configuration template using container-side environment variables with the `-e` flag.
47 |
48 | For example, to set the default [namespace](https://www.aerospike.com/docs/architecture/data-model.html) name to *demo*:
49 |
50 | ```console
51 | docker run -d --name aerospike -e "NAMESPACE=demo" -p 3000-3002:3000-3002 -v /my/dir:/opt/aerospike/etc/ -e "FEATURE_KEY_FILE=/opt/aerospike/etc/features.conf" %%IMAGE%%:ee-[version]
52 | ```
53 |
54 | Injecting configuration parameters into the configuration template isn't compatible with using a custom configuration file. You can use one or the other.
55 |
56 | #### List of template variables
57 |
58 | - `FEATURE_KEY_FILE` - the [`feature_key_file`](https://www.aerospike.com/docs/reference/configuration/index.html#feature-key-file) is only required for the EE image. Default: /etc/aerospike/features.conf
59 | - `SERVICE_THREADS` - the [`service_threads`](https://www.aerospike.com/docs/reference/configuration/index.html#service-threads). Default: Number of vCPUs
60 | - `LOGFILE` - the [`file`](https://www.aerospike.com/docs/reference/configuration/index.html#file) param of the `logging` context. Default: /dev/null, do not log to file, log to stdout
61 | - `SERVICE_ADDRESS` - the bind [`address`](https://www.aerospike.com/docs/reference/configuration/index.html#address) of the `networking.service` subcontext. Default: any
62 | - `SERVICE_PORT` - the [`port`](https://www.aerospike.com/docs/reference/configuration/index.html#port) of the `networking.service` subcontext. Default: 3000
63 | - `HB_ADDRESS` - the `networking.heartbeat` [`address`](https://www.aerospike.com/docs/reference/configuration/index.html#address) for cross cluster mesh. Default: any
64 | - `HB_PORT` - the [`port`](https://www.aerospike.com/docs/reference/configuration/index.html#port) for `networking.heartbeat` communications. Default: 3002
65 | - `FABRIC_ADDRESS` - the [`address`](https://www.aerospike.com/docs/reference/configuration/index.html#address) of the `networking.fabric` subcontext. Default: any
66 | - `FABRIC_PORT` - the [`port`](https://www.aerospike.com/docs/reference/configuration/index.html#port) of the `networking.fabric` subcontext. Default: 3001
67 |
68 | The single preconfigured namespace is [in-memory with filesystem persistence](https://www.aerospike.com/docs/operations/configure/namespace/storage/#recipe-for-a-hdd-storage-engine-with-data-in-index-engine). A combined example follows the list of its template variables below.
69 |
70 | - `NAMESPACE` - the name of the namespace. Default: test
71 | - `REPL_FACTOR` - the namespace [`replication-factor`](https://www.aerospike.com/docs/reference/configuration/index.html#replication-factor). Default: 2
72 | - `MEM_GB` - the namespace [`memory-size`](https://www.aerospike.com/docs/reference/configuration/index.html#memory-size). Default: 1, the unit is always `G` (GB)
73 | - `DEFAULT_TTL` - the namespace [`default-ttl`](https://www.aerospike.com/docs/reference/configuration/index.html#default-ttl). Default: 30d
74 | - `STORAGE_GB` - the namespace persistence `file` size. Default: 4, the unit is always `G` (GB)
75 | - `NSUP_PERIOD` - the namespace [`nsup-period`](https://www.aerospike.com/docs/reference/configuration/index.html#nsup-period), in seconds. Default: 120
76 |
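For example, to adjust the preconfigured namespace when running Community Edition (the values are illustrative only):

```console
docker run -d --name aerospike -p 3000-3002:3000-3002 -e "NAMESPACE=demo" -e "MEM_GB=2" -e "STORAGE_GB=8" -e "DEFAULT_TTL=0" %%IMAGE%%:ce-[version]
```
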
77 | ### Using a custom configuration file
78 |
79 | You can override the use of the configuration file template by providing your own aerospike.conf, as described in [Configuring Aerospike Database](https://www.aerospike.com/docs/operations/configure/index.html).
80 |
81 | You should first `-v` map a local directory, which Docker will bind mount. Next, drop your aerospike.conf file into this directory. Finally, use the `--config-file` option to tell Aerospike where in the container the configuration file is (the default path is /etc/aerospike/aerospike.conf). Remember that the feature key file is required, so use `feature-key-file` in your config file to point to a mounted path (such as /opt/aerospike/etc/features.conf).
82 |
83 | For example:
84 |
85 | ```console
86 | docker run -d -v /opt/aerospike/etc/:/opt/aerospike/etc/ --name aerospike -p 3000-3002:3000-3002 %%IMAGE%%:ee-[version] --config-file /opt/aerospike/etc/aerospike.conf
87 | ```
88 |
89 | ### Persistent Data Directory
90 |
91 | With Docker, the files within the container are not persisted past the life of the container. To persist data, you will want to mount a directory from the host to the container's /opt/aerospike/data using the `-v` option:
92 |
93 | For example:
94 |
95 | ```console
96 | docker run -d -v /opt/aerospike/data:/opt/aerospike/data -v /opt/aerospike/etc:/opt/aerospike/etc/ --name aerospike -p 3000-3002:3000-3002 -e "FEATURE_KEY_FILE=/opt/aerospike/etc/features.conf" %%IMAGE%%:ee-[version]
97 | ```
98 |
99 | The example above uses the configuration template, where the single defined namespace is in-memory with file-based persistence. Just mounting the predefined /opt/aerospike/data directory enables the data to be persisted on the host.
100 |
101 | Alternatively, a custom configuration file can be used with the parameter `file` set to a file in the mounted /opt/aerospike/data directory, as in the following config snippet:
102 |
103 | namespace test {
104 | # :
105 | storage-engine device {
106 | file /opt/aerospike/data/test.dat
107 | filesize 4G
108 | data-in-memory true
109 | }
110 | }
111 |
112 | In this example we also mount the data directory in a similar way, using a custom configuration file.
113 |
114 | ```console
115 | docker run -d -v /opt/aerospike/data:/opt/aerospike/data -v /opt/aerospike/etc/:/opt/aerospike/etc/ --name aerospike -p 3000-3002:3000-3002 %%IMAGE%%:ee-[version] --config-file /opt/aerospike/etc/aerospike.conf
116 | ```
117 |
118 | ### Block Storage
119 |
120 | Docker provides the ability to expose a host's block devices to a running container. The `--device` option can be used to map a host block device into a container.
121 |
122 | Update the `storage-engine device` section of the namespace in the custom aerospike configuration file.
123 |
124 | namespace test {
125 | # :
126 | storage-engine device {
127 | device /dev/xvdc
128 | write-block-size 128k
129 | }
130 | }
131 |
132 | Now map the host drive /dev/sdc to /dev/xvdc in the container:
133 |
134 | ```console
135 | docker run -d --device '/dev/sdc:/dev/xvdc' -v /opt/aerospike/etc/:/opt/aerospike/etc/ --name aerospike -p 3000-3002:3000-3002 %%IMAGE%%:ee-[version] --config-file /opt/aerospike/etc/aerospike.conf
136 | ```
137 |
138 | ### Persistent Lua Cache
139 |
140 | Upon restart, the Lua cache is emptied. To persist the cache, mount a directory from the host to the container's `/opt/aerospike/usr/udf/lua` using the `-v` option:
141 |
142 | ```sh
143 | docker run -d -v /opt/aerospike/lua:/opt/aerospike/usr/udf/lua -v /opt/aerospike/data:/opt/aerospike/data --name aerospike -p 3000-3002:3000-3002 -e "FEATURE_KEY_FILE=/opt/etc/aerospike/features.conf" %%IMAGE%%:ee-[version]
144 | ```
145 |
146 | ## Clustering
147 |
148 | Developers using the Aerospike EE single-node evaluation, and most others using Docker Desktop on their machine for development, will not need to configure the node for clustering. If you're interested in using clustering and have a feature key file without a single node limit, read the following sections.
149 |
150 | ### Configuring the node's access address
151 |
152 | In order for the Aerospike node to properly broadcast its address to the cluster and applications, the [`access-address`](https://www.aerospike.com/docs/reference/configuration/index.html#access-address) configuration parameter needs to be set in the configuration file. If it is not set, then the IP address within the container will be used, which is not accessible to other nodes.
153 |
154 | network {
155 | service {
156 | address any # Listening IP Address
157 | port 3000 # Listening Port
158 | access-address 192.168.1.100 # IP Address used by cluster nodes and applications
159 | }
160 | }
161 | ### Mesh Clustering
162 |
163 | Mesh networking requires setting up links between each node in the cluster. This can be achieved in two ways:
164 |
165 | 1. Add a configuration for each node in the cluster, as defined in [Network Heartbeat Configuration](http://www.aerospike.com/docs/operations/configure/network/heartbeat/#mesh-unicast-heartbeat).
166 | 2. Use `asinfo` to send the `tip` command, to make the node aware of another node, as defined in [tip command in asinfo](http://www.aerospike.com/docs/tools/asinfo/#tip).
167 |
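As a sketch of the second approach (the peer address `172.17.0.3` and container name `aerospike` are illustrative, and `asinfo` is assumed to be available inside the container), you would tell a running node about a peer's heartbeat address and port:

```console
$ docker exec -t aerospike asinfo -v 'tip:host=172.17.0.3;port=3002'
```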
168 | For more, see [How do I get a 2 nodes Aerospike cluster running quickly in Docker without editing a single file?](https://medium.com/aerospike-developer-blog/how-do-i-get-a-2-node-aerospike-cluster-running-quickly-in-docker-without-editing-a-single-file-1c2a94564a99?source=friends_link&sk=4ff6a22f0106596c42aa4b77d6cdc3a5)
169 |
170 | ## Image Versions
171 |
172 | These images are based on [ubuntu:24.04](https://hub.docker.com/_/ubuntu).
173 |
174 | ### ee-[version]
175 |
176 | These tags are for Aerospike EE images, and will require a feature key file, such as the one you get with the single-node EE evaluation, or one associated with a commercial enterprise license agreement.
177 |
178 | ### ce-[version]
179 |
180 | These tags are for Aerospike CE images, and do not require a feature key file to start. As mentioned above, the developer API for both is the same, but the editions differ in operational features.
181 |
182 | ## Reporting Issues
183 |
184 | If you have any problems with or questions about this image, please post on the [Aerospike discussion forum](https://discuss.aerospike.com).
185 |
186 | Enterprise customers are welcome to participate in the community forum, but can also report issues through the enterprise support system.
187 |
188 | Aerospike EE evaluation users can open an issue in [aerospike/aerospike-server-enterprise.docker](https://github.com/aerospike/aerospike-server-enterprise.docker/issues).
189 |
190 | Aerospike CE users can open an issue in [aerospike/aerospike-server.docker](https://github.com/aerospike/aerospike-server.docker/issues).
191 |
```
--------------------------------------------------------------------------------
/percona/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # Percona Server for MySQL
2 |
3 | Percona Server for MySQL is a fork of the MySQL relational database management system created by Percona.
4 |
5 | It aims to retain close compatibility to the official MySQL releases, while focusing on performance and increased visibility into server operations. Also included in Percona Server is XtraDB, Percona's fork of the InnoDB Storage Engine.
6 |
7 | > [wikipedia.org/wiki/Percona_Server](https://en.wikipedia.org/wiki/Percona_Server)
8 |
9 | %%LOGO%%
10 |
11 | # How to use this image
12 |
13 | ## Start a `%%IMAGE%%` server instance
14 |
15 | Starting a Percona Server for MySQL instance is simple:
16 |
17 | ```console
18 | $ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
19 | ```
20 |
21 | ... where `some-%%REPO%%` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
22 |
23 | ## Connect to Percona Server from the MySQL command line client
24 |
25 | The following command starts another `%%IMAGE%%` container instance and runs the `mysql` command line client against your original `%%IMAGE%%` container, allowing you to execute SQL statements against your database instance:
26 |
27 | ```console
28 | $ docker run -it --network some-network --rm %%IMAGE%% mysql -hsome-%%REPO%% -uexample-user -p
29 | ```
30 |
31 | ... where `some-%%REPO%%` is the name of your original `%%IMAGE%%` container (connected to the `some-network` Docker network).
32 |
33 | This image can also be used as a client for non-Docker or remote instances:
34 |
35 | ```console
36 | $ docker run -it --rm %%IMAGE%% mysql -hsome.mysql.host -usome-mysql-user -p
37 | ```
38 |
39 | More information about the MySQL command line client can be found in the [MySQL documentation](http://dev.mysql.com/doc/en/mysql.html)
40 |
41 | ## %%COMPOSE%%
42 |
43 | Run `docker compose up`, wait for it to initialize completely, and visit `http://localhost:8080` or `http://host-ip:8080` (as appropriate).
44 |
45 | ## Container shell access and viewing MySQL logs
46 |
47 | The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%IMAGE%%` container:
48 |
49 | ```console
50 | $ docker exec -it some-%%REPO%% bash
51 | ```
52 |
53 | The log is available through Docker's container log:
54 |
55 | ```console
56 | $ docker logs some-%%REPO%%
57 | ```
58 |
59 | ## Using a custom MySQL configuration file
60 |
61 | The startup configuration is specified in the file `/etc/my.cnf`, and that file in turn includes any files found in the `/etc/my.cnf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/my.cnf.d` inside the `%%IMAGE%%` container.
62 |
63 | If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%IMAGE%%` container like this (note that only the directory path of the custom config file is used in this command):
64 |
65 | ```console
66 | $ docker run --name some-%%REPO%% -v /my/custom:/etc/my.cnf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
67 | ```
68 |
69 | This will start a new container `some-%%REPO%%` where the Percona Server for MySQL instance uses the combined startup settings from `/etc/my.cnf` and `/etc/my.cnf.d/config-file.cnf`, with settings from the latter taking precedence.
70 |
71 | ### Configuration without a `cnf` file
72 |
73 | Many configuration options can be passed as flags to `mysqld`. This will give you the flexibility to customize the container without needing a `cnf` file. For example, if you want to change the default encoding and collation for all tables to use UTF-8 (`utf8mb4`) just run the following:
74 |
75 | ```console
76 | $ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
77 | ```
78 |
79 | If you would like to see a complete list of available options, just run:
80 |
81 | ```console
82 | $ docker run -it --rm %%IMAGE%%:tag --verbose --help
83 | ```
84 |
85 | ## Environment Variables
86 |
87 | When you start the `%%IMAGE%%` image, you can adjust the configuration of the instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
88 |
89 | ### `MYSQL_ROOT_PASSWORD`
90 |
91 | This variable is mandatory and specifies the password that will be set for the `root` superuser account. In the above example, it was set to `my-secret-pw`.
92 |
93 | ### `MYSQL_ROOT_HOST`
94 |
95 | By default, `root` can connect from anywhere. This option restricts `root` connections to the specified host only. `localhost` can also be used here for local-only `root` access.
96 |
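For example, to restrict `root` logins to connections from inside the container only (a sketch reusing the example password from above):

```console
$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_ROOT_HOST=localhost -d %%IMAGE%%:tag
```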
97 | ### `MYSQL_DATABASE`
98 |
99 | This variable is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (see below) then that user will be granted superuser access ([corresponding to `GRANT ALL`](https://dev.mysql.com/doc/refman/en/creating-accounts.html)) to this database.
100 |
101 | ### `MYSQL_USER`, `MYSQL_PASSWORD`
102 |
103 | These variables are optional, used in conjunction to create a new user and to set that user's password. This user will be granted superuser permissions (see above) for the database specified by the `MYSQL_DATABASE` variable. Both variables are required for a user to be created.
104 |
105 | Do note that there is no need to use this mechanism to create the root superuser; that user gets created by default with the password specified by the `MYSQL_ROOT_PASSWORD` variable.
106 |
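For example, a minimal sketch that creates a database and a non-root user owning it (the database, user, and password names are illustrative):

```console
$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=exampledb -e MYSQL_USER=example-user -e MYSQL_PASSWORD=example-pass -d %%IMAGE%%:tag
```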
107 | ### `MYSQL_ALLOW_EMPTY_PASSWORD`
108 |
109 | This is an optional variable. Set to `yes` to allow the container to be started with a blank password for the root user. *NOTE*: Setting this variable to `yes` is not recommended unless you really know what you are doing, since this will leave your instance completely unprotected, allowing anyone to gain complete superuser access.
110 |
111 | ### `MYSQL_RANDOM_ROOT_PASSWORD`
112 |
113 | This is an optional variable. Set to `yes` to generate a random initial password for the root user (using `pwmake`). The generated root password will be printed to stdout (`GENERATED ROOT PASSWORD: .....`).
114 |
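For example, you could start an instance with a generated password and then read it back from the container log (a sketch; the container name is arbitrary):

```console
$ docker run --name some-%%REPO%% -e MYSQL_RANDOM_ROOT_PASSWORD=yes -d %%IMAGE%%:tag
$ docker logs some-%%REPO%% 2>&1 | grep 'GENERATED ROOT PASSWORD'
```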
115 | ### `MYSQL_ONETIME_PASSWORD`
116 |
117 | Sets root (*not* the user specified in `MYSQL_USER`!) user as expired once init is complete, forcing a password change on first login. *NOTE*: This feature is supported on MySQL 5.6+ only. Using this option on MySQL 5.5 will throw an appropriate error during initialization.
118 |
119 | ### `MYSQL_INITDB_SKIP_TZINFO`
120 |
121 | At first run, MySQL automatically loads the timezone information needed for the `CONVERT_TZ()` function from the local system. If this is not what is intended, this option disables timezone loading.
122 |
123 | ### `INIT_TOKUDB`
124 |
125 | Turns on the TokuDB engine. It can be activated only when *transparent huge pages* (THP) are disabled.
126 |
127 | ### `INIT_ROCKSDB`
128 |
129 | Turns on the RocksDB engine.
130 |
131 | ## Docker Secrets
132 |
133 | As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
134 |
135 | ```console
136 | $ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%IMAGE%%:tag
137 | ```
138 |
139 | Currently, this is only supported for `MYSQL_ROOT_PASSWORD`, `MYSQL_ROOT_HOST`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD`.
140 |
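Outside of a swarm, the same mechanism can be exercised by bind-mounting a file containing the password into the container (a sketch; the host path is illustrative):

```console
$ docker run --name some-mysql -v /my/secrets/mysql-root:/run/secrets/mysql-root:ro -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%IMAGE%%:tag
```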
141 | ## Telemetry
142 |
143 | Starting with Percona Server 8.0.35-27, telemetry will be enabled by default. If you decide not to send usage data to Percona, you can set the `PERCONA_TELEMETRY_DISABLE=1` environment variable. For example:
144 |
145 | ```console
146 | $ docker run --name some-mysql -e PERCONA_TELEMETRY_DISABLE=1 -d %%IMAGE%%:tag
147 | ```
148 |
149 | # Initializing a fresh instance
150 |
151 | When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your `%%IMAGE%%` services by [mounting a SQL dump into that directory](https://docs.docker.com/storage/bind-mounts/) or by providing [custom images](https://docs.docker.com/reference/dockerfile/) with contributed data. SQL files will be imported by default to the database specified by the `MYSQL_DATABASE` variable.
152 |
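For example, a host directory of init scripts can be bind-mounted into that location so they run on first start (a sketch; the host path is illustrative):

```console
$ docker run --name some-%%REPO%% -v /my/own/initdb.d:/docker-entrypoint-initdb.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
```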
153 | # Caveats
154 |
155 | ## Where to Store Data
156 |
157 | Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%IMAGE%%` images to familiarize themselves with the options available, including:
158 |
159 | - Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/storage/volumes/). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
160 | - Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/storage/bind-mounts/). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
161 |
162 | The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
163 |
164 | 1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
165 | 2. Start your `%%IMAGE%%` container like this:
166 |
167 | ```console
168 | $ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
169 | ```
170 |
171 | The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
172 |
173 | ## No connections until MySQL init completes
174 |
175 | If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as Docker Compose, which start several containers simultaneously.
176 |
177 | If the application you're trying to connect to MySQL does not handle MySQL downtime or waiting for MySQL to start gracefully, then putting a connect-retry loop before the service starts might be necessary. For an example of such an implementation in the official images, see [WordPress](https://github.com/docker-library/wordpress/blob/1b48b4bccd7adb0f7ea1431c7b470a40e186f3da/docker-entrypoint.sh#L195-L235) or [Bonita](https://github.com/docker-library/docs/blob/9660a0cccb87d8db842f33bc0578d769caaf3ba9/bonita/stack.yml#L28-L44).
178 |
179 | ## Usage against an existing database
180 |
181 | If you start your `%%IMAGE%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
182 |
183 | ## Creating database dumps
184 |
185 | Most of the normal tools will work, although their usage might be a little convoluted in some cases to ensure they have access to the `mysqld` server. A simple way to ensure this is to use `docker exec` and run the tool from the same container, similar to the following:
186 |
187 | ```console
188 | $ docker exec some-%%REPO%% sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
189 | ```
190 |
191 | ## Restoring data from dump files
192 |
193 | For restoring data, you can use the `docker exec` command with the `-i` flag, similar to the following:
194 |
195 | ```console
196 | $ docker exec -i some-%%REPO%% sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql
197 | ```
198 |
```
--------------------------------------------------------------------------------
/mysql/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is MySQL?
2 |
3 | MySQL is the world's most popular open source database. With its proven performance, reliability and ease-of-use, MySQL has become the leading database choice for web-based applications, covering the entire range from personal projects and websites, via e-commerce and information services, all the way to high profile web properties including Facebook, Twitter, YouTube, Yahoo! and many more.
4 |
5 | For more information and related downloads for MySQL Server and other MySQL products, please visit [www.mysql.com](http://www.mysql.com).
6 |
7 | %%LOGO%%
8 |
9 | # How to use this image
10 |
11 | ## Start a `%%IMAGE%%` server instance
12 |
13 | Starting a MySQL instance is simple:
14 |
15 | ```console
16 | $ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
17 | ```
18 |
19 | ... where `some-%%REPO%%` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
20 |
21 | ## Connect to MySQL from the MySQL command line client
22 |
23 | The following command starts another `%%IMAGE%%` container instance and runs the `mysql` command line client against your original `%%IMAGE%%` container, allowing you to execute SQL statements against your database instance:
24 |
25 | ```console
26 | $ docker run -it --network some-network --rm %%IMAGE%% mysql -hsome-%%REPO%% -uexample-user -p
27 | ```
28 |
29 | ... where `some-%%REPO%%` is the name of your original `%%IMAGE%%` container (connected to the `some-network` Docker network).
30 |
31 | This image can also be used as a client for non-Docker or remote instances:
32 |
33 | ```console
34 | $ docker run -it --rm %%IMAGE%% mysql -hsome.mysql.host -usome-mysql-user -p
35 | ```
36 |
37 | More information about the MySQL command line client can be found in the [MySQL documentation](http://dev.mysql.com/doc/en/mysql.html)
38 |
39 | ## %%COMPOSE%%
40 |
41 | Run `docker compose up`, wait for it to initialize completely, and visit `http://localhost:8080` or `http://host-ip:8080` (as appropriate).
42 |
43 | ## Container shell access and viewing MySQL logs
44 |
45 | The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%IMAGE%%` container:
46 |
47 | ```console
48 | $ docker exec -it some-%%REPO%% bash
49 | ```
50 |
51 | The log is available through Docker's container log:
52 |
53 | ```console
54 | $ docker logs some-%%REPO%%
55 | ```
56 |
57 | ## Using a custom MySQL configuration file
58 |
59 | The default configuration for MySQL can be found in `/etc/mysql/my.cnf`, which may `!includedir` additional directories such as `/etc/mysql/conf.d` or `/etc/mysql/mysql.conf.d`. Please inspect the relevant files and directories within the `%%IMAGE%%` image itself for more details.
60 |
61 | If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%IMAGE%%` container like this (note that only the directory path of the custom config file is used in this command):
62 |
63 | ```console
64 | $ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
65 | ```
66 |
67 | This will start a new container `some-%%REPO%%` where the MySQL instance uses the combined startup settings from `/etc/mysql/my.cnf` and `/etc/mysql/conf.d/config-file.cnf`, with settings from the latter taking precedence.
68 |
69 | ### Configuration without a `cnf` file
70 |
71 | Many configuration options can be passed as flags to `mysqld`. This will give you the flexibility to customize the container without needing a `cnf` file. For example, if you want to change the default encoding and collation for all tables to use UTF-8 (`utf8mb4`) just run the following:
72 |
73 | ```console
74 | $ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
75 | ```
76 |
77 | If you would like to see a complete list of available options, just run:
78 |
79 | ```console
80 | $ docker run -it --rm %%IMAGE%%:tag --verbose --help
81 | ```
82 |
83 | ## Environment Variables
84 |
85 | When you start the `%%IMAGE%%` image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
86 |
87 | See also https://dev.mysql.com/doc/refman/5.7/en/environment-variables.html for documentation of environment variables which MySQL itself respects (especially variables like `MYSQL_HOST`, which is known to cause issues when used with this image).
88 |
89 | ### `MYSQL_ROOT_PASSWORD`
90 |
91 | This variable is mandatory and specifies the password that will be set for the MySQL `root` superuser account. In the above example, it was set to `my-secret-pw`.
92 |
93 | ### `MYSQL_DATABASE`
94 |
95 | This variable is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (see below) then that user will be granted superuser access ([corresponding to `GRANT ALL`](https://dev.mysql.com/doc/refman/en/creating-accounts.html)) to this database.
96 |
97 | ### `MYSQL_USER`, `MYSQL_PASSWORD`
98 |
99 | These variables are optional, used in conjunction to create a new user and to set that user's password. This user will be granted superuser permissions (see above) for the database specified by the `MYSQL_DATABASE` variable. Both variables are required for a user to be created.
100 |
101 | Do note that there is no need to use this mechanism to create the root superuser; that user gets created by default with the password specified by the `MYSQL_ROOT_PASSWORD` variable.
102 |
103 | ### `MYSQL_ALLOW_EMPTY_PASSWORD`
104 |
105 | This is an optional variable. Set to a non-empty value, like `yes`, to allow the container to be started with a blank password for the root user. *NOTE*: Setting this variable to `yes` is not recommended unless you really know what you are doing, since this will leave your MySQL instance completely unprotected, allowing anyone to gain complete superuser access.
106 |
107 | ### `MYSQL_RANDOM_ROOT_PASSWORD`
108 |
109 | This is an optional variable. Set to a non-empty value, like `yes`, to generate a random initial password for the root user (using `openssl`). The generated root password will be printed to stdout (`GENERATED ROOT PASSWORD: .....`).
110 |
111 | ### `MYSQL_ONETIME_PASSWORD`
112 |
113 | Sets root (*not* the user specified in `MYSQL_USER`!) user as expired once init is complete, forcing a password change on first login. Any non-empty value will activate this setting. *NOTE*: This feature is supported on MySQL 5.6+ only. Using this option on MySQL 5.5 will throw an appropriate error during initialization.
114 |
115 | ### `MYSQL_INITDB_SKIP_TZINFO`
116 |
117 | By default, the entrypoint script automatically loads the timezone data needed for the `CONVERT_TZ()` function. If it is not needed, any non-empty value disables timezone loading.
118 |
119 | ## Docker Secrets
120 |
121 | As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
122 |
123 | ```console
124 | $ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%IMAGE%%:tag
125 | ```
126 |
127 | Currently, this is only supported for `MYSQL_ROOT_PASSWORD`, `MYSQL_ROOT_HOST`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD`.
128 |
129 | # Initializing a fresh instance
130 |
131 | When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your `%%IMAGE%%` services by [mounting a SQL dump into that directory](https://docs.docker.com/storage/bind-mounts/) or by providing [custom images](https://docs.docker.com/reference/dockerfile/) with contributed data. SQL files will be imported by default to the database specified by the `MYSQL_DATABASE` variable.
132 |
133 | # Caveats
134 |
135 | ## Where to Store Data
136 |
137 | Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%IMAGE%%` images to familiarize themselves with the options available, including:
138 |
139 | - Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/storage/volumes/). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
140 | - Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/storage/bind-mounts/). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
141 |
142 | The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
143 |
144 | 1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
145 | 2. Start your `%%IMAGE%%` container like this:
146 |
147 | ```console
148 | $ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
149 | ```
150 |
151 | The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
152 |
153 | ## No connections until MySQL init completes
154 |
155 | If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as Docker Compose, which start several containers simultaneously.
156 |
157 | If the application you're trying to connect to MySQL does not handle MySQL downtime or waiting for MySQL to start gracefully, then putting a connect-retry loop before the service starts might be necessary. For an example of such an implementation in the official images, see [WordPress](https://github.com/docker-library/wordpress/blob/1b48b4bccd7adb0f7ea1431c7b470a40e186f3da/docker-entrypoint.sh#L195-L235) or [Bonita](https://github.com/docker-library/docs/blob/9660a0cccb87d8db842f33bc0578d769caaf3ba9/bonita/stack.yml#L28-L44).
158 |
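A minimal sketch of such a retry loop in shell, assuming the `mysqladmin` client is available where the check runs, that `some-%%REPO%%` resolves to the database container, and that the root password is exported as `MYSQL_ROOT_PASSWORD`:

```console
$ until mysqladmin --silent -hsome-%%REPO%% -uroot -p"$MYSQL_ROOT_PASSWORD" ping; do
>   sleep 2
> done
```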
159 | ## Usage against an existing database
160 |
161 | If you start your `%%IMAGE%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
162 |
163 | ## Running as an arbitrary user
164 |
165 | If you know the permissions of your directory are already set appropriately (such as running against an existing database, as described above) or you have need of running `mysqld` with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root`/`0`) in order to achieve the desired access/configuration:
166 |
167 | ```console
168 | $ mkdir data
169 | $ ls -lnd data
170 | drwxr-xr-x 2 1000 1000 4096 Aug 27 15:54 data
171 | $ docker run -v "$PWD/data":/var/lib/mysql --user 1000:1000 --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
172 | ```
173 |
174 | ## Creating database dumps
175 |
176 | Most of the normal tools will work, although their usage might be a little convoluted in some cases to ensure they have access to the `mysqld` server. A simple way to ensure this is to use `docker exec` and run the tool from the same container, similar to the following:
177 |
178 | ```console
179 | $ docker exec some-%%REPO%% sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
180 | ```
181 |
182 | ## Restoring data from dump files
183 |
184 | For restoring data, you can use the `docker exec` command with the `-i` flag, similar to the following:
185 |
186 | ```console
187 | $ docker exec -i some-%%REPO%% sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /some/path/on/your/host/all-databases.sql
188 | ```
189 |
```
--------------------------------------------------------------------------------
/couchbase/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # Introduction to Couchbase Server
2 |
3 | Built on the most powerful NoSQL technology, Couchbase Server delivers unparalleled performance at scale, in any cloud. With features like memory-first architecture, geo-distributed deployments, and workload isolation, Couchbase Server excels at supporting mission-critical applications at scale while maintaining sub-millisecond latencies and 99.999% availability. Plus, with the most comprehensive SQL-compatible query language (N1QL), migrating from RDBMS to Couchbase Server is easy with ANSI join.
4 |
5 | ## Unmatched agility and flexibility
6 |
7 | Support rapidly changing business requirements with the flexibility of JSON and the power of a comprehensive query language (N1QL). Develop engaging applications with multiple access methods from a single platform: key-value, query, and search. Event-driven workloads allow you to execute data-driven business logic from a centralized platform.
8 |
9 | ## Unparalleled performance at any scale
10 |
11 | Deliver consistent, fast experiences at scale, powered by a memory-first architecture. High-performance indexes and index partitioning provide unparalleled query performance with complex joins, predicates, and aggregate evaluations. And, with end-to-end data compression, Couchbase significantly reduces the cost of network, memory, and storage required for your existing workloads.
12 |
13 | ## Easiest platform to manage
14 |
15 | Deploy Couchbase Server in any cloud, at any scale. Reduce operational overhead with cloud integrations like Kubernetes, and support multi-cloud deployments globally with built-in support for active-active cross datacenter replication (XDCR).
16 |
17 | %%LOGO%%
18 |
19 | ## QuickStart with Couchbase Server and Docker
20 |
21 | Here is how to get a single node Couchbase Server cluster running on Docker containers:
22 |
23 | **Step - 1 :** Run the Couchbase Server Docker container
24 |
25 | `docker run -d --name db -p 8091-8097:8091-8097 -p 9123:9123 -p 11207:11207 -p 11210:11210 -p 11280:11280 -p 18091-18097:18091-18097 couchbase`
26 |
27 | Note: Couchbase Server can require a variety of ports to be exposed depending on the usage scenario. Please see https://docs.couchbase.com/server/current/install/install-ports.html for further information.
28 |
29 | **Step - 2 :** Next, visit `http://localhost:8091` on the host machine to see the Web Console to start Couchbase Server setup.
30 |
31 | 
32 |
33 | Walk through the Setup wizard and accept the default values.
34 |
35 | - Note: You may need to lower the RAM allocated to various services to fit within the resource bounds of the containers.
36 | - Enable the beer-sample bucket to load some sample data.
37 |
38 | 
39 |
40 | 
41 |
42 | 
43 |
44 | 
45 |
46 | **Note :** For detailed information on configuring the Server, see [Deployment Guidelines](https://docs.couchbase.com/server/current/install/install-production-deployment.html).
47 |
48 | ### Running A N1QL Query on the Couchbase Server Cluster
49 |
50 | N1QL is the SQL based query language for Couchbase Server. Simply switch to the Query tab on the Web Console at `http://localhost:8091` and run the following N1QL Query in the query window:
51 |
52 | SELECT name FROM `beer-sample` WHERE brewery_id="mishawaka_brewing";
53 |
54 | You can also execute N1QL queries from the command line. To do so, run the `cbq` command-line tool, authenticate using the credentials you provided to the wizard, and execute the N1QL query on the beer-sample bucket:
55 |
56 | ```console
57 | $ docker exec -it db cbq --user Administrator
58 | cbq> SELECT name FROM `beer-sample` WHERE brewery_id ="mishawaka_brewing";
59 | ```
60 |
61 | For more query samples, refer to [Run your first N1QL query](https://docs.couchbase.com/server/current/getting-started/try-a-query.html).
62 |
63 | ### Connect to the Couchbase Server Cluster via Applications and SDKs
64 |
65 | Couchbase Server SDKs come in many languages: C, Go, Java, .NET, Node.js, PHP, Python. Simply run your application through the Couchbase Server SDK of your choice on the host, and point it to [http://localhost:8091/pools](http://localhost:8091/pools) to connect to the container.
66 |
67 | For running a sample application, refer to the [Sample Application](https://docs.couchbase.com/java-sdk/current/hello-world/sample-application.html) guide.
68 |
69 | ## Requirements and Best Practices
70 |
71 | ### Container Requirements
72 |
73 | Official Couchbase Server images on Docker Hub are based on the latest supported version of Ubuntu.
74 |
75 | **Docker Container Resource Requirements :** For minimum container requirements, you can follow [System Resource Requirements](https://docs.couchbase.com/server/current/install/pre-install.html) for development, test and production environments.
76 |
77 | ### Best Practices
78 |
79 | **Avoid a Single Point of Failure :** Couchbase Server's resilience and high-availability are achieved through creating a cluster of independent nodes and replicating data between them so that any individual node failure doesn't lead to loss of access to your data. In a containerized environment, if you were to run multiple nodes on the same piece of physical hardware, you can inadvertently re-introduce a single point of failure. In environments where you control VM placement, we advise ensuring each Couchbase Server node runs on a different piece of physical hardware.
80 |
81 | **Sizing your containers :** Physical hardware performance characteristics are well understood. Even though containers insert a lightweight layer between Couchbase Server and the underlying OS, there is still a small overhead in running Couchbase Server in containers. For stability and better performance predictability, it is recommended to dedicate at least 2 cores to the container in development environments, and at least 4 cores, dedicated rather than shared across multiple containers, for Couchbase Server instances running in production. In an over-committed environment, containers end up competing with each other, causing unpredictable performance and sometimes stability issues.
82 |
83 | **Map Couchbase Node Specific Data to a Local Folder :** A Couchbase Server Docker container will write all persistent and node-specific data under the directory `/opt/couchbase/var` by default. It is recommended to map this directory to a directory on the host file system using the `-v` option to `docker run` to get persistence and performance.
84 |
85 | - Persistence: Storing `/opt/couchbase/var` outside the container with the `-v` option allows you to delete the container and recreate it later without losing the data in Couchbase Server. You can even update to a container running a later release/version of Couchbase Server without losing your data.
86 |
87 | - Performance: In a standard Docker environment using a union file system, leaving `/opt/couchbase/var` inside the container results in some amount of performance degradation.
88 |
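For example, a sketch of mapping `/opt/couchbase/var` to the `~/couchbase` host directory used in the SELinux note below (ports trimmed to the basics; adjust to your scenario):

```console
$ docker run -d -v ~/couchbase:/opt/couchbase/var --name db -p 8091-8097:8091-8097 -p 11210:11210 couchbase
```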
89 | NOTE for SELinux : If you have SELinux enabled, mounting the host volumes in a container requires an extra step. Assuming you are mounting the `~/couchbase` directory on the host file system, you need to run the following command once before running your first container on that host:
90 |
91 | `mkdir ~/couchbase && chcon -Rt svirt_sandbox_file_t ~/couchbase`
92 |
93 | **Increase ULIMIT in Production Deployments :** Couchbase Server normally expects the following changes to ulimits:
94 |
95 | ulimit -n 200000 # nofile: max number of open files
96 | ulimit -l unlimited # memlock: maximum locked-in-memory address space
97 |
98 | These ulimit settings are necessary when running under heavy load. If you are just doing light testing and development, you can omit these settings, and everything will still work.
99 |
100 | To set the ulimits in your container, you will need to run Couchbase Docker containers with the following additional `--ulimit` flags:
101 |
102 | `docker run -d --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 --name db -p 8091-8097:8091-8097 -p 9123:9123 -p 11207:11207 -p 11210:11210 -p 11280:11280 -p 18091-18097:18091-18097 couchbase`
103 |
104 | Since "unlimited" is not supported as a value, it sets the core and memlock values to 100 GB. If your system has more than 100 GB RAM, you will want to increase this value to match the available RAM on the system.
105 |
106 | Note: The `--ulimit` flags only work on Docker 1.6 or later.
107 |
108 | **Network Configuration and Ports :** Couchbase Server communicates on many different ports (see the [Couchbase Server documentation](https://docs.couchbase.com/server/current/install/install-ports.html#ports-listed-by-communication-path)). Also, placing cluster nodes behind any NAT is generally not supported. For these reasons, Docker's default networking configuration is not ideally suited to Couchbase Server deployments. For production deployments it is recommended to use the `--net=host` setting to avoid performance and reliability issues.
109 |
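As a sketch, combining the host-network recommendation with the ulimit settings shown above (the container name is illustrative; `-p` mappings are unnecessary with `--net=host`):

```console
$ docker run -d --net=host --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 --name db couchbase
```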
110 | ## Multi Node Couchbase Server Cluster Deployment Topologies
111 |
112 | With multi node Couchbase Server clusters, there are 2 popular topologies.
113 |
114 | ### All Couchbase Server containers on one physical machine
115 |
116 | This model is commonly used for scale-minimized deployments simulating production deployments for development and test purposes. Placing all containers on a single physical machine means all containers will compete for the same resources. Placing all containers on a single physical machine also eliminates the built-in protection against Couchbase Server node failures afforded by replication. When the single physical machine fails, all containers experience unavailability at the same time, losing all replicas. These restrictions may be acceptable for test systems, however it isn't recommended for applications in production.
117 |
118 | You can find more details on setting up Couchbase Server in this topology in Couchbase Server [documentation](https://docs.couchbase.com/server/current/install/getting-started-docker.html#multi-node-cluster-one-host).
119 |
120 | ┌──────────────────────────────────────────────────────────┐
121 | │ Host OS (Linux) │
122 | │ │
123 | │ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │
124 | │ │ Container OS │ │ Container OS │ │ Container OS │ │
125 | │ │ (Ubuntu) │ │ (Ubuntu) │ │ (Ubuntu) │ │
126 | │ │ ┌───────────┐ │ │ ┌───────────┐ │ │ ┌───────────┐ │ │
127 | │ │ │ Couchbase │ │ │ │ Couchbase │ │ │ │ Couchbase │ │ │
128 | │ │ │ Server │ │ │ │ Server │ │ │ │ Server │ │ │
129 | │ │ └───────────┘ │ │ └───────────┘ │ │ └───────────┘ │ │
130 | │ └───────────────┘ └───────────────┘ └───────────────┘ │
131 | └──────────────────────────────────────────────────────────┘
132 |
133 | ### Each Couchbase Server container on its own machine
134 |
135 | This model is commonly used for production deployments. It prevents Couchbase Server nodes from stepping over each other and gives you better performance predictability. This is the supported topology in production with Couchbase Server 5.5 and higher.
136 |
137 | You can find more details on setting up Couchbase Server in this topology in Couchbase Server [documentation](https://docs.couchbase.com/server/current/install/getting-started-docker.html#multi-node-cluster-many-hosts).
138 |
139 | ┌───────────────────────┐ ┌───────────────────────┐ ┌───────────────────────┐
140 | │ Host OS (Linux) │ │ Host OS (Linux) │ │ Host OS (Linux) │
141 | │ ┌─────────────────┐ │ │ ┌─────────────────┐ │ │ ┌─────────────────┐ │
142 | │ │ Container OS │ │ │ │ Container OS │ │ │ │ Container OS │ │
143 | │ │ (Ubuntu) │ │ │ │ (Ubuntu) │ │ │ │ (Ubuntu) │ │
144 | │ │ ┌───────────┐ │ │ │ │ ┌───────────┐ │ │ │ │ ┌───────────┐ │ │
145 | │ │ │ Couchbase │ │ │ │ │ │ Couchbase │ │ │ │ │ │ Couchbase │ │ │
146 | │ │ │ Server │ │ │ │ │ │ Server │ │ │ │ │ │ Server │ │ │
147 | │ │ └───────────┘ │ │ │ │ └───────────┘ │ │ │ │ └───────────┘ │ │
148 | │ └─────────────────┘ │ │ └─────────────────┘ │ │ └─────────────────┘ │
149 | └───────────────────────┘ └───────────────────────┘ └───────────────────────┘
150 |
151 | ## Try Couchbase Cloud Free
152 |
153 | Couchbase Cloud is a fully managed NoSQL Database-as-a-Service (DBaaS) for mission-critical applications. We deploy Couchbase Cloud in your AWS VPC and manage the workload. You'll enjoy incredible price-performance and operational transparency.
154 |
155 | Start Free Trial - https://cloud.couchbase.com/sign-up
156 |
157 | # Additional References
158 |
159 | - [Couchbase Server and Containers](https://docs.couchbase.com/server/current/cloud/couchbase-cloud-deployment.html)
160 |
161 | - [Getting Started with Couchbase Server and Docker](https://docs.couchbase.com/server/current/install/getting-started-docker.html)
162 |
```
--------------------------------------------------------------------------------
/silverpeas/content.md:
--------------------------------------------------------------------------------
```markdown
1 | # What is Silverpeas
2 |
3 | [Silverpeas](https://www.silverpeas.org) is a Collaborative and Social-Networking Portal built to facilitate and to leverage the collaboration, the knowledge-sharing and the feedback of persons, teams and organizations.
4 |
5 | Accessible from a simple web browser or from a smartphone, Silverpeas is used every day by our own team. With about 30 ready-to-use applications and with its conversational and relational functions, it makes it possible for users to work together, share their knowledge and good practices, and in general to improve their reciprocal empathy, and therefore their willingness to collaborate.
6 |
7 | Silverpeas is usually used as an intranet and extranet platform dedicated to collaboration and information sharing.
8 |
9 | %%LOGO%%
10 |
11 | # How to use this image
12 |
13 | Docker images of Silverpeas require one of the following database systems in order to be used:
14 |
15 | - the open-source, powerful and recommended PostgreSQL database system,
16 | - the Microsoft SQLServer database system,
17 | - the Oracle database system.
18 |
19 | The Silverpeas images currently support only the first two database systems; because of the non-free licensing of the Oracle JDBC drivers, Silverpeas cannot include these drivers by default and consequently cannot transparently use an Oracle database system as a persistence backend.
20 |
21 | For the same reasons, the Docker images of Silverpeas aren't shipped with the Oracle JVM but with OpenJDK. Silverpeas uses the Wildfly application server as runtime.
22 |
23 | The Silverpeas images use the following environment variables to set the database access parameters:
24 |
25 | - `DB_SERVERTYPE` to specify the database system to use with Silverpeas: POSTGRESQL for PostgreSQL, MSSQL for Microsoft SQLServer, ORACLE for Oracle. By default, it is set to POSTGRESQL.
26 | - `DB_SERVER` to specify the IP address or the name of the host on which the database system is running. By default, it is set to `database`, so that any container running a database can be linked to the Silverpeas container with this name.
27 | - `DB_NAME` to specify the database to use with Silverpeas. By default, it is set to `Silverpeas`.
28 | - `DB_USER` to specify the user identifier to use by Silverpeas to access the database. By default, it is set to `silverpeas` (it is recommended to dedicate a user account in the database for each application).
29 | - `DB_PASSWORD` to specify the password associated with the user identifier above.
30 |
31 | These environment variables can be also defined as properties into the Silverpeas global configuration file `config.properties` (see below).
32 |
33 | ## Start a Silverpeas instance with a database from a Docker container
34 |
35 | On [Docker Hub](https://hub.docker.com/), no Docker images of Microsoft SQLServer are currently available, but you will find a lot of images of PostgreSQL. For example, with an [official PostgreSQL docker image](https://hub.docker.com/_/postgres/), you can start a PostgreSQL instance initialized with the superuser `postgres` and the password `mysecretpassword`:
36 |
37 | ```console
38 | $ docker run --name postgresql -d \
39 | -e POSTGRES_PASSWORD="mysecretpassword" \
40 | -v postgresql-data:/var/lib/postgresql/data \
41 | postgres:12.3
42 | ```
43 |
44 | We strongly recommend mounting the directory with the database files on the host so the data won't be lost when upgrading PostgreSQL to a newer version (a Data Volume Container can be used instead). For information on how to start a PostgreSQL container, refer to its [documentation](https://hub.docker.com/_/postgres/).
45 |
46 | Once the database system is running, a database for Silverpeas has to be created and a user with administrative rights on this database (and only on this database) should be added; for security reasons, it is recommended to create a dedicated user account in the database for each application, and therefore for Silverpeas. In this document, and by default, a database `Silverpeas` and a user `silverpeas` for that database are created. For example:
47 |
48 | ```console
49 | $ docker exec -it postgresql psql -U postgres
50 | psql (12.3 (Debian 12.3-1.pgdg100+1))
51 | Type "help" for help.
52 |
53 | postgres=# create database "Silverpeas";
54 | CREATE DATABASE
55 | postgres=# create user silverpeas with password 'thesilverpeaspassword';
56 | CREATE ROLE
57 | postgres=# grant all privileges on database "Silverpeas" to silverpeas;
58 | GRANT
59 | postgres=# \q
60 | $
61 | ```
62 |
63 | ### Start a Silverpeas instance with the default configuration
64 |
65 | Finally, a Silverpeas instance can be started by specifying the required database access parameters with the environment variables. In this example, the database is named `Silverpeas` and the privileged user is `silverpeas` with the password `thesilverpeaspassword`:
66 |
67 | ```console
68 | $ docker run --name silverpeas -p 8080:8000 -d \
69 | -e DB_NAME="Silverpeas" \
70 | -e DB_USER="silverpeas" \
71 | -e DB_PASSWORD="thesilverpeaspassword" \
72 | -v silverpeas-log:/opt/silverpeas/log \
73 | -v silverpeas-data:/opt/silverpeas/data \
74 | --link postgresql:database \
75 | %%IMAGE%%
76 | ```
77 |
78 | `database` is the default hostname used by Silverpeas for its persistence backend. So, as the PostgreSQL database is linked here under the alias `database`, we don't have to explicitly indicate its hostname with the `DB_SERVER` environment variable. The Silverpeas images expose port 8000, and here this port is mapped to port 8080 on the host.
79 |
80 | Silverpeas is then accessible at [http://localhost:8080/silverpeas](http://localhost:8080/silverpeas). You can sign in to Silverpeas with the administrator account `SilverAdmin` and the password `SilverAdmin`. Don't forget to change the password of the administrator account.
81 |
82 | By default, some volumes are created inside the container so that we can access them from the host. (Refer to the [Docker documentation](https://docs.docker.com/engine/tutorials/dockervolumes/#locating-a-volume) to locate them.) Among them are `/opt/silverpeas/log` and `/opt/silverpeas/data`: the first volume contains the logs produced by Silverpeas, whereas the second contains all the data created and managed by the users in Silverpeas. Because the latter already has a directory structure created at image creation, a host directory cannot be mounted into the container at `/opt/silverpeas/data` without losing the volume's content (the mount point overlays the pre-existing content of the volume). In our example, in order to easily locate the two volumes, we name them explicitly `silverpeas-log` and `silverpeas-data` respectively. (Using a [Data Volume Container](https://docs.docker.com/engine/userguide/containers/dockervolumes/) to map `/opt/silverpeas/log` and `/opt/silverpeas/data` is a better solution.)
83 |
84 | Silverpeas takes some time to start, so we recommend keeping an eye on the logs until Silverpeas has completely started (see the section about the logs).
85 |
86 | ### Start a Silverpeas instance with a finer configuration
87 |
88 | The Silverpeas global configuration is defined in the `/opt/silverpeas/configuration/config.properties` file, a sample of which can be found [here](https://raw.githubusercontent.com/Silverpeas/Silverpeas-Distribution/master/src/main/dist/configuration/sample_config.properties) or in the container directory `/opt/silverpeas/configuration/`. You can explicitly create the `config.properties` file with, in addition to the database access parameters (don't forget in that case to set the `DB_SERVER` property to `database`), your particular configuration parameters, and then start a Silverpeas instance with this configuration file:
89 |
90 | ```console
91 | $ docker run --name silverpeas -p 8080:8000 -d \
92 |     -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
93 | -v silverpeas-log:/opt/silverpeas/log \
94 | -v silverpeas-data:/opt/silverpeas/data \
95 | --link postgresql:database \
96 | %%IMAGE%%
97 | ```
98 |
99 | where `/etc/silverpeas/config.properties` is your own configuration file on the host. For security reasons, we strongly recommend explicitly setting the administrator's credentials with the properties `SILVERPEAS_ADMIN_LOGIN` and `SILVERPEAS_ADMIN_PASSWORD` in the `config.properties` file. (Don't forget to also set the administrator email address with the property `SILVERPEAS_ADMIN_EMAIL`.)
100 |
101 | Below is an example of such a configuration file:
102 |
103 | SILVERPEAS_ADMIN_LOGIN=SilverAdmin
104 | SILVERPEAS_ADMIN_PASSWORD=theadministratorpassword
105 | SILVERPEAS_ADMIN_EMAIL=<administrator email address>
106 |
107 | DB_SERVERTYPE=POSTGRESQL
108 | DB_SERVER=database
109 | DB_NAME=Silverpeas
110 | DB_USER=silverpeas
111 | DB_PASSWORD=thesilverpeaspassword
112 |
113 | CONVERTER_HOST=libreoffice
114 | CONVERTER_PORT=8997
115 |
116 | SMTP_SERVER=smtp.foo.com
117 | SMTP_AUTHENTICATION=true
118 | SMTP_DEBUG=false
119 | SMTP_PORT=465
120 | SMTP_USER=silverpeas
121 | SMTP_PASSWORD=thesmtpsilverpeaspassword
122 | SMTP_SECURE=true
123 |
124 | ## Start a Silverpeas instance with a database on the host
125 |
126 | For a database system running on the host (or on a remote host) with 192.168.1.14 as IP address, you have to specify this host both to the container at starting and to Silverpeas by defining it into its global configuration file:
127 |
128 | ```console
129 | $ docker run --name silverpeas -p 8080:8000 -d \
130 | --add-host=database:192.168.1.14 \
131 | -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
132 | -v silverpeas-log:/opt/silverpeas/log \
133 | -v silverpeas-data:/opt/silverpeas/data \
134 | %%IMAGE%%
135 | ```
136 |
137 | where `database` is the hostname referred by the `DB_SERVER` parameter in your `/etc/silverpeas/config.properties` file as the host running the database system and that is mapped here to the actual IP address of this host. The hostname is added in the `/etc/hosts` file in the container.
138 |
139 | For a PostgreSQL database system, some configurations are required in order to be accessed from the Silverpeas container:
140 |
141 | - In the file `postgresql.conf`, edit the parameter `listen_addresses` to add the address of the PostgreSQL host (`192.168.1.14` in our example)
142 |
143 | listen_addresses = 'localhost,192.168.1.14'
144 |
145 | - In the file `pg_hba.conf`, add an entry for the Docker subnetwork
146 |
147 | host all all 172.17.0.0/16 md5
148 |
149 | - Don't forget to restart PostgreSQL for the changes to be taken into account.
150 |
151 | # Using a Data Volume Container
152 |
153 | The data produced by Silverpeas are meant to be persistent, available to the next versions of Silverpeas, and accessible to other containers like the one running LibreOffice. To do so, the Docker team recommends using a Data Volume Container.
154 |
155 | In Silverpeas, there are four types of data produced by the application:
156 |
157 | - the logging stored in `/opt/silverpeas/log`,
158 | - the user data and those produced by Silverpeas from the user data in `/opt/silverpeas/data`,
159 | - the workflows created by the workflow editor in `/opt/silverpeas/xmlcomponents/workflows`.
160 |
161 | Besides these directories, according to your specific needs, custom configuration scripts can be added in the directories `/opt/silverpeas/configuration/jboss` and `/opt/silverpeas/configuration/silverpeas`.
162 |
163 | The directories `/opt/silverpeas/log`, `/opt/silverpeas/data`, and `/opt/silverpeas/xmlcomponents/workflows` are all defined as volumes in the Docker image.
164 |
165 | All these different kinds of data have to be consistent for a given state of Silverpeas; they form a coherent whole. Therefore, defining a Data Volume Container to gather all of these volumes is a better solution than multiple shared-storage volume definitions. With such a Data Volume Container, you can more easily back up, restore, or migrate the full set of Silverpeas data.
166 |
167 | To define a Data Volume Container for Silverpeas, for example:
168 |
169 | ```console
170 | $ docker create --name silverpeas-store \
171 | -v silverpeas-data:/opt/silverpeas/data \
172 | -v silverpeas-log:/opt/silverpeas/log \
173 | -v silverpeas-workflows:/opt/silverpeas/xmlcomponents/workflows \
174 | -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
175 | %%IMAGE%% \
176 | /bin/true
177 | ```
178 |
179 | Then to mount the volumes in the Silverpeas container:
180 |
181 | ```console
182 | $ docker run --name silverpeas -p 8080:8000 -d \
183 | --link postgresql:database \
184 | --volumes-from silverpeas-store \
185 | %%IMAGE%%
186 | ```
187 |
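With the volumes gathered in `silverpeas-store`, the whole data set can then be backed up from a throwaway container, following the usual Docker volume-backup pattern (a sketch; the `ubuntu` utility image, archive name, and host path are illustrative):

```console
$ docker run --rm --volumes-from silverpeas-store \
    -v /my/backups:/backup \
    ubuntu \
    tar czf /backup/silverpeas-backup.tar.gz /opt/silverpeas/data /opt/silverpeas/log /opt/silverpeas/xmlcomponents/workflows
```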
188 | If you have to customize the settings of Silverpeas or add, for example, a new database definition, then specify these settings with the Data Volume Container, so that they will be available to the next versions of Silverpeas which will be then configured correctly like your previous Silverpeas installation:
189 |
190 | ```console
191 | $ docker create --name silverpeas-store \
192 | -v silverpeas-data:/opt/silverpeas/data \
193 | -v silverpeas-log:/opt/silverpeas/log \
194 | -v silverpeas-properties:/opt/silverpeas/properties \
195 | -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
196 | -v /etc/silverpeas/CustomerSettings.xml:/opt/silverpeas/configuration/silverpeas/CustomerSettings.xml \
197 | -v /etc/silverpeas/my-datasource.cli:/opt/silverpeas/configuration/jboss/my-datasource.cli \
198 | %%IMAGE%% \
199 | /bin/true
200 | ```
201 |
202 | # Logs
203 |
204 | You can follow the activity of Silverpeas by watching the logs generated in the mounted `/opt/silverpeas/log` directory.
205 |
206 | The output of Wildfly is redirected to the container's standard output and so it can be watched as follows:
207 |
208 | ```console
209 | $ docker logs -f silverpeas
210 | ```
211 |
212 | Silverpeas takes some time to start, so we recommend keeping an eye on the logs until Silverpeas has completely started.
213 |
```